| title | diff | body | url | created_at | closed_at | merged_at | updated_at |
|---|---|---|---|---|---|---|---|
DOC: update the Series.between docstring | diff --git a/pandas/core/series.py b/pandas/core/series.py
index e4801242073a2..1348304afafa2 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -3511,19 +3511,68 @@ def isin(self, values):
def between(self, left, right, inclusive=True):
"""
- Return boolean Series equivalent to left <= series <= right. NA values
- will be treated as False
+ Return boolean Series equivalent to left <= series <= right.
+
+ This function returns a boolean vector containing `True` wherever the
+ corresponding Series element is between the boundary values `left` and
+ `right`. NA values are treated as `False`.
Parameters
----------
left : scalar
- Left boundary
+ Left boundary.
right : scalar
- Right boundary
+ Right boundary.
+ inclusive : bool, default True
+ Include boundaries.
Returns
-------
- is_between : Series
+ Series
+ Each element will be a boolean.
+
+ Notes
+ -----
+ This function is equivalent to ``(left <= ser) & (ser <= right)``
+
+ See Also
+ --------
+ pandas.Series.gt : Greater than of series and other
+ pandas.Series.lt : Less than of series and other
+
+ Examples
+ --------
+ >>> s = pd.Series([2, 0, 4, 8, np.nan])
+
+ Boundary values are included by default:
+
+ >>> s.between(1, 4)
+ 0 True
+ 1 False
+ 2 True
+ 3 False
+ 4 False
+ dtype: bool
+
+ With `inclusive` set to ``False`` boundary values are excluded:
+
+ >>> s.between(1, 4, inclusive=False)
+ 0 True
+ 1 False
+ 2 False
+ 3 False
+ 4 False
+ dtype: bool
+
+ `left` and `right` can be any scalar value:
+
+ >>> s = pd.Series(['Alice', 'Bob', 'Carol', 'Eve'])
+ >>> s.between('Anna', 'Daniel')
+ 0 False
+ 1 True
+ 2 True
+ 3 False
+ dtype: bool
"""
if inclusive:
lmask = self >= left
| Checklist for the pandas documentation sprint (ignore this if you are doing
an unrelated PR):
- [x] PR title is "DOC: update the <your-function-or-method> docstring"
- [x] The validation script passes: `scripts/validate_docstrings.py <your-function-or-method>`
- [x] The PEP8 style check passes: `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] The html version looks good: `python doc/make.py --single <your-function-or-method>`
- [x] It has been proofread on language by another sprint participant
Please include the output of the validation script below between the "```" ticks:
```
################################################################################
###################### Docstring (pandas.Series.between) ######################
################################################################################
Return boolean Series equivalent to left <= series <= right.
This function returns a boolean vector containing `True` wherever the
corresponding Series element is between the boundary values `left` and
`right`. NA values are treated as `False`.
Parameters
----------
left : scalar
Left boundary.
right : scalar
Right boundary.
inclusive : bool, default True
Include boundaries.
Returns
-------
is_between : Series
Examples
--------
>>> s = pd.Series([2, 0, 4, 8, np.nan])
Boundary values are included by default:
>>> s.between(1, 4)
0 True
1 False
2 True
3 False
4 False
dtype: bool
With `inclusive` set to `False` boundary values are excluded:
>>> s.between(1, 4, inclusive=False)
0 True
1 False
2 False
3 False
4 False
dtype: bool
`left` and `right` can be any scalar value:
>>> s = pd.Series(['Alice', 'Bob', 'Carol', 'Eve'])
>>> s.between('Anna', 'Daniel')
0 False
1 True
2 True
3 False
dtype: bool
See Also
--------
DataFrame.query : Query the columns of a frame with a boolean
expression.
################################################################################
################################## Validation ##################################
################################################################################
Docstring for "pandas.Series.between" correct. :)
``` | https://api.github.com/repos/pandas-dev/pandas/pulls/20443 | 2018-03-21T22:23:33Z | 2018-03-28T16:37:16Z | 2018-03-28T16:37:16Z | 2018-03-28T16:37:32Z |
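The semantics documented in the `Series.between` docstring above — boundary inclusion controlled by `inclusive`, NA values treated as `False` — can be sketched in plain Python. This is a hypothetical illustration of the behavior, not pandas' implementation:

```python
import math

def between(values, left, right, inclusive=True):
    # Sketch of the documented semantics: NA compares as False, and
    # `inclusive` toggles whether the boundary values themselves match.
    def check(v):
        if isinstance(v, float) and math.isnan(v):
            return False
        return left <= v <= right if inclusive else left < v < right
    return [check(v) for v in values]

nan = float("nan")
assert between([2, 0, 4, 8, nan], 1, 4) == [True, False, True, False, False]
assert between([2, 0, 4, 8, nan], 1, 4, inclusive=False) == \
    [True, False, False, False, False]
```

The two assertions mirror the docstring's examples with and without `inclusive`.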
BUG: Patch for skipping seek() when loading Excel files from url | diff --git a/pandas/io/excel.py b/pandas/io/excel.py
index 0f9df845117db..5bce37b9d7735 100644
--- a/pandas/io/excel.py
+++ b/pandas/io/excel.py
@@ -10,6 +10,7 @@
import abc
import warnings
import numpy as np
+from io import UnsupportedOperation
from pandas.core.dtypes.common import (
is_integer, is_float,
@@ -388,8 +389,13 @@ def __init__(self, io, **kwds):
elif not isinstance(io, xlrd.Book) and hasattr(io, "read"):
# N.B. xlrd.Book has a read attribute too
if hasattr(io, 'seek'):
- # GH 19779
- io.seek(0)
+ try:
+ # GH 19779
+ io.seek(0)
+ except UnsupportedOperation:
+ # HTTPResponse does not support seek()
+ # GH 20434
+ pass
data = io.read()
self.book = xlrd.open_workbook(file_contents=data)
| - [x] closes #20434
- [x] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
This is my first PR and I tried to follow the guidelines. Sorry if I overlooked something.
I don't have to add anything in whatsnew, because the bug isn't present in the current stable version 0.22, correct? | https://api.github.com/repos/pandas-dev/pandas/pulls/20437 | 2018-03-21T11:31:13Z | 2018-04-03T19:23:39Z | 2018-04-03T19:23:38Z | 2018-04-03T19:42:18Z |
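The situation this patch handles can be reproduced without pandas: `http.client.HTTPResponse` inherits a `seek()` attribute from `io.BufferedIOBase`, so `hasattr(io, 'seek')` is true, yet calling it raises `io.UnsupportedOperation`. A minimal sketch of the try/except pattern from the diff, using a hypothetical non-seekable stand-in class:

```python
from io import BytesIO, UnsupportedOperation

class NonSeekableResponse:
    # Hypothetical stand-in for http.client.HTTPResponse: it exposes a
    # seek() attribute but raises UnsupportedOperation when it is called.
    def __init__(self, data):
        self._buf = BytesIO(data)
    def read(self, *args):
        return self._buf.read(*args)
    def seek(self, *args):
        raise UnsupportedOperation("seek")

def read_all(handle):
    # The pattern from the patch: rewind when possible, otherwise fall
    # back to reading from the current position.
    if hasattr(handle, "seek"):
        try:
            handle.seek(0)
        except UnsupportedOperation:
            pass
    return handle.read()

assert read_all(BytesIO(b"abc")) == b"abc"           # seekable: rewound, then read
assert read_all(NonSeekableResponse(b"abc")) == b"abc"  # non-seekable: read anyway
```

Both seekable and non-seekable handles now read successfully, which is exactly what the `except UnsupportedOperation: pass` branch buys.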
Add link to "Craft Minimal Bug Report" blogpost | diff --git a/doc/source/contributing.rst b/doc/source/contributing.rst
index f7fc9575566b7..ff0aa8af611db 100644
--- a/doc/source/contributing.rst
+++ b/doc/source/contributing.rst
@@ -32,8 +32,9 @@ Bug reports and enhancement requests
Bug reports are an important part of making *pandas* more stable. Having a complete bug report
will allow others to reproduce the bug and provide insight into fixing. See
-`this stackoverflow article <https://stackoverflow.com/help/mcve>`_ for tips on
-writing a good bug report.
+`this stackoverflow article <https://stackoverflow.com/help/mcve>`_ and
+`this blogpost <http://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports>`_
+for tips on writing a good bug report.
Trying the bug-producing code out on the *master* branch is often a worthwhile exercise
to confirm the bug still exists. It is also worth searching existing bug reports and pull requests
| As requested in https://twitter.com/jreback/status/976242019864588294
This blogpost extends the MCVE documentation currently linked to in this
docpage with some do's and don'ts counter-examples. | https://api.github.com/repos/pandas-dev/pandas/pulls/20431 | 2018-03-20T23:49:59Z | 2018-03-20T23:57:46Z | 2018-03-20T23:57:46Z | 2018-03-20T23:57:54Z |
DOC: update the DataFrame.stack docstring | diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index efb002474f876..51394c47468cd 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -5145,36 +5145,166 @@ def pivot_table(self, values=None, index=None, columns=None,
def stack(self, level=-1, dropna=True):
"""
- Pivot a level of the (possibly hierarchical) column labels, returning a
- DataFrame (or Series in the case of an object with a single level of
- column labels) having a hierarchical index with a new inner-most level
- of row labels.
- The level involved will automatically get sorted.
+ Stack the prescribed level(s) from columns to index.
+
+ Return a reshaped DataFrame or Series having a multi-level
+ index with one or more new inner-most levels compared to the current
+ DataFrame. The new inner-most levels are created by pivoting the
+ columns of the current dataframe:
+
+ - if the columns have a single level, the output is a Series;
+ - if the columns have multiple levels, the new index
+ level(s) is (are) taken from the prescribed level(s) and
+ the output is a DataFrame.
+
+ The new index levels are sorted.
Parameters
----------
- level : int, string, or list of these, default last level
- Level(s) to stack, can pass level name
- dropna : boolean, default True
- Whether to drop rows in the resulting Frame/Series with no valid
- values
+ level : int, str, list, default -1
+ Level(s) to stack from the column axis onto the index
+ axis, defined as one index or label, or a list of indices
+ or labels.
+ dropna : bool, default True
+ Whether to drop rows in the resulting Frame/Series with
+ missing values. Stacking a column level onto the index
+ axis can create combinations of index and column values
+ that are missing from the original dataframe. See Examples
+ section.
+
+ Returns
+ -------
+ DataFrame or Series
+ Stacked dataframe or series.
+
+ See Also
+ --------
+ DataFrame.unstack : Unstack prescribed level(s) from index axis
+ onto column axis.
+ DataFrame.pivot : Reshape dataframe from long format to wide
+ format.
+ DataFrame.pivot_table : Create a spreadsheet-style pivot table
+ as a DataFrame.
+
+ Notes
+ -----
+ The function is named by analogy with a collection of books
+ being re-organised from being side by side on a horizontal
+ position (the columns of the dataframe) to being stacked
+         vertically on top of each other (in the index of the
+ dataframe).
Examples
- ----------
- >>> s
- a b
- one 1. 2.
- two 3. 4.
+ --------
+ **Single level columns**
+
+ >>> df_single_level_cols = pd.DataFrame([[0, 1], [2, 3]],
+ ... index=['cat', 'dog'],
+ ... columns=['weight', 'height'])
+
+ Stacking a dataframe with a single level column axis returns a Series:
+
+ >>> df_single_level_cols
+ weight height
+ cat 0 1
+ dog 2 3
+ >>> df_single_level_cols.stack()
+ cat weight 0
+ height 1
+ dog weight 2
+ height 3
+ dtype: int64
- >>> s.stack()
- one a 1
- b 2
- two a 3
- b 4
+ **Multi level columns: simple case**
+
+ >>> multicol1 = pd.MultiIndex.from_tuples([('weight', 'kg'),
+ ... ('weight', 'pounds')])
+ >>> df_multi_level_cols1 = pd.DataFrame([[1, 2], [2, 4]],
+ ... index=['cat', 'dog'],
+ ... columns=multicol1)
+
+ Stacking a dataframe with a multi-level column axis:
+
+ >>> df_multi_level_cols1
+ weight
+ kg pounds
+ cat 1 2
+ dog 2 4
+ >>> df_multi_level_cols1.stack()
+ weight
+ cat kg 1
+ pounds 2
+ dog kg 2
+ pounds 4
+
+ **Missing values**
+
+ >>> multicol2 = pd.MultiIndex.from_tuples([('weight', 'kg'),
+ ... ('height', 'm')])
+ >>> df_multi_level_cols2 = pd.DataFrame([[1.0, 2.0], [3.0, 4.0]],
+ ... index=['cat', 'dog'],
+ ... columns=multicol2)
+
+ It is common to have missing values when stacking a dataframe
+ with multi-level columns, as the stacked dataframe typically
+ has more values than the original dataframe. Missing values
+ are filled with NaNs:
+
+ >>> df_multi_level_cols2
+ weight height
+ kg m
+ cat 1.0 2.0
+ dog 3.0 4.0
+ >>> df_multi_level_cols2.stack()
+ height weight
+ cat kg NaN 1.0
+ m 2.0 NaN
+ dog kg NaN 3.0
+ m 4.0 NaN
+
+ **Prescribing the level(s) to be stacked**
+
+ The first parameter controls which level or levels are stacked:
+
+ >>> df_multi_level_cols2.stack(0)
+ kg m
+ cat height NaN 2.0
+ weight 1.0 NaN
+ dog height NaN 4.0
+ weight 3.0 NaN
+ >>> df_multi_level_cols2.stack([0, 1])
+ cat height m 2.0
+ weight kg 1.0
+ dog height m 4.0
+ weight kg 3.0
+ dtype: float64
- Returns
- -------
- stacked : DataFrame or Series
+ **Dropping missing values**
+
+ >>> df_multi_level_cols3 = pd.DataFrame([[None, 1.0], [2.0, 3.0]],
+ ... index=['cat', 'dog'],
+ ... columns=multicol2)
+
+ Note that rows where all values are missing are dropped by
+ default but this behaviour can be controlled via the dropna
+ keyword parameter:
+
+ >>> df_multi_level_cols3
+ weight height
+ kg m
+ cat NaN 1.0
+ dog 2.0 3.0
+ >>> df_multi_level_cols3.stack(dropna=False)
+ height weight
+ cat kg NaN NaN
+ m 1.0 NaN
+ dog kg NaN 2.0
+ m 3.0 NaN
+ >>> df_multi_level_cols3.stack(dropna=True)
+ height weight
+ cat m 1.0 NaN
+ dog kg NaN 2.0
+ m 3.0 NaN
"""
from pandas.core.reshape.reshape import stack, stack_multiple
| Checklist for the pandas documentation sprint (ignore this if you are doing
an unrelated PR):
- [X] PR title is "DOC: update the DataFrame.stack docstring"
- [X] The validation script passes: `scripts/validate_docstrings.py <your-function-or-method>`
- [X] The PEP8 style check passes: `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [X] The html version looks good: `python doc/make.py --single <your-function-or-method>`
- [X] It has been proofread on language by another sprint participant
Please include the output of the validation script below between the "```" ticks:
```
# paste output of "scripts/validate_docstrings.py <your-function-or-method>" here
# between the "```" (remove this comment, but keep the "```")
```
If the validation script still gives errors, but you think there is a good reason
to deviate in this case (and there are certainly such cases), please state this
explicitly.
Checklist for other PRs (remove this part if you are doing a PR for the pandas documentation sprint):
- [ ] closes #xxxx
- [ ] tests added / passed
- [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/20430 | 2018-03-20T22:44:01Z | 2018-03-26T13:06:16Z | 2018-03-26T13:06:16Z | 2018-03-26T13:06:25Z |
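The reshape that the `DataFrame.stack` docstring describes — moving the column label into a new inner-most index level — can be modeled in pure Python. A toy sketch (not pandas' implementation) over a dict-of-columns table, with `dropna=True` behavior for missing entries:

```python
def stack(table):
    # `table` maps column label -> {row label: value}.  The result maps
    # (row label, column label) -> value, i.e. the column label becomes
    # a new inner index level; missing entries are dropped, mirroring
    # DataFrame.stack(dropna=True).
    out = {}
    rows = sorted({r for col in table.values() for r in col})
    for r in rows:
        for c in sorted(table):
            v = table[c].get(r)
            if v is not None:
                out[(r, c)] = v
    return out

cols = {"weight": {"cat": 0, "dog": 2}, "height": {"cat": 1, "dog": 3}}
assert stack(cols) == {("cat", "height"): 1, ("cat", "weight"): 0,
                       ("dog", "height"): 3, ("dog", "weight"): 2}
```

The single-level-columns example from the docstring (`cat`/`dog` × `weight`/`height`) maps onto this directly: each `(row, column)` pair becomes one entry of the stacked result.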
read_excel with dtype=str converts empty cells to np.nan | diff --git a/doc/source/whatsnew/v0.23.0.txt b/doc/source/whatsnew/v0.23.0.txt
index fb63dc16249b2..00ba6cc242d8b 100644
--- a/doc/source/whatsnew/v0.23.0.txt
+++ b/doc/source/whatsnew/v0.23.0.txt
@@ -1099,6 +1099,7 @@ I/O
- Bug in :meth:`pandas.io.json.json_normalize` where subrecords are not properly normalized if any subrecords values are NoneType (:issue:`20030`)
- Bug in ``usecols`` parameter in :func:`pandas.io.read_csv` and :func:`pandas.io.read_table` where error is not raised correctly when passing a string. (:issue:`20529`)
- Bug in :func:`HDFStore.keys` when reading a file with a softlink causes exception (:issue:`20523`)
+- Bug in :func:`read_excel` and :func:`read_csv` where missing values turned to ``'nan'`` with ``dtype=str`` and ``na_filter=True``. Now, these missing values are converted to the string missing indicator, ``np.nan``. (:issue `20377`)
Plotting
^^^^^^^^
diff --git a/pandas/_libs/lib.pyx b/pandas/_libs/lib.pyx
index 30521760327b4..540fc55a2f818 100644
--- a/pandas/_libs/lib.pyx
+++ b/pandas/_libs/lib.pyx
@@ -461,11 +461,17 @@ cpdef ndarray[object] astype_unicode(ndarray arr):
cdef:
Py_ssize_t i, n = arr.size
ndarray[object] result = np.empty(n, dtype=object)
+ object arr_i
for i in range(n):
# we can use the unsafe version because we know `result` is mutable
# since it was created from `np.empty`
- util.set_value_at_unsafe(result, i, unicode(arr[i]))
+ arr_i = arr[i]
+ util.set_value_at_unsafe(
+ result,
+ i,
+ unicode(arr_i) if not checknull(arr_i) else np.nan
+ )
return result
@@ -474,11 +480,17 @@ cpdef ndarray[object] astype_str(ndarray arr):
cdef:
Py_ssize_t i, n = arr.size
ndarray[object] result = np.empty(n, dtype=object)
+ object arr_i
for i in range(n):
# we can use the unsafe version because we know `result` is mutable
# since it was created from `np.empty`
- util.set_value_at_unsafe(result, i, str(arr[i]))
+ arr_i = arr[i]
+ util.set_value_at_unsafe(
+ result,
+ i,
+ str(arr_i) if not checknull(arr_i) else np.nan
+ )
return result
diff --git a/pandas/_libs/parsers.pyx b/pandas/_libs/parsers.pyx
index a24e2cdd99f6f..da00a09f5bd16 100644
--- a/pandas/_libs/parsers.pyx
+++ b/pandas/_libs/parsers.pyx
@@ -1218,6 +1218,7 @@ cdef class TextReader:
# treat as a regular string parsing
return self._string_convert(i, start, end, na_filter,
na_hashset)
+
elif dtype.kind == 'U':
width = dtype.itemsize
if width > 0:
@@ -1227,6 +1228,7 @@ cdef class TextReader:
# unicode variable width
return self._string_convert(i, start, end, na_filter,
na_hashset)
+
elif is_categorical_dtype(dtype):
# TODO: I suspect that _categorical_convert could be
# optimized when dtype is an instance of CategoricalDtype
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 1d6f770d92795..9131aa446396a 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -4153,4 +4153,8 @@ def _try_cast(arr, take_fast_path):
data = np.array(data, dtype=dtype, copy=False)
subarr = np.array(data, dtype=object, copy=copy)
+ # GH 20377
+ # Turn all 'nan' to np.nan
+ subarr[subarr == 'nan'] = np.nan
+
return subarr
diff --git a/pandas/tests/frame/test_dtypes.py b/pandas/tests/frame/test_dtypes.py
index 152159965036d..fb0dd4a0e343a 100644
--- a/pandas/tests/frame/test_dtypes.py
+++ b/pandas/tests/frame/test_dtypes.py
@@ -529,7 +529,7 @@ def test_astype_str(self):
# consistency in astype(str)
for tt in set([str, compat.text_type]):
result = DataFrame([np.NaN]).astype(tt)
- expected = DataFrame(['nan'])
+ expected = DataFrame([np.NaN], dtype=object)
assert_frame_equal(result, expected)
result = DataFrame([1.12345678901234567890]).astype(tt)
diff --git a/pandas/tests/io/parser/na_values.py b/pandas/tests/io/parser/na_values.py
index d2c3f82e95c4d..6cbc8cd752d50 100644
--- a/pandas/tests/io/parser/na_values.py
+++ b/pandas/tests/io/parser/na_values.py
@@ -369,3 +369,27 @@ def test_no_na_filter_on_index(self):
expected = DataFrame({"a": [1, 4], "c": [3, 6]},
index=Index([np.nan, 5.0], name="b"))
tm.assert_frame_equal(out, expected)
+
+ def test_na_values_with_dtype_str_and_na_filter_true(self):
+ # see gh-20377
+ data = "a,b,c\n1,,3\n4,5,6"
+
+ out = self.read_csv(StringIO(data), na_filter=True, dtype=str)
+
+ # missing data turn to np.nan, which stays as it is after dtype=str
+ expected = DataFrame({"a": ["1", "4"],
+ "b": [np.nan, "5"],
+ "c": ["3", "6"]})
+ tm.assert_frame_equal(out, expected)
+
+ def test_na_values_with_dtype_str_and_na_filter_false(self):
+ # see gh-20377
+ data = "a,b,c\n1,,3\n4,5,6"
+
+ out = self.read_csv(StringIO(data), na_filter=False, dtype=str)
+
+ # missing data turn to empty string
+ expected = DataFrame({"a": ["1", "4"],
+ "b": ["", "5"],
+ "c": ["3", "6"]})
+ tm.assert_frame_equal(out, expected)
diff --git a/pandas/tests/io/test_excel.py b/pandas/tests/io/test_excel.py
index 5ef6dc07a5c22..190edf1950cc7 100644
--- a/pandas/tests/io/test_excel.py
+++ b/pandas/tests/io/test_excel.py
@@ -360,6 +360,33 @@ def test_reader_dtype(self, ext):
with pytest.raises(ValueError):
actual = self.get_exceldf(basename, ext, dtype={'d': 'int64'})
+ def test_reader_dtype_str(self, ext):
+ # GH 20377
+ basename = 'testdtype'
+ actual = self.get_exceldf(basename, ext)
+
+ expected = DataFrame({
+ 'a': [1, 2, 3, 4],
+ 'b': [2.5, 3.5, 4.5, 5.5],
+ 'c': [1, 2, 3, 4],
+ 'd': [1.0, 2.0, np.nan, 4.0]}).reindex(
+ columns=['a', 'b', 'c', 'd'])
+
+ tm.assert_frame_equal(actual, expected)
+
+ actual = self.get_exceldf(basename, ext,
+ dtype={'a': 'float64',
+ 'b': 'float32',
+ 'c': str,
+ 'd': str})
+
+ expected['a'] = expected['a'].astype('float64')
+ expected['b'] = expected['b'].astype('float32')
+ expected['c'] = ['001', '002', '003', '004']
+ expected['d'] = ['1', '2', np.nan, '4']
+
+ tm.assert_frame_equal(actual, expected)
+
def test_reading_all_sheets(self, ext):
# Test reading all sheetnames by setting sheetname to None,
# Ensure a dict is returned.
diff --git a/pandas/tests/series/test_dtypes.py b/pandas/tests/series/test_dtypes.py
index dd1b623f0f7ff..19894673e07c8 100644
--- a/pandas/tests/series/test_dtypes.py
+++ b/pandas/tests/series/test_dtypes.py
@@ -149,6 +149,7 @@ def test_astype_str_map(self, dtype, series):
# see gh-4405
result = series.astype(dtype)
expected = series.map(compat.text_type)
+ expected = expected.replace('nan', np.nan) # see gh-20377
tm.assert_series_equal(result, expected)
@pytest.mark.parametrize("dtype", [str, compat.text_type])
| Checklist for other PRs (remove this part if you are doing a PR for the pandas documentation sprint):
- [x] closes #20377
- [x] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/20429 | 2018-03-20T20:59:44Z | 2018-10-11T01:48:55Z | null | 2018-10-11T01:48:55Z |
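The core of this fix is that converting a column to `str` should leave missing values as the missing indicator (`np.nan`) instead of producing the literal string `'nan'`. A stdlib-only sketch of that conversion rule (an illustration, not the Cython code in the patch):

```python
import math

def astype_str(values):
    # Patched conversion rule: stringify real values, but let missing
    # values pass through unchanged rather than becoming the string 'nan'.
    def is_na(v):
        return v is None or (isinstance(v, float) and math.isnan(v))
    return [v if is_na(v) else str(v) for v in values]

out = astype_str([1, float("nan"), "3"])
assert out[0] == "1" and out[2] == "3"
assert isinstance(out[1], float) and math.isnan(out[1])  # NaN preserved, not 'nan'
```

With `na_filter=False` the parser never produces NaN in the first place (empty cells stay `""`), which is why that path is unaffected.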
parameterize tests in scalar/timedelta | diff --git a/pandas/tests/scalar/timedelta/test_construction.py b/pandas/tests/scalar/timedelta/test_construction.py
index 5ccad9e6b4e3c..d648140aa7347 100644
--- a/pandas/tests/scalar/timedelta/test_construction.py
+++ b/pandas/tests/scalar/timedelta/test_construction.py
@@ -195,28 +195,18 @@ def test_iso_constructor_raises(fmt):
Timedelta(fmt)
-def test_td_constructor_on_nanoseconds():
+@pytest.mark.parametrize('constructed_td, conversion', [
+ (Timedelta(nanoseconds=100), '100ns'),
+ (Timedelta(days=1, hours=1, minutes=1, weeks=1, seconds=1, milliseconds=1,
+ microseconds=1, nanoseconds=1), 694861001001001),
+ (Timedelta(microseconds=1) + Timedelta(nanoseconds=1), '1us1ns'),
+ (Timedelta(microseconds=1) - Timedelta(nanoseconds=1), '999ns'),
+ (Timedelta(microseconds=1) + 5 * Timedelta(nanoseconds=-2), '990ns')])
+def test_td_constructor_on_nanoseconds(constructed_td, conversion):
# GH#9273
- result = Timedelta(nanoseconds=100)
- expected = Timedelta('100ns')
- assert result == expected
-
- result = Timedelta(days=1, hours=1, minutes=1, weeks=1, seconds=1,
- milliseconds=1, microseconds=1, nanoseconds=1)
- expected = Timedelta(694861001001001)
- assert result == expected
-
- result = Timedelta(microseconds=1) + Timedelta(nanoseconds=1)
- expected = Timedelta('1us1ns')
- assert result == expected
-
- result = Timedelta(microseconds=1) - Timedelta(nanoseconds=1)
- expected = Timedelta('999ns')
- assert result == expected
+ assert constructed_td == Timedelta(conversion)
- result = Timedelta(microseconds=1) + 5 * Timedelta(nanoseconds=-2)
- expected = Timedelta('990ns')
- assert result == expected
+def test_td_constructor_value_error():
with pytest.raises(TypeError):
Timedelta(nanoseconds='abc')
diff --git a/pandas/tests/scalar/timedelta/test_formats.py b/pandas/tests/scalar/timedelta/test_formats.py
index 8a877c7d1c0fa..0d0b24f192f96 100644
--- a/pandas/tests/scalar/timedelta/test_formats.py
+++ b/pandas/tests/scalar/timedelta/test_formats.py
@@ -1,48 +1,28 @@
# -*- coding: utf-8 -*-
-from pandas import Timedelta
-
-
-def test_repr():
- assert (repr(Timedelta(10, unit='d')) ==
- "Timedelta('10 days 00:00:00')")
- assert (repr(Timedelta(10, unit='s')) ==
- "Timedelta('0 days 00:00:10')")
- assert (repr(Timedelta(10, unit='ms')) ==
- "Timedelta('0 days 00:00:00.010000')")
- assert (repr(Timedelta(-10, unit='ms')) ==
- "Timedelta('-1 days +23:59:59.990000')")
+import pytest
+from pandas import Timedelta
-def test_isoformat():
- td = Timedelta(days=6, minutes=50, seconds=3,
- milliseconds=10, microseconds=10, nanoseconds=12)
- expected = 'P6DT0H50M3.010010012S'
- result = td.isoformat()
- assert result == expected
- td = Timedelta(days=4, hours=12, minutes=30, seconds=5)
- result = td.isoformat()
- expected = 'P4DT12H30M5S'
- assert result == expected
+@pytest.mark.parametrize('td, expected_repr', [
+ (Timedelta(10, unit='d'), "Timedelta('10 days 00:00:00')"),
+ (Timedelta(10, unit='s'), "Timedelta('0 days 00:00:10')"),
+ (Timedelta(10, unit='ms'), "Timedelta('0 days 00:00:00.010000')"),
+ (Timedelta(-10, unit='ms'), "Timedelta('-1 days +23:59:59.990000')")])
+def test_repr(td, expected_repr):
+ assert repr(td) == expected_repr
- td = Timedelta(nanoseconds=123)
- result = td.isoformat()
- expected = 'P0DT0H0M0.000000123S'
- assert result == expected
+@pytest.mark.parametrize('td, expected_iso', [
+ (Timedelta(days=6, minutes=50, seconds=3, milliseconds=10, microseconds=10,
+ nanoseconds=12), 'P6DT0H50M3.010010012S'),
+ (Timedelta(days=4, hours=12, minutes=30, seconds=5), 'P4DT12H30M5S'),
+ (Timedelta(nanoseconds=123), 'P0DT0H0M0.000000123S'),
# trim nano
- td = Timedelta(microseconds=10)
- result = td.isoformat()
- expected = 'P0DT0H0M0.00001S'
- assert result == expected
-
+ (Timedelta(microseconds=10), 'P0DT0H0M0.00001S'),
# trim micro
- td = Timedelta(milliseconds=1)
- result = td.isoformat()
- expected = 'P0DT0H0M0.001S'
- assert result == expected
-
+ (Timedelta(milliseconds=1), 'P0DT0H0M0.001S'),
# don't strip every 0
- result = Timedelta(minutes=1).isoformat()
- expected = 'P0DT0H1M0S'
- assert result == expected
+ (Timedelta(minutes=1), 'P0DT0H1M0S')])
+def test_isoformat(td, expected_iso):
+ assert td.isoformat() == expected_iso
| - [x] closes https://github.com/pandas-dev/pandas/issues/20426
- [x] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
| https://api.github.com/repos/pandas-dev/pandas/pulls/20428 | 2018-03-20T20:53:40Z | 2018-03-22T10:40:39Z | 2018-03-22T10:40:39Z | 2018-03-23T19:57:50Z |
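The refactor above replaces repeated `result`/`expected` blocks with a single parametrized test body. The mechanism can be illustrated with a toy stand-in for `pytest.mark.parametrize` (hypothetical, stdlib-only — real pytest also generates one reported test per case):

```python
def parametrize(cases):
    # Toy stand-in for pytest.mark.parametrize: the decorated test body
    # runs once per case instead of being copy-pasted for each case.
    def wrap(test_fn):
        def run_all():
            for case in cases:
                test_fn(*case)
        return run_all
    return wrap

@parametrize([("100ns", 100), ("999ns", 999), ("990ns", 990)])
def test_parse_nanoseconds(text, expected):
    # Hypothetical check standing in for the Timedelta comparisons above.
    assert int(text[:-2]) == expected

test_parse_nanoseconds()  # runs all three cases; raises AssertionError on failure
```

Collapsing the cases into data this way is what shrinks `test_td_constructor_on_nanoseconds` and `test_repr`/`test_isoformat` in the diff.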
DOC: Update missing_data.rst | diff --git a/doc/source/missing_data.rst b/doc/source/missing_data.rst
index ee0e2c7462f66..3950e4c80749b 100644
--- a/doc/source/missing_data.rst
+++ b/doc/source/missing_data.rst
@@ -75,7 +75,7 @@ arise and we wish to also consider that "missing" or "not available" or "NA".
To make detecting missing values easier (and across different array dtypes),
pandas provides the :func:`isna` and
:func:`notna` functions, which are also methods on
-``Series`` and ``DataFrame`` objects:
+Series and DataFrame objects:
.. ipython:: python
@@ -170,9 +170,8 @@ The descriptive statistics and computational methods discussed in the
account for missing data. For example:
* When summing data, NA (missing) values will be treated as zero.
-* If the data are all NA, the result will be NA.
-* Methods like **cumsum** and **cumprod** ignore NA values, but preserve them
- in the resulting arrays.
+* If the data are all NA, the result will be 0.
+* Cumulative methods like :meth:`~DataFrame.cumsum` and :meth:`~DataFrame.cumprod` ignore NA values by default, but preserve them in the resulting arrays. To override this behaviour and include NA values, use ``skipna=False``.
.. ipython:: python
@@ -180,6 +179,7 @@ account for missing data. For example:
df['one'].sum()
df.mean(1)
df.cumsum()
+ df.cumsum(skipna=False)
.. _missing_data.numeric_sum:
@@ -189,33 +189,24 @@ Sum/Prod of Empties/Nans
.. warning::
- This behavior is now standard as of v0.21.0; previously sum/prod would give different
- results if the ``bottleneck`` package was installed.
- See the :ref:`v0.21.0 whatsnew <whatsnew_0210.api_breaking.bottleneck>`.
+ This behavior is now standard as of v0.22.0 and is consistent with the default in ``numpy``; previously sum/prod of all-NA or empty Series/DataFrames would return NaN.
+ See :ref:`v0.22.0 whatsnew <whatsnew_0220>` for more.
-With ``sum`` or ``prod`` on an empty or all-``NaN`` ``Series``, or columns of a ``DataFrame``, the result will be all-``NaN``.
-
-.. ipython:: python
-
- s = pd.Series([np.nan])
-
- s.sum()
-
-Summing over an empty ``Series`` will return ``NaN``:
+The sum of an empty or all-NA Series or column of a DataFrame is 0.
.. ipython:: python
+ pd.Series([np.nan]).sum()
+
pd.Series([]).sum()
-.. warning::
+The product of an empty or all-NA Series or column of a DataFrame is 1.
- These behaviors differ from the default in ``numpy`` where an empty sum returns zero.
-
- .. ipython:: python
-
- np.nansum(np.array([np.nan]))
- np.nansum(np.array([]))
+.. ipython:: python
+ pd.Series([np.nan]).prod()
+
+ pd.Series([]).prod()
NA values in GroupBy
@@ -242,7 +233,7 @@ with missing data.
Filling missing values: fillna
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-The **fillna** function can "fill in" NA values with non-NA data in a couple
+:meth:`~DataFrame.fillna` can "fill in" NA values with non-NA data in a couple
of ways, which we illustrate:
**Replace NA with a scalar value**
@@ -292,8 +283,8 @@ To remind you, these are the available filling methods:
With time series data, using pad/ffill is extremely common so that the "last
known value" is available at every time point.
-The ``ffill()`` function is equivalent to ``fillna(method='ffill')``
-and ``bfill()`` is equivalent to ``fillna(method='bfill')``
+:meth:`~DataFrame.ffill` is equivalent to ``fillna(method='ffill')``
+and :meth:`~DataFrame.bfill` is equivalent to ``fillna(method='bfill')``
.. _missing_data.PandasObject:
@@ -329,7 +320,7 @@ Dropping axis labels with missing data: dropna
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
You may wish to simply exclude labels from a data set which refer to missing
-data. To do this, use the :meth:`~DataFrame.dropna` method:
+data. To do this, use :meth:`~DataFrame.dropna`:
.. ipython:: python
:suppress:
@@ -344,7 +335,7 @@ data. To do this, use the :meth:`~DataFrame.dropna` method:
df.dropna(axis=1)
df['one'].dropna()
-An equivalent :meth:`~Series.dropna` method is available for Series.
+An equivalent :meth:`~Series.dropna` is available for Series.
DataFrame.dropna has considerably more options than Series.dropna, which can be
examined :ref:`in the API <api.dataframe.missing>`.
@@ -357,7 +348,7 @@ Interpolation
The ``limit_area`` keyword argument was added.
-Both Series and DataFrame objects have an :meth:`~DataFrame.interpolate` method
+Both Series and DataFrame objects have :meth:`~DataFrame.interpolate`
that, by default, performs linear interpolation at missing datapoints.
.. ipython:: python
@@ -486,7 +477,7 @@ at the new values.
Interpolation Limits
^^^^^^^^^^^^^^^^^^^^
-Like other pandas fill methods, ``interpolate`` accepts a ``limit`` keyword
+Like other pandas fill methods, :meth:`~DataFrame.interpolate` accepts a ``limit`` keyword
argument. Use this argument to limit the number of consecutive ``NaN`` values
filled since the last valid observation:
@@ -533,8 +524,9 @@ the ``limit_area`` parameter restricts filling to either inside or outside value
Replacing Generic Values
~~~~~~~~~~~~~~~~~~~~~~~~
-Often times we want to replace arbitrary values with other values. The
-``replace`` method in Series/DataFrame provides an efficient yet
+Often times we want to replace arbitrary values with other values.
+
+:meth:`~Series.replace` in Series and :meth:`~DataFrame.replace` in DataFrame provides an efficient yet
flexible way to perform such replacements.
For a Series, you can replace a single value or a list of values by another
@@ -674,7 +666,7 @@ want to use a regular expression.
Numeric Replacement
~~~~~~~~~~~~~~~~~~~
-The :meth:`~DataFrame.replace` method is similar to :meth:`~DataFrame.fillna`.
+:meth:`~DataFrame.replace` is similar to :meth:`~DataFrame.fillna`.
.. ipython:: python
@@ -763,7 +755,7 @@ contains NAs, an exception will be generated:
reindexed = s.reindex(list(range(8))).fillna(0)
reindexed[crit]
-However, these can be filled in using **fillna** and it will work fine:
+However, these can be filled in using :meth:`~DataFrame.fillna` and it will work fine:
.. ipython:: python
| Checklist for the pandas documentation sprint (ignore this if you are doing
an unrelated PR):
- [x] PR title is "DOC: update the <your-function-or-method> docstring"
- [ ] The validation script passes: `scripts/validate_docstrings.py <your-function-or-method>`
- [x] The PEP8 style check passes: `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] The html version looks good: `python doc/make.py --single <your-function-or-method>`
- [ ] It has been proofread on language by another sprint participant
Please include the output of the validation script below between the "```" ticks:
```
# paste output of "scripts/validate_docstrings.py <your-function-or-method>" here
# between the "```" (remove this comment, but keep the "```")
```
If the validation script still gives errors, but you think there is a good reason
to deviate in this case (and there are certainly such cases), please state this
explicitly.
Checklist for other PRs (remove this part if you are doing a PR for the pandas documentation sprint):
- [x] closes #20398
- [ ] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry | https://api.github.com/repos/pandas-dev/pandas/pulls/20424 | 2018-03-20T18:55:20Z | 2018-03-29T16:23:37Z | 2018-03-29T16:23:37Z | 2018-03-29T16:23:43Z |
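The doc change above states the post-0.22.0 rules: the sum of an empty or all-NA series is 0 and the product is 1 (the identity elements), while cumulative methods skip NA by default. A plain-Python sketch of those rules (an illustration of the documented semantics, not pandas' implementation):

```python
import math

def nan_sum(xs):
    # NA values are treated as zero, so an empty or all-NA input sums to 0.
    return sum(x for x in xs if not math.isnan(x))

def nan_prod(xs):
    # The empty product is 1, so an empty or all-NA input yields 1.
    out = 1.0
    for x in xs:
        if not math.isnan(x):
            out *= x
    return out

nan = float("nan")
assert nan_sum([nan]) == 0 and nan_sum([]) == 0
assert nan_prod([nan]) == 1.0 and nan_prod([]) == 1.0
assert nan_sum([1.0, nan, 2.0]) == 3.0
```

`skipna=False` would instead propagate NaN through the reduction, which is the override the updated `cumsum` example demonstrates.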
BUG: dropna() on single column timezone-aware values (#13407) | diff --git a/doc/source/whatsnew/v0.23.0.txt b/doc/source/whatsnew/v0.23.0.txt
index 9159c03edee2e..58132fd9395c9 100644
--- a/doc/source/whatsnew/v0.23.0.txt
+++ b/doc/source/whatsnew/v0.23.0.txt
@@ -903,11 +903,12 @@ Timezones
- :func:`Timestamp.replace` will now handle Daylight Savings transitions gracefully (:issue:`18319`)
- Bug in tz-aware :class:`DatetimeIndex` where addition/subtraction with a :class:`TimedeltaIndex` or array with ``dtype='timedelta64[ns]'`` was incorrect (:issue:`17558`)
- Bug in :func:`DatetimeIndex.insert` where inserting ``NaT`` into a timezone-aware index incorrectly raised (:issue:`16357`)
-- Bug in the :class:`DataFrame` constructor, where tz-aware Datetimeindex and a given column name will result in an empty ``DataFrame`` (:issue:`19157`)
+- Bug in :class:`DataFrame` constructor, where tz-aware Datetimeindex and a given column name will result in an empty ``DataFrame`` (:issue:`19157`)
- Bug in :func:`Timestamp.tz_localize` where localizing a timestamp near the minimum or maximum valid values could overflow and return a timestamp with an incorrect nanosecond value (:issue:`12677`)
- Bug when iterating over :class:`DatetimeIndex` that was localized with fixed timezone offset that rounded nanosecond precision to microseconds (:issue:`19603`)
- Bug in :func:`DataFrame.diff` that raised an ``IndexError`` with tz-aware values (:issue:`18578`)
- Bug in :func:`melt` that converted tz-aware dtypes to tz-naive (:issue:`15785`)
+- Bug in :func:`DataFrame.count` that raised a ``ValueError`` if :func:`DataFrame.dropna` was invoked on single column timezone-aware values (:issue:`13407`)
Offsets
^^^^^^^
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 9b09c87689762..b1217bec69655 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -6578,7 +6578,9 @@ def count(self, axis=0, level=None, numeric_only=False):
# column frames with an extension array
result = notna(frame).sum(axis=axis)
else:
- counts = notna(frame.values).sum(axis=axis)
+ # GH13407
+ series_counts = notna(frame).sum(axis=axis)
+ counts = series_counts.values
result = Series(counts, index=frame._get_agg_axis(axis))
return result.astype('int64')
diff --git a/pandas/tests/frame/test_missing.py b/pandas/tests/frame/test_missing.py
index 2e4e8b9582cf6..668eae21c664f 100644
--- a/pandas/tests/frame/test_missing.py
+++ b/pandas/tests/frame/test_missing.py
@@ -8,6 +8,9 @@
from numpy import nan, random
import numpy as np
+import datetime
+import dateutil
+
from pandas.compat import lrange
from pandas import (DataFrame, Series, Timestamp,
date_range, Categorical)
@@ -183,6 +186,26 @@ def test_dropna_multiple_axes(self):
inp.dropna(how='all', axis=(0, 1), inplace=True)
assert_frame_equal(inp, expected)
+ def test_dropna_tz_aware_datetime(self):
+ # GH13407
+ df = DataFrame()
+ dt1 = datetime.datetime(2015, 1, 1,
+ tzinfo=dateutil.tz.tzutc())
+ dt2 = datetime.datetime(2015, 2, 2,
+ tzinfo=dateutil.tz.tzutc())
+ df['Time'] = [dt1]
+ result = df.dropna(axis=0)
+ expected = DataFrame({'Time': [dt1]})
+ assert_frame_equal(result, expected)
+
+ # Ex2
+ df = DataFrame({'Time': [dt1, None, np.nan, dt2]})
+ result = df.dropna(axis=0)
+ expected = DataFrame([dt1, dt2],
+ columns=['Time'],
+ index=[0, 3])
+ assert_frame_equal(result, expected)
+
def test_fillna(self):
tf = self.tsframe
tf.loc[tf.index[:5], 'A'] = nan
| - [x] closes #13407
- [x] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
As mentioned in the title of #13407, ``DataFrame.values`` is not a 2D array when constructed from timezone-aware datetimes. Hence, ``notna(frame.values)`` raises ``ValueError: 'axis' entry is out of bounds`` because ``dropna`` ends up counting over a 1D array.
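A minimal sketch of the scenario (the data and column name here are illustrative only, not part of the patch):

```python
import numpy as np
import pandas as pd

# Single timezone-aware datetime column with missing entries (GH13407).
idx = pd.to_datetime(['2015-01-01', None, '2015-02-02'])
df = pd.DataFrame({'Time': idx.tz_localize('UTC')})

# The patched count() sums notna() over the frame itself rather than
# frame.values, so the 2-D column structure is preserved no matter how
# .values happens to be shaped for tz-aware data:
series_counts = pd.notna(df).sum(axis=0)
counts = series_counts.values

print(counts)                   # non-missing count per column
print(df.dropna(axis=0).shape)  # row with NaT removed
```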
I mainly include the example mentioned by the original author of #13407. Let me know if further test examples are required for this issue. Thanks.
Edit: I added an additional test example to demonstrate the dropping of np.nan and None from a Series of timezone-aware datetimes. | https://api.github.com/repos/pandas-dev/pandas/pulls/20422 | 2018-03-20T16:47:40Z | 2018-03-25T23:55:22Z | 2018-03-25T23:55:21Z | 2018-03-25T23:55:45Z
DOC: Updating operators docstrings | diff --git a/pandas/core/ops.py b/pandas/core/ops.py
index 2a21593fab8f5..850c4dd7c45e7 100644
--- a/pandas/core/ops.py
+++ b/pandas/core/ops.py
@@ -235,7 +235,7 @@ def _gen_eval_kwargs(name):
{}
>>> _gen_eval_kwargs("rtruediv")
- {"reversed": True, "truediv": True}
+ {'reversed': True, 'truediv': True}
"""
kwargs = {}
@@ -384,124 +384,21 @@ def _get_op_name(op, special):
# -----------------------------------------------------------------------------
# Docstring Generation and Templates
-_add_example_FRAME = """
->>> a = pd.DataFrame([1, 1, 1, np.nan], index=['a', 'b', 'c', 'd'],
-... columns=['one'])
->>> a
- one
-a 1.0
-b 1.0
-c 1.0
-d NaN
->>> b = pd.DataFrame(dict(one=[1, np.nan, 1, np.nan],
-... two=[np.nan, 2, np.nan, 2]),
-... index=['a', 'b', 'd', 'e'])
->>> b
- one two
-a 1.0 NaN
-b NaN 2.0
-d 1.0 NaN
-e NaN 2.0
->>> a.add(b, fill_value=0)
- one two
-a 2.0 NaN
-b 1.0 2.0
-c 1.0 NaN
-d 1.0 NaN
-e NaN 2.0
-"""
-
-_sub_example_FRAME = """
->>> a = pd.DataFrame([2, 1, 1, np.nan], index=['a', 'b', 'c', 'd'],
-... columns=['one'])
->>> a
- one
-a 2.0
-b 1.0
-c 1.0
-d NaN
->>> b = pd.DataFrame(dict(one=[1, np.nan, 1, np.nan],
-... two=[3, 2, np.nan, 2]),
-... index=['a', 'b', 'd', 'e'])
->>> b
- one two
-a 1.0 3.0
-b NaN 2.0
-d 1.0 NaN
-e NaN 2.0
->>> a.sub(b, fill_value=0)
- one two
-a 1.0 -3.0
-b 1.0 -2.0
-c 1.0 NaN
-d -1.0 NaN
-e NaN -2.0
-"""
-
-_mod_example_FRAME = """
-**Using a scalar argument**
-
->>> df = pd.DataFrame([2, 4, np.nan, 6.2], index=["a", "b", "c", "d"],
-... columns=['one'])
->>> df
- one
-a 2.0
-b 4.0
-c NaN
-d 6.2
->>> df.mod(3, fill_value=-1)
- one
-a 2.0
-b 1.0
-c 2.0
-d 0.2
-
-**Using a DataFrame argument**
-
->>> df = pd.DataFrame(dict(one=[np.nan, 2, 3, 14], two=[np.nan, 1, 1, 3]),
-... index=['a', 'b', 'c', 'd'])
->>> df
- one two
-a NaN NaN
-b 2.0 1.0
-c 3.0 1.0
-d 14.0 3.0
->>> other = pd.DataFrame(dict(one=[np.nan, np.nan, 6, np.nan],
-... three=[np.nan, 10, np.nan, -7]),
-... index=['a', 'b', 'd', 'e'])
->>> other
- one three
-a NaN NaN
-b NaN 10.0
-d 6.0 NaN
-e NaN -7.0
->>> df.mod(other, fill_value=3)
- one three two
-a NaN NaN NaN
-b 2.0 3.0 1.0
-c 0.0 NaN 1.0
-d 2.0 NaN 0.0
-e NaN -4.0 NaN
-"""
-
_op_descriptions = {
# Arithmetic Operators
'add': {'op': '+',
'desc': 'Addition',
- 'reverse': 'radd',
- 'df_examples': _add_example_FRAME},
+ 'reverse': 'radd'},
'sub': {'op': '-',
'desc': 'Subtraction',
- 'reverse': 'rsub',
- 'df_examples': _sub_example_FRAME},
+ 'reverse': 'rsub'},
'mul': {'op': '*',
'desc': 'Multiplication',
'reverse': 'rmul',
'df_examples': None},
'mod': {'op': '%',
'desc': 'Modulo',
- 'reverse': 'rmod',
- 'df_examples': _mod_example_FRAME},
+ 'reverse': 'rmod'},
'pow': {'op': '**',
'desc': 'Exponential power',
'reverse': 'rpow',
@@ -522,28 +419,23 @@ def _get_op_name(op, special):
# Comparison Operators
'eq': {'op': '==',
'desc': 'Equal to',
- 'reverse': None,
- 'df_examples': None},
+ 'reverse': None},
'ne': {'op': '!=',
'desc': 'Not equal to',
- 'reverse': None,
- 'df_examples': None},
+ 'reverse': None},
'lt': {'op': '<',
'desc': 'Less than',
- 'reverse': None,
- 'df_examples': None},
+ 'reverse': None},
'le': {'op': '<=',
'desc': 'Less than or equal to',
- 'reverse': None,
- 'df_examples': None},
+ 'reverse': None},
'gt': {'op': '>',
'desc': 'Greater than',
- 'reverse': None,
- 'df_examples': None},
+ 'reverse': None},
'ge': {'op': '>=',
'desc': 'Greater than or equal to',
- 'reverse': None,
- 'df_examples': None}}
+ 'reverse': None}
+}
_op_names = list(_op_descriptions.keys())
for key in _op_names:
@@ -635,38 +527,295 @@ def _get_op_name(op, special):
_flex_doc_FRAME = """
{desc} of dataframe and other, element-wise (binary operator `{op_name}`).
-Equivalent to ``{equiv}``, but with support to substitute a fill_value for
-missing data in one of the inputs.
+Equivalent to ``{equiv}``, but with support to substitute a fill_value
+for missing data in one of the inputs. With reverse version, `{reverse}`.
+
+Among flexible wrappers (`add`, `sub`, `mul`, `div`, `mod`, `pow`) to
+arithmetic operators: `+`, `-`, `*`, `/`, `//`, `%`, `**`.
Parameters
----------
-other : Series, DataFrame, or constant
-axis : {{0, 1, 'index', 'columns'}}
- For Series input, axis to match Series index on
-level : int or name
+other : scalar, sequence, Series, or DataFrame
+ Any single or multiple element data structure, or list-like object.
+axis : {{0 or 'index', 1 or 'columns'}}
+ Whether to compare by the index (0 or 'index') or columns
+ (1 or 'columns'). For Series input, axis to match Series index on.
+level : int or label
Broadcast across a level, matching Index values on the
- passed MultiIndex level
-fill_value : None or float value, default None
+ passed MultiIndex level.
+fill_value : float or None, default None
Fill existing missing (NaN) values, and any new element needed for
successful DataFrame alignment, with this value before computation.
If data in both corresponding DataFrame locations is missing
- the result will be missing
+ the result will be missing.
Notes
-----
-Mismatched indices will be unioned together
+Mismatched indices will be unioned together.
Returns
-------
-result : DataFrame
+DataFrame
+ Result of the arithmetic operation.
+
+See Also
+--------
+DataFrame.add : Add DataFrames.
+DataFrame.sub : Subtract DataFrames.
+DataFrame.mul : Multiply DataFrames.
+DataFrame.div : Divide DataFrames (float division).
+DataFrame.truediv : Divide DataFrames (float division).
+DataFrame.floordiv : Divide DataFrames (integer division).
+DataFrame.mod : Calculate modulo (remainder after division).
+DataFrame.pow : Calculate exponential power.
Examples
--------
-{df_examples}
+>>> df = pd.DataFrame({{'angles': [0, 3, 4],
+... 'degrees': [360, 180, 360]}},
+... index=['circle', 'triangle', 'rectangle'])
+>>> df
+ angles degrees
+circle 0 360
+triangle 3 180
+rectangle 4 360
+
+Add a scalar with the operator version, which returns the same
+results.
+
+>>> df + 1
+ angles degrees
+circle 1 361
+triangle 4 181
+rectangle 5 361
+
+>>> df.add(1)
+ angles degrees
+circle 1 361
+triangle 4 181
+rectangle 5 361
+
+Divide by a constant, with the reverse version.
+
+>>> df.div(10)
+ angles degrees
+circle 0.0 36.0
+triangle 0.3 18.0
+rectangle 0.4 36.0
+
+>>> df.rdiv(10)
+ angles degrees
+circle inf 0.027778
+triangle 3.333333 0.055556
+rectangle 2.500000 0.027778
+
+Subtract a list and a Series by axis, with the operator version.
+
+>>> df - [1, 2]
+ angles degrees
+circle -1 358
+triangle 2 178
+rectangle 3 358
+
+>>> df.sub([1, 2], axis='columns')
+ angles degrees
+circle -1 358
+triangle 2 178
+rectangle 3 358
+
+>>> df.sub(pd.Series([1, 1, 1], index=['circle', 'triangle', 'rectangle']),
+... axis='index')
+ angles degrees
+circle -1 359
+triangle 2 179
+rectangle 3 359
+
+Multiply by a DataFrame of a different shape, with the operator version.
+
+>>> other = pd.DataFrame({{'angles': [0, 3, 4]}},
+... index=['circle', 'triangle', 'rectangle'])
+>>> other
+ angles
+circle 0
+triangle 3
+rectangle 4
+
+>>> df * other
+ angles degrees
+circle 0 NaN
+triangle 9 NaN
+rectangle 16 NaN
+
+>>> df.mul(other, fill_value=0)
+ angles degrees
+circle 0 0.0
+triangle 9 0.0
+rectangle 16 0.0
+
+Divide by a MultiIndex by level.
+
+>>> df_multindex = pd.DataFrame({{'angles': [0, 3, 4, 4, 5, 6],
+... 'degrees': [360, 180, 360, 360, 540, 720]}},
+... index=[['A', 'A', 'A', 'B', 'B', 'B'],
+... ['circle', 'triangle', 'rectangle',
+... 'square', 'pentagon', 'hexagon']])
+>>> df_multindex
+ angles degrees
+A circle 0 360
+ triangle 3 180
+ rectangle 4 360
+B square 4 360
+ pentagon 5 540
+ hexagon 6 720
+
+>>> df.div(df_multindex, level=1, fill_value=0)
+ angles degrees
+A circle NaN 1.0
+ triangle 1.0 1.0
+ rectangle 1.0 1.0
+B square 0.0 0.0
+ pentagon 0.0 0.0
+ hexagon 0.0 0.0
+"""
+
+_flex_comp_doc_FRAME = """
+{desc} of dataframe and other, element-wise (binary operator `{op_name}`).
+
+Among flexible wrappers (`eq`, `ne`, `le`, `lt`, `ge`, `gt`) to comparison
+operators.
+
+Equivalent to `==`, `!=`, `<=`, `<`, `>=`, `>` with support to choose axis
+(rows or columns) and level for comparison.
+
+Parameters
+----------
+other : scalar, sequence, Series, or DataFrame
+ Any single or multiple element data structure, or list-like object.
+axis : {{0 or 'index', 1 or 'columns'}}, default 'columns'
+ Whether to compare by the index (0 or 'index') or columns
+ (1 or 'columns').
+level : int or label
+ Broadcast across a level, matching Index values on the passed
+ MultiIndex level.
+
+Returns
+-------
+DataFrame of bool
+ Result of the comparison.
See Also
--------
-DataFrame.{reverse}
+DataFrame.eq : Compare DataFrames for equality elementwise.
+DataFrame.ne : Compare DataFrames for inequality elementwise.
+DataFrame.le : Compare DataFrames for less than inequality
+ or equality elementwise.
+DataFrame.lt : Compare DataFrames for strictly less than
+ inequality elementwise.
+DataFrame.ge : Compare DataFrames for greater than inequality
+ or equality elementwise.
+DataFrame.gt : Compare DataFrames for strictly greater than
+ inequality elementwise.
+
+Notes
+-----
+Mismatched indices will be unioned together.
+`NaN` values are considered different (i.e. `NaN` != `NaN`).
+
+Examples
+--------
+>>> df = pd.DataFrame({{'cost': [250, 150, 100],
+... 'revenue': [100, 250, 300]}},
+... index=['A', 'B', 'C'])
+>>> df
+ cost revenue
+A 250 100
+B 150 250
+C 100 300
+
+Compare to a scalar, where the operator version returns the same
+results.
+
+>>> df == 100
+ cost revenue
+A False True
+B False False
+C True False
+
+>>> df.eq(100)
+ cost revenue
+A False True
+B False False
+C True False
+
+Compare to a list and a Series by axis, and with the operator version.
+As shown, for a list the default axis is 'index', while for a Series
+it is 'columns'.
+
+>>> df != [100, 250, 300]
+ cost revenue
+A True False
+B True False
+C True False
+
+>>> df.ne([100, 250, 300], axis='index')
+ cost revenue
+A True False
+B True False
+C True False
+
+>>> df != pd.Series([100, 250, 300])
+ cost revenue 0 1 2
+A True True True True True
+B True True True True True
+C True True True True True
+
+>>> df.ne(pd.Series([100, 250, 300]), axis='columns')
+ cost revenue 0 1 2
+A True True True True True
+B True True True True True
+C True True True True True
+
+Compare to a DataFrame of different shape.
+
+>>> other = pd.DataFrame({{'revenue': [300, 250, 100, 150]}},
+... index=['A', 'B', 'C', 'D'])
+>>> other
+ revenue
+A 300
+B 250
+C 100
+D 150
+
+>>> df.gt(other)
+ cost revenue
+A False False
+B False False
+C False True
+D False False
+
+Compare to a MultiIndex by level.
+
+>>> df_multindex = pd.DataFrame({{'cost': [250, 150, 100, 150, 300, 220],
+... 'revenue': [100, 250, 300, 200, 175, 225]}},
+... index=[['Q1', 'Q1', 'Q1', 'Q2', 'Q2', 'Q2'],
+... ['A', 'B', 'C', 'A', 'B', 'C']])
+>>> df_multindex
+ cost revenue
+Q1 A 250 100
+ B 150 250
+ C 100 300
+Q2 A 150 200
+ B 300 175
+ C 220 225
+
+>>> df.le(df_multindex, level=1)
+ cost revenue
+Q1 A True True
+ B True True
+ C True True
+Q2 A False True
+ B True False
+ C True False
"""
_flex_doc_PANEL = """
@@ -734,8 +883,7 @@ def _make_flex_doc(op_name, typ):
elif typ == 'dataframe':
base_doc = _flex_doc_FRAME
doc = base_doc.format(desc=op_desc['desc'], op_name=op_name,
- equiv=equiv, reverse=op_desc['reverse'],
- df_examples=op_desc['df_examples'])
+ equiv=equiv, reverse=op_desc['reverse'])
elif typ == 'panel':
base_doc = _flex_doc_PANEL
doc = base_doc.format(desc=op_desc['desc'], op_name=op_name,
@@ -1894,8 +2042,10 @@ def na_op(x, y):
result = mask_cmp_op(x, y, op, (np.ndarray, ABCSeries))
return result
- @Appender('Wrapper for flexible comparison methods {name}'
- .format(name=op_name))
+ doc = _flex_comp_doc_FRAME.format(op_name=op_name,
+ desc=_op_descriptions[op_name]['desc'])
+
+ @Appender(doc)
def f(self, other, axis=default_axis, level=None):
other = _align_method_FRAME(self, other, axis)
| Checklist for the pandas documentation sprint (ignore this if you are doing
an unrelated PR):
- [X] PR title is "DOC: update the <your-function-or-method> docstring"
- [ ] The validation script passes: `scripts/validate_docstrings.py <your-function-or-method>`
- [X] The PEP8 style check passes: `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] The html version looks good: `python doc/make.py --single <your-function-or-method>`
- [X] It has been proofread on language by another sprint participant
Please include the output of the validation script below between the "```" ticks:
```
################################################################################
####################### Docstring (pandas.DataFrame.gt) #######################
################################################################################
Wrapper for flexible comparison methods gt
Examples
--------
>>> df1 = pd.DataFrame({'num1': range(1,6),
... 'num2': range(2,11,2),
... 'num3': range(1,20,4)})
>>> df1
num1 num2 num3
0 1 2 1
1 2 4 5
2 3 6 9
3 4 8 13
4 5 10 17
>>> df2 = pd.DataFrame({'num1': range(6,11),
... 'num2': range(1,10,2),
... 'num3': range(1,20,4)})
>>> df2
num1 num2 num3
0 6 1 1
1 7 3 5
2 8 5 9
3 9 7 13
4 10 9 17
>>> df1.gt(df2)
num1 num2 num3
0 False True False
1 False True False
2 False True False
3 False True False
4 False True False
################################################################################
################################## Validation ##################################
################################################################################
Errors found:
Closing quotes should be placed in the line after the last text in the docstring (do not close the quotes in the same line as the text, or leave a blank line between the last text and the quotes)
Use only one blank line to separate sections or paragraphs
Summary does not end with dot
No extended summary found
Errors in parameters section
Parameters {'other', 'level', 'axis'} not documented
No returns section found
See Also section not found
################################################################################
####################### Docstring (pandas.DataFrame.ge) #######################
################################################################################
Wrapper for flexible comparison methods ge
Examples
--------
>>> df1 = pd.DataFrame({'num1': range(1,6),
... 'num2': range(2,11,2),
... 'num3': range(1,20,4)})
>>> df1
num1 num2 num3
0 1 2 1
1 2 4 5
2 3 6 9
3 4 8 13
4 5 10 17
>>> df2 = pd.DataFrame({'num1': range(6,11),
... 'num2': range(1,10,2),
... 'num3': range(1,20,4)})
>>> df2
num1 num2 num3
0 6 1 1
1 7 3 5
2 8 5 9
3 9 7 13
4 10 9 17
>>> df1.ge(df2)
num1 num2 num3
0 False True True
1 False True True
2 False True True
3 False True True
4 False True True
################################################################################
################################## Validation ##################################
################################################################################
Errors found:
Closing quotes should be placed in the line after the last text in the docstring (do not close the quotes in the same line as the text, or leave a blank line between the last text and the quotes)
Use only one blank line to separate sections or paragraphs
Summary does not end with dot
No extended summary found
Errors in parameters section
Parameters {'other', 'axis', 'level'} not documented
No returns section found
See Also section not found
```
If the validation script still gives errors, but you think there is a good reason to deviate in this case (and there are certainly such cases), please state this explicitly.
This PR follows up on previous attempt: https://github.com/pandas-dev/pandas/pull/20291. Please see discussion.
Specifically, the comparison methods in ops.py, including `.gt()` and `.ge()`, do not have a standard docstring like other methods; they use a generalized wrapper summary line. This pull request adds docstrings with examples by building them conditionally. General changes include adding a global string variable, `_flex_comp_doc_FRAME`, and `if` logic to the method `_flex_comp_method_FRAME`.
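For context, a small sketch of what the flexible comparison wrappers add over the bare operators (this mirrors the toy frame used in the new docstring; nothing here is part of the patch itself):

```python
import pandas as pd

df = pd.DataFrame({'cost': [250, 150, 100], 'revenue': [100, 250, 300]},
                  index=['A', 'B', 'C'])

# The operator and the wrapper agree for a scalar comparison...
same = (df > 100).equals(df.gt(100))

# ...but only the wrapper lets you choose the alignment axis for a list:
by_index = df.ne([100, 250, 300], axis='index')

print(same)
print(by_index)
```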
| https://api.github.com/repos/pandas-dev/pandas/pulls/20415 | 2018-03-20T02:01:36Z | 2018-12-02T23:20:06Z | 2018-12-02T23:20:06Z | 2018-12-03T02:43:11Z |
DOC : update the pandas.Period.weekday docstring | diff --git a/pandas/_libs/tslibs/period.pyx b/pandas/_libs/tslibs/period.pyx
index 5895c1242ac71..59db371833957 100644
--- a/pandas/_libs/tslibs/period.pyx
+++ b/pandas/_libs/tslibs/period.pyx
@@ -54,7 +54,6 @@ from offsets import _Tick
cdef bint PY2 = str == bytes
-
cdef extern from "period_helper.h":
int FR_ANN
int FR_QTR
@@ -1380,34 +1379,49 @@ cdef class _Period(object):
@property
def dayofweek(self):
"""
- Return the day of the week.
+ Day of the week the period lies in, with Monday=0 and Sunday=6.
+
+ If the period frequency is lower than daily (e.g. hourly), and the
+ period spans over multiple days, the day at the start of the period is
+ used.
- This attribute returns the day of the week on which the particular
- date for the given period occurs depending on the frequency with
- Monday=0, Sunday=6.
+ If the frequency is higher than daily (e.g. monthly), the last day
+ of the period is used.
Returns
-------
- Int
- Range from 0 to 6 (included).
+ int
+ Day of the week.
- See also
+ See Also
--------
- Period.dayofyear : Return the day of the year.
- Period.daysinmonth : Return the number of days in that month.
+ Period.dayofweek : Day of the week the period lies in.
+ Period.weekday : Alias of Period.dayofweek.
+ Period.day : Day of the month.
+ Period.dayofyear : Day of the year.
Examples
--------
- >>> period1 = pd.Period('2012-1-1 19:00', freq='H')
- >>> period1
- Period('2012-01-01 19:00', 'H')
- >>> period1.dayofweek
+ >>> per = pd.Period('2017-12-31 22:00', 'H')
+ >>> per.dayofweek
+ 6
+
+ For periods that span over multiple days, the day at the beginning of
+ the period is returned.
+
+ >>> per = pd.Period('2017-12-31 22:00', '4H')
+ >>> per.dayofweek
+ 6
+ >>> per.start_time.dayofweek
6
- >>> period2 = pd.Period('2013-1-9 11:00', freq='H')
- >>> period2
- Period('2013-01-09 11:00', 'H')
- >>> period2.dayofweek
+ For periods with a frequency higher than days, the last day of the
+ period is returned.
+
+ >>> per = pd.Period('2018-01', 'M')
+ >>> per.dayofweek
+ 2
+ >>> per.end_time.dayofweek
2
"""
base, mult = get_freq_code(self.freq)
@@ -1415,6 +1429,55 @@ cdef class _Period(object):
@property
def weekday(self):
+ """
+ Day of the week the period lies in, with Monday=0 and Sunday=6.
+
+ If the period frequency is lower than daily (e.g. hourly), and the
+ period spans over multiple days, the day at the start of the period is
+ used.
+
+ If the frequency is higher than daily (e.g. monthly), the last day
+ of the period is used.
+
+ Returns
+ -------
+ int
+ Day of the week.
+
+ See Also
+ --------
+ Period.dayofweek : Day of the week the period lies in.
+ Period.weekday : Alias of Period.dayofweek.
+ Period.day : Day of the month.
+ Period.dayofyear : Day of the year.
+
+ Examples
+ --------
+ >>> per = pd.Period('2017-12-31 22:00', 'H')
+ >>> per.dayofweek
+ 6
+
+ For periods that span over multiple days, the day at the beginning of
+ the period is returned.
+
+ >>> per = pd.Period('2017-12-31 22:00', '4H')
+ >>> per.dayofweek
+ 6
+ >>> per.start_time.dayofweek
+ 6
+
+ For periods with a frequency higher than days, the last day of the
+ period is returned.
+
+ >>> per = pd.Period('2018-01', 'M')
+ >>> per.dayofweek
+ 2
+ >>> per.end_time.dayofweek
+ 2
+ """
+ # Docstring is a duplicate from dayofweek. Reusing docstrings with
+ # Appender doesn't work for properties in Cython files, and setting
+ # the __doc__ attribute is also not possible.
return self.dayofweek
@property
| # Period.weekday
################################################################################
###################### Docstring (pandas.Period.weekday) ######################
################################################################################
Get day of the week that a Period falls on.
Returns
-------
int
See Also
--------
Period.dayofweek : Get the day component of the Period.
Period.week : Get the week of the year on the given Period.
Examples
--------
>>> p = pd.Period("2018-03-11", "H")
>>> p.weekday
6
>>> p = pd.Period("2018-02-01", "D")
>>> p.weekday
3
>>> p = pd.Period("2018-01-06", "D")
>>> p.weekday
5
################################################################################
################################## Validation ##################################
################################################################################ | https://api.github.com/repos/pandas-dev/pandas/pulls/20413 | 2018-03-19T18:47:20Z | 2018-07-18T09:52:37Z | 2018-07-18T09:52:37Z | 2018-07-19T01:17:32Z |
BUG: ExtensionArray.fillna for scalar values | diff --git a/pandas/core/arrays/base.py b/pandas/core/arrays/base.py
index fa565aa802faf..55a72585acbe5 100644
--- a/pandas/core/arrays/base.py
+++ b/pandas/core/arrays/base.py
@@ -261,7 +261,7 @@ def fillna(self, value=None, method=None, limit=None):
-------
filled : ExtensionArray with NA/NaN filled
"""
- from pandas.api.types import is_scalar
+ from pandas.api.types import is_array_like
from pandas.util._validators import validate_fillna_kwargs
from pandas.core.missing import pad_1d, backfill_1d
@@ -269,7 +269,7 @@ def fillna(self, value=None, method=None, limit=None):
mask = self.isna()
- if not is_scalar(value):
+ if is_array_like(value):
if len(value) != len(self):
raise ValueError("Length of 'value' does not match. Got ({}) "
" expected {}".format(len(value), len(self)))
diff --git a/pandas/tests/extension/base/base.py b/pandas/tests/extension/base/base.py
index d29587e635ebd..beb7948f2c14b 100644
--- a/pandas/tests/extension/base/base.py
+++ b/pandas/tests/extension/base/base.py
@@ -4,3 +4,6 @@
class BaseExtensionTests(object):
assert_series_equal = staticmethod(tm.assert_series_equal)
assert_frame_equal = staticmethod(tm.assert_frame_equal)
+ assert_extension_array_equal = staticmethod(
+ tm.assert_extension_array_equal
+ )
diff --git a/pandas/tests/extension/base/missing.py b/pandas/tests/extension/base/missing.py
index bf404ac01bf2b..d3360eb199a89 100644
--- a/pandas/tests/extension/base/missing.py
+++ b/pandas/tests/extension/base/missing.py
@@ -47,6 +47,12 @@ def test_dropna_frame(self, data_missing):
expected = df.iloc[:0]
self.assert_frame_equal(result, expected)
+ def test_fillna_scalar(self, data_missing):
+ valid = data_missing[1]
+ result = data_missing.fillna(valid)
+        expected = data_missing.take([1, 1])
+ self.assert_extension_array_equal(result, expected)
+
def test_fillna_limit_pad(self, data_missing):
arr = data_missing.take([1, 0, 0, 0, 1])
result = pd.Series(arr).fillna(method='ffill', limit=2)
diff --git a/pandas/util/testing.py b/pandas/util/testing.py
index a223e4d8fd23e..a1e9dcff38ec7 100644
--- a/pandas/util/testing.py
+++ b/pandas/util/testing.py
@@ -20,6 +20,7 @@
import numpy as np
import pandas as pd
+from pandas.core.arrays import ExtensionArray
from pandas.core.dtypes.missing import array_equivalent
from pandas.core.dtypes.common import (
is_datetimelike_v_numeric,
@@ -1083,6 +1084,32 @@ def _raise(left, right, err_msg):
return True
+def assert_extension_array_equal(left, right):
+ """Check that left and right ExtensionArrays are equal.
+
+ Parameters
+ ----------
+ left, right : ExtensionArray
+ The two arrays to compare
+
+ Notes
+ -----
+ Missing values are checked separately from valid values.
+ A mask of missing values is computed for each and checked to match.
+ The remaining all-valid values are cast to object dtype and checked.
+ """
+ assert isinstance(left, ExtensionArray)
+ assert left.dtype == right.dtype
+ left_na = left.isna()
+ right_na = right.isna()
+ assert_numpy_array_equal(left_na, right_na)
+
+ left_valid = left[~left_na].astype(object)
+ right_valid = right[~right_na].astype(object)
+
+ assert_numpy_array_equal(left_valid, right_valid)
+
+
# This could be refactored to use the NDFrame.equals method
def assert_series_equal(left, right, check_dtype=True,
check_index_type='equiv',
| Closes #20411 | https://api.github.com/repos/pandas-dev/pandas/pulls/20412 | 2018-03-19T18:05:32Z | 2018-03-19T23:47:59Z | 2018-03-19T23:47:59Z | 2018-05-02T13:10:08Z |
REF: Mock all S3 Tests | diff --git a/asv_bench/benchmarks/io/csv.py b/asv_bench/benchmarks/io/csv.py
index 3b7fdc6e2d78c..0f5d07f9fac55 100644
--- a/asv_bench/benchmarks/io/csv.py
+++ b/asv_bench/benchmarks/io/csv.py
@@ -118,38 +118,6 @@ def time_read_uint64_na_values(self):
na_values=self.na_values)
-class S3(object):
- # Make sure that we can read part of a file from S3 without
- # needing to download the entire thing. Use the timeit.default_timer
- # to measure wall time instead of CPU time -- we want to see
- # how long it takes to download the data.
- timer = timeit.default_timer
- params = ([None, "gzip", "bz2"], ["python", "c"])
- param_names = ["compression", "engine"]
-
- def setup(self, compression, engine):
- if compression == "bz2" and engine == "c" and PY2:
- # The Python 2 C parser can't read bz2 from open files.
- raise NotImplementedError
- try:
- import s3fs # noqa
- except ImportError:
- # Skip these benchmarks if `boto` is not installed.
- raise NotImplementedError
-
- ext = ""
- if compression == "gzip":
- ext = ".gz"
- elif compression == "bz2":
- ext = ".bz2"
- self.big_fname = "s3://pandas-test/large_random.csv" + ext
-
- def time_read_csv_10_rows(self, compression, engine):
- # Read a small number of rows from a huge (100,000 x 50) table.
- read_csv(self.big_fname, nrows=10, compression=compression,
- engine=engine)
-
-
class ReadCSVThousands(BaseIO):
goal_time = 0.2
diff --git a/pandas/tests/io/parser/test_network.py b/pandas/tests/io/parser/test_network.py
index f16338fda6245..fdf45f307e953 100644
--- a/pandas/tests/io/parser/test_network.py
+++ b/pandas/tests/io/parser/test_network.py
@@ -4,13 +4,16 @@
Tests parsers ability to read and parse non-local files
and hence require a network connection to be read.
"""
+import logging
+
import pytest
+import numpy as np
import pandas.util.testing as tm
import pandas.util._test_decorators as td
from pandas import DataFrame
from pandas.io.parsers import read_csv, read_table
-from pandas.compat import BytesIO
+from pandas.compat import BytesIO, StringIO
@pytest.mark.network
@@ -45,9 +48,9 @@ def check_compressed_urls(salaries_table, compression, extension, mode,
tm.assert_frame_equal(url_table, salaries_table)
+@pytest.mark.usefixtures("s3_resource")
class TestS3(object):
- @tm.network
def test_parse_public_s3_bucket(self):
pytest.importorskip('s3fs')
# more of an integration test due to the not-public contents portion
@@ -66,7 +69,6 @@ def test_parse_public_s3_bucket(self):
assert not df.empty
tm.assert_frame_equal(read_csv(tm.get_data_path('tips.csv')), df)
- @tm.network
def test_parse_public_s3n_bucket(self):
# Read from AWS s3 as "s3n" URL
@@ -76,7 +78,6 @@ def test_parse_public_s3n_bucket(self):
tm.assert_frame_equal(read_csv(
tm.get_data_path('tips.csv')).iloc[:10], df)
- @tm.network
def test_parse_public_s3a_bucket(self):
# Read from AWS s3 as "s3a" URL
df = read_csv('s3a://pandas-test/tips.csv', nrows=10)
@@ -85,7 +86,6 @@ def test_parse_public_s3a_bucket(self):
tm.assert_frame_equal(read_csv(
tm.get_data_path('tips.csv')).iloc[:10], df)
- @tm.network
def test_parse_public_s3_bucket_nrows(self):
for ext, comp in [('', None), ('.gz', 'gzip'), ('.bz2', 'bz2')]:
df = read_csv('s3://pandas-test/tips.csv' +
@@ -95,7 +95,6 @@ def test_parse_public_s3_bucket_nrows(self):
tm.assert_frame_equal(read_csv(
tm.get_data_path('tips.csv')).iloc[:10], df)
- @tm.network
def test_parse_public_s3_bucket_chunked(self):
# Read with a chunksize
chunksize = 5
@@ -114,7 +113,6 @@ def test_parse_public_s3_bucket_chunked(self):
chunksize * i_chunk: chunksize * (i_chunk + 1)]
tm.assert_frame_equal(true_df, df)
- @tm.network
def test_parse_public_s3_bucket_chunked_python(self):
# Read with a chunksize using the Python parser
chunksize = 5
@@ -133,7 +131,6 @@ def test_parse_public_s3_bucket_chunked_python(self):
chunksize * i_chunk: chunksize * (i_chunk + 1)]
tm.assert_frame_equal(true_df, df)
- @tm.network
def test_parse_public_s3_bucket_python(self):
for ext, comp in [('', None), ('.gz', 'gzip'), ('.bz2', 'bz2')]:
df = read_csv('s3://pandas-test/tips.csv' + ext, engine='python',
@@ -143,7 +140,6 @@ def test_parse_public_s3_bucket_python(self):
tm.assert_frame_equal(read_csv(
tm.get_data_path('tips.csv')), df)
- @tm.network
def test_infer_s3_compression(self):
for ext in ['', '.gz', '.bz2']:
df = read_csv('s3://pandas-test/tips.csv' + ext,
@@ -153,7 +149,6 @@ def test_infer_s3_compression(self):
tm.assert_frame_equal(read_csv(
tm.get_data_path('tips.csv')), df)
- @tm.network
def test_parse_public_s3_bucket_nrows_python(self):
for ext, comp in [('', None), ('.gz', 'gzip'), ('.bz2', 'bz2')]:
df = read_csv('s3://pandas-test/tips.csv' + ext, engine='python',
@@ -163,7 +158,6 @@ def test_parse_public_s3_bucket_nrows_python(self):
tm.assert_frame_equal(read_csv(
tm.get_data_path('tips.csv')).iloc[:10], df)
- @tm.network
def test_s3_fails(self):
with pytest.raises(IOError):
read_csv('s3://nyqpug/asdf.csv')
@@ -188,3 +182,22 @@ def test_read_csv_handles_boto_s3_object(self,
expected = read_csv(tips_file)
tm.assert_frame_equal(result, expected)
+
+ def test_read_csv_chunked_download(self, s3_resource, caplog):
+ # 8 MB, S3FS uses 5MB chunks
+ df = DataFrame(np.random.randn(100000, 4), columns=list('abcd'))
+ buf = BytesIO()
+ str_buf = StringIO()
+
+ df.to_csv(str_buf)
+
+ buf = BytesIO(str_buf.getvalue().encode('utf-8'))
+
+ s3_resource.Bucket("pandas-test").put_object(
+ Key="large-file.csv",
+ Body=buf)
+
+ with caplog.at_level(logging.DEBUG, logger='s3fs.core'):
+ read_csv("s3://pandas-test/large-file.csv", nrows=5)
+ # log of fetch_range (start, stop)
+ assert ((0, 5505024) in set(x.args[-2:] for x in caplog.records))
| Closes https://github.com/pandas-dev/pandas/issues/19825 | https://api.github.com/repos/pandas-dev/pandas/pulls/20409 | 2018-03-19T12:19:52Z | 2018-03-23T19:24:39Z | 2018-03-23T19:24:39Z | 2018-03-23T19:24:46Z |
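The new `test_read_csv_chunked_download` above relies on pytest's `caplog` collecting DEBUG records from the `s3fs.core` logger and then checking the `(start, stop)` arguments of its fetch-range log call. That capture pattern can be sketched with only the standard library — the logger name and byte range below are illustrative stand-ins, not `s3fs` internals:

```python
import logging

class ListHandler(logging.Handler):
    """Minimal stand-in for pytest's caplog: collect records from one logger."""
    def __init__(self):
        super().__init__(level=logging.DEBUG)
        self.records = []

    def emit(self, record):
        self.records.append(record)

logger = logging.getLogger("demo.core")
logger.setLevel(logging.DEBUG)
handler = ListHandler()
logger.addHandler(handler)

# Comparable to s3fs logging a fetched byte range while reading the first rows
logger.debug("fetch_range %s %s", 0, 5505024)

# Same shape as the assertion in the diff: check the last two log args
assert (0, 5505024) in {rec.args[-2:] for rec in handler.records}
```

The test in the diff applies the same idea: if only `nrows=5` is requested, a single chunk-sized range should appear in the records rather than the whole 8 MB file.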
Bug: Allow np.timedelta64 objects to index TimedeltaIndex | diff --git a/doc/source/whatsnew/v0.23.0.txt b/doc/source/whatsnew/v0.23.0.txt
index cfe28edd175b6..eda3fba043be9 100644
--- a/doc/source/whatsnew/v0.23.0.txt
+++ b/doc/source/whatsnew/v0.23.0.txt
@@ -887,7 +887,8 @@ Timedelta
- Bug in :func:`Timedelta.total_seconds()` causing precision errors i.e. ``Timedelta('30S').total_seconds()==30.000000000000004`` (:issue:`19458`)
- Bug in :func: `Timedelta.__rmod__` where operating with a ``numpy.timedelta64`` returned a ``timedelta64`` object instead of a ``Timedelta`` (:issue:`19820`)
- Multiplication of :class:`TimedeltaIndex` by ``TimedeltaIndex`` will now raise ``TypeError`` instead of raising ``ValueError`` in cases of length mis-match (:issue`19333`)
--
+- Bug in indexing a :class:`TimedeltaIndex` with a ``np.timedelta64`` object which was raising a ``TypeError`` (:issue:`20393`)
+
Timezones
^^^^^^^^^
diff --git a/pandas/core/indexes/timedeltas.py b/pandas/core/indexes/timedeltas.py
index b5a08fc0168e4..9757d775201cc 100644
--- a/pandas/core/indexes/timedeltas.py
+++ b/pandas/core/indexes/timedeltas.py
@@ -829,7 +829,8 @@ def _maybe_cast_slice_bound(self, label, side, kind):
else:
return (lbound + to_offset(parsed.resolution) -
Timedelta(1, 'ns'))
- elif is_integer(label) or is_float(label):
+ elif ((is_integer(label) or is_float(label)) and
+ not is_timedelta64_dtype(label)):
self._invalid_indexer('slice', label)
return label
diff --git a/pandas/tests/indexing/test_timedelta.py b/pandas/tests/indexing/test_timedelta.py
index 3ad3b771b2ab2..48ea49119356d 100644
--- a/pandas/tests/indexing/test_timedelta.py
+++ b/pandas/tests/indexing/test_timedelta.py
@@ -68,3 +68,15 @@ def test_listlike_setitem(self, value):
series.iloc[0] = value
expected = pd.Series([pd.NaT, 1, 2], dtype='timedelta64[ns]')
tm.assert_series_equal(series, expected)
+
+ @pytest.mark.parametrize('start,stop, expected_slice', [
+ [np.timedelta64(0, 'ns'), None, slice(0, 11)],
+ [np.timedelta64(1, 'D'), np.timedelta64(6, 'D'), slice(1, 7)],
+ [None, np.timedelta64(4, 'D'), slice(0, 5)]])
+ def test_numpy_timedelta_scalar_indexing(self, start, stop,
+ expected_slice):
+ # GH 20393
+ s = pd.Series(range(11), pd.timedelta_range('0 days', '10 days'))
+ result = s.loc[slice(start, stop)]
+ expected = s.iloc[expected_slice]
+ tm.assert_series_equal(result, expected)
| - [x] closes #20393
- [x] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
`is_integer(label)` would catch `np.timedelta64` objects due to https://github.com/numpy/numpy/issues/10685 (I believe), so ensure that the label is also not a timedelta64 dtype | https://api.github.com/repos/pandas-dev/pandas/pulls/20408 | 2018-03-19T06:14:32Z | 2018-03-19T10:07:35Z | 2018-03-19T10:07:35Z | 2018-03-20T00:38:13Z |
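The root cause can be reproduced with numpy alone. The helper below is illustrative (not a pandas internal), but it shows why a plain integer check matches `np.timedelta64` scalars and why the patch adds the extra dtype guard:

```python
import numpy as np

def looks_like_integer(label):
    # Per numpy/numpy#10685, np.timedelta64 subclasses np.signedinteger,
    # so a naive integer check matches timedelta64 scalars too.
    return isinstance(label, (int, np.integer))

td = np.timedelta64(1, "D")
print(looks_like_integer(td))   # True: the slice bound would be wrongly rejected
print(td.dtype.kind == "m")     # True: a timedelta64 dtype check identifies it
```

This is why the fix pairs `is_integer(label) or is_float(label)` with `not is_timedelta64_dtype(label)` before calling `_invalid_indexer`.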
DOC: Update the examples to DataFrame.mod docstring | diff --git a/pandas/core/ops.py b/pandas/core/ops.py
index fa6d88648cc63..1ddf77cf71a11 100644
--- a/pandas/core/ops.py
+++ b/pandas/core/ops.py
@@ -401,6 +401,52 @@ def _get_op_name(op, special):
e NaN -2.0
"""
+_mod_example_FRAME = """
+**Using a scalar argument**
+
+>>> df = pd.DataFrame([2, 4, np.nan, 6.2], index=["a", "b", "c", "d"],
+... columns=['one'])
+>>> df
+ one
+a 2.0
+b 4.0
+c NaN
+d 6.2
+>>> df.mod(3, fill_value=-1)
+ one
+a 2.0
+b 1.0
+c 2.0
+d 0.2
+
+**Using a DataFrame argument**
+
+>>> df = pd.DataFrame(dict(one=[np.nan, 2, 3, 14], two=[np.nan, 1, 1, 3]),
+... index=['a', 'b', 'c', 'd'])
+>>> df
+ one two
+a NaN NaN
+b 2.0 1.0
+c 3.0 1.0
+d 14.0 3.0
+>>> other = pd.DataFrame(dict(one=[np.nan, np.nan, 6, np.nan],
+... three=[np.nan, 10, np.nan, -7]),
+... index=['a', 'b', 'd', 'e'])
+>>> other
+ one three
+a NaN NaN
+b NaN 10.0
+d 6.0 NaN
+e NaN -7.0
+>>> df.mod(other, fill_value=3)
+ one three two
+a NaN NaN NaN
+b 2.0 3.0 1.0
+c 0.0 NaN 1.0
+d 2.0 NaN 0.0
+e NaN -4.0 NaN
+"""
+
_op_descriptions = {
# Arithmetic Operators
'add': {'op': '+',
@@ -418,7 +464,7 @@ def _get_op_name(op, special):
'mod': {'op': '%',
'desc': 'Modulo',
'reverse': 'rmod',
- 'df_examples': None},
+ 'df_examples': _mod_example_FRAME},
'pow': {'op': '**',
'desc': 'Exponential power',
'reverse': 'rpow',
| Checklist for the pandas documentation sprint (ignore this if you are doing
an unrelated PR):
- [X] PR title is "DOC: update the <your-function-or-method> docstring"
- [X] The validation script passes: `scripts/validate_docstrings.py <your-function-or-method>`
- [X] The PEP8 style check passes: `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [X] The html version looks good: `python doc/make.py --single <your-function-or-method>`
- [X] It has been proofread on language by another sprint participant
Please include the output of the validation script below between the "```" ticks:
```
################################################################################
####################### Docstring (pandas.DataFrame.mod) #######################
################################################################################
Modulo of dataframe and other, element-wise (binary operator `mod`).
Equivalent to ``dataframe % other``, but with support to substitute a fill_value for
missing data in one of the inputs.
Parameters
----------
other : Series, DataFrame, or constant
axis : {0, 1, 'index', 'columns'}
For Series input, axis to match Series index on
level : int or name
Broadcast across a level, matching Index values on the
passed MultiIndex level
fill_value : None or float value, default None
Fill existing missing (NaN) values, and any new element needed for
successful DataFrame alignment, with this value before computation.
If data in both corresponding DataFrame locations is missing
the result will be missing
Notes
-----
Mismatched indices will be unioned together
Returns
-------
result : DataFrame
Examples
--------
>>> a = pd.DataFrame([2, 4, np.nan, 6.2], index=["a","b","c","d"],
... columns=['one'])
>>> a
one
a 2.0
b 4.0
c NaN
d 6.2
>>> a.mod(3, fill_value=-1)
one
a 2.0
b 1.0
c 2.0
d 0.2
>>> b = pd.DataFrame(dict(one=[np.nan, 2, 3, 14], two=[np.nan, 1, 1, 3]),
... index=['a', 'b', 'c', 'd'])
>>> b
one two
a NaN NaN
b 2.0 1.0
c 3.0 1.0
d 14.0 3.0
>>> c = pd.DataFrame(dict(one=[np.nan, np.nan, 6, np.nan],
... three=[np.nan, 10, np.nan, -7]),
... index=['a', 'b', 'd', 'e'])
>>> c
one three
a NaN NaN
b NaN 10.0
d 6.0 NaN
e NaN -7.0
>>> b.mod(c, fill_value=3)
one three two
a NaN NaN NaN
b 2.0 3.0 1.0
c 0.0 NaN 1.0
d 2.0 NaN 0.0
e NaN -4.0 NaN
See also
--------
DataFrame.rmod
################################################################################
################################## Validation ##################################
################################################################################
Errors found:
Use only one blank line to separate sections or paragraphs
Errors in parameters section
Parameter "other" has no description
Parameter "axis" description should finish with "."
Parameter "level" description should finish with "."
Parameter "fill_value" description should finish with "."
Missing description for See Also "DataFrame.rmod" reference
```
If the validation script still gives errors, but you think there is a good reason
to deviate in this case (and there are certainly such cases), please state this
explicitly.
I only added the examples. These errors were present before I added the examples.
| https://api.github.com/repos/pandas-dev/pandas/pulls/20406 | 2018-03-19T04:01:51Z | 2018-07-10T06:22:28Z | 2018-07-10T06:22:28Z | 2018-07-10T06:22:37Z |
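The `fill_value` semantics shown in the docstring examples — substitute the fill for a value missing on one side before taking the modulo, and stay missing only when both sides are missing — can be sketched in plain Python. The helper below is illustrative only, not pandas code:

```python
import math

def mod_with_fill(a, b, fill_value=None):
    # Align on the union of keys; substitute fill_value for a one-sided
    # missing value, and keep NaN only when both sides are missing.
    out = {}
    for key in sorted(set(a) | set(b)):
        x, y = a.get(key), b.get(key)
        if x is None and y is None:
            out[key] = math.nan
        else:
            x = fill_value if x is None else x
            y = fill_value if y is None else y
            out[key] = x % y
    return out

# Echoes the b/c/d rows of the 'one' column in b.mod(c, fill_value=3) above
print(mod_with_fill({"b": 2, "c": 3, "d": 14}, {"b": 10, "d": 6}, fill_value=3))
# {'b': 2, 'c': 0, 'd': 2}
```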
Cythonized GroupBy Quantile | diff --git a/asv_bench/benchmarks/groupby.py b/asv_bench/benchmarks/groupby.py
index 59e43ee22afde..27d279bb90a31 100644
--- a/asv_bench/benchmarks/groupby.py
+++ b/asv_bench/benchmarks/groupby.py
@@ -14,7 +14,7 @@
method_blacklist = {
'object': {'median', 'prod', 'sem', 'cumsum', 'sum', 'cummin', 'mean',
'max', 'skew', 'cumprod', 'cummax', 'rank', 'pct_change', 'min',
- 'var', 'mad', 'describe', 'std'},
+ 'var', 'mad', 'describe', 'std', 'quantile'},
'datetime': {'median', 'prod', 'sem', 'cumsum', 'sum', 'mean', 'skew',
'cumprod', 'cummax', 'pct_change', 'var', 'mad', 'describe',
'std'}
@@ -316,8 +316,9 @@ class GroupByMethods(object):
['all', 'any', 'bfill', 'count', 'cumcount', 'cummax', 'cummin',
'cumprod', 'cumsum', 'describe', 'ffill', 'first', 'head',
'last', 'mad', 'max', 'min', 'median', 'mean', 'nunique',
- 'pct_change', 'prod', 'rank', 'sem', 'shift', 'size', 'skew',
- 'std', 'sum', 'tail', 'unique', 'value_counts', 'var'],
+ 'pct_change', 'prod', 'quantile', 'rank', 'sem', 'shift',
+ 'size', 'skew', 'std', 'sum', 'tail', 'unique', 'value_counts',
+ 'var'],
['direct', 'transformation']]
def setup(self, dtype, method, application):
diff --git a/doc/source/whatsnew/v0.25.0.rst b/doc/source/whatsnew/v0.25.0.rst
index 170e7f14da397..e7389514a7ac5 100644
--- a/doc/source/whatsnew/v0.25.0.rst
+++ b/doc/source/whatsnew/v0.25.0.rst
@@ -98,6 +98,7 @@ Performance Improvements
- `DataFrame.to_stata()` is now faster when outputting data with any string or non-native endian columns (:issue:`25045`)
- Improved performance of :meth:`Series.searchsorted`. The speedup is especially large when the dtype is
int8/int16/int32 and the searched key is within the integer bounds for the dtype (:issue:`22034`)
+- Improved performance of :meth:`pandas.core.groupby.GroupBy.quantile` (:issue:`20405`)
.. _whatsnew_0250.bug_fixes:
diff --git a/pandas/_libs/groupby.pxd b/pandas/_libs/groupby.pxd
new file mode 100644
index 0000000000000..70ad8a62871e9
--- /dev/null
+++ b/pandas/_libs/groupby.pxd
@@ -0,0 +1,6 @@
+cdef enum InterpolationEnumType:
+ INTERPOLATION_LINEAR,
+ INTERPOLATION_LOWER,
+ INTERPOLATION_HIGHER,
+ INTERPOLATION_NEAREST,
+ INTERPOLATION_MIDPOINT
diff --git a/pandas/_libs/groupby.pyx b/pandas/_libs/groupby.pyx
index e6b6e2c8a0055..71e25c3955a6d 100644
--- a/pandas/_libs/groupby.pyx
+++ b/pandas/_libs/groupby.pyx
@@ -644,5 +644,106 @@ def _group_ohlc(floating[:, :] out,
group_ohlc_float32 = _group_ohlc['float']
group_ohlc_float64 = _group_ohlc['double']
+
+@cython.boundscheck(False)
+@cython.wraparound(False)
+def group_quantile(ndarray[float64_t] out,
+ ndarray[int64_t] labels,
+ numeric[:] values,
+ ndarray[uint8_t] mask,
+ float64_t q,
+ object interpolation):
+ """
+ Calculate the quantile per group.
+
+ Parameters
+ ----------
+ out : ndarray
+ Array of aggregated values that will be written to.
+ labels : ndarray
+ Array containing the unique group labels.
+ values : ndarray
+ Array containing the values to apply the function against.
+ q : float
+ The quantile value to search for.
+
+ Notes
+ -----
+ Rather than explicitly returning a value, this function modifies the
+ provided `out` parameter.
+ """
+ cdef:
+ Py_ssize_t i, N=len(labels), ngroups, grp_sz, non_na_sz
+ Py_ssize_t grp_start=0, idx=0
+ int64_t lab
+ uint8_t interp
+ float64_t q_idx, frac, val, next_val
+ ndarray[int64_t] counts, non_na_counts, sort_arr
+
+ assert values.shape[0] == N
+ inter_methods = {
+ 'linear': INTERPOLATION_LINEAR,
+ 'lower': INTERPOLATION_LOWER,
+ 'higher': INTERPOLATION_HIGHER,
+ 'nearest': INTERPOLATION_NEAREST,
+ 'midpoint': INTERPOLATION_MIDPOINT,
+ }
+ interp = inter_methods[interpolation]
+
+ counts = np.zeros_like(out, dtype=np.int64)
+ non_na_counts = np.zeros_like(out, dtype=np.int64)
+ ngroups = len(counts)
+
+ # First figure out the size of every group
+ with nogil:
+ for i in range(N):
+ lab = labels[i]
+ counts[lab] += 1
+ if not mask[i]:
+ non_na_counts[lab] += 1
+
+ # Get an index of values sorted by labels and then values
+ order = (values, labels)
+ sort_arr = np.lexsort(order).astype(np.int64, copy=False)
+
+ with nogil:
+ for i in range(ngroups):
+ # Figure out how many group elements there are
+ grp_sz = counts[i]
+ non_na_sz = non_na_counts[i]
+
+ if non_na_sz == 0:
+ out[i] = NaN
+ else:
+ # Calculate where to retrieve the desired value
+ # Casting to int will intentionally truncate result
+ idx = grp_start + <int64_t>(q * <float64_t>(non_na_sz - 1))
+
+ val = values[sort_arr[idx]]
+ # If requested quantile falls evenly on a particular index
+ # then write that index's value out. Otherwise interpolate
+ q_idx = q * (non_na_sz - 1)
+ frac = q_idx % 1
+
+ if frac == 0.0 or interp == INTERPOLATION_LOWER:
+ out[i] = val
+ else:
+ next_val = values[sort_arr[idx + 1]]
+ if interp == INTERPOLATION_LINEAR:
+ out[i] = val + (next_val - val) * frac
+ elif interp == INTERPOLATION_HIGHER:
+ out[i] = next_val
+ elif interp == INTERPOLATION_MIDPOINT:
+ out[i] = (val + next_val) / 2.0
+ elif interp == INTERPOLATION_NEAREST:
+ if frac > .5 or (frac == .5 and q > .5): # Always OK?
+ out[i] = next_val
+ else:
+ out[i] = val
+
+ # Increment the index reference in sorted_arr for the next group
+ grp_start += grp_sz
+
+
# generated from template
include "groupby_helper.pxi"
diff --git a/pandas/core/groupby/groupby.py b/pandas/core/groupby/groupby.py
index c63bc5164e25b..c364f069bf53d 100644
--- a/pandas/core/groupby/groupby.py
+++ b/pandas/core/groupby/groupby.py
@@ -29,6 +29,8 @@ class providing the base-class of operations.
ensure_float, is_extension_array_dtype, is_numeric_dtype, is_scalar)
from pandas.core.dtypes.missing import isna, notna
+from pandas.api.types import (
+ is_datetime64_dtype, is_integer_dtype, is_object_dtype)
import pandas.core.algorithms as algorithms
from pandas.core.base import (
DataError, GroupByError, PandasObject, SelectionMixin, SpecificationError)
@@ -1024,15 +1026,17 @@ def _bool_agg(self, val_test, skipna):
"""
def objs_to_bool(vals):
- try:
- vals = vals.astype(np.bool)
- except ValueError: # for objects
+ # type: np.ndarray -> (np.ndarray, typing.Type)
+ if is_object_dtype(vals):
vals = np.array([bool(x) for x in vals])
+ else:
+ vals = vals.astype(np.bool)
- return vals.view(np.uint8)
+ return vals.view(np.uint8), np.bool
- def result_to_bool(result):
- return result.astype(np.bool, copy=False)
+ def result_to_bool(result, inference):
+ # type: (np.ndarray, typing.Type) -> np.ndarray
+ return result.astype(inference, copy=False)
return self._get_cythonized_result('group_any_all', self.grouper,
aggregate=True,
@@ -1688,6 +1692,75 @@ def nth(self, n, dropna=None):
return result
+ def quantile(self, q=0.5, interpolation='linear'):
+ """
+ Return group values at the given quantile, a la numpy.percentile.
+
+ Parameters
+ ----------
+ q : float or array-like, default 0.5 (50% quantile)
+ Value(s) between 0 and 1 providing the quantile(s) to compute.
+ interpolation : {'linear', 'lower', 'higher', 'midpoint', 'nearest'}
+ Method to use when the desired quantile falls between two points.
+
+ Returns
+ -------
+ Series or DataFrame
+ Return type determined by caller of GroupBy object.
+
+ See Also
+ --------
+ Series.quantile : Similar method for Series.
+ DataFrame.quantile : Similar method for DataFrame.
+ numpy.percentile : NumPy method to compute qth percentile.
+
+ Examples
+ --------
+ >>> df = pd.DataFrame([
+ ... ['a', 1], ['a', 2], ['a', 3],
+ ... ['b', 1], ['b', 3], ['b', 5]
+ ... ], columns=['key', 'val'])
+ >>> df.groupby('key').quantile()
+ val
+ key
+ a 2.0
+ b 3.0
+ """
+
+ def pre_processor(vals):
+ # type: np.ndarray -> (np.ndarray, Optional[typing.Type])
+ if is_object_dtype(vals):
+ raise TypeError("'quantile' cannot be performed against "
+ "'object' dtypes!")
+
+ inference = None
+ if is_integer_dtype(vals):
+ inference = np.int64
+ elif is_datetime64_dtype(vals):
+ inference = 'datetime64[ns]'
+ vals = vals.astype(np.float)
+
+ return vals, inference
+
+ def post_processor(vals, inference):
+ # type: (np.ndarray, Optional[typing.Type]) -> np.ndarray
+ if inference:
+ # Check for edge case
+ if not (is_integer_dtype(inference) and
+ interpolation in {'linear', 'midpoint'}):
+ vals = vals.astype(inference)
+
+ return vals
+
+ return self._get_cythonized_result('group_quantile', self.grouper,
+ aggregate=True,
+ needs_values=True,
+ needs_mask=True,
+ cython_dtype=np.float64,
+ pre_processing=pre_processor,
+ post_processing=post_processor,
+ q=q, interpolation=interpolation)
+
@Substitution(name='groupby')
def ngroup(self, ascending=True):
"""
@@ -1924,10 +1997,16 @@ def _get_cythonized_result(self, how, grouper, aggregate=False,
Whether the result of the Cython operation is an index of
values to be retrieved, instead of the actual values themselves
pre_processing : function, default None
- Function to be applied to `values` prior to passing to Cython
- Raises if `needs_values` is False
+ Function to be applied to `values` prior to passing to Cython.
+ Function should return a tuple where the first element is the
+ values to be passed to Cython and the second element is an optional
+ type which the values should be converted to after being returned
+ by the Cython operation. Raises if `needs_values` is False.
post_processing : function, default None
- Function to be applied to result of Cython function
+ Function to be applied to result of Cython function. Should accept
+ an array of values as the first argument and type inferences as its
+ second argument, i.e. the signature should be
+ (ndarray, typing.Type).
**kwargs : dict
Extra arguments to be passed back to Cython funcs
@@ -1963,10 +2042,12 @@ def _get_cythonized_result(self, how, grouper, aggregate=False,
result = np.zeros(result_sz, dtype=cython_dtype)
func = partial(base_func, result, labels)
+ inferences = None
+
if needs_values:
vals = obj.values
if pre_processing:
- vals = pre_processing(vals)
+ vals, inferences = pre_processing(vals)
func = partial(func, vals)
if needs_mask:
@@ -1982,7 +2063,7 @@ def _get_cythonized_result(self, how, grouper, aggregate=False,
result = algorithms.take_nd(obj.values, result)
if post_processing:
- result = post_processing(result)
+ result = post_processing(result, inferences)
output[name] = result
diff --git a/pandas/tests/groupby/test_function.py b/pandas/tests/groupby/test_function.py
index a884a37840f8a..bddbe8d474726 100644
--- a/pandas/tests/groupby/test_function.py
+++ b/pandas/tests/groupby/test_function.py
@@ -1060,6 +1060,55 @@ def test_size(df):
tm.assert_series_equal(df.groupby('A').size(), out)
+# quantile
+# --------------------------------
+@pytest.mark.parametrize("interpolation", [
+ "linear", "lower", "higher", "nearest", "midpoint"])
+@pytest.mark.parametrize("a_vals,b_vals", [
+ # Ints
+ ([1, 2, 3, 4, 5], [5, 4, 3, 2, 1]),
+ ([1, 2, 3, 4], [4, 3, 2, 1]),
+ ([1, 2, 3, 4, 5], [4, 3, 2, 1]),
+ # Floats
+ ([1., 2., 3., 4., 5.], [5., 4., 3., 2., 1.]),
+ # Missing data
+ ([1., np.nan, 3., np.nan, 5.], [5., np.nan, 3., np.nan, 1.]),
+ ([np.nan, 4., np.nan, 2., np.nan], [np.nan, 4., np.nan, 2., np.nan]),
+ # Timestamps
+ ([x for x in pd.date_range('1/1/18', freq='D', periods=5)],
+ [x for x in pd.date_range('1/1/18', freq='D', periods=5)][::-1]),
+ # All NA
+ ([np.nan] * 5, [np.nan] * 5),
+])
+@pytest.mark.parametrize('q', [0, .25, .5, .75, 1])
+def test_quantile(interpolation, a_vals, b_vals, q):
+ if interpolation == 'nearest' and q == 0.5 and b_vals == [4, 3, 2, 1]:
+ pytest.skip("Unclear numpy expectation for nearest result with "
+ "equidistant data")
+
+ a_expected = pd.Series(a_vals).quantile(q, interpolation=interpolation)
+ b_expected = pd.Series(b_vals).quantile(q, interpolation=interpolation)
+
+ df = DataFrame({
+ 'key': ['a'] * len(a_vals) + ['b'] * len(b_vals),
+ 'val': a_vals + b_vals})
+
+ expected = DataFrame([a_expected, b_expected], columns=['val'],
+ index=Index(['a', 'b'], name='key'))
+ result = df.groupby('key').quantile(q, interpolation=interpolation)
+
+ tm.assert_frame_equal(result, expected)
+
+
+def test_quantile_raises():
+ df = pd.DataFrame([
+ ['foo', 'a'], ['foo', 'b'], ['foo', 'c']], columns=['key', 'val'])
+
+ with pytest.raises(TypeError, match="cannot be performed against "
+ "'object' dtypes"):
+ df.groupby('key').quantile()
+
+
# pipe
# --------------------------------
diff --git a/pandas/tests/groupby/test_groupby.py b/pandas/tests/groupby/test_groupby.py
index 12a5d494648fc..6a11f0ae9b44a 100644
--- a/pandas/tests/groupby/test_groupby.py
+++ b/pandas/tests/groupby/test_groupby.py
@@ -208,7 +208,7 @@ def f(x, q=None, axis=0):
trans_expected = ts_grouped.transform(g)
assert_series_equal(apply_result, agg_expected)
- assert_series_equal(agg_result, agg_expected, check_names=False)
+ assert_series_equal(agg_result, agg_expected)
assert_series_equal(trans_result, trans_expected)
agg_result = ts_grouped.agg(f, q=80)
@@ -223,13 +223,13 @@ def f(x, q=None, axis=0):
agg_result = df_grouped.agg(np.percentile, 80, axis=0)
apply_result = df_grouped.apply(DataFrame.quantile, .8)
expected = df_grouped.quantile(.8)
- assert_frame_equal(apply_result, expected)
- assert_frame_equal(agg_result, expected, check_names=False)
+ assert_frame_equal(apply_result, expected, check_names=False)
+ assert_frame_equal(agg_result, expected)
agg_result = df_grouped.agg(f, q=80)
apply_result = df_grouped.apply(DataFrame.quantile, q=.8)
- assert_frame_equal(agg_result, expected, check_names=False)
- assert_frame_equal(apply_result, expected)
+ assert_frame_equal(agg_result, expected)
+ assert_frame_equal(apply_result, expected, check_names=False)
def test_len():
| I started this change with the intention of fully Cythonizing the GroupBy `describe` method, but along the way realized it was worth implementing a Cythonized GroupBy `quantile` function first. In theory we could concat together `count`, `mean`, `std`, `min`, `median`, `max`, and two `quantile` calls (one for 25% and the other for 75%) to get `describe`.
I decided to push this up as-is because this alone is a decent amount of work and I think it is worth getting reviewed before going further.
- [ ] closes #xxxx
- [X] tests added / passed
- [X] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
Relevant ASVs provided below:
```bash
before after ratio
[4a43815d] [d7aec3fa]
- 236±6ms 421±40μs 0.00 groupby.GroupByMethods.time_dtype_as_field('float', 'quantile', 'transformation')
- 239±4ms 380±20μs 0.00 groupby.GroupByMethods.time_dtype_as_field('float', 'quantile', 'direct')
- 237±1ms 352±9μs 0.00 groupby.GroupByMethods.time_dtype_as_field('int', 'quantile', 'direct')
- 240±2ms 349±10μs 0.00 groupby.GroupByMethods.time_dtype_as_field('int', 'quantile', 'transformation')
- 348±6ms 372±9μs 0.00 groupby.GroupByMethods.time_dtype_as_group('int', 'quantile', 'transformation')
- 270±4ms 276±20μs 0.00 groupby.GroupByMethods.time_dtype_as_field('datetime', 'quantile', 'transformation')
- 357±6ms 347±0.9μs 0.00 groupby.GroupByMethods.time_dtype_as_group('int', 'quantile', 'direct')
- 271±4ms 263±0.6μs 0.00 groupby.GroupByMethods.time_dtype_as_field('datetime', 'quantile', 'direct')
- 515±9ms 351±2μs 0.00 groupby.GroupByMethods.time_dtype_as_group('datetime', 'quantile', 'transformation')
- 522±6ms 350±5μs 0.00 groupby.GroupByMethods.time_dtype_as_group('datetime', 'quantile', 'direct')
- 539±6ms 354±6μs 0.00 groupby.GroupByMethods.time_dtype_as_group('float', 'quantile', 'direct')
- 548±1ms 354±7μs 0.00 groupby.GroupByMethods.time_dtype_as_group('float', 'quantile', 'transformation')
```
| https://api.github.com/repos/pandas-dev/pandas/pulls/20405 | 2018-03-19T03:55:24Z | 2019-02-28T13:36:58Z | 2019-02-28T13:36:58Z | 2019-03-05T19:16:40Z |
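Stripped of NaN handling and the non-linear interpolation modes, the kernel's approach — sort indices by `(label, value)` so each group is contiguous and ordered, then pick position `q * (n - 1)` within the group and linearly interpolate on the fractional part — can be sketched in pure Python:

```python
from collections import Counter

def group_quantile(labels, values, q):
    # Mirrors np.lexsort((values, labels)) in the Cython kernel: indices
    # ordered by label first, then by value within each label.
    order = sorted(range(len(values)), key=lambda i: (labels[i], values[i]))
    counts = Counter(labels)
    out, start = {}, 0
    for lab in sorted(counts):
        n = counts[lab]
        pos = q * (n - 1)
        idx, frac = int(pos), pos % 1   # truncate, keep fractional part
        val = values[order[start + idx]]
        if frac:                        # 'linear' interpolation only
            nxt = values[order[start + idx + 1]]
            val += (nxt - val) * frac
        out[lab] = val
        start += n
    return out

print(group_quantile(["a", "a", "a", "b", "b", "b"], [1, 2, 3, 1, 3, 5], 0.5))
# {'a': 2, 'b': 3} -- matching the docstring example (up to int vs. float)
```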
DEPR: deprecate get_ftype_counts (GH18243) | diff --git a/doc/source/whatsnew/v0.23.0.txt b/doc/source/whatsnew/v0.23.0.txt
index 151ab8456c1d7..dfb7f702f7d5c 100644
--- a/doc/source/whatsnew/v0.23.0.txt
+++ b/doc/source/whatsnew/v0.23.0.txt
@@ -741,6 +741,7 @@ Deprecations
- ``pandas.tseries.plotting.tsplot`` is deprecated. Use :func:`Series.plot` instead (:issue:`18627`)
- ``Index.summary()`` is deprecated and will be removed in a future version (:issue:`18217`)
+- ``NDFrame.get_ftype_counts()`` is deprecated and will be removed in a future version (:issue:`18243`)
.. _whatsnew_0230.prior_deprecations:
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 2adc289f98d94..7c9ec16304661 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -4677,6 +4677,8 @@ def get_ftype_counts(self):
"""
Return counts of unique ftypes in this object.
+ .. deprecated:: 0.23.0
+
This is useful for SparseDataFrame or for DataFrames containing
sparse arrays.
@@ -4707,6 +4709,10 @@ def get_ftype_counts(self):
object:dense 1
dtype: int64
"""
+ warnings.warn("get_ftype_counts is deprecated and will "
+ "be removed in a future version",
+ FutureWarning, stacklevel=2)
+
from pandas import Series
return Series(self._data.get_ftype_counts())
diff --git a/pandas/tests/series/test_dtypes.py b/pandas/tests/series/test_dtypes.py
index 56ff092dd0a27..dd1b623f0f7ff 100644
--- a/pandas/tests/series/test_dtypes.py
+++ b/pandas/tests/series/test_dtypes.py
@@ -64,8 +64,10 @@ def test_dtype(self):
assert self.ts.ftypes == 'float64:dense'
tm.assert_series_equal(self.ts.get_dtype_counts(),
Series(1, ['float64']))
- tm.assert_series_equal(self.ts.get_ftype_counts(),
- Series(1, ['float64:dense']))
+ # GH18243 - Assert .get_ftype_counts is deprecated
+ with tm.assert_produces_warning(FutureWarning):
+ tm.assert_series_equal(self.ts.get_ftype_counts(),
+ Series(1, ['float64:dense']))
@pytest.mark.parametrize("value", [np.nan, np.inf])
@pytest.mark.parametrize("dtype", [np.int32, np.int64])
| Deprecate NDFrame.get_ftype_counts()
- [X] closes #18243
- [X] tests added / passed
- [X] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [X] whatsnew entry | https://api.github.com/repos/pandas-dev/pandas/pulls/20404 | 2018-03-19T02:01:42Z | 2018-03-27T11:29:08Z | 2018-03-27T11:29:08Z | 2018-03-27T11:52:18Z |
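The warn-then-delegate pattern the diff adds (and which `tm.assert_produces_warning` checks in the test) can be exercised with only the standard library. The stub below is illustrative, not pandas code:

```python
import warnings

def get_ftype_counts_stub():
    # Deprecated entry point: emit a FutureWarning, then do the old work.
    warnings.warn("get_ftype_counts is deprecated and will "
                  "be removed in a future version",
                  FutureWarning, stacklevel=2)
    return {"float64:dense": 1}

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    result = get_ftype_counts_stub()

assert result == {"float64:dense": 1}
assert any(issubclass(w.category, FutureWarning) for w in caught)
print("deprecation warning raised as expected")
```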
Named objs in asserts messages. | diff --git a/pandas/_libs/testing.pyx b/pandas/_libs/testing.pyx
index ab7f3c3de2131..82d8c953759fc 100644
--- a/pandas/_libs/testing.pyx
+++ b/pandas/_libs/testing.pyx
@@ -3,6 +3,7 @@ import numpy as np
from pandas import compat
from pandas.core.dtypes.missing import isna, array_equivalent
from pandas.core.dtypes.common import is_dtype_equal
+from pandas.core.common import _maybe_make_list
cdef NUMERIC_TYPES = (
bool,
@@ -129,9 +130,9 @@ cpdef assert_almost_equal(a, b,
if a_is_ndarray and b_is_ndarray:
na, nb = a.size, b.size
if a.shape != b.shape:
- from pandas.util.testing import raise_assert_detail
+ from pandas.util.testing import raise_assert_detail, join_obj
raise_assert_detail(
- obj, '{0} shapes are different'.format(obj),
+ obj, '{0} shapes are different'.format(join_obj(obj)),
a.shape, b.shape)
if check_dtype and not is_dtype_equal(a, b):
@@ -147,7 +148,7 @@ cpdef assert_almost_equal(a, b,
na, nb = len(a), len(b)
if na != nb:
- from pandas.util.testing import raise_assert_detail
+ from pandas.util.testing import raise_assert_detail, join_obj
# if we have a small diff set, print it
if abs(na - nb) < 10:
@@ -155,8 +156,8 @@ cpdef assert_almost_equal(a, b,
else:
r = None
- raise_assert_detail(obj, '{0} length are different'.format(obj),
- na, nb, r)
+ raise_assert_detail(obj, '{0} length are different'.format(
+ join_obj(obj)), na, nb, r)
for i in xrange(len(a)):
try:
@@ -167,9 +168,10 @@ cpdef assert_almost_equal(a, b,
diff += 1
if is_unequal:
- from pandas.util.testing import raise_assert_detail
+ from pandas.util.testing import raise_assert_detail, join_obj
+
msg = '{0} values are different ({1} %)'.format(
- obj, np.round(diff * 100.0 / na, 5))
+ join_obj(obj), np.round(diff * 100.0 / na, 5))
raise_assert_detail(obj, msg, lobj, robj)
return True
diff --git a/pandas/tests/util/test_testing.py b/pandas/tests/util/test_testing.py
index 1c878604b11a2..580e383e99366 100644
--- a/pandas/tests/util/test_testing.py
+++ b/pandas/tests/util/test_testing.py
@@ -338,6 +338,19 @@ def test_assert_almost_equal_iterable_message(self):
with tm.assert_raises_regex(AssertionError, expected):
assert_almost_equal([1, 2], [1, 3])
+ def test_numpy_array_equal_message_named(self):
+ a = np.array([1, 2, 3])
+ b = np.array([0, 2, 3])
+
+ expected = """a and bee are different
+
+a and bee values are different \\(33.33333 %\\)
+\\[a\\]: \\[1, 2, 3\\]
+\\[bee\\]: \\[0, 2, 3\\]"""
+
+ with tm.assert_raises_regex(AssertionError, expected):
+ assert_numpy_array_equal(a, b, obj=('a', 'bee'))
+
class TestAssertIndexEqual(object):
@@ -356,9 +369,9 @@ def test_index_equal_message(self):
with tm.assert_raises_regex(AssertionError, expected):
assert_index_equal(idx1, idx2, exact=False)
- expected = """MultiIndex level \\[1\\] are different
+ expected = """Index MultiIndex level \\[1\\] are different
-MultiIndex level \\[1\\] values are different \\(25\\.0 %\\)
+Index MultiIndex level \\[1\\] values are different \\(25\\.0 %\\)
\\[left\\]: Int64Index\\(\\[2, 2, 3, 4\\], dtype='int64'\\)
\\[right\\]: Int64Index\\(\\[1, 2, 3, 4\\], dtype='int64'\\)"""
@@ -366,8 +379,10 @@ def test_index_equal_message(self):
('B', 3), ('B', 4)])
idx2 = pd.MultiIndex.from_tuples([('A', 1), ('A', 2),
('B', 3), ('B', 4)])
+
with tm.assert_raises_regex(AssertionError, expected):
assert_index_equal(idx1, idx2)
+
with tm.assert_raises_regex(AssertionError, expected):
assert_index_equal(idx1, idx2, check_exact=False)
@@ -440,9 +455,9 @@ def test_index_equal_message(self):
with tm.assert_raises_regex(AssertionError, expected):
assert_index_equal(idx1, idx2, check_less_precise=True)
- expected = """MultiIndex level \\[1\\] are different
+ expected = """Index MultiIndex level \\[1\\] are different
-MultiIndex level \\[1\\] values are different \\(25\\.0 %\\)
+Index MultiIndex level \\[1\\] values are different \\(25\\.0 %\\)
\\[left\\]: Int64Index\\(\\[2, 2, 3, 4\\], dtype='int64'\\)
\\[right\\]: Int64Index\\(\\[1, 2, 3, 4\\], dtype='int64'\\)"""
@@ -485,6 +500,19 @@ def test_index_equal_metadata_message(self):
with tm.assert_raises_regex(AssertionError, expected):
assert_index_equal(idx1, idx2)
+ def test_index_equal_message_named(self):
+
+ expected = """Index1 and Idx2 are different
+
+Index1 and Idx2 values are different \\(33.33333 %\\)
+\\[Index1\\]: Int64Index\\(\\[1, 2, 3\\], dtype='int64'\\)
+\\[Idx2\\]: Int64Index\\(\\[0, 2, 3\\], dtype='int64'\\)"""
+
+ idx1 = pd.Index([1, 2, 3])
+ idx2 = pd.Index([0, 2, 3])
+ with tm.assert_raises_regex(AssertionError, expected):
+ assert_index_equal(idx1, idx2, obj=('Index1', 'Idx2'))
+
class TestAssertSeriesEqual(object):
@@ -579,6 +607,18 @@ def test_series_equal_message(self):
assert_series_equal(pd.Series([1, 2, 3]), pd.Series([1, 2, 4]),
check_less_precise=True)
+ def test_series_equal_message_named(self):
+
+ expected = """The Holy Grail and The Life of Brian are different
+
+The Holy Grail and The Life of Brian values are different \\(33.33333 %\\)
+\\[The Holy Grail\\]: \\[1, 2, 3\\]
+\\[The Life of Brian\\]: \\[0, 2, 3\\]"""
+
+ with tm.assert_raises_regex(AssertionError, expected):
+ assert_series_equal(pd.Series([1, 2, 3]), pd.Series([0, 2, 3]),
+ obj=('The Holy Grail', 'The Life of Brian'))
+
class TestAssertFrameEqual(object):
@@ -673,11 +713,30 @@ def test_frame_equal_message(self):
assert_frame_equal(pd.DataFrame({'A': [1, 2, 3], 'B': [4, 5, 6]}),
pd.DataFrame({'A': [1, 2, 3], 'B': [4, 5, 7]}))
+ expected = """DataFrame\\.blocks\\.iloc\\[:, 1\\] are different
+
+DataFrame\\.blocks\\.iloc\\[:, 1\\] values are different \\(33\\.33333 %\\)
+\\[left\\]: \\[4, 5, 6\\]
+\\[right\\]: \\[4, 5, 7\\]"""
+
with tm.assert_raises_regex(AssertionError, expected):
assert_frame_equal(pd.DataFrame({'A': [1, 2, 3], 'B': [4, 5, 6]}),
pd.DataFrame({'A': [1, 2, 3], 'B': [4, 5, 7]}),
by_blocks=True)
+ def test_frame_equal_message_named(self):
+
+ expected = """Potato and Mushroom are different
+
+Potato and Mushroom shape mismatch
+\\[Potato\\]: \\(3, 2\\)
+\\[Mushroom\\]: \\(3, 1\\)"""
+
+ with tm.assert_raises_regex(AssertionError, expected):
+ assert_frame_equal(pd.DataFrame({'A': [1, 2, 3], 'B': [4, 5, 6]}),
+ pd.DataFrame({'A': [1, 2, 3]}),
+ obj=('Potato', 'Mushroom'))
+
class TestAssertCategoricalEqual(object):
@@ -716,6 +775,19 @@ def test_categorical_equal_message(self):
with tm.assert_raises_regex(AssertionError, expected):
tm.assert_categorical_equal(a, b)
+ def test_categorical_equal_message_named(self):
+
+ expected = """Blue.categories and Red.categories are different
+
+Blue.categories and Red.categories values are different \\(25.0 %\\)
+\\[Blue.categories\\]: Int64Index\\(\\[1, 2, 3, 4\\], dtype='int64'\\)
+\\[Red.categories\\]: Int64Index\\(\\[1, 2, 3, 5\\], dtype='int64'\\)"""
+
+ a = pd.Categorical([1, 2, 3, 4])
+ b = pd.Categorical([1, 2, 3, 5])
+ with tm.assert_raises_regex(AssertionError, expected):
+ tm.assert_categorical_equal(a, b, obj=('Blue', 'Red'))
+
class TestRNGContext(object):
diff --git a/pandas/util/testing.py b/pandas/util/testing.py
index a223e4d8fd23e..7a74940e0e955 100644
--- a/pandas/util/testing.py
+++ b/pandas/util/testing.py
@@ -47,7 +47,6 @@
from pandas._libs import testing as _testing
from pandas.io.common import urlopen
-
N = 30
K = 4
_RAISE_NETWORK_ERROR_DEFAULT = False
@@ -297,7 +296,6 @@ def _check_isinstance(left, right, cls):
def assert_dict_equal(left, right, compare_keys=True):
-
_check_isinstance(left, right, dict)
return _testing.assert_dict_equal(left, right, compare_keys=compare_keys)
@@ -464,7 +462,7 @@ def get_locales(prefix=None, normalize=True,
return _valid_locales(out_locales, normalize)
found = re.compile('{prefix}.*'.format(prefix=prefix)) \
- .findall('\n'.join(out_locales))
+ .findall('\n'.join(out_locales))
return _valid_locales(found, normalize)
@@ -548,6 +546,7 @@ def _valid_locales(locales, normalize):
return list(filter(_can_set_locale, map(normalizer, locales)))
+
# -----------------------------------------------------------------------------
# Stdout / stderr decorators
@@ -647,6 +646,7 @@ def wrapper(*args, **kwargs):
return wrapper
+
# -----------------------------------------------------------------------------
# Console debugging tools
@@ -676,6 +676,7 @@ def set_trace():
from pdb import Pdb as OldPdb
OldPdb().set_trace(sys._getframe().f_back)
+
# -----------------------------------------------------------------------------
# contextmanager to ensure the file cleanup
@@ -737,6 +738,7 @@ def get_data_path(f=''):
base_dir = os.path.abspath(os.path.dirname(filename))
return os.path.join(base_dir, 'data', f)
+
# -----------------------------------------------------------------------------
# Comparators
@@ -770,11 +772,13 @@ def assert_index_equal(left, right, exact='equiv', check_names=True,
Whether to compare number exactly.
check_categorical : bool, default True
Whether to compare internal Categorical exactly.
- obj : str, default 'Index'
- Specify object name being compared, internally used to show appropriate
+ obj : str or 2-length sequence (tuple, list, ...), default 'Index'
+ Specify object(s) name(s) being compared, used to show appropriate
assertion message
"""
+ # obj = com._maybe_make_list(obj)
+
def _check_types(l, r, obj='Index'):
if exact:
assert_class_equal(left, right, exact=exact, obj=obj)
@@ -801,14 +805,14 @@ def _get_ilevel_values(index, level):
# level comparison
if left.nlevels != right.nlevels:
- msg1 = '{obj} levels are different'.format(obj=obj)
+ msg1 = '{obj} levels are different'.format(obj=join_obj(obj))
msg2 = '{nlevels}, {left}'.format(nlevels=left.nlevels, left=left)
msg3 = '{nlevels}, {right}'.format(nlevels=right.nlevels, right=right)
raise_assert_detail(obj, msg1, msg2, msg3)
# length comparison
if len(left) != len(right):
- msg1 = '{obj} length are different'.format(obj=obj)
+ msg1 = '{obj} length are different'.format(obj=join_obj(obj))
msg2 = '{length}, {left}'.format(length=len(left), left=left)
msg3 = '{length}, {right}'.format(length=len(right), right=right)
raise_assert_detail(obj, msg1, msg2, msg3)
@@ -820,7 +824,8 @@ def _get_ilevel_values(index, level):
llevel = _get_ilevel_values(left, level)
rlevel = _get_ilevel_values(right, level)
- lobj = 'MultiIndex level [{level}]'.format(level=level)
+ lobj = map_obj('{obj} MultiIndex level [{level}]', obj,
+ level=level)
assert_index_equal(llevel, rlevel,
exact=exact, check_names=check_names,
check_less_precise=check_less_precise,
@@ -833,7 +838,7 @@ def _get_ilevel_values(index, level):
diff = np.sum((left.values != right.values)
.astype(int)) * 100.0 / len(left)
msg = '{obj} values are different ({pct} %)'.format(
- obj=obj, pct=np.round(diff, 5))
+ obj=join_obj(obj), pct=np.round(diff, 5))
raise_assert_detail(obj, msg, left, right)
else:
_testing.assert_almost_equal(left.values, right.values,
@@ -853,12 +858,25 @@ def _get_ilevel_values(index, level):
if check_categorical:
if is_categorical_dtype(left) or is_categorical_dtype(right):
assert_categorical_equal(left.values, right.values,
- obj='{obj} category'.format(obj=obj))
+ obj=map_obj('{obj} category', obj))
+
+
+def join_obj(obj):
+ obj = com._maybe_make_list(obj)
+ return ' and '.join(obj)
+
+
+def map_obj(form, objs, **kwargs):
+ assert isinstance(form, str)
+ objs = com._maybe_make_list(objs)
+ return list(map(lambda obj: form.format(obj=obj, **kwargs), objs))
def assert_class_equal(left, right, exact=True, obj='Input'):
"""checks classes are equal."""
+ obj = com._maybe_make_list(obj)
+
def repr_class(x):
if isinstance(x, Index):
# return Index as it is to include values in the error message
@@ -874,12 +892,13 @@ def repr_class(x):
# allow equivalence of Int64Index/RangeIndex
types = set([type(left).__name__, type(right).__name__])
if len(types - set(['Int64Index', 'RangeIndex'])):
- msg = '{obj} classes are not equivalent'.format(obj=obj)
+ msg = '{obj} classes are not equivalent'.format(
+ obj=join_obj(obj))
raise_assert_detail(obj, msg, repr_class(left),
repr_class(right))
elif exact:
if type(left) != type(right):
- msg = '{obj} classes are different'.format(obj=obj)
+ msg = '{obj} classes are different'.format(obj=join_obj(obj))
raise_assert_detail(obj, msg, repr_class(left),
repr_class(right))
@@ -893,11 +912,13 @@ def assert_attr_equal(attr, left, right, obj='Attributes'):
Attribute name being compared.
left : object
right : object
- obj : str, default 'Attributes'
- Specify object name being compared, internally used to show appropriate
+ obj : str or 2-length sequence (tuple, list, ...), default 'Attributes'
+ Specify object(s) name(s) being compared, used to show appropriate
assertion message
"""
+ # obj = com._maybe_make_list(obj)
+
left_attr = getattr(left, attr)
right_attr = getattr(right, attr)
@@ -941,11 +962,12 @@ def isiterable(obj):
return hasattr(obj, '__iter__')
-def is_sorted(seq):
+def is_sorted(seq, obj='seq'):
if isinstance(seq, (Index, Series)):
seq = seq.values
# sorting does not change precisions
- return assert_numpy_array_equal(seq, np.sort(np.array(seq)))
+ return assert_numpy_array_equal(seq, np.sort(np.array(seq)),
+ obj=(obj, 'sorted({})'.format(obj)))
def assert_categorical_equal(left, right, check_dtype=True,
@@ -958,8 +980,8 @@ def assert_categorical_equal(left, right, check_dtype=True,
Categoricals to compare
check_dtype : bool, default True
Check that integer dtype of the codes are the same
- obj : str, default 'Categorical'
- Specify object name being compared, internally used to show appropriate
+ obj : str or 2-length sequence (tuple, list, ...), default 'Categorical'
+ Specify object(s) name(s) being compared, used to show appropriate
assertion message
check_category_order : bool, default True
Whether the order of the categories should be compared, which
@@ -967,26 +989,43 @@ def assert_categorical_equal(left, right, check_dtype=True,
values are compared. The ordered attribute is
checked regardless.
"""
+
+ # obj = com._maybe_make_list(obj)
+
_check_isinstance(left, right, Categorical)
+ obj = com._maybe_make_list(obj)
+
if check_category_order:
assert_index_equal(left.categories, right.categories,
- obj='{obj}.categories'.format(obj=obj))
+ obj=map_obj('{obj}.categories', obj))
+
assert_numpy_array_equal(left.codes, right.codes,
check_dtype=check_dtype,
- obj='{obj}.codes'.format(obj=obj))
+ obj=map_obj('{obj}.codes', obj))
else:
assert_index_equal(left.categories.sort_values(),
right.categories.sort_values(),
- obj='{obj}.categories'.format(obj=obj))
+ obj=map_obj('{obj}.categories', obj))
+
assert_index_equal(left.categories.take(left.codes),
right.categories.take(right.codes),
- obj='{obj}.values'.format(obj=obj))
+ obj=map_obj('{obj}.values', obj))
assert_attr_equal('ordered', left, right, obj=obj)
def raise_assert_detail(obj, message, left, right, diff=None):
+ obj = com._maybe_make_list(obj)
+
+ if len(obj) >= 2:
+ names = obj[:2]
+ else:
+ names = ('left', 'right')
+
+ names_formatted = list(map(lambda x: '[{}]:'.format(x), names))
+ max_len = max(map(len, names_formatted))
+
if isinstance(left, np.ndarray):
left = pprint_thing(left)
elif is_categorical_dtype(left):
@@ -999,8 +1038,10 @@ def raise_assert_detail(obj, message, left, right, diff=None):
msg = """{obj} are different
{message}
-[left]: {left}
-[right]: {right}""".format(obj=obj, message=message, left=left, right=right)
+{names[0]:<{max_len}} {left}
+{names[1]:<{max_len}} {right}""".format(obj=join_obj(obj), message=message,
+ left=left, right=right,
+ names=names_formatted, max_len=max_len)
if diff is not None:
msg += "\n[diff]: {diff}".format(diff=diff)
@@ -1023,13 +1064,15 @@ def assert_numpy_array_equal(left, right, strict_nan=False,
check dtype if both a and b are np.ndarray
err_msg : str, default None
If provided, used as assertion message
- obj : str, default 'numpy array'
- Specify object name being compared, internally used to show appropriate
+ obj : str or 2-length sequence (tuple, list, ...), default 'numpy array'
+ Specify object(s) name(s) being compared, used to show appropriate
assertion message
check_same : None|'copy'|'same', default None
Ensure left and right refer/do not refer to the same memory area
"""
+ # obj = com._maybe_make_list(obj)
+
# instance validation
# Show a detailed error message when classes are different
assert_class_equal(left, right, obj=obj)
@@ -1057,7 +1100,8 @@ def _raise(left, right, err_msg):
if err_msg is None:
if left.shape != right.shape:
raise_assert_detail(obj, '{obj} shapes are different'
- .format(obj=obj), left.shape, right.shape)
+ .format(obj=join_obj(obj)), left.shape,
+ right.shape)
diff = 0
for l, r in zip(left, right):
@@ -1067,7 +1111,7 @@ def _raise(left, right, err_msg):
diff = diff * 100.0 / left.size
msg = '{obj} values are different ({pct} %)'.format(
- obj=obj, pct=np.round(diff, 5))
+ obj=join_obj(obj), pct=np.round(diff, 5))
raise_assert_detail(obj, msg, left, right)
raise AssertionError(err_msg)
@@ -1118,11 +1162,13 @@ def assert_series_equal(left, right, check_dtype=True,
Compare datetime-like which is comparable ignoring dtype.
check_categorical : bool, default True
Whether to compare internal Categorical exactly.
- obj : str, default 'Series'
- Specify object name being compared, internally used to show appropriate
+ obj : str or 2-length sequence (tuple, list, ...), default 'Series'
+ Specify object(s) name(s) being compared, used to show appropriate
assertion message
"""
+ # obj = com._maybe_make_list(obj)
+
# instance validation
_check_isinstance(left, right, Series)
@@ -1144,7 +1190,7 @@ def assert_series_equal(left, right, check_dtype=True,
check_less_precise=check_less_precise,
check_exact=check_exact,
check_categorical=check_categorical,
- obj='{obj}.index'.format(obj=obj))
+ obj=map_obj('{obj}.index', obj))
if check_dtype:
# We want to skip exact dtype checking when `check_categorical`
@@ -1159,14 +1205,14 @@ def assert_series_equal(left, right, check_dtype=True,
if check_exact:
assert_numpy_array_equal(left.get_values(), right.get_values(),
check_dtype=check_dtype,
- obj='{obj}'.format(obj=obj),)
+ obj=map_obj('{obj}', obj), )
elif check_datetimelike_compat:
# we want to check only if we have compat dtypes
# e.g. integer and M|m are NOT compat, but we can simply check
# the values in that case
if (is_datetimelike_v_numeric(left, right) or
- is_datetimelike_v_object(left, right) or
- needs_i8_conversion(left) or
+ is_datetimelike_v_object(left, right) or
+ needs_i8_conversion(left) or
needs_i8_conversion(right)):
# datetimelike may have different objects (e.g. datetime.datetime
@@ -1182,13 +1228,13 @@ def assert_series_equal(left, right, check_dtype=True,
# TODO: big hack here
left = pd.IntervalIndex(left)
right = pd.IntervalIndex(right)
- assert_index_equal(left, right, obj='{obj}.index'.format(obj=obj))
+ assert_index_equal(left, right, obj=map_obj('{obj}.index', obj))
else:
_testing.assert_almost_equal(left.get_values(), right.get_values(),
check_less_precise=check_less_precise,
check_dtype=check_dtype,
- obj='{obj}'.format(obj=obj))
+ obj=map_obj('{obj}', obj))
# metadata comparison
if check_names:
@@ -1197,7 +1243,7 @@ def assert_series_equal(left, right, check_dtype=True,
if check_categorical:
if is_categorical_dtype(left) or is_categorical_dtype(right):
assert_categorical_equal(left.values, right.values,
- obj='{obj} category'.format(obj=obj))
+ obj=map_obj('{obj} category', obj))
# This could be refactored to use the NDFrame.equals method
@@ -1246,11 +1292,13 @@ def assert_frame_equal(left, right, check_dtype=True,
Whether to compare internal Categorical exactly.
check_like : bool, default False
If true, ignore the order of rows & columns
- obj : str, default 'DataFrame'
- Specify object name being compared, internally used to show appropriate
+ obj : str or 2-length sequence (tuple, list, ...), default 'DataFrame'
+ Specify object(s) name(s) being compared, used to show appropriate
assertion message
"""
+ # obj = com._maybe_make_list(obj)
+
# instance validation
_check_isinstance(left, right, DataFrame)
@@ -1263,7 +1311,7 @@ def assert_frame_equal(left, right, check_dtype=True,
# shape comparison
if left.shape != right.shape:
raise_assert_detail(obj,
- 'DataFrame shape mismatch',
+ '{obj} shape mismatch'.format(obj=join_obj(obj)),
'{shape!r}'.format(shape=left.shape),
'{shape!r}'.format(shape=right.shape))
@@ -1276,7 +1324,7 @@ def assert_frame_equal(left, right, check_dtype=True,
check_less_precise=check_less_precise,
check_exact=check_exact,
check_categorical=check_categorical,
- obj='{obj}.index'.format(obj=obj))
+ obj=map_obj('{obj}.index', obj))
# column comparison
assert_index_equal(left.columns, right.columns, exact=check_column_type,
@@ -1284,7 +1332,7 @@ def assert_frame_equal(left, right, check_dtype=True,
check_less_precise=check_less_precise,
check_exact=check_exact,
check_categorical=check_categorical,
- obj='{obj}.columns'.format(obj=obj))
+ obj=map_obj('{obj}.columns', obj))
# compare by blocks
if by_blocks:
@@ -1294,7 +1342,8 @@ def assert_frame_equal(left, right, check_dtype=True,
assert dtype in lblocks
assert dtype in rblocks
assert_frame_equal(lblocks[dtype], rblocks[dtype],
- check_dtype=check_dtype, obj='DataFrame.blocks')
+ check_dtype=check_dtype,
+ obj=map_obj('{obj}.blocks', obj))
# compare by columns
else:
@@ -1309,7 +1358,7 @@ def assert_frame_equal(left, right, check_dtype=True,
check_exact=check_exact, check_names=check_names,
check_datetimelike_compat=check_datetimelike_compat,
check_categorical=check_categorical,
- obj='DataFrame.iloc[:, {idx}]'.format(idx=i))
+ obj=map_obj('{obj}.iloc[:, {idx}]', obj, idx=i))
def assert_panel_equal(left, right,
@@ -1343,6 +1392,8 @@ def assert_panel_equal(left, right,
the appropriate assertion message.
"""
+ # obj = com._maybe_make_list(obj)
+
if check_panel_type:
assert_class_equal(left, right, obj=obj)
@@ -1430,13 +1481,16 @@ def assert_sp_series_equal(left, right, check_dtype=True, exact_indices=True,
Specify the object name being compared, internally used to show
the appropriate assertion message.
"""
+
+ # obj = com._maybe_make_list(obj)
+
_check_isinstance(left, right, pd.SparseSeries)
if check_series_type:
assert_class_equal(left, right, obj=obj)
assert_index_equal(left.index, right.index,
- obj='{obj}.index'.format(obj=obj))
+ obj=map_obj('{obj}.index', obj))
assert_sp_array_equal(left.block.values, right.block.values)
@@ -1467,15 +1521,18 @@ def assert_sp_frame_equal(left, right, check_dtype=True, exact_indices=True,
Specify the object name being compared, internally used to show
the appropriate assertion message.
"""
+
+ # obj = com._maybe_make_list(obj)
+
_check_isinstance(left, right, pd.SparseDataFrame)
if check_frame_type:
assert_class_equal(left, right, obj=obj)
assert_index_equal(left.index, right.index,
- obj='{obj}.index'.format(obj=obj))
+ obj=map_obj('{obj}.index', obj))
assert_index_equal(left.columns, right.columns,
- obj='{obj}.columns'.format(obj=obj))
+ obj=map_obj('{obj}.columns', obj))
for col, series in compat.iteritems(left):
assert (col in right)
@@ -1496,6 +1553,7 @@ def assert_sp_frame_equal(left, right, check_dtype=True, exact_indices=True,
for col in right:
assert (col in left)
+
# -----------------------------------------------------------------------------
# Others
@@ -1564,7 +1622,7 @@ def makeIntIndex(k=10, name=None):
def makeUIntIndex(k=10, name=None):
- return Index([2**63 + i for i in lrange(k)], name=name)
+ return Index([2 ** 63 + i for i in lrange(k)], name=name)
def makeRangeIndex(k=10, name=None, **kwargs):
@@ -2049,8 +2107,8 @@ def dec(f):
111, # Connection refused
110, # Connection timed out
104, # Connection reset Error
- 54, # Connection reset by peer
- 60, # urllib.error.URLError: [Errno 60] Connection timed out
+ 54, # Connection reset by peer
+ 60, # urllib.error.URLError: [Errno 60] Connection timed out
)
# Both of the above shouldn't mask real issues such as 404's
@@ -2223,7 +2281,6 @@ def wrapper(*args, **kwargs):
class SimpleMock(object):
-
"""
Poor man's mocking object
@@ -2239,7 +2296,7 @@ class SimpleMock(object):
"""
def __init__(self, obj, *args, **kwds):
- assert(len(args) % 2 == 0)
+ assert (len(args) % 2 == 0)
attrs = kwds.get("attrs", {})
for k, v in zip(args[::2], args[1::2]):
# dict comprehensions break 2.6
@@ -2525,12 +2582,10 @@ def __init__(self, seed):
self.seed = seed
def __enter__(self):
-
self.start_state = np.random.get_state()
np.random.seed(self.seed)
def __exit__(self, exc_type, exc_value, traceback):
-
np.random.set_state(self.start_state)
@@ -2592,7 +2647,9 @@ def inner(*args, **kwargs):
thread.start()
for thread in threads:
thread.join()
+
return inner
+
return wrapper
|
### Feature
---
Ability to name objects being passed into `pandas.util.testing.assert_<type>_equal()` statements, via the `obj` parameter, to allow for more helpful error messages.
#### Example:
```python
a = pd.Index([1, 2, 3, 4])
b = pd.Index([1, 2, 3, 5])
testing.assert_index_equal(a, b, obj=('Red', 'Blue'))
```
#### Output:
```diff
- AssertionError: Red and Blue are different
-
- Red and Blue values are different (25.0 %)
- [Red]: Int64Index([1, 2, 3, 4], dtype='int64')
- [Blue]: Int64Index([1, 2, 3, 5], dtype='int64')
```
---
However, the messages are still kept intact. For instance, if I do an equality check on two DataFrames with different values and pass in the names, it will still append `.iloc[:, <idx>]`, as shown:
#### Example:
```python
df1 = pd.DataFrame({'A': [1, 2, 3], 'B': [4, 5, 6]}, index=['a', 'b', 'c'])
df2 = pd.DataFrame({'A': [1, 2, 3], 'B': [6, 5, 4]}, index=['a', 'b', 'c'])
testing.assert_frame_equal(df1, df2, obj=('df1', 'df2'))
```
#### Output:
```diff
- AssertionError: df1.iloc[:, 1] and df2.iloc[:, 1] are different
-
- df1.iloc[:, 1] and df2.iloc[:, 1] values are different (66.66667 %)
- [df1.iloc[:, 1]]: [4, 5, 6]
- [df2.iloc[:, 1]]: [6, 5, 4]
```
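The name handling above is driven by the two small helpers the diff adds, `join_obj` and `map_obj`. A standalone sketch of their behavior (the `maybe_make_list` stand-in below replaces pandas' internal `com._maybe_make_list` and is an assumption, kept only in its simplified form):

```python
def maybe_make_list(obj):
    # stand-in for pandas.core.common._maybe_make_list (assumed behavior)
    return obj if isinstance(obj, (list, tuple)) else [obj]


def join_obj(obj):
    # 'Red and Blue' for a pair, unchanged for a single name
    return ' and '.join(maybe_make_list(obj))


def map_obj(form, objs, **kwargs):
    # apply a format template to each name, e.g. to append '.iloc[:, 1]'
    return [form.format(obj=o, **kwargs) for o in maybe_make_list(objs)]


print(join_obj(('Red', 'Blue')))
# Red and Blue
print(map_obj('{obj}.iloc[:, {idx}]', ('df1', 'df2'), idx=1))
# ['df1.iloc[:, 1]', 'df2.iloc[:, 1]']
```

A single string (the old-style `obj='Index'`) passes through both helpers unchanged, which is what keeps the default messages backward-compatible.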
---
But if this is undesired or unneeded, the default names, `('left', 'right')`, still persist, as shown:
#### Example:
```python
testing.assert_index_equal(a, b)
```
#### Output:
```diff
- AssertionError: Index are different
-
- Index values are different (25.0 %)
- [left]: Int64Index([1, 2, 3, 4], dtype='int64')
- [right]: Int64Index([1, 2, 3, 5], dtype='int64')
```
It should be noted there were a _few_ cases where I had to change the default messages due to some hard-coded strings; I've updated the tests for those specific cases. **But** if for some reason someone has something that relies on those messages being exactly what they were, it will now break.
---
Checklist for other PRs:
- [x] closes #xxxx (N/A)
- [x] tests added / passed*
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
\* There were 4 unrelated tests that were failing on upstream/master & those 4 still fail, but all the tests related to the files I edit still pass. | https://api.github.com/repos/pandas-dev/pandas/pulls/20403 | 2018-03-18T18:21:24Z | 2018-07-08T16:03:21Z | null | 2018-07-08T16:03:22Z |
DOC: Only use ~ in class links to hide prefixes. | diff --git a/doc/source/contributing_docstring.rst b/doc/source/contributing_docstring.rst
index cd56b76fa891b..c210bb7050fb8 100644
--- a/doc/source/contributing_docstring.rst
+++ b/doc/source/contributing_docstring.rst
@@ -107,10 +107,18 @@ backticks. It is considered inline code:
- The name of a parameter
- Python code, a module, function, built-in, type, literal... (e.g. ``os``,
``list``, ``numpy.abs``, ``datetime.date``, ``True``)
-- A pandas class (in the form ``:class:`~pandas.Series```)
+- A pandas class (in the form ``:class:`pandas.Series```)
- A pandas method (in the form ``:meth:`pandas.Series.sum```)
- A pandas function (in the form ``:func:`pandas.to_datetime```)
+.. note::
+ To display only the last component of the linked class, method or
+ function, prefix it with ``~``. For example, ``:class:`~pandas.Series```
+ will link to ``pandas.Series`` but only display the last part, ``Series``
+ as the link text. See `Sphinx cross-referencing syntax
+ <http://www.sphinx-doc.org/en/stable/domains.html#cross-referencing-syntax>`_
+ for details.
+
**Good:**
.. code-block:: python
| This is ported from https://github.com/python-sprints/python-sprints.github.io/pull/78/, as suggested in a comment.
| https://api.github.com/repos/pandas-dev/pandas/pulls/20402 | 2018-03-18T17:50:48Z | 2018-03-19T10:58:58Z | 2018-03-19T10:58:58Z | 2018-03-19T10:59:04Z |
Fix Series construction with dtype=str | diff --git a/doc/source/whatsnew/v0.23.0.txt b/doc/source/whatsnew/v0.23.0.txt
index acbfa2eb3ccac..3f39d52eb832e 100644
--- a/doc/source/whatsnew/v0.23.0.txt
+++ b/doc/source/whatsnew/v0.23.0.txt
@@ -1035,6 +1035,7 @@ Reshaping
- Bug in :class:`Series` constructor with ``Categorical`` where a ```ValueError`` is not raised when an index of different length is given (:issue:`19342`)
- Bug in :meth:`DataFrame.astype` where column metadata is lost when converting to categorical or a dictionary of dtypes (:issue:`19920`)
- Bug in :func:`cut` and :func:`qcut` where timezone information was dropped (:issue:`19872`)
+- Bug in :class:`Series` constructor with a ``dtype=str``, previously raised in some cases (:issue:`19853`)
Other
^^^^^
diff --git a/pandas/core/series.py b/pandas/core/series.py
index e4801242073a2..bcf97832e40ce 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -4059,9 +4059,10 @@ def _try_cast(arr, take_fast_path):
if issubclass(subarr.dtype.type, compat.string_types):
# GH 16605
# If not empty convert the data to dtype
- if not isna(data).all():
- data = np.array(data, dtype=dtype, copy=False)
-
- subarr = np.array(data, dtype=object, copy=copy)
+ # GH 19853: If data is a scalar, subarr has already the result
+ if not is_scalar(data):
+ if not np.all(isna(data)):
+ data = np.array(data, dtype=dtype, copy=False)
+ subarr = np.array(data, dtype=object, copy=copy)
return subarr
diff --git a/pandas/tests/series/test_constructors.py b/pandas/tests/series/test_constructors.py
index e0bfe41645a3f..82b5b1c10fa2d 100644
--- a/pandas/tests/series/test_constructors.py
+++ b/pandas/tests/series/test_constructors.py
@@ -110,6 +110,11 @@ def test_constructor_empty(self, input_class):
empty2 = Series(input_class(), index=lrange(10), dtype='float64')
assert_series_equal(empty, empty2)
+ # GH 19853 : with empty string, index and dtype str
+ empty = Series('', dtype=str, index=range(3))
+ empty2 = Series('', index=range(3))
+ assert_series_equal(empty, empty2)
+
@pytest.mark.parametrize('input_arg', [np.nan, float('nan')])
def test_constructor_nan(self, input_arg):
empty = Series(dtype='float64', index=lrange(10))
| TST: Added test for construction Series with dtype=str
BUG: Handles case where data is scalar
DOC: added changes to whatsnew/v0.23.0.txt
Checklist for other PRs (remove this part if you are doing a PR for the pandas documentation sprint):
- [x] closes #19853
- [x] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
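A minimal sketch of the behavior the added test pins down (assuming a pandas version that includes this fix; previously the scalar-plus-`dtype=str` case raised):

```python
import pandas as pd

# GH 19853: a scalar with dtype=str and an explicit index
empty = pd.Series('', dtype=str, index=range(3))
empty2 = pd.Series('', index=range(3))

# both constructions now produce the same broadcast Series of empty strings
pd.testing.assert_series_equal(empty, empty2)
```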
| https://api.github.com/repos/pandas-dev/pandas/pulls/20401 | 2018-03-18T16:20:47Z | 2018-03-30T20:09:28Z | 2018-03-30T20:09:28Z | 2018-03-30T20:09:32Z |
ENH: dtype('bool') ndarray or Series coerced to dtype('int') in qcut GH20303 | diff --git a/doc/source/whatsnew/v0.23.0.txt b/doc/source/whatsnew/v0.23.0.txt
index 9159c03edee2e..96e60e3ab9acc 100644
--- a/doc/source/whatsnew/v0.23.0.txt
+++ b/doc/source/whatsnew/v0.23.0.txt
@@ -345,6 +345,7 @@ Other Enhancements
``SQLAlchemy`` dialects supporting multivalue inserts include: ``mysql``, ``postgresql``, ``sqlite`` and any dialect with ``supports_multivalues_insert``. (:issue:`14315`, :issue:`8953`)
- :func:`read_html` now accepts a ``displayed_only`` keyword argument to controls whether or not hidden elements are parsed (``True`` by default) (:issue:`20027`)
- zip compression is supported via ``compression=zip`` in :func:`DataFrame.to_pickle`, :func:`Series.to_pickle`, :func:`DataFrame.to_csv`, :func:`Series.to_csv`, :func:`DataFrame.to_json`, :func:`Series.to_json`. (:issue:`17778`)
+- :func:`qcut` now coerces boolean ``ndarray`` and ``Series`` inputs to integer (equivalent to calling the ``.astype`` method) (:issue:`20303`)
.. _whatsnew_0230.api_breaking:
diff --git a/pandas/core/reshape/tile.py b/pandas/core/reshape/tile.py
index 118198ea0320d..98dd6ef1c6577 100644
--- a/pandas/core/reshape/tile.py
+++ b/pandas/core/reshape/tile.py
@@ -7,6 +7,7 @@
from pandas.core.dtypes.common import (
is_integer,
is_scalar,
+ is_bool_dtype,
is_categorical_dtype,
is_datetime64_dtype,
is_timedelta64_dtype,
@@ -16,7 +17,7 @@
import pandas.core.algorithms as algos
import pandas.core.nanops as nanops
from pandas._libs.lib import infer_dtype
-from pandas import (to_timedelta, to_datetime,
+from pandas import (to_timedelta, to_datetime, notna,
Categorical, Timestamp, Timedelta,
Series, Interval, IntervalIndex)
@@ -345,7 +346,7 @@ def _trim_zeros(x):
def _coerce_to_type(x):
"""
- if the passed data is of datetime/timedelta type,
+ if the passed data is of datetime/timedelta or bool type,
this method converts it to numeric so that cut method can
handle it
"""
@@ -359,10 +360,14 @@ def _coerce_to_type(x):
elif is_timedelta64_dtype(x):
x = to_timedelta(x)
dtype = np.timedelta64
+ elif is_bool_dtype(x):
+ # GH 20303: coerce bool dtype to int
+ x = x.astype(np.int64)
+ dtype = np.bool_
if dtype is not None:
# GH 19768: force NaT to NaN during integer conversion
- x = np.where(x.notna(), x.view(np.int64), np.nan)
+ x = np.where(notna(x), x.view(np.int64), np.nan)
return x, dtype
diff --git a/pandas/tests/reshape/test_tile.py b/pandas/tests/reshape/test_tile.py
index 8d093f2784ba1..8331e98dc636d 100644
--- a/pandas/tests/reshape/test_tile.py
+++ b/pandas/tests/reshape/test_tile.py
@@ -263,6 +263,27 @@ def test_qcut_index(self):
expected = Categorical(intervals, ordered=True)
tm.assert_categorical_equal(result, expected)
+ def test_qcut_bool_series_to_int(self):
+ # GH 20303
+ x = Series(np.random.randint(2, size=100))
+ bins = 6
+
+ expected = qcut(x, bins, duplicates='drop')
+
+ data = x.astype(bool)
+ result = qcut(data, bins, duplicates='drop')
+ tm.assert_series_equal(result, expected)
+
+ def test_qcut_bool_array_to_int(self):
+ x = np.random.randint(2, size=100)
+ bins = 6
+
+        expected = qcut(x, bins, duplicates='drop')
+
+ data = x.astype(bool)
+ result = qcut(data, bins, duplicates='drop')
+ tm.assert_categorical_equal(result, expected)
+
def test_round_frac(self):
# it works
result = cut(np.arange(11.), 2)
|
- [x] closes #20303
- [x] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
Currently, passing a `dtype('bool')` `ndarray` or `Series` as argument to `qcut` raises a `TypeError`. As was suggested in #20303, now `_coerce_to_type` also coerces `bool` types to `int`. | https://api.github.com/repos/pandas-dev/pandas/pulls/20400 | 2018-03-18T15:52:43Z | 2018-09-25T16:49:58Z | null | 2018-09-25T16:49:59Z |
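Since this PR was closed without being merged (`merged_at` is null above), the explicit cast remains the portable way to feed boolean data to `qcut`. A hedged sketch of that workaround, which is also the int64 view the patch would have applied internally via `_coerce_to_type`:

```python
import numpy as np
import pandas as pd

data = pd.Series([True, False, True, False, True])

# explicit cast up front -- equivalent to what the proposed coercion would do
result = pd.qcut(data.astype(np.int64), 2, duplicates='drop')
```

`duplicates='drop'` matters here: with only two distinct values the quantile edges collapse, and dropping the duplicate edge avoids a `ValueError`.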
BUG: fixed json_normalize for subrecords with NoneTypes (#20030) | diff --git a/doc/source/whatsnew/v0.23.0.txt b/doc/source/whatsnew/v0.23.0.txt
index cfe28edd175b6..f0fc62f455fd1 100644
--- a/doc/source/whatsnew/v0.23.0.txt
+++ b/doc/source/whatsnew/v0.23.0.txt
@@ -979,6 +979,7 @@ I/O
- :class:`Timedelta` now supported in :func:`DataFrame.to_excel` for all Excel file types (:issue:`19242`, :issue:`9155`, :issue:`19900`)
- Bug in :meth:`pandas.io.stata.StataReader.value_labels` raising an ``AttributeError`` when called on very old files. Now returns an empty dict (:issue:`19417`)
- Bug in :func:`read_pickle` when unpickling objects with :class:`TimedeltaIndex` or :class:`Float64Index` created with pandas prior to version 0.20 (:issue:`19939`)
+- Bug in :meth:`pandas.io.json.json_normalize` where subrecords are not properly normalized if any subrecords values are NoneType (:issue:`20030`)
Plotting
^^^^^^^^
diff --git a/pandas/io/json/normalize.py b/pandas/io/json/normalize.py
index c7901f4352d00..549204abd3caf 100644
--- a/pandas/io/json/normalize.py
+++ b/pandas/io/json/normalize.py
@@ -80,6 +80,8 @@ def nested_to_record(ds, prefix="", sep=".", level=0):
if level != 0: # so we skip copying for top level, common case
v = new_d.pop(k)
new_d[newkey] = v
+            if v is None:  # pop the key if the value is None
+                new_d.pop(newkey)
continue
else:
v = new_d.pop(k)
@@ -189,7 +191,8 @@ def _pull_field(js, spec):
data = [data]
if record_path is None:
- if any(isinstance(x, dict) for x in compat.itervalues(data[0])):
+ if any([[isinstance(x, dict)
+ for x in compat.itervalues(y)] for y in data]):
# naive normalization, this is idempotent for flat records
# and potentially will inflate the data considerably for
# deeply nested structures:
diff --git a/pandas/tests/io/json/test_normalize.py b/pandas/tests/io/json/test_normalize.py
index 1cceae32cd748..0fabaf747b6de 100644
--- a/pandas/tests/io/json/test_normalize.py
+++ b/pandas/tests/io/json/test_normalize.py
@@ -54,6 +54,17 @@ def state_data():
'state': 'Ohio'}]
+@pytest.fixture
+def author_missing_data():
+ return [
+ {'info': None},
+ {'info':
+ {'created_at': '11/08/1993', 'last_updated': '26/05/2012'},
+ 'author_name':
+ {'first': 'Jane', 'last_name': 'Doe'}
+ }]
+
+
class TestJSONNormalize(object):
def test_simple_records(self):
@@ -226,6 +237,23 @@ def test_non_ascii_key(self):
result = json_normalize(json.loads(testjson))
tm.assert_frame_equal(result, expected)
+ def test_missing_field(self, author_missing_data):
+ # GH20030: Checks for robustness of json_normalize - should
+ # unnest records where only the first record has a None value
+ result = json_normalize(author_missing_data)
+ ex_data = [
+ {'author_name.first': np.nan,
+ 'author_name.last_name': np.nan,
+ 'info.created_at': np.nan,
+ 'info.last_updated': np.nan},
+ {'author_name.first': 'Jane',
+ 'author_name.last_name': 'Doe',
+ 'info.created_at': '11/08/1993',
+ 'info.last_updated': '26/05/2012'}
+ ]
+ expected = DataFrame(ex_data)
+ tm.assert_frame_equal(result, expected)
+
class TestNestedToRecord(object):
@@ -322,3 +350,28 @@ def test_json_normalize_errors(self):
['general', 'trade_version']],
errors='raise'
)
+
+ def test_nonetype_dropping(self):
+ # GH20030: Checks that None values are dropped in nested_to_record
+ # to prevent additional columns of nans when passed to DataFrame
+ data = [
+ {'info': None,
+ 'author_name':
+ {'first': 'Smith', 'last_name': 'Appleseed'}
+ },
+ {'info':
+ {'created_at': '11/08/1993', 'last_updated': '26/05/2012'},
+ 'author_name':
+ {'first': 'Jane', 'last_name': 'Doe'}
+ }
+ ]
+ result = nested_to_record(data)
+ expected = [
+ {'author_name.first': 'Smith',
+ 'author_name.last_name': 'Appleseed'},
+ {'author_name.first': 'Jane',
+ 'author_name.last_name': 'Doe',
+ 'info.created_at': '11/08/1993',
+ 'info.last_updated': '26/05/2012'}]
+
+ assert result == expected
| TST: additional coverage for the test cases
DOC: added changes to whatsnew/v0.23.0.txt
Checklist for other PRs (remove this part if you are doing a PR for the pandas documentation sprint):
- [x] closes #20030
- [x] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/20399 | 2018-03-18T11:02:10Z | 2018-03-20T10:36:23Z | 2018-03-20T10:36:23Z | 2019-04-10T14:01:07Z |
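The flattening behavior this PR fixes can be illustrated with a small pure-Python sketch. Unlike the recursive `nested_to_record` in `pandas/io/json/normalize.py`, this version handles only one level of nesting, but it shows the key change: entries whose value is `None` are dropped instead of surviving to produce extra all-NaN columns downstream:

```python
def nested_to_record(ds, sep="."):
    """Flatten a list of dicts one level deep, dropping None-valued keys."""
    records = []
    for d in ds:
        flat = {}
        for k, v in d.items():
            if v is None:                 # pop the key if the value is None
                continue
            if isinstance(v, dict):       # expand sub-records with dotted keys
                for sk, sv in v.items():
                    flat[k + sep + sk] = sv
            else:
                flat[k] = v
        records.append(flat)
    return records

data = [
    {'info': None,
     'author_name': {'first': 'Smith', 'last_name': 'Appleseed'}},
    {'info': {'created_at': '11/08/1993', 'last_updated': '26/05/2012'},
     'author_name': {'first': 'Jane', 'last_name': 'Doe'}},
]
result = nested_to_record(data)
```

The first record ends up with only the `author_name.*` keys, matching the expected output in the PR's `test_nonetype_dropping`.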
CLN: display.notebook_repr_html -> display.html.notebook (GH11784) | diff --git a/doc/source/options.rst b/doc/source/options.rst
index a82be4d84bf3f..ffd73786ab15e 100644
--- a/doc/source/options.rst
+++ b/doc/source/options.rst
@@ -372,7 +372,7 @@ display.memory_usage True This specifies if the memor
display.multi_sparse True "Sparsify" MultiIndex display (don't
display repeated elements in outer
levels within groups)
-display.notebook_repr_html True When True, IPython notebook will
+display.html.notebook True When True, IPython notebook will
use html representation for
pandas objects (if it is available).
display.pprint_nest_depth 3 Controls the number of nested levels
diff --git a/doc/source/whatsnew/v0.23.0.txt b/doc/source/whatsnew/v0.23.0.txt
index 4179277291478..09eff0208599d 100644
--- a/doc/source/whatsnew/v0.23.0.txt
+++ b/doc/source/whatsnew/v0.23.0.txt
@@ -740,6 +740,7 @@ Deprecations
- ``pandas.tseries.plotting.tsplot`` is deprecated. Use :func:`Series.plot` instead (:issue:`18627`)
- ``Index.summary()`` is deprecated and will be removed in a future version (:issue:`18217`)
+- ``pd.options.display.notebook_repr_html`` is deprecated in favor of ``pd.options.display.html.notebook`` (:issue:`11784`)
.. _whatsnew_0230.prior_deprecations:
diff --git a/pandas/core/config_init.py b/pandas/core/config_init.py
index 0edbf892172a9..fa8aedc3f391d 100644
--- a/pandas/core/config_init.py
+++ b/pandas/core/config_init.py
@@ -101,12 +101,17 @@ def use_numexpr_cb(key):
per column information will be printed.
"""
-pc_nb_repr_h_doc = """
+html_notebook_doc = """
: boolean
When True, IPython notebook will use html representation for
pandas objects (if it is available).
"""
+pc_nb_repr_h_deprecation_warning = """
+display.notebook_repr_html has been deprecated,
+use display.html.notebook instead
+"""
+
pc_date_dayfirst_doc = """
: boolean
When True, prints and parses dates with the day first, eg 20/01/2005
@@ -322,7 +327,7 @@ def table_schema_cb(key):
validator=is_int)
cf.register_option('colheader_justify', 'right', colheader_justify_doc,
validator=is_text)
- cf.register_option('notebook_repr_html', True, pc_nb_repr_h_doc,
+ cf.register_option('notebook_repr_html', True, html_notebook_doc,
validator=is_bool)
cf.register_option('date_dayfirst', False, pc_date_dayfirst_doc,
validator=is_bool)
@@ -366,6 +371,8 @@ def table_schema_cb(key):
validator=is_int)
cf.register_option('html.use_mathjax', True, pc_html_use_mathjax_doc,
validator=is_bool)
+ cf.register_option('html.notebook', True, html_notebook_doc,
+ validator=is_bool)
with cf.config_prefix('html'):
cf.register_option('border', 1, pc_html_border_doc,
@@ -374,6 +381,9 @@ def table_schema_cb(key):
cf.deprecate_option('html.border', msg=pc_html_border_deprecation_warning,
rkey='display.html.border')
+cf.deprecate_option('display.notebook_repr_html',
+ msg=pc_nb_repr_h_deprecation_warning,
+ rkey='display.html.notebook')
tc_sim_interactive_doc = """
: boolean
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index e95cf045148dc..03b08e73f0dd6 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -703,7 +703,7 @@ def _repr_html_(self):
val = val.replace('>', r'>', 1)
return '<pre>' + val + '</pre>'
- if get_option("display.notebook_repr_html"):
+ if get_option("display.html.notebook"):
max_rows = get_option("display.max_rows")
max_cols = get_option("display.max_columns")
show_dimensions = get_option("display.show_dimensions")
diff --git a/pandas/tests/io/formats/test_format.py b/pandas/tests/io/formats/test_format.py
index 6c3b75cdfa6df..78908f6c291f1 100644
--- a/pandas/tests/io/formats/test_format.py
+++ b/pandas/tests/io/formats/test_format.py
@@ -1243,7 +1243,7 @@ def test_to_string_line_width_no_index(self):
def test_to_string_float_formatting(self):
tm.reset_display_options()
fmt.set_option('display.precision', 5, 'display.column_space', 12,
- 'display.notebook_repr_html', False)
+ 'display.html.notebook', False)
df = DataFrame({'x': [0, 0.25, 3456.000, 12e+45, 1.64e+6, 1.7e+8,
1.253456, np.pi, -1e6]})
@@ -1421,7 +1421,7 @@ def test_repr_html(self):
fmt.set_option('display.max_rows', 1, 'display.max_columns', 1)
self.frame._repr_html_()
- fmt.set_option('display.notebook_repr_html', False)
+ fmt.set_option('display.html.notebook', False)
self.frame._repr_html_()
tm.reset_display_options()
@@ -1651,6 +1651,13 @@ def test_period(self):
"3 2013-04 2011-04 d")
assert str(df) == exp
+ @tm.capture_stdout
+ def test_option_notebook_repr_html_deprecated(self):
+ # GH11784
+ with tm.assert_produces_warning(FutureWarning,
+ check_stacklevel=False):
+ pd.options.display.notebook_repr_html
+
def gen_series_formatting():
s1 = pd.Series(['a'] * 100)
| Deprecate display.notebook_repr_html option and move it to
display.html.notebook
- [X] closes #11784
- [X] tests added / passed
- [X] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [X] whatsnew entry | https://api.github.com/repos/pandas-dev/pandas/pulls/20396 | 2018-03-18T00:15:05Z | 2018-07-08T15:59:43Z | null | 2023-05-11T01:17:35Z |
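The option-deprecation mechanism used above (`cf.deprecate_option`) can be sketched with a minimal stand-in. The `Options` class below is hypothetical, not the pandas config machinery; it only demonstrates the pattern of emitting a `FutureWarning` on the old key and forwarding the lookup to the new one:

```python
import warnings

class Options:
    """Minimal sketch of a config registry with a deprecated-key redirect."""
    def __init__(self):
        self._values = {'display.html.notebook': True}
        self._deprecated = {      # old key -> (new key, warning message)
            'display.notebook_repr_html': (
                'display.html.notebook',
                'display.notebook_repr_html has been deprecated, '
                'use display.html.notebook instead'),
        }

    def get_option(self, key):
        if key in self._deprecated:
            new_key, msg = self._deprecated[key]
            warnings.warn(msg, FutureWarning, stacklevel=2)
            key = new_key             # forward to the replacement key
        return self._values[key]

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter('always')
    opts = Options()
    value = opts.get_option('display.notebook_repr_html')  # warns, still works
```

This mirrors what the PR's `test_option_notebook_repr_html_deprecated` checks: accessing the old name warns but resolves to the new option's value.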
EHN: allow zip compression in `to_pickle`, `to_json`, `to_csv` | diff --git a/doc/source/whatsnew/v0.23.0.txt b/doc/source/whatsnew/v0.23.0.txt
index 4179277291478..37ecd7eaa5246 100644
--- a/doc/source/whatsnew/v0.23.0.txt
+++ b/doc/source/whatsnew/v0.23.0.txt
@@ -344,6 +344,7 @@ Other Enhancements
- :meth:`DataFrame.to_sql` now performs a multivalue insert if the underlying connection supports itk rather than inserting row by row.
``SQLAlchemy`` dialects supporting multivalue inserts include: ``mysql``, ``postgresql``, ``sqlite`` and any dialect with ``supports_multivalues_insert``. (:issue:`14315`, :issue:`8953`)
- :func:`read_html` now accepts a ``displayed_only`` keyword argument to controls whether or not hidden elements are parsed (``True`` by default) (:issue:`20027`)
+- zip compression is supported via ``compression=zip`` in :func:`DataFrame.to_pickle`, :func:`Series.to_pickle`, :func:`DataFrame.to_csv`, :func:`Series.to_csv`, :func:`DataFrame.to_json`, :func:`Series.to_json`. (:issue:`17778`)
.. _whatsnew_0230.api_breaking:
diff --git a/pandas/conftest.py b/pandas/conftest.py
index 7a4ef56d7d749..81a039e484cf1 100644
--- a/pandas/conftest.py
+++ b/pandas/conftest.py
@@ -75,16 +75,6 @@ def compression(request):
return request.param
-@pytest.fixture(params=[None, 'gzip', 'bz2',
- pytest.param('xz', marks=td.skip_if_no_lzma)])
-def compression_no_zip(request):
- """
- Fixture for trying common compression types in compression tests
- except zip
- """
- return request.param
-
-
@pytest.fixture(scope='module')
def datetime_tz_utc():
from datetime import timezone
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index efb002474f876..a03a3141a3b70 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -1654,9 +1654,9 @@ def to_csv(self, path_or_buf=None, sep=",", na_rep='', float_format=None,
A string representing the encoding to use in the output file,
defaults to 'ascii' on Python 2 and 'utf-8' on Python 3.
compression : string, optional
- a string representing the compression to use in the output file,
- allowed values are 'gzip', 'bz2', 'xz',
- only used when the first argument is a filename
+ A string representing the compression to use in the output file.
+ Allowed values are 'gzip', 'bz2', 'zip', 'xz'. This input is only
+ used when the first argument is a filename.
line_terminator : string, default ``'\n'``
The newline character or character sequence to use in the output
file
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 5682ad411fd2f..1a090f273e68e 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -1814,9 +1814,9 @@ def to_json(self, path_or_buf=None, orient=None, date_format=None,
.. versionadded:: 0.19.0
- compression : {None, 'gzip', 'bz2', 'xz'}
+ compression : {None, 'gzip', 'bz2', 'zip', 'xz'}
A string representing the compression to use in the output file,
- only used when the first argument is a filename
+ only used when the first argument is a filename.
.. versionadded:: 0.21.0
@@ -2085,7 +2085,8 @@ def to_pickle(self, path, compression='infer',
----------
path : str
File path where the pickled object will be stored.
- compression : {'infer', 'gzip', 'bz2', 'xz', None}, default 'infer'
+ compression : {'infer', 'gzip', 'bz2', 'zip', 'xz', None}, \
+ default 'infer'
A string representing the compression to use in the output file. By
default, infers from the file extension in specified path.
diff --git a/pandas/core/series.py b/pandas/core/series.py
index e4801242073a2..9e086b165ca3e 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -3632,9 +3632,9 @@ def to_csv(self, path=None, index=True, sep=",", na_rep='',
a string representing the encoding to use if the contents are
non-ascii, for python versions prior to 3
compression : string, optional
- a string representing the compression to use in the output file,
- allowed values are 'gzip', 'bz2', 'xz', only used when the first
- argument is a filename
+ A string representing the compression to use in the output file.
+ Allowed values are 'gzip', 'bz2', 'zip', 'xz'. This input is only
+ used when the first argument is a filename.
date_format: string, default None
Format string for datetime objects.
decimal: string, default '.'
diff --git a/pandas/io/common.py b/pandas/io/common.py
index e312181f08512..4769edd157b94 100644
--- a/pandas/io/common.py
+++ b/pandas/io/common.py
@@ -5,6 +5,7 @@
import codecs
import mmap
from contextlib import contextmanager, closing
+from zipfile import ZipFile
from pandas.compat import StringIO, BytesIO, string_types, text_type
from pandas import compat
@@ -363,18 +364,20 @@ def _get_handle(path_or_buf, mode, encoding=None, compression=None,
# ZIP Compression
elif compression == 'zip':
- import zipfile
- zip_file = zipfile.ZipFile(path_or_buf)
- zip_names = zip_file.namelist()
- if len(zip_names) == 1:
- f = zip_file.open(zip_names.pop())
- elif len(zip_names) == 0:
- raise ValueError('Zero files found in ZIP file {}'
- .format(path_or_buf))
- else:
- raise ValueError('Multiple files found in ZIP file.'
- ' Only one file per ZIP: {}'
- .format(zip_names))
+ zf = BytesZipFile(path_or_buf, mode)
+ if zf.mode == 'w':
+ f = zf
+ elif zf.mode == 'r':
+ zip_names = zf.namelist()
+ if len(zip_names) == 1:
+ f = zf.open(zip_names.pop())
+ elif len(zip_names) == 0:
+ raise ValueError('Zero files found in ZIP file {}'
+ .format(path_or_buf))
+ else:
+ raise ValueError('Multiple files found in ZIP file.'
+ ' Only one file per ZIP: {}'
+ .format(zip_names))
# XZ Compression
elif compression == 'xz':
@@ -425,6 +428,24 @@ def _get_handle(path_or_buf, mode, encoding=None, compression=None,
return f, handles
+class BytesZipFile(ZipFile, BytesIO):
+ """
+ Wrapper for standard library class ZipFile and allow the returned file-like
+ handle to accept byte strings via `write` method.
+
+ BytesIO provides attributes of file-like object and ZipFile.writestr writes
+ bytes strings into a member of the archive.
+ """
+ # GH 17778
+ def __init__(self, file, mode='r', **kwargs):
+ if mode in ['wb', 'rb']:
+ mode = mode.replace('b', '')
+ super(BytesZipFile, self).__init__(file, mode, **kwargs)
+
+ def write(self, data):
+ super(BytesZipFile, self).writestr(self.filename, data)
+
+
class MMapWrapper(BaseIterator):
"""
Wrapper for the Python's mmap class so that it can be properly read in
diff --git a/pandas/io/formats/csvs.py b/pandas/io/formats/csvs.py
index 4e2021bcba72b..29b8d29af0808 100644
--- a/pandas/io/formats/csvs.py
+++ b/pandas/io/formats/csvs.py
@@ -133,8 +133,8 @@ def save(self):
else:
f, handles = _get_handle(self.path_or_buf, self.mode,
encoding=encoding,
- compression=self.compression)
- close = True
+ compression=None)
+ close = True if self.compression is None else False
try:
writer_kwargs = dict(lineterminator=self.line_terminator,
@@ -151,6 +151,16 @@ def save(self):
self._save()
finally:
+ # GH 17778 handles compression for byte strings.
+ if not close and self.compression:
+ f.close()
+ with open(self.path_or_buf, 'r') as f:
+ data = f.read()
+ f, handles = _get_handle(self.path_or_buf, self.mode,
+ encoding=encoding,
+ compression=self.compression)
+ f.write(data)
+ close = True
if close:
f.close()
diff --git a/pandas/io/pickle.py b/pandas/io/pickle.py
index 8c72c315c142c..d27735fbca318 100644
--- a/pandas/io/pickle.py
+++ b/pandas/io/pickle.py
@@ -18,7 +18,7 @@ def to_pickle(obj, path, compression='infer', protocol=pkl.HIGHEST_PROTOCOL):
Any python object.
path : str
File path where the pickled object will be stored.
- compression : {'infer', 'gzip', 'bz2', 'xz', None}, default 'infer'
+ compression : {'infer', 'gzip', 'bz2', 'zip', 'xz', None}, default 'infer'
A string representing the compression to use in the output file. By
default, infers from the file extension in specified path.
@@ -74,7 +74,7 @@ def to_pickle(obj, path, compression='infer', protocol=pkl.HIGHEST_PROTOCOL):
if protocol < 0:
protocol = pkl.HIGHEST_PROTOCOL
try:
- pkl.dump(obj, f, protocol=protocol)
+ f.write(pkl.dumps(obj, protocol=protocol))
finally:
for _f in fh:
_f.close()
@@ -93,7 +93,7 @@ def read_pickle(path, compression='infer'):
----------
path : str
File path where the pickled object will be loaded.
- compression : {'infer', 'gzip', 'bz2', 'xz', 'zip', None}, default 'infer'
+ compression : {'infer', 'gzip', 'bz2', 'zip', 'xz', None}, default 'infer'
For on-the-fly decompression of on-disk data. If 'infer', then use
gzip, bz2, xz or zip if path ends in '.gz', '.bz2', '.xz',
or '.zip' respectively, and no decompression otherwise.
diff --git a/pandas/tests/frame/test_to_csv.py b/pandas/tests/frame/test_to_csv.py
index dda5cdea52cac..e4829ebf48561 100644
--- a/pandas/tests/frame/test_to_csv.py
+++ b/pandas/tests/frame/test_to_csv.py
@@ -919,7 +919,7 @@ def test_to_csv_path_is_none(self):
recons = pd.read_csv(StringIO(csv_str), index_col=0)
assert_frame_equal(self.frame, recons)
- def test_to_csv_compression(self, compression_no_zip):
+ def test_to_csv_compression(self, compression):
df = DataFrame([[0.123456, 0.234567, 0.567567],
[12.32112, 123123.2, 321321.2]],
@@ -927,35 +927,22 @@ def test_to_csv_compression(self, compression_no_zip):
with ensure_clean() as filename:
- df.to_csv(filename, compression=compression_no_zip)
+ df.to_csv(filename, compression=compression)
# test the round trip - to_csv -> read_csv
- rs = read_csv(filename, compression=compression_no_zip,
+ rs = read_csv(filename, compression=compression,
index_col=0)
assert_frame_equal(df, rs)
# explicitly make sure file is compressed
- with tm.decompress_file(filename, compression_no_zip) as fh:
+ with tm.decompress_file(filename, compression) as fh:
text = fh.read().decode('utf8')
for col in df.columns:
assert col in text
- with tm.decompress_file(filename, compression_no_zip) as fh:
+ with tm.decompress_file(filename, compression) as fh:
assert_frame_equal(df, read_csv(fh, index_col=0))
- def test_to_csv_compression_value_error(self):
- # GH7615
- # use the compression kw in to_csv
- df = DataFrame([[0.123456, 0.234567, 0.567567],
- [12.32112, 123123.2, 321321.2]],
- index=['A', 'B'], columns=['X', 'Y', 'Z'])
-
- with ensure_clean() as filename:
- # zip compression is not supported and should raise ValueError
- import zipfile
- pytest.raises(zipfile.BadZipfile, df.to_csv,
- filename, compression="zip")
-
def test_to_csv_date_format(self):
with ensure_clean('__tmp_to_csv_date_format__') as path:
dt_index = self.tsframe.index
diff --git a/pandas/tests/io/json/test_compression.py b/pandas/tests/io/json/test_compression.py
index 08335293f9292..c9074ca49e5be 100644
--- a/pandas/tests/io/json/test_compression.py
+++ b/pandas/tests/io/json/test_compression.py
@@ -5,32 +5,22 @@
from pandas.util.testing import assert_frame_equal, assert_raises_regex
-def test_compression_roundtrip(compression_no_zip):
+def test_compression_roundtrip(compression):
df = pd.DataFrame([[0.123456, 0.234567, 0.567567],
[12.32112, 123123.2, 321321.2]],
index=['A', 'B'], columns=['X', 'Y', 'Z'])
with tm.ensure_clean() as path:
- df.to_json(path, compression=compression_no_zip)
+ df.to_json(path, compression=compression)
assert_frame_equal(df, pd.read_json(path,
- compression=compression_no_zip))
+ compression=compression))
# explicitly ensure file was compressed.
- with tm.decompress_file(path, compression_no_zip) as fh:
+ with tm.decompress_file(path, compression) as fh:
result = fh.read().decode('utf8')
assert_frame_equal(df, pd.read_json(result))
-def test_compress_zip_value_error():
- df = pd.DataFrame([[0.123456, 0.234567, 0.567567],
- [12.32112, 123123.2, 321321.2]],
- index=['A', 'B'], columns=['X', 'Y', 'Z'])
-
- with tm.ensure_clean() as path:
- import zipfile
- pytest.raises(zipfile.BadZipfile, df.to_json, path, compression="zip")
-
-
def test_read_zipped_json():
uncompressed_path = tm.get_data_path("tsframe_v012.json")
uncompressed_df = pd.read_json(uncompressed_path)
@@ -41,7 +31,7 @@ def test_read_zipped_json():
assert_frame_equal(uncompressed_df, compressed_df)
-def test_with_s3_url(compression_no_zip):
+def test_with_s3_url(compression):
boto3 = pytest.importorskip('boto3')
pytest.importorskip('s3fs')
moto = pytest.importorskip('moto')
@@ -52,35 +42,35 @@ def test_with_s3_url(compression_no_zip):
bucket = conn.create_bucket(Bucket="pandas-test")
with tm.ensure_clean() as path:
- df.to_json(path, compression=compression_no_zip)
+ df.to_json(path, compression=compression)
with open(path, 'rb') as f:
bucket.put_object(Key='test-1', Body=f)
roundtripped_df = pd.read_json('s3://pandas-test/test-1',
- compression=compression_no_zip)
+ compression=compression)
assert_frame_equal(df, roundtripped_df)
-def test_lines_with_compression(compression_no_zip):
+def test_lines_with_compression(compression):
with tm.ensure_clean() as path:
df = pd.read_json('{"a": [1, 2, 3], "b": [4, 5, 6]}')
df.to_json(path, orient='records', lines=True,
- compression=compression_no_zip)
+ compression=compression)
roundtripped_df = pd.read_json(path, lines=True,
- compression=compression_no_zip)
+ compression=compression)
assert_frame_equal(df, roundtripped_df)
-def test_chunksize_with_compression(compression_no_zip):
+def test_chunksize_with_compression(compression):
with tm.ensure_clean() as path:
df = pd.read_json('{"a": ["foo", "bar", "baz"], "b": [4, 5, 6]}')
df.to_json(path, orient='records', lines=True,
- compression=compression_no_zip)
+ compression=compression)
res = pd.read_json(path, lines=True, chunksize=1,
- compression=compression_no_zip)
+ compression=compression)
roundtripped_df = pd.concat(res)
assert_frame_equal(df, roundtripped_df)
diff --git a/pandas/tests/io/test_pickle.py b/pandas/tests/io/test_pickle.py
index 2ba3e174404c7..6bc3af2ba3fd2 100644
--- a/pandas/tests/io/test_pickle.py
+++ b/pandas/tests/io/test_pickle.py
@@ -352,7 +352,7 @@ def compress_file(self, src_path, dest_path, compression):
f.write(fh.read())
f.close()
- def test_write_explicit(self, compression_no_zip, get_random_path):
+ def test_write_explicit(self, compression, get_random_path):
base = get_random_path
path1 = base + ".compressed"
path2 = base + ".raw"
@@ -361,10 +361,10 @@ def test_write_explicit(self, compression_no_zip, get_random_path):
df = tm.makeDataFrame()
# write to compressed file
- df.to_pickle(p1, compression=compression_no_zip)
+ df.to_pickle(p1, compression=compression)
# decompress
- with tm.decompress_file(p1, compression=compression_no_zip) as f:
+ with tm.decompress_file(p1, compression=compression) as f:
with open(p2, "wb") as fh:
fh.write(f.read())
diff --git a/pandas/tests/series/test_io.py b/pandas/tests/series/test_io.py
index 62d1372525cc8..0b0d4334c86a3 100644
--- a/pandas/tests/series/test_io.py
+++ b/pandas/tests/series/test_io.py
@@ -138,26 +138,26 @@ def test_to_csv_path_is_none(self):
csv_str = s.to_csv(path=None)
assert isinstance(csv_str, str)
- def test_to_csv_compression(self, compression_no_zip):
+ def test_to_csv_compression(self, compression):
s = Series([0.123456, 0.234567, 0.567567], index=['A', 'B', 'C'],
name='X')
with ensure_clean() as filename:
- s.to_csv(filename, compression=compression_no_zip, header=True)
+ s.to_csv(filename, compression=compression, header=True)
# test the round trip - to_csv -> read_csv
- rs = pd.read_csv(filename, compression=compression_no_zip,
+ rs = pd.read_csv(filename, compression=compression,
index_col=0, squeeze=True)
assert_series_equal(s, rs)
# explicitly ensure file was compressed
- with tm.decompress_file(filename, compression_no_zip) as fh:
+ with tm.decompress_file(filename, compression) as fh:
text = fh.read().decode('utf8')
assert s.name in text
- with tm.decompress_file(filename, compression_no_zip) as fh:
+ with tm.decompress_file(filename, compression) as fh:
assert_series_equal(s, pd.read_csv(fh,
index_col=0,
squeeze=True))
diff --git a/pandas/util/testing.py b/pandas/util/testing.py
index a223e4d8fd23e..f79e73b8ba417 100644
--- a/pandas/util/testing.py
+++ b/pandas/util/testing.py
@@ -172,7 +172,7 @@ def decompress_file(path, compression):
path : str
The path where the file is read from
- compression : {'gzip', 'bz2', 'xz', None}
+ compression : {'gzip', 'bz2', 'zip', 'xz', None}
Name of the decompression to use
Returns
| - [x] closes https://github.com/pandas-dev/pandas/issues/17778
- [x] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
Currently, `to_pickle` doesn't support `compression='zip'`, whereas `read_pickle` is able to unpickle a zip-compressed file. The same applies to the `to_json` and `to_csv` methods under generic, frame, and series.
The standard library `ZipFile` class's default `write` method requires a filename, but `pickle.dump(obj, f, protocol)` requires `f` to be a file-like object (i.e. a file handle) that accepts bytes. This PR creates a new `BytesZipFile` class that allows `pickle.dump` to write bytes into a zip file handle.
Zip-compressed objects (pickle, json, csv) are now round-trippable with `read_pickle`, `read_json`, and `read_csv`.
Suggestions are welcome as to where to put the `BytesZipFile` class, which overrides the `write` method with `writestr`. Other comments welcome. | https://api.github.com/repos/pandas-dev/pandas/pulls/20394 | 2018-03-17T19:16:51Z | 2018-03-22T23:12:07Z | 2018-03-22T23:12:06Z | 2018-03-22T23:58:34Z
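The `write`-via-`writestr` idea can be demonstrated with stdlib-only code. This is a simplified sketch, not the pandas class: it skips the `BytesIO` mixin and uses a hypothetical fixed member name `'data.pkl'` where the PR uses `self.filename`:

```python
import io
import pickle
import zipfile

class BytesZipFile(zipfile.ZipFile):
    """File-like wrapper: write(data) stores `data` as one archive member."""
    def __init__(self, file, mode='r', **kwargs):
        if mode in ('wb', 'rb'):          # map binary modes onto ZipFile's w/r
            mode = mode.replace('b', '')
        super().__init__(file, mode, **kwargs)

    def write(self, data):
        # ZipFile.write expects a filename; writestr accepts a bytes payload.
        self.writestr('data.pkl', data)   # 'data.pkl' is an arbitrary name

obj = {'a': [1, 2, 3]}
buf = io.BytesIO()
with BytesZipFile(buf, 'wb') as zf:
    zf.write(pickle.dumps(obj))          # mirrors f.write(pkl.dumps(obj, ...))

with zipfile.ZipFile(buf) as zf:         # round trip: read the member back
    restored = pickle.loads(zf.read(zf.namelist()[0]))
```

The last two lines also show why the PR changed `pkl.dump(obj, f)` to `f.write(pkl.dumps(obj))`: the handle only needs to accept bytes, so any file-like object with a `write` method works.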
DOC: add disallowing of Series construction of len-1 list with index to whatsnew | diff --git a/doc/source/whatsnew/v0.23.0.txt b/doc/source/whatsnew/v0.23.0.txt
index 4179277291478..2fb5e8678794a 100644
--- a/doc/source/whatsnew/v0.23.0.txt
+++ b/doc/source/whatsnew/v0.23.0.txt
@@ -714,6 +714,7 @@ Other API Changes
- ``pd.to_datetime('today')`` now returns a datetime, consistent with ``pd.Timestamp('today')``; previously ``pd.to_datetime('today')`` returned a ``.normalized()`` datetime (:issue:`19935`)
- :func:`Series.str.replace` now takes an optional `regex` keyword which, when set to ``False``, uses literal string replacement rather than regex replacement (:issue:`16808`)
- :func:`DatetimeIndex.strftime` and :func:`PeriodIndex.strftime` now return an ``Index`` instead of a numpy array to be consistent with similar accessors (:issue:`20127`)
+- Constructing a Series from a list of length 1 no longer broadcasts this list when a longer index is specified (:issue:`19714`, :issue:`20391`).
.. _whatsnew_0230.deprecations:
| Closes https://github.com/pandas-dev/pandas/issues/20391 | https://api.github.com/repos/pandas-dev/pandas/pulls/20392 | 2018-03-17T18:44:47Z | 2018-03-19T10:01:03Z | 2018-03-19T10:01:02Z | 2018-03-19T10:01:05Z |
CI: fix linter | diff --git a/pandas/_libs/tslibs/period.pyx b/pandas/_libs/tslibs/period.pyx
index 2c84e459c25de..008747c0a9e78 100644
--- a/pandas/_libs/tslibs/period.pyx
+++ b/pandas/_libs/tslibs/period.pyx
@@ -1355,7 +1355,7 @@ cdef class _Period(object):
Returns
-------
- int
+ int
See Also
--------
@@ -1371,7 +1371,7 @@ cdef class _Period(object):
>>> p = pd.Period("2018-02-01", "D")
>>> p.week
5
-
+
>>> p = pd.Period("2018-01-06", "D")
>>> p.week
1
| Sorry, broke master | https://api.github.com/repos/pandas-dev/pandas/pulls/20389 | 2018-03-17T12:18:16Z | 2018-03-17T12:18:39Z | 2018-03-17T12:18:39Z | 2018-03-17T12:18:42Z |
DOC: enable docstring on DataFrame.columns/index | diff --git a/doc/source/api.rst b/doc/source/api.rst
index dba7f6526f22a..dfb6d03ec9159 100644
--- a/doc/source/api.rst
+++ b/doc/source/api.rst
@@ -263,7 +263,11 @@ Constructor
Attributes
~~~~~~~~~~
**Axes**
- * **index**: axis labels
+
+.. autosummary::
+ :toctree: generated/
+
+ Series.index
.. autosummary::
:toctree: generated/
@@ -845,13 +849,15 @@ Attributes and underlying data
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
**Axes**
- * **index**: row labels
- * **columns**: column labels
+.. autosummary::
+ :toctree: generated/
+
+ DataFrame.index
+ DataFrame.columns
.. autosummary::
:toctree: generated/
- DataFrame.as_matrix
DataFrame.dtypes
DataFrame.ftypes
DataFrame.get_dtype_counts
@@ -2546,8 +2552,7 @@ objects.
:hidden:
generated/pandas.DataFrame.blocks
- generated/pandas.DataFrame.columns
- generated/pandas.DataFrame.index
+ generated/pandas.DataFrame.as_matrix
generated/pandas.DataFrame.ix
generated/pandas.Index.asi8
generated/pandas.Index.data
@@ -2566,6 +2571,5 @@ objects.
generated/pandas.Series.asobject
generated/pandas.Series.blocks
generated/pandas.Series.from_array
- generated/pandas.Series.index
generated/pandas.Series.ix
generated/pandas.Timestamp.offset
diff --git a/pandas/_libs/properties.pyx b/pandas/_libs/properties.pyx
index 67f58851a9a70..e3f16f224db1c 100644
--- a/pandas/_libs/properties.pyx
+++ b/pandas/_libs/properties.pyx
@@ -42,11 +42,14 @@ cache_readonly = CachedProperty
cdef class AxisProperty(object):
- cdef:
+
+ cdef readonly:
Py_ssize_t axis
+ object __doc__
- def __init__(self, axis=0):
+ def __init__(self, axis=0, doc=""):
self.axis = axis
+ self.__doc__ = doc
def __get__(self, obj, type):
cdef:
@@ -54,7 +57,7 @@ cdef class AxisProperty(object):
if obj is None:
# Only instances have _data, not classes
- return None
+ return self
else:
axes = obj._data.axes
return axes[self.axis]
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index d9dbe22dddcdd..7c0cce55eb021 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -6788,7 +6788,10 @@ def isin(self, values):
DataFrame._setup_axes(['index', 'columns'], info_axis=1, stat_axis=0,
- axes_are_reversed=True, aliases={'rows': 0})
+ axes_are_reversed=True, aliases={'rows': 0},
+ docs={
+ 'index': 'The index (row labels) of the DataFrame.',
+ 'columns': 'The column labels of the DataFrame.'})
DataFrame._add_numeric_operations()
DataFrame._add_series_or_dataframe_operations()
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 8bab065d8b748..6d9a290af2c08 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -236,7 +236,7 @@ def _constructor_expanddim(self):
@classmethod
def _setup_axes(cls, axes, info_axis=None, stat_axis=None, aliases=None,
slicers=None, axes_are_reversed=False, build_axes=True,
- ns=None):
+ ns=None, docs=None):
"""Provide axes setup for the major PandasObjects.
Parameters
@@ -278,7 +278,7 @@ def _setup_axes(cls, axes, info_axis=None, stat_axis=None, aliases=None,
if build_axes:
def set_axis(a, i):
- setattr(cls, a, properties.AxisProperty(i))
+ setattr(cls, a, properties.AxisProperty(i, docs.get(a, a)))
cls._internal_names_set.add(a)
if axes_are_reversed:
diff --git a/pandas/core/panel.py b/pandas/core/panel.py
index 052d555df76f1..5bb4b72a0562d 100644
--- a/pandas/core/panel.py
+++ b/pandas/core/panel.py
@@ -1523,7 +1523,8 @@ def _extract_axis(self, data, axis=0, intersect=False):
stat_axis=1, aliases={'major': 'major_axis',
'minor': 'minor_axis'},
slicers={'major_axis': 'index',
- 'minor_axis': 'columns'})
+ 'minor_axis': 'columns'},
+ docs={})
ops.add_special_arithmetic_methods(Panel)
ops.add_flex_arithmetic_methods(Panel)
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 4d88b4e33b872..38fe39e7e3f7b 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -3809,7 +3809,8 @@ def to_period(self, freq=None, copy=True):
hist = gfx.hist_series
-Series._setup_axes(['index'], info_axis=0, stat_axis=0, aliases={'rows': 0})
+Series._setup_axes(['index'], info_axis=0, stat_axis=0, aliases={'rows': 0},
+ docs={'index': 'The index (axis labels) of the Series.'})
Series._add_numeric_operations()
Series._add_series_only_operations()
Series._add_series_or_dataframe_operations()
diff --git a/pandas/tests/frame/test_api.py b/pandas/tests/frame/test_api.py
index 8ba5469480e64..b2cbd0b07d7f5 100644
--- a/pandas/tests/frame/test_api.py
+++ b/pandas/tests/frame/test_api.py
@@ -6,6 +6,7 @@
# pylint: disable-msg=W0612,E1101
from copy import deepcopy
+import pydoc
import sys
from distutils.version import LooseVersion
@@ -362,8 +363,9 @@ def test_axis_aliases(self):
def test_class_axis(self):
# https://github.com/pandas-dev/pandas/issues/18147
- DataFrame.index # no exception!
- DataFrame.columns # no exception!
+ # no exception and no empty docstring
+ assert pydoc.getdoc(DataFrame.index)
+ assert pydoc.getdoc(DataFrame.columns)
def test_more_values(self):
values = self.mixed_frame.values
diff --git a/pandas/tests/series/test_api.py b/pandas/tests/series/test_api.py
index cf8698bc5ed5e..f7f1ea019a3f0 100644
--- a/pandas/tests/series/test_api.py
+++ b/pandas/tests/series/test_api.py
@@ -1,6 +1,7 @@
# coding=utf-8
# pylint: disable-msg=E1101,W0612
from collections import OrderedDict
+import pydoc
import pytest
@@ -384,7 +385,8 @@ def test_axis_alias(self):
def test_class_axis(self):
# https://github.com/pandas-dev/pandas/issues/18147
- Series.index # no exception!
+ # no exception and no empty docstring
+ assert pydoc.getdoc(Series.index)
def test_numpy_unique(self):
# it works!
| Follow-up on https://github.com/pandas-dev/pandas/pull/18196, similar to https://github.com/pandas-dev/pandas/pull/19991
Didn't add an actual full docstring yet, but this already gives:
```
In [1]: pd.DataFrame.columns?
Type: AxisProperty
String form: <pandas._libs.properties.AxisProperty object at 0x7fa975d74da0>
File: ~/scipy/pandas/pandas/_libs/properties.cpython-35m-x86_64-linux-gnu.so
Docstring: columns
```
instead of (on master, on 0.22 this raises an exception):
```
In [1]: pd.DataFrame.columns?
Type: NoneType
String form: None
Docstring: <no docstring>
``` | https://api.github.com/repos/pandas-dev/pandas/pulls/20385 | 2018-03-16T17:29:04Z | 2018-03-22T09:46:55Z | 2018-03-22T09:46:55Z | 2018-03-22T09:46:58Z |
DOC: update vendored numpydoc version | diff --git a/doc/source/api.rst b/doc/source/api.rst
index dfb6d03ec9159..a5d24302e69e2 100644
--- a/doc/source/api.rst
+++ b/doc/source/api.rst
@@ -2557,6 +2557,8 @@ objects.
generated/pandas.Index.asi8
generated/pandas.Index.data
generated/pandas.Index.flags
+ generated/pandas.Index.holds_integer
+ generated/pandas.Index.is_type_compatible
generated/pandas.Index.nlevels
generated/pandas.Index.sort
generated/pandas.Panel.agg
@@ -2572,4 +2574,6 @@ objects.
generated/pandas.Series.blocks
generated/pandas.Series.from_array
generated/pandas.Series.ix
+ generated/pandas.Series.imag
+ generated/pandas.Series.real
generated/pandas.Timestamp.offset
diff --git a/doc/source/conf.py b/doc/source/conf.py
index 46249af8a5a56..43c7c23c5e20d 100644
--- a/doc/source/conf.py
+++ b/doc/source/conf.py
@@ -86,6 +86,12 @@
if any(re.match("\s*api\s*", l) for l in index_rst_lines):
autosummary_generate = True
+# numpydoc
+# for now use old parameter listing (styling + **kwargs problem)
+numpydoc_use_blockquotes = True
+# use member listing for attributes
+numpydoc_attributes_as_param_list = False
+
# matplotlib plot directive
plot_include_source = True
plot_formats = [("png", 90)]
diff --git a/doc/sphinxext/numpydoc/__init__.py b/doc/sphinxext/numpydoc/__init__.py
old mode 100755
new mode 100644
index 0fce2cf747e23..30dba8fcf9132
--- a/doc/sphinxext/numpydoc/__init__.py
+++ b/doc/sphinxext/numpydoc/__init__.py
@@ -1,3 +1,8 @@
from __future__ import division, absolute_import, print_function
-from .numpydoc import setup
+__version__ = '0.8.0.dev0'
+
+
+def setup(app, *args, **kwargs):
+ from .numpydoc import setup
+ return setup(app, *args, **kwargs)
diff --git a/doc/sphinxext/numpydoc/comment_eater.py b/doc/sphinxext/numpydoc/comment_eater.py
deleted file mode 100755
index 8cddd3305f0bc..0000000000000
--- a/doc/sphinxext/numpydoc/comment_eater.py
+++ /dev/null
@@ -1,169 +0,0 @@
-from __future__ import division, absolute_import, print_function
-
-import sys
-if sys.version_info[0] >= 3:
- from io import StringIO
-else:
- from io import StringIO
-
-import compiler
-import inspect
-import textwrap
-import tokenize
-
-from .compiler_unparse import unparse
-
-
-class Comment(object):
- """ A comment block.
- """
- is_comment = True
- def __init__(self, start_lineno, end_lineno, text):
- # int : The first line number in the block. 1-indexed.
- self.start_lineno = start_lineno
- # int : The last line number. Inclusive!
- self.end_lineno = end_lineno
- # str : The text block including '#' character but not any leading spaces.
- self.text = text
-
- def add(self, string, start, end, line):
- """ Add a new comment line.
- """
- self.start_lineno = min(self.start_lineno, start[0])
- self.end_lineno = max(self.end_lineno, end[0])
- self.text += string
-
- def __repr__(self):
- return '%s(%r, %r, %r)' % (self.__class__.__name__, self.start_lineno,
- self.end_lineno, self.text)
-
-
-class NonComment(object):
- """ A non-comment block of code.
- """
- is_comment = False
- def __init__(self, start_lineno, end_lineno):
- self.start_lineno = start_lineno
- self.end_lineno = end_lineno
-
- def add(self, string, start, end, line):
- """ Add lines to the block.
- """
- if string.strip():
- # Only add if not entirely whitespace.
- self.start_lineno = min(self.start_lineno, start[0])
- self.end_lineno = max(self.end_lineno, end[0])
-
- def __repr__(self):
- return '%s(%r, %r)' % (self.__class__.__name__, self.start_lineno,
- self.end_lineno)
-
-
-class CommentBlocker(object):
- """ Pull out contiguous comment blocks.
- """
- def __init__(self):
- # Start with a dummy.
- self.current_block = NonComment(0, 0)
-
- # All of the blocks seen so far.
- self.blocks = []
-
- # The index mapping lines of code to their associated comment blocks.
- self.index = {}
-
- def process_file(self, file):
- """ Process a file object.
- """
- if sys.version_info[0] >= 3:
- nxt = file.__next__
- else:
- nxt = file.next
- for token in tokenize.generate_tokens(nxt):
- self.process_token(*token)
- self.make_index()
-
- def process_token(self, kind, string, start, end, line):
- """ Process a single token.
- """
- if self.current_block.is_comment:
- if kind == tokenize.COMMENT:
- self.current_block.add(string, start, end, line)
- else:
- self.new_noncomment(start[0], end[0])
- else:
- if kind == tokenize.COMMENT:
- self.new_comment(string, start, end, line)
- else:
- self.current_block.add(string, start, end, line)
-
- def new_noncomment(self, start_lineno, end_lineno):
- """ We are transitioning from a noncomment to a comment.
- """
- block = NonComment(start_lineno, end_lineno)
- self.blocks.append(block)
- self.current_block = block
-
- def new_comment(self, string, start, end, line):
- """ Possibly add a new comment.
-
- Only adds a new comment if this comment is the only thing on the line.
- Otherwise, it extends the noncomment block.
- """
- prefix = line[:start[1]]
- if prefix.strip():
- # Oops! Trailing comment, not a comment block.
- self.current_block.add(string, start, end, line)
- else:
- # A comment block.
- block = Comment(start[0], end[0], string)
- self.blocks.append(block)
- self.current_block = block
-
- def make_index(self):
- """ Make the index mapping lines of actual code to their associated
- prefix comments.
- """
- for prev, block in zip(self.blocks[:-1], self.blocks[1:]):
- if not block.is_comment:
- self.index[block.start_lineno] = prev
-
- def search_for_comment(self, lineno, default=None):
- """ Find the comment block just before the given line number.
-
- Returns None (or the specified default) if there is no such block.
- """
- if not self.index:
- self.make_index()
- block = self.index.get(lineno, None)
- text = getattr(block, 'text', default)
- return text
-
-
-def strip_comment_marker(text):
- """ Strip # markers at the front of a block of comment text.
- """
- lines = []
- for line in text.splitlines():
- lines.append(line.lstrip('#'))
- text = textwrap.dedent('\n'.join(lines))
- return text
-
-
-def get_class_traits(klass):
- """ Yield all of the documentation for trait definitions on a class object.
- """
- # FIXME: gracefully handle errors here or in the caller?
- source = inspect.getsource(klass)
- cb = CommentBlocker()
- cb.process_file(StringIO(source))
- mod_ast = compiler.parse(source)
- class_ast = mod_ast.node.nodes[0]
- for node in class_ast.code.nodes:
- # FIXME: handle other kinds of assignments?
- if isinstance(node, compiler.ast.Assign):
- name = node.nodes[0].name
- rhs = unparse(node.expr).strip()
- doc = strip_comment_marker(cb.search_for_comment(node.lineno, default=''))
- yield name, rhs, doc
-
diff --git a/doc/sphinxext/numpydoc/compiler_unparse.py b/doc/sphinxext/numpydoc/compiler_unparse.py
deleted file mode 100755
index eabb5934c9cc8..0000000000000
--- a/doc/sphinxext/numpydoc/compiler_unparse.py
+++ /dev/null
@@ -1,865 +0,0 @@
-""" Turn compiler.ast structures back into executable python code.
-
- The unparse method takes a compiler.ast tree and transforms it back into
- valid python code. It is incomplete and currently only works for
- import statements, function calls, function definitions, assignments, and
- basic expressions.
-
- Inspired by python-2.5-svn/Demo/parser/unparse.py
-
- fixme: We may want to move to using _ast trees because the compiler for
- them is about 6 times faster than compiler.compile.
-"""
-from __future__ import division, absolute_import, print_function
-
-import sys
-from compiler.ast import Const, Name, Tuple, Div, Mul, Sub, Add
-
-if sys.version_info[0] >= 3:
- from io import StringIO
-else:
- from StringIO import StringIO
-
-def unparse(ast, single_line_functions=False):
- s = StringIO()
- UnparseCompilerAst(ast, s, single_line_functions)
- return s.getvalue().lstrip()
-
-op_precedence = { 'compiler.ast.Power':3, 'compiler.ast.Mul':2, 'compiler.ast.Div':2,
- 'compiler.ast.Add':1, 'compiler.ast.Sub':1 }
-
-class UnparseCompilerAst:
- """ Methods in this class recursively traverse an AST and
- output source code for the abstract syntax; original formatting
- is disregarged.
- """
-
- #########################################################################
- # object interface.
- #########################################################################
-
- def __init__(self, tree, file = sys.stdout, single_line_functions=False):
- """ Unparser(tree, file=sys.stdout) -> None.
-
- Print the source for tree to file.
- """
- self.f = file
- self._single_func = single_line_functions
- self._do_indent = True
- self._indent = 0
- self._dispatch(tree)
- self._write("\n")
- self.f.flush()
-
- #########################################################################
- # Unparser private interface.
- #########################################################################
-
- ### format, output, and dispatch methods ################################
-
- def _fill(self, text = ""):
- "Indent a piece of text, according to the current indentation level"
- if self._do_indent:
- self._write("\n"+" "*self._indent + text)
- else:
- self._write(text)
-
- def _write(self, text):
- "Append a piece of text to the current line."
- self.f.write(text)
-
- def _enter(self):
- "Print ':', and increase the indentation."
- self._write(": ")
- self._indent += 1
-
- def _leave(self):
- "Decrease the indentation level."
- self._indent -= 1
-
- def _dispatch(self, tree):
- "_dispatcher function, _dispatching tree type T to method _T."
- if isinstance(tree, list):
- for t in tree:
- self._dispatch(t)
- return
- meth = getattr(self, "_"+tree.__class__.__name__)
- if tree.__class__.__name__ == 'NoneType' and not self._do_indent:
- return
- meth(tree)
-
-
- #########################################################################
- # compiler.ast unparsing methods.
- #
- # There should be one method per concrete grammar type. They are
- # organized in alphabetical order.
- #########################################################################
-
- def _Add(self, t):
- self.__binary_op(t, '+')
-
- def _And(self, t):
- self._write(" (")
- for i, node in enumerate(t.nodes):
- self._dispatch(node)
- if i != len(t.nodes)-1:
- self._write(") and (")
- self._write(")")
-
- def _AssAttr(self, t):
- """ Handle assigning an attribute of an object
- """
- self._dispatch(t.expr)
- self._write('.'+t.attrname)
-
- def _Assign(self, t):
- """ Expression Assignment such as "a = 1".
-
- This only handles assignment in expressions. Keyword assignment
- is handled separately.
- """
- self._fill()
- for target in t.nodes:
- self._dispatch(target)
- self._write(" = ")
- self._dispatch(t.expr)
- if not self._do_indent:
- self._write('; ')
-
- def _AssName(self, t):
- """ Name on left hand side of expression.
-
- Treat just like a name on the right side of an expression.
- """
- self._Name(t)
-
- def _AssTuple(self, t):
- """ Tuple on left hand side of an expression.
- """
-
- # _write each elements, separated by a comma.
- for element in t.nodes[:-1]:
- self._dispatch(element)
- self._write(", ")
-
- # Handle the last one without writing comma
- last_element = t.nodes[-1]
- self._dispatch(last_element)
-
- def _AugAssign(self, t):
- """ +=,-=,*=,/=,**=, etc. operations
- """
-
- self._fill()
- self._dispatch(t.node)
- self._write(' '+t.op+' ')
- self._dispatch(t.expr)
- if not self._do_indent:
- self._write(';')
-
- def _Bitand(self, t):
- """ Bit and operation.
- """
-
- for i, node in enumerate(t.nodes):
- self._write("(")
- self._dispatch(node)
- self._write(")")
- if i != len(t.nodes)-1:
- self._write(" & ")
-
- def _Bitor(self, t):
- """ Bit or operation
- """
-
- for i, node in enumerate(t.nodes):
- self._write("(")
- self._dispatch(node)
- self._write(")")
- if i != len(t.nodes)-1:
- self._write(" | ")
-
- def _CallFunc(self, t):
- """ Function call.
- """
- self._dispatch(t.node)
- self._write("(")
- comma = False
- for e in t.args:
- if comma: self._write(", ")
- else: comma = True
- self._dispatch(e)
- if t.star_args:
- if comma: self._write(", ")
- else: comma = True
- self._write("*")
- self._dispatch(t.star_args)
- if t.dstar_args:
- if comma: self._write(", ")
- else: comma = True
- self._write("**")
- self._dispatch(t.dstar_args)
- self._write(")")
-
- def _Compare(self, t):
- self._dispatch(t.expr)
- for op, expr in t.ops:
- self._write(" " + op + " ")
- self._dispatch(expr)
-
- def _Const(self, t):
- """ A constant value such as an integer value, 3, or a string, "hello".
- """
- self._dispatch(t.value)
-
- def _Decorators(self, t):
- """ Handle function decorators (eg. @has_units)
- """
- for node in t.nodes:
- self._dispatch(node)
-
- def _Dict(self, t):
- self._write("{")
- for i, (k, v) in enumerate(t.items):
- self._dispatch(k)
- self._write(": ")
- self._dispatch(v)
- if i < len(t.items)-1:
- self._write(", ")
- self._write("}")
-
- def _Discard(self, t):
- """ Node for when return value is ignored such as in "foo(a)".
- """
- self._fill()
- self._dispatch(t.expr)
-
- def _Div(self, t):
- self.__binary_op(t, '/')
-
- def _Ellipsis(self, t):
- self._write("...")
-
- def _From(self, t):
- """ Handle "from xyz import foo, bar as baz".
- """
- # fixme: Are From and ImportFrom handled differently?
- self._fill("from ")
- self._write(t.modname)
- self._write(" import ")
- for i, (name,asname) in enumerate(t.names):
- if i != 0:
- self._write(", ")
- self._write(name)
- if asname is not None:
- self._write(" as "+asname)
-
- def _Function(self, t):
- """ Handle function definitions
- """
- if t.decorators is not None:
- self._fill("@")
- self._dispatch(t.decorators)
- self._fill("def "+t.name + "(")
- defaults = [None] * (len(t.argnames) - len(t.defaults)) + list(t.defaults)
- for i, arg in enumerate(zip(t.argnames, defaults)):
- self._write(arg[0])
- if arg[1] is not None:
- self._write('=')
- self._dispatch(arg[1])
- if i < len(t.argnames)-1:
- self._write(', ')
- self._write(")")
- if self._single_func:
- self._do_indent = False
- self._enter()
- self._dispatch(t.code)
- self._leave()
- self._do_indent = True
-
- def _Getattr(self, t):
- """ Handle getting an attribute of an object
- """
- if isinstance(t.expr, (Div, Mul, Sub, Add)):
- self._write('(')
- self._dispatch(t.expr)
- self._write(')')
- else:
- self._dispatch(t.expr)
-
- self._write('.'+t.attrname)
-
- def _If(self, t):
- self._fill()
-
- for i, (compare,code) in enumerate(t.tests):
- if i == 0:
- self._write("if ")
- else:
- self._write("elif ")
- self._dispatch(compare)
- self._enter()
- self._fill()
- self._dispatch(code)
- self._leave()
- self._write("\n")
-
- if t.else_ is not None:
- self._write("else")
- self._enter()
- self._fill()
- self._dispatch(t.else_)
- self._leave()
- self._write("\n")
-
- def _IfExp(self, t):
- self._dispatch(t.then)
- self._write(" if ")
- self._dispatch(t.test)
-
- if t.else_ is not None:
- self._write(" else (")
- self._dispatch(t.else_)
- self._write(")")
-
- def _Import(self, t):
- """ Handle "import xyz.foo".
- """
- self._fill("import ")
-
- for i, (name,asname) in enumerate(t.names):
- if i != 0:
- self._write(", ")
- self._write(name)
- if asname is not None:
- self._write(" as "+asname)
-
- def _Keyword(self, t):
- """ Keyword value assignment within function calls and definitions.
- """
- self._write(t.name)
- self._write("=")
- self._dispatch(t.expr)
-
- def _List(self, t):
- self._write("[")
- for i,node in enumerate(t.nodes):
- self._dispatch(node)
- if i < len(t.nodes)-1:
- self._write(", ")
- self._write("]")
-
- def _Module(self, t):
- if t.doc is not None:
- self._dispatch(t.doc)
- self._dispatch(t.node)
-
- def _Mul(self, t):
- self.__binary_op(t, '*')
-
- def _Name(self, t):
- self._write(t.name)
-
- def _NoneType(self, t):
- self._write("None")
-
- def _Not(self, t):
- self._write('not (')
- self._dispatch(t.expr)
- self._write(')')
-
- def _Or(self, t):
- self._write(" (")
- for i, node in enumerate(t.nodes):
- self._dispatch(node)
- if i != len(t.nodes)-1:
- self._write(") or (")
- self._write(")")
-
- def _Pass(self, t):
- self._write("pass\n")
-
- def _Printnl(self, t):
- self._fill("print ")
- if t.dest:
- self._write(">> ")
- self._dispatch(t.dest)
- self._write(", ")
- comma = False
- for node in t.nodes:
- if comma: self._write(', ')
- else: comma = True
- self._dispatch(node)
-
- def _Power(self, t):
- self.__binary_op(t, '**')
-
- def _Return(self, t):
- self._fill("return ")
- if t.value:
- if isinstance(t.value, Tuple):
- text = ', '.join(name.name for name in t.value.asList())
- self._write(text)
- else:
- self._dispatch(t.value)
- if not self._do_indent:
- self._write('; ')
-
- def _Slice(self, t):
- self._dispatch(t.expr)
- self._write("[")
- if t.lower:
- self._dispatch(t.lower)
- self._write(":")
- if t.upper:
- self._dispatch(t.upper)
- #if t.step:
- # self._write(":")
- # self._dispatch(t.step)
- self._write("]")
-
- def _Sliceobj(self, t):
- for i, node in enumerate(t.nodes):
- if i != 0:
- self._write(":")
- if not (isinstance(node, Const) and node.value is None):
- self._dispatch(node)
-
- def _Stmt(self, tree):
- for node in tree.nodes:
- self._dispatch(node)
-
- def _Sub(self, t):
- self.__binary_op(t, '-')
-
- def _Subscript(self, t):
- self._dispatch(t.expr)
- self._write("[")
- for i, value in enumerate(t.subs):
- if i != 0:
- self._write(",")
- self._dispatch(value)
- self._write("]")
-
- def _TryExcept(self, t):
- self._fill("try")
- self._enter()
- self._dispatch(t.body)
- self._leave()
-
- for handler in t.handlers:
- self._fill('except ')
- self._dispatch(handler[0])
- if handler[1] is not None:
- self._write(', ')
- self._dispatch(handler[1])
- self._enter()
- self._dispatch(handler[2])
- self._leave()
-
- if t.else_:
- self._fill("else")
- self._enter()
- self._dispatch(t.else_)
- self._leave()
-
- def _Tuple(self, t):
-
- if not t.nodes:
- # Empty tuple.
- self._write("()")
- else:
- self._write("(")
-
- # _write each elements, separated by a comma.
- for element in t.nodes[:-1]:
- self._dispatch(element)
- self._write(", ")
-
- # Handle the last one without writing comma
- last_element = t.nodes[-1]
- self._dispatch(last_element)
-
- self._write(")")
-
- def _UnaryAdd(self, t):
- self._write("+")
- self._dispatch(t.expr)
-
- def _UnarySub(self, t):
- self._write("-")
- self._dispatch(t.expr)
-
- def _With(self, t):
- self._fill('with ')
- self._dispatch(t.expr)
- if t.vars:
- self._write(' as ')
- self._dispatch(t.vars.name)
- self._enter()
- self._dispatch(t.body)
- self._leave()
- self._write('\n')
-
- def _int(self, t):
- self._write(repr(t))
-
- def __binary_op(self, t, symbol):
- # Check if parenthesis are needed on left side and then dispatch
- has_paren = False
- left_class = str(t.left.__class__)
- if (left_class in op_precedence.keys() and
- op_precedence[left_class] < op_precedence[str(t.__class__)]):
- has_paren = True
- if has_paren:
- self._write('(')
- self._dispatch(t.left)
- if has_paren:
- self._write(')')
- # Write the appropriate symbol for operator
- self._write(symbol)
- # Check if parenthesis are needed on the right side and then dispatch
- has_paren = False
- right_class = str(t.right.__class__)
- if (right_class in op_precedence.keys() and
- op_precedence[right_class] < op_precedence[str(t.__class__)]):
- has_paren = True
- if has_paren:
- self._write('(')
- self._dispatch(t.right)
- if has_paren:
- self._write(')')
-
- def _float(self, t):
- # if t is 0.1, str(t)->'0.1' while repr(t)->'0.1000000000001'
- # We prefer str here.
- self._write(str(t))
-
- def _str(self, t):
- self._write(repr(t))
-
- def _tuple(self, t):
- self._write(str(t))
-
- #########################################################################
- # These are the methods from the _ast modules unparse.
- #
- # As our needs to handle more advanced code increase, we may want to
- # modify some of the methods below so that they work for compiler.ast.
- #########################################################################
-
-# # stmt
-# def _Expr(self, tree):
-# self._fill()
-# self._dispatch(tree.value)
-#
-# def _Import(self, t):
-# self._fill("import ")
-# first = True
-# for a in t.names:
-# if first:
-# first = False
-# else:
-# self._write(", ")
-# self._write(a.name)
-# if a.asname:
-# self._write(" as "+a.asname)
-#
-## def _ImportFrom(self, t):
-## self._fill("from ")
-## self._write(t.module)
-## self._write(" import ")
-## for i, a in enumerate(t.names):
-## if i == 0:
-## self._write(", ")
-## self._write(a.name)
-## if a.asname:
-## self._write(" as "+a.asname)
-## # XXX(jpe) what is level for?
-##
-#
-# def _Break(self, t):
-# self._fill("break")
-#
-# def _Continue(self, t):
-# self._fill("continue")
-#
-# def _Delete(self, t):
-# self._fill("del ")
-# self._dispatch(t.targets)
-#
-# def _Assert(self, t):
-# self._fill("assert ")
-# self._dispatch(t.test)
-# if t.msg:
-# self._write(", ")
-# self._dispatch(t.msg)
-#
-# def _Exec(self, t):
-# self._fill("exec ")
-# self._dispatch(t.body)
-# if t.globals:
-# self._write(" in ")
-# self._dispatch(t.globals)
-# if t.locals:
-# self._write(", ")
-# self._dispatch(t.locals)
-#
-# def _Print(self, t):
-# self._fill("print ")
-# do_comma = False
-# if t.dest:
-# self._write(">>")
-# self._dispatch(t.dest)
-# do_comma = True
-# for e in t.values:
-# if do_comma:self._write(", ")
-# else:do_comma=True
-# self._dispatch(e)
-# if not t.nl:
-# self._write(",")
-#
-# def _Global(self, t):
-# self._fill("global")
-# for i, n in enumerate(t.names):
-# if i != 0:
-# self._write(",")
-# self._write(" " + n)
-#
-# def _Yield(self, t):
-# self._fill("yield")
-# if t.value:
-# self._write(" (")
-# self._dispatch(t.value)
-# self._write(")")
-#
-# def _Raise(self, t):
-# self._fill('raise ')
-# if t.type:
-# self._dispatch(t.type)
-# if t.inst:
-# self._write(", ")
-# self._dispatch(t.inst)
-# if t.tback:
-# self._write(", ")
-# self._dispatch(t.tback)
-#
-#
-# def _TryFinally(self, t):
-# self._fill("try")
-# self._enter()
-# self._dispatch(t.body)
-# self._leave()
-#
-# self._fill("finally")
-# self._enter()
-# self._dispatch(t.finalbody)
-# self._leave()
-#
-# def _excepthandler(self, t):
-# self._fill("except ")
-# if t.type:
-# self._dispatch(t.type)
-# if t.name:
-# self._write(", ")
-# self._dispatch(t.name)
-# self._enter()
-# self._dispatch(t.body)
-# self._leave()
-#
-# def _ClassDef(self, t):
-# self._write("\n")
-# self._fill("class "+t.name)
-# if t.bases:
-# self._write("(")
-# for a in t.bases:
-# self._dispatch(a)
-# self._write(", ")
-# self._write(")")
-# self._enter()
-# self._dispatch(t.body)
-# self._leave()
-#
-# def _FunctionDef(self, t):
-# self._write("\n")
-# for deco in t.decorators:
-# self._fill("@")
-# self._dispatch(deco)
-# self._fill("def "+t.name + "(")
-# self._dispatch(t.args)
-# self._write(")")
-# self._enter()
-# self._dispatch(t.body)
-# self._leave()
-#
-# def _For(self, t):
-# self._fill("for ")
-# self._dispatch(t.target)
-# self._write(" in ")
-# self._dispatch(t.iter)
-# self._enter()
-# self._dispatch(t.body)
-# self._leave()
-# if t.orelse:
-# self._fill("else")
-# self._enter()
-# self._dispatch(t.orelse)
-# self._leave
-#
-# def _While(self, t):
-# self._fill("while ")
-# self._dispatch(t.test)
-# self._enter()
-# self._dispatch(t.body)
-# self._leave()
-# if t.orelse:
-# self._fill("else")
-# self._enter()
-# self._dispatch(t.orelse)
-# self._leave
-#
-# # expr
-# def _Str(self, tree):
-# self._write(repr(tree.s))
-##
-# def _Repr(self, t):
-# self._write("`")
-# self._dispatch(t.value)
-# self._write("`")
-#
-# def _Num(self, t):
-# self._write(repr(t.n))
-#
-# def _ListComp(self, t):
-# self._write("[")
-# self._dispatch(t.elt)
-# for gen in t.generators:
-# self._dispatch(gen)
-# self._write("]")
-#
-# def _GeneratorExp(self, t):
-# self._write("(")
-# self._dispatch(t.elt)
-# for gen in t.generators:
-# self._dispatch(gen)
-# self._write(")")
-#
-# def _comprehension(self, t):
-# self._write(" for ")
-# self._dispatch(t.target)
-# self._write(" in ")
-# self._dispatch(t.iter)
-# for if_clause in t.ifs:
-# self._write(" if ")
-# self._dispatch(if_clause)
-#
-# def _IfExp(self, t):
-# self._dispatch(t.body)
-# self._write(" if ")
-# self._dispatch(t.test)
-# if t.orelse:
-# self._write(" else ")
-# self._dispatch(t.orelse)
-#
-# unop = {"Invert":"~", "Not": "not", "UAdd":"+", "USub":"-"}
-# def _UnaryOp(self, t):
-# self._write(self.unop[t.op.__class__.__name__])
-# self._write("(")
-# self._dispatch(t.operand)
-# self._write(")")
-#
-# binop = { "Add":"+", "Sub":"-", "Mult":"*", "Div":"/", "Mod":"%",
-# "LShift":">>", "RShift":"<<", "BitOr":"|", "BitXor":"^", "BitAnd":"&",
-# "FloorDiv":"//", "Pow": "**"}
-# def _BinOp(self, t):
-# self._write("(")
-# self._dispatch(t.left)
-# self._write(")" + self.binop[t.op.__class__.__name__] + "(")
-# self._dispatch(t.right)
-# self._write(")")
-#
-# boolops = {_ast.And: 'and', _ast.Or: 'or'}
-# def _BoolOp(self, t):
-# self._write("(")
-# self._dispatch(t.values[0])
-# for v in t.values[1:]:
-# self._write(" %s " % self.boolops[t.op.__class__])
-# self._dispatch(v)
-# self._write(")")
-#
-# def _Attribute(self,t):
-# self._dispatch(t.value)
-# self._write(".")
-# self._write(t.attr)
-#
-## def _Call(self, t):
-## self._dispatch(t.func)
-## self._write("(")
-## comma = False
-## for e in t.args:
-## if comma: self._write(", ")
-## else: comma = True
-## self._dispatch(e)
-## for e in t.keywords:
-## if comma: self._write(", ")
-## else: comma = True
-## self._dispatch(e)
-## if t.starargs:
-## if comma: self._write(", ")
-## else: comma = True
-## self._write("*")
-## self._dispatch(t.starargs)
-## if t.kwargs:
-## if comma: self._write(", ")
-## else: comma = True
-## self._write("**")
-## self._dispatch(t.kwargs)
-## self._write(")")
-#
-# # slice
-# def _Index(self, t):
-# self._dispatch(t.value)
-#
-# def _ExtSlice(self, t):
-# for i, d in enumerate(t.dims):
-# if i != 0:
-# self._write(': ')
-# self._dispatch(d)
-#
-# # others
-# def _arguments(self, t):
-# first = True
-# nonDef = len(t.args)-len(t.defaults)
-# for a in t.args[0:nonDef]:
-# if first:first = False
-# else: self._write(", ")
-# self._dispatch(a)
-# for a,d in zip(t.args[nonDef:], t.defaults):
-# if first:first = False
-# else: self._write(", ")
-# self._dispatch(a),
-# self._write("=")
-# self._dispatch(d)
-# if t.vararg:
-# if first:first = False
-# else: self._write(", ")
-# self._write("*"+t.vararg)
-# if t.kwarg:
-# if first:first = False
-# else: self._write(", ")
-# self._write("**"+t.kwarg)
-#
-## def _keyword(self, t):
-## self._write(t.arg)
-## self._write("=")
-## self._dispatch(t.value)
-#
-# def _Lambda(self, t):
-# self._write("lambda ")
-# self._dispatch(t.args)
-# self._write(": ")
-# self._dispatch(t.body)
-
-
-
diff --git a/doc/sphinxext/numpydoc/docscrape.py b/doc/sphinxext/numpydoc/docscrape.py
old mode 100755
new mode 100644
index 38cb62581ae76..598b4438ffabc
--- a/doc/sphinxext/numpydoc/docscrape.py
+++ b/doc/sphinxext/numpydoc/docscrape.py
@@ -9,6 +9,17 @@
import pydoc
from warnings import warn
import collections
+import copy
+import sys
+
+
+def strip_blank_lines(l):
+ "Remove leading and trailing blank lines from a list of lines"
+ while l and not l[0].strip():
+ del l[0]
+ while l and not l[-1].strip():
+ del l[-1]
+ return l
class Reader(object):
@@ -23,10 +34,10 @@ def __init__(self, data):
String with lines separated by '\n'.
"""
- if isinstance(data,list):
+ if isinstance(data, list):
self._str = data
else:
- self._str = data.split('\n') # store string as list of lines
+ self._str = data.split('\n') # store string as list of lines
self.reset()
@@ -34,7 +45,7 @@ def __getitem__(self, n):
return self._str[n]
def reset(self):
- self._l = 0 # current line nr
+ self._l = 0 # current line nr
def read(self):
if not self.eof():
@@ -66,8 +77,10 @@ def read_to_condition(self, condition_func):
def read_to_next_empty_line(self):
self.seek_next_non_empty_line()
+
def is_empty(line):
return not line.strip()
+
return self.read_to_condition(is_empty)
def read_to_next_unindented_line(self):
@@ -75,7 +88,7 @@ def is_unindented(line):
return (line.strip() and (len(line.lstrip()) == len(line)))
return self.read_to_condition(is_unindented)
- def peek(self,n=0):
+ def peek(self, n=0):
if self._l + n < len(self._str):
return self[self._l + n]
else:
@@ -85,41 +98,69 @@ def is_empty(self):
return not ''.join(self._str).strip()
-class NumpyDocString(object):
+class ParseError(Exception):
+ def __str__(self):
+ message = self.args[0]
+ if hasattr(self, 'docstring'):
+ message = "%s in %r" % (message, self.docstring)
+ return message
+
+
+class NumpyDocString(collections.Mapping):
+ """Parses a numpydoc string to an abstract representation
+
+ Instances define a mapping from section title to structured data.
+
+ """
+
+ sections = {
+ 'Signature': '',
+ 'Summary': [''],
+ 'Extended Summary': [],
+ 'Parameters': [],
+ 'Returns': [],
+ 'Yields': [],
+ 'Raises': [],
+ 'Warns': [],
+ 'Other Parameters': [],
+ 'Attributes': [],
+ 'Methods': [],
+ 'See Also': [],
+ 'Notes': [],
+ 'Warnings': [],
+ 'References': '',
+ 'Examples': '',
+ 'index': {}
+ }
+
def __init__(self, docstring, config={}):
+ orig_docstring = docstring
docstring = textwrap.dedent(docstring).split('\n')
self._doc = Reader(docstring)
- self._parsed_data = {
- 'Signature': '',
- 'Summary': [''],
- 'Extended Summary': [],
- 'Parameters': [],
- 'Returns': [],
- 'Raises': [],
- 'Warns': [],
- 'Other Parameters': [],
- 'Attributes': [],
- 'Methods': [],
- 'See Also': [],
- 'Notes': [],
- 'Warnings': [],
- 'References': '',
- 'Examples': '',
- 'index': {}
- }
-
- self._parse()
-
- def __getitem__(self,key):
+ self._parsed_data = copy.deepcopy(self.sections)
+
+ try:
+ self._parse()
+ except ParseError as e:
+ e.docstring = orig_docstring
+ raise
+
+ def __getitem__(self, key):
return self._parsed_data[key]
- def __setitem__(self,key,val):
+ def __setitem__(self, key, val):
if key not in self._parsed_data:
- warn("Unknown section %s" % key)
+ self._error_location("Unknown section %s" % key, error=False)
else:
self._parsed_data[key] = val
+ def __iter__(self):
+ return iter(self._parsed_data)
+
+ def __len__(self):
+ return len(self._parsed_data)
+
def _is_at_section(self):
self._doc.seek_next_non_empty_line()
@@ -131,17 +172,19 @@ def _is_at_section(self):
if l1.startswith('.. index::'):
return True
- l2 = self._doc.peek(1).strip() # ---------- or ==========
+ l2 = self._doc.peek(1).strip() # ---------- or ==========
return l2.startswith('-'*len(l1)) or l2.startswith('='*len(l1))
- def _strip(self,doc):
+ def _strip(self, doc):
i = 0
j = 0
- for i,line in enumerate(doc):
- if line.strip(): break
+ for i, line in enumerate(doc):
+ if line.strip():
+ break
- for j,line in enumerate(doc[::-1]):
- if line.strip(): break
+ for j, line in enumerate(doc[::-1]):
+ if line.strip():
+ break
return doc[i:len(doc)-j]
@@ -149,7 +192,7 @@ def _read_to_next_section(self):
section = self._doc.read_to_next_empty_line()
while not self._is_at_section() and not self._doc.eof():
- if not self._doc.peek(-1).strip(): # previous line was empty
+ if not self._doc.peek(-1).strip(): # previous line was empty
section += ['']
section += self._doc.read_to_next_empty_line()
@@ -161,14 +204,14 @@ def _read_sections(self):
data = self._read_to_next_section()
name = data[0].strip()
- if name.startswith('..'): # index section
+ if name.startswith('..'): # index section
yield name, data[1:]
elif len(data) < 2:
yield StopIteration
else:
yield name, self._strip(data[2:])
- def _parse_param_list(self,content):
+ def _parse_param_list(self, content):
r = Reader(content)
params = []
while not r.eof():
@@ -180,14 +223,16 @@ def _parse_param_list(self,content):
desc = r.read_to_next_unindented_line()
desc = dedent_lines(desc)
+ desc = strip_blank_lines(desc)
- params.append((arg_name,arg_type,desc))
+ params.append((arg_name, arg_type, desc))
return params
-
- _name_rgx = re.compile(r"^\s*(:(?P<role>\w+):`(?P<name>[a-zA-Z0-9_.-]+)`|"
+ _name_rgx = re.compile(r"^\s*(:(?P<role>\w+):"
+ r"`(?P<name>(?:~\w+\.)?[a-zA-Z0-9_.-]+)`|"
r" (?P<name2>[a-zA-Z0-9_.-]+))\s*", re.X)
+
def _parse_see_also(self, content):
"""
func_name : Descriptive text
@@ -207,7 +252,7 @@ def parse_item_name(text):
return g[3], None
else:
return g[2], g[1]
- raise ValueError("%s is not a item name" % text)
+ raise ParseError("%s is not an item name" % text)
def push_item(name, rest):
if not name:
@@ -220,7 +265,8 @@ def push_item(name, rest):
rest = []
for line in content:
- if not line.strip(): continue
+ if not line.strip():
+ continue
m = self._name_rgx.match(line)
if m and line[m.end():].strip().startswith(':'):
@@ -270,7 +316,7 @@ def _parse_summary(self):
# If several signatures present, take the last one
while True:
summary = self._doc.read_to_next_empty_line()
- summary_str = " ".join(s.strip() for s in summary).strip()
+ summary_str = " ".join([s.strip() for s in summary]).strip()
if re.compile('^([\w., ]+=)?\s*[\w\.]+\(.*\)$').match(summary_str):
self['Signature'] = summary_str
if not self._is_at_section():
@@ -287,11 +333,27 @@ def _parse(self):
self._doc.reset()
self._parse_summary()
- for (section,content) in self._read_sections():
+ sections = list(self._read_sections())
+ section_names = set([section for section, content in sections])
+
+ has_returns = 'Returns' in section_names
+ has_yields = 'Yields' in section_names
+ # We could do more tests, but we do not. Arbitrarily.
+ if has_returns and has_yields:
+ msg = 'Docstring contains both a Returns and Yields section.'
+ raise ValueError(msg)
+
+ for (section, content) in sections:
if not section.startswith('..'):
- section = ' '.join(s.capitalize() for s in section.split(' '))
- if section in ('Parameters', 'Returns', 'Raises', 'Warns',
- 'Other Parameters', 'Attributes', 'Methods'):
+ section = (s.capitalize() for s in section.split(' '))
+ section = ' '.join(section)
+ if self.get(section):
+ self._error_location("The section %s appears twice"
+ % section)
+
+ if section in ('Parameters', 'Returns', 'Yields', 'Raises',
+ 'Warns', 'Other Parameters', 'Attributes',
+ 'Methods'):
self[section] = self._parse_param_list(content)
elif section.startswith('.. index::'):
self['index'] = self._parse_index(section, content)
@@ -300,6 +362,20 @@ def _parse(self):
else:
self[section] = content
+ def _error_location(self, msg, error=True):
+ if hasattr(self, '_obj'):
+ # we know where the docs came from:
+ try:
+ filename = inspect.getsourcefile(self._obj)
+ except TypeError:
+ filename = None
+ msg = msg + (" in the docstring of %s in %s."
+ % (self._obj, filename))
+ if error:
+ raise ValueError(msg)
+ else:
+ warn(msg)
+
# string conversion routines
def _str_header(self, name, symbol='-'):
@@ -313,7 +389,7 @@ def _str_indent(self, doc, indent=4):
def _str_signature(self):
if self['Signature']:
- return [self['Signature'].replace('*','\*')] + ['']
+ return [self['Signature'].replace('*', '\*')] + ['']
else:
return ['']
@@ -333,12 +409,13 @@ def _str_param_list(self, name):
out = []
if self[name]:
out += self._str_header(name)
- for param,param_type,desc in self[name]:
+ for param, param_type, desc in self[name]:
if param_type:
out += ['%s : %s' % (param, param_type)]
else:
out += [param]
- out += self._str_indent(desc)
+ if desc and ''.join(desc).strip():
+ out += self._str_indent(desc)
out += ['']
return out
@@ -351,7 +428,8 @@ def _str_section(self, name):
return out
def _str_see_also(self, func_role):
- if not self['See Also']: return []
+ if not self['See Also']:
+ return []
out = []
out += self._str_header("See Also")
last_had_desc = True
@@ -378,7 +456,7 @@ def _str_see_also(self, func_role):
def _str_index(self):
idx = self['index']
out = []
- out += ['.. index:: %s' % idx.get('default','')]
+ out += ['.. index:: %s' % idx.get('default', '')]
for section, references in idx.items():
if section == 'default':
continue
@@ -390,12 +468,12 @@ def __str__(self, func_role=''):
out += self._str_signature()
out += self._str_summary()
out += self._str_extended_summary()
- for param_list in ('Parameters', 'Returns', 'Other Parameters',
- 'Raises', 'Warns'):
+ for param_list in ('Parameters', 'Returns', 'Yields',
+ 'Other Parameters', 'Raises', 'Warns'):
out += self._str_param_list(param_list)
out += self._str_section('Warnings')
out += self._str_see_also(func_role)
- for s in ('Notes','References','Examples'):
+ for s in ('Notes', 'References', 'Examples'):
out += self._str_section(s)
for param_list in ('Attributes', 'Methods'):
out += self._str_param_list(param_list)
@@ -403,17 +481,19 @@ def __str__(self, func_role=''):
return '\n'.join(out)
-def indent(str,indent=4):
+def indent(str, indent=4):
indent_str = ' '*indent
if str is None:
return indent_str
lines = str.split('\n')
return '\n'.join(indent_str + l for l in lines)
+
def dedent_lines(lines):
"""Deindent a list of lines maximally"""
return textwrap.dedent("\n".join(lines)).split("\n")
+
def header(text, style='-'):
return text + '\n' + style*len(text) + '\n'
@@ -421,7 +501,7 @@ def header(text, style='-'):
class FunctionDoc(NumpyDocString):
def __init__(self, func, role='func', doc=None, config={}):
self._f = func
- self._role = role # e.g. "func" or "meth"
+ self._role = role # e.g. "func" or "meth"
if doc is None:
if func is None:
@@ -432,12 +512,17 @@ def __init__(self, func, role='func', doc=None, config={}):
if not self['Signature'] and func is not None:
func, func_name = self.get_func()
try:
- # try to read signature
- argspec = inspect.getargspec(func)
- argspec = inspect.formatargspec(*argspec)
- argspec = argspec.replace('*','\*')
- signature = '%s%s' % (func_name, argspec)
- except TypeError as e:
+ try:
+ signature = str(inspect.signature(func))
+ except (AttributeError, ValueError):
+ # try to read signature, backward compat for older Python
+ if sys.version_info[0] >= 3:
+ argspec = inspect.getfullargspec(func)
+ else:
+ argspec = inspect.getargspec(func)
+ signature = inspect.formatargspec(*argspec)
+ signature = '%s%s' % (func_name, signature.replace('*', '\*'))
+ except TypeError:
signature = '%s()' % func_name
self['Signature'] = signature
@@ -461,7 +546,7 @@ def __str__(self):
if self._role:
if self._role not in roles:
print("Warning: invalid role %s" % self._role)
- out += '.. %s:: %s\n \n\n' % (roles.get(self._role,''),
+ out += '.. %s:: %s\n \n\n' % (roles.get(self._role, ''),
func_name)
out += super(FunctionDoc, self).__str__(func_role=self._role)
@@ -478,6 +563,9 @@ def __init__(self, cls, doc=None, modulename='', func_doc=FunctionDoc,
raise ValueError("Expected a class or None, but got %r" % cls)
self._cls = cls
+ self.show_inherited_members = config.get(
+ 'show_inherited_class_members', True)
+
if modulename and not modulename.endswith('.'):
modulename += '.'
self._mod = modulename
@@ -501,27 +589,36 @@ def splitlines_x(s):
if not self[field]:
doc_list = []
for name in sorted(items):
- try:
+ try:
doc_item = pydoc.getdoc(getattr(self._cls, name))
doc_list.append((name, '', splitlines_x(doc_item)))
- except AttributeError:
- pass # method doesn't exist
+ except AttributeError:
+ pass # method doesn't exist
self[field] = doc_list
@property
def methods(self):
if self._cls is None:
return []
- return [name for name,func in inspect.getmembers(self._cls)
+ return [name for name, func in inspect.getmembers(self._cls)
if ((not name.startswith('_')
or name in self.extra_public_methods)
- and isinstance(func, collections.Callable))]
+ and isinstance(func, collections.Callable)
+ and self._is_show_member(name))]
@property
def properties(self):
if self._cls is None:
return []
- return [name for name,func in inspect.getmembers(self._cls)
- if not name.startswith('_') and
- (func is None or isinstance(func, property) or
- inspect.isgetsetdescriptor(func))]
+ return [name for name, func in inspect.getmembers(self._cls)
+ if (not name.startswith('_') and
+ (func is None or isinstance(func, property) or
+ inspect.isdatadescriptor(func))
+ and self._is_show_member(name))]
+
+ def _is_show_member(self, name):
+ if self.show_inherited_members:
+ return True # show all class members
+ if name not in self._cls.__dict__:
+ return False # class member is inherited, we do not show it
+ return True
diff --git a/doc/sphinxext/numpydoc/docscrape_sphinx.py b/doc/sphinxext/numpydoc/docscrape_sphinx.py
old mode 100755
new mode 100644
index 127ed49c106ad..19c355eba1898
--- a/doc/sphinxext/numpydoc/docscrape_sphinx.py
+++ b/doc/sphinxext/numpydoc/docscrape_sphinx.py
@@ -1,8 +1,18 @@
from __future__ import division, absolute_import, print_function
-import sys, re, inspect, textwrap, pydoc
-import sphinx
+import sys
+import re
+import inspect
+import textwrap
+import pydoc
import collections
+import os
+
+from jinja2 import FileSystemLoader
+from jinja2.sandbox import SandboxedEnvironment
+import sphinx
+from sphinx.jinja2glue import BuiltinTemplateLoader
+
from .docscrape import NumpyDocString, FunctionDoc, ClassDoc
if sys.version_info[0] >= 3:
@@ -11,15 +21,25 @@
sixu = lambda s: unicode(s, 'unicode_escape')
+IMPORT_MATPLOTLIB_RE = r'\b(import +matplotlib|from +matplotlib +import)\b'
+
+
class SphinxDocString(NumpyDocString):
def __init__(self, docstring, config={}):
- # Subclasses seemingly do not call this.
NumpyDocString.__init__(self, docstring, config=config)
+ self.load_config(config)
def load_config(self, config):
self.use_plots = config.get('use_plots', False)
+ self.use_blockquotes = config.get('use_blockquotes', False)
self.class_members_toctree = config.get('class_members_toctree', True)
- self.class_members_list = config.get('class_members_list', True)
+ self.attributes_as_param_list = config.get('attributes_as_param_list', True)
+ self.template = config.get('template', None)
+ if self.template is None:
+ template_dirs = [os.path.join(os.path.dirname(__file__), 'templates')]
+ template_loader = FileSystemLoader(template_dirs)
+ template_env = SandboxedEnvironment(loader=template_loader)
+ self.template = template_env.get_template('numpydoc_docstring.rst')
# string conversion routines
def _str_header(self, name, symbol='`'):
@@ -47,38 +67,149 @@ def _str_summary(self):
def _str_extended_summary(self):
return self['Extended Summary'] + ['']
- def _str_returns(self):
+ def _str_returns(self, name='Returns'):
+ if self.use_blockquotes:
+ typed_fmt = '**%s** : %s'
+ untyped_fmt = '**%s**'
+ else:
+ typed_fmt = '%s : %s'
+ untyped_fmt = '%s'
+
out = []
- if self['Returns']:
- out += self._str_field_list('Returns')
+ if self[name]:
+ out += self._str_field_list(name)
out += ['']
- for param, param_type, desc in self['Returns']:
+ for param, param_type, desc in self[name]:
if param_type:
- out += self._str_indent(['**%s** : %s' % (param.strip(),
- param_type)])
+ out += self._str_indent([typed_fmt % (param.strip(),
+ param_type)])
else:
- out += self._str_indent([param.strip()])
- if desc:
+ out += self._str_indent([untyped_fmt % param.strip()])
+ if desc and self.use_blockquotes:
out += ['']
- out += self._str_indent(desc, 8)
+ elif not desc:
+ desc = ['..']
+ out += self._str_indent(desc, 8)
out += ['']
return out
- def _str_param_list(self, name):
+ def _process_param(self, param, desc, fake_autosummary):
+ """Determine how to display a parameter
+
+ Emulates autosummary behavior if fake_autosummary
+
+ Parameters
+ ----------
+ param : str
+ The name of the parameter
+ desc : list of str
+ The parameter description as given in the docstring. This is
+ ignored when autosummary logic applies.
+ fake_autosummary : bool
+ If True, autosummary-style behaviour will apply for params
+ that are attributes of the class and have a docstring.
+
+ Returns
+ -------
+ display_param : str
+ The marked up parameter name for display. This may include a link
+ to the corresponding attribute's own documentation.
+ desc : list of str
+ A list of description lines. This may be identical to the input
+ ``desc``, if ``fake_autosummary`` is False or ``param`` is not a class
+ attribute, or it will be a summary of the class attribute's
+ docstring.
+
+ Notes
+ -----
+ This does not have the autosummary functionality to display a method's
+ signature, and hence is not used to format methods. It may be
+ complicated to incorporate autosummary's signature mangling, as it
+ relies on Sphinx's plugin mechanism.
+ """
+ param = param.strip()
+ display_param = ('**%s**' if self.use_blockquotes else '%s') % param
+
+ if not fake_autosummary:
+ return display_param, desc
+
+ param_obj = getattr(self._obj, param, None)
+ if not (callable(param_obj)
+ or isinstance(param_obj, property)
+ or inspect.isgetsetdescriptor(param_obj)):
+ param_obj = None
+ obj_doc = pydoc.getdoc(param_obj)
+
+ if not (param_obj and obj_doc):
+ return display_param, desc
+
+ prefix = getattr(self, '_name', '')
+ if prefix:
+ autosum_prefix = '~%s.' % prefix
+ link_prefix = '%s.' % prefix
+ else:
+ autosum_prefix = ''
+ link_prefix = ''
+
+ # Referenced object has a docstring
+ display_param = ':obj:`%s <%s%s>`' % (param,
+ link_prefix,
+ param)
+ if obj_doc:
+ # Overwrite desc. Take summary logic of autosummary
+ desc = re.split('\n\s*\n', obj_doc.strip(), 1)[0]
+ # XXX: Should this have DOTALL?
+ # It does not in autosummary
+ m = re.search(r"^([A-Z].*?\.)(?:\s|$)",
+ ' '.join(desc.split()))
+ if m:
+ desc = m.group(1).strip()
+ else:
+ desc = desc.partition('\n')[0]
+ desc = desc.split('\n')
+ return display_param, desc
+
+ def _str_param_list(self, name, fake_autosummary=False):
+ """Generate RST for a listing of parameters or similar
+
+ Parameter names are displayed as bold text, and descriptions
+ are in blockquotes. Descriptions may therefore contain block
+ markup as well.
+
+ Parameters
+ ----------
+ name : str
+ Section name (e.g. Parameters)
+ fake_autosummary : bool
+ When True, the parameter names may correspond to attributes of the
+ object being documented, usually ``property`` instances on a class.
+ In this case, names will be linked to fuller descriptions.
+
+ Returns
+ -------
+ rst : list of str
+ """
out = []
if self[name]:
out += self._str_field_list(name)
out += ['']
for param, param_type, desc in self[name]:
+ display_param, desc = self._process_param(param, desc,
+ fake_autosummary)
+
if param_type:
- out += self._str_indent(['**%s** : %s' % (param.strip(),
- param_type)])
+ out += self._str_indent(['%s : %s' % (display_param,
+ param_type)])
else:
- out += self._str_indent(['**%s**' % param.strip()])
- if desc:
+ out += self._str_indent([display_param])
+ if desc and self.use_blockquotes:
out += ['']
- out += self._str_indent(desc, 8)
+ elif not desc:
+ # empty definition
+ desc = ['..']
+ out += self._str_indent(desc, 8)
out += ['']
+
return out
@property
@@ -96,7 +227,7 @@ def _str_member_list(self, name):
"""
out = []
- if self[name] and self.class_members_list:
+ if self[name]:
out += ['.. rubric:: %s' % name, '']
prefix = getattr(self, '_name', '')
@@ -112,16 +243,14 @@ def _str_member_list(self, name):
param_obj = getattr(self._obj, param, None)
if not (callable(param_obj)
or isinstance(param_obj, property)
- or inspect.isgetsetdescriptor(param_obj)):
+ or inspect.isdatadescriptor(param_obj)):
param_obj = None
- # pandas HACK - do not exclude attributes which are None
- # if param_obj and (pydoc.getdoc(param_obj) or not desc):
- # # Referenced object has a docstring
- # autosum += [" %s%s" % (prefix, param)]
- # else:
- # others.append((param, param_type, desc))
- autosum += [" %s%s" % (prefix, param)]
+ if param_obj and pydoc.getdoc(param_obj):
+ # Referenced object has a docstring
+ autosum += [" %s%s" % (prefix, param)]
+ else:
+ others.append((param, param_type, desc))
if autosum:
out += ['.. autosummary::']
@@ -130,15 +259,15 @@ def _str_member_list(self, name):
out += [''] + autosum
if others:
- maxlen_0 = max(3, max(len(x[0]) for x in others))
- hdr = sixu("=")*maxlen_0 + sixu(" ") + sixu("=")*10
+ maxlen_0 = max(3, max([len(x[0]) + 4 for x in others]))
+ hdr = sixu("=") * maxlen_0 + sixu(" ") + sixu("=") * 10
fmt = sixu('%%%ds %%s ') % (maxlen_0,)
- out += ['', hdr]
+ out += ['', '', hdr]
for param, param_type, desc in others:
desc = sixu(" ").join(x.strip() for x in desc).strip()
if param_type:
desc = "(%s) %s" % (param_type, desc)
- out += [fmt % (param.strip(), desc)]
+ out += [fmt % ("**" + param.strip() + "**", desc)]
out += [hdr]
out += ['']
return out
@@ -147,7 +276,6 @@ def _str_section(self, name):
out = []
if self[name]:
out += self._str_header(name)
- out += ['']
content = textwrap.dedent("\n".join(self[name])).split("\n")
out += content
out += ['']
@@ -166,6 +294,7 @@ def _str_warnings(self):
if self['Warnings']:
out = ['.. warning::', '']
out += self._str_indent(self['Warnings'])
+ out += ['']
return out
def _str_index(self):
@@ -174,7 +303,7 @@ def _str_index(self):
if len(idx) == 0:
return out
- out += ['.. index:: %s' % idx.get('default','')]
+ out += ['.. index:: %s' % idx.get('default', '')]
for section, references in idx.items():
if section == 'default':
continue
@@ -182,6 +311,7 @@ def _str_index(self):
out += [' single: %s' % (', '.join(references))]
else:
out += [' %s: %s' % (section, ','.join(references))]
+ out += ['']
return out
def _str_references(self):
@@ -195,21 +325,21 @@ def _str_references(self):
# Latex collects all references to a separate bibliography,
# so we need to insert links to it
if sphinx.__version__ >= "0.6":
- out += ['.. only:: latex','']
+ out += ['.. only:: latex', '']
else:
- out += ['.. latexonly::','']
+ out += ['.. latexonly::', '']
items = []
for line in self['References']:
m = re.match(r'.. \[([a-z0-9._-]+)\]', line, re.I)
if m:
items.append(m.group(1))
- out += [' ' + ", ".join("[%s]_" % item for item in items), '']
+ out += [' ' + ", ".join(["[%s]_" % item for item in items]), '']
return out
def _str_examples(self):
examples_str = "\n".join(self['Examples'])
- if (self.use_plots and 'import matplotlib' in examples_str
+ if (self.use_plots and re.search(IMPORT_MATPLOTLIB_RE, examples_str)
and 'plot::' not in examples_str):
out = []
out += self._str_header('Examples')
@@ -221,42 +351,54 @@ def _str_examples(self):
return self._str_section('Examples')
def __str__(self, indent=0, func_role="obj"):
- out = []
- out += self._str_signature()
- out += self._str_index() + ['']
- out += self._str_summary()
- out += self._str_extended_summary()
- out += self._str_param_list('Parameters')
- out += self._str_returns()
- for param_list in ('Other Parameters', 'Raises', 'Warns'):
- out += self._str_param_list(param_list)
- out += self._str_warnings()
- out += self._str_see_also(func_role)
- out += self._str_section('Notes')
- out += self._str_references()
- out += self._str_examples()
- for param_list in ('Attributes', 'Methods'):
- out += self._str_member_list(param_list)
- out = self._str_indent(out,indent)
- return '\n'.join(out)
+ ns = {
+ 'signature': self._str_signature(),
+ 'index': self._str_index(),
+ 'summary': self._str_summary(),
+ 'extended_summary': self._str_extended_summary(),
+ 'parameters': self._str_param_list('Parameters'),
+ 'returns': self._str_returns('Returns'),
+ 'yields': self._str_returns('Yields'),
+ 'other_parameters': self._str_param_list('Other Parameters'),
+ 'raises': self._str_param_list('Raises'),
+ 'warns': self._str_param_list('Warns'),
+ 'warnings': self._str_warnings(),
+ 'see_also': self._str_see_also(func_role),
+ 'notes': self._str_section('Notes'),
+ 'references': self._str_references(),
+ 'examples': self._str_examples(),
+ 'attributes':
+ self._str_param_list('Attributes', fake_autosummary=True)
+ if self.attributes_as_param_list
+ else self._str_member_list('Attributes'),
+ 'methods': self._str_member_list('Methods'),
+ }
+ ns = dict((k, '\n'.join(v)) for k, v in ns.items())
+
+ rendered = self.template.render(**ns)
+ return '\n'.join(self._str_indent(rendered.split('\n'), indent))
+
class SphinxFunctionDoc(SphinxDocString, FunctionDoc):
def __init__(self, obj, doc=None, config={}):
self.load_config(config)
FunctionDoc.__init__(self, obj, doc=doc, config=config)
+
class SphinxClassDoc(SphinxDocString, ClassDoc):
def __init__(self, obj, doc=None, func_doc=None, config={}):
self.load_config(config)
ClassDoc.__init__(self, obj, doc=doc, func_doc=None, config=config)
+
class SphinxObjDoc(SphinxDocString):
def __init__(self, obj, doc=None, config={}):
self._f = obj
self.load_config(config)
SphinxDocString.__init__(self, doc, config=config)
-def get_doc_object(obj, what=None, doc=None, config={}):
+
+def get_doc_object(obj, what=None, doc=None, config={}, builder=None):
if what is None:
if inspect.isclass(obj):
what = 'class'
@@ -266,6 +408,16 @@ def get_doc_object(obj, what=None, doc=None, config={}):
what = 'function'
else:
what = 'object'
+
+ template_dirs = [os.path.join(os.path.dirname(__file__), 'templates')]
+ if builder is not None:
+ template_loader = BuiltinTemplateLoader()
+ template_loader.init(builder, dirs=template_dirs)
+ else:
+ template_loader = FileSystemLoader(template_dirs)
+ template_env = SandboxedEnvironment(loader=template_loader)
+ config['template'] = template_env.get_template('numpydoc_docstring.rst')
+
if what == 'class':
return SphinxClassDoc(obj, func_doc=SphinxFunctionDoc, doc=doc,
config=config)
diff --git a/doc/sphinxext/numpydoc/linkcode.py b/doc/sphinxext/numpydoc/linkcode.py
deleted file mode 100644
index 1ad3ab82cb49c..0000000000000
--- a/doc/sphinxext/numpydoc/linkcode.py
+++ /dev/null
@@ -1,83 +0,0 @@
-# -*- coding: utf-8 -*-
-"""
- linkcode
- ~~~~~~~~
-
- Add external links to module code in Python object descriptions.
-
- :copyright: Copyright 2007-2011 by the Sphinx team, see AUTHORS.
- :license: BSD, see LICENSE for details.
-
-"""
-from __future__ import division, absolute_import, print_function
-
-import warnings
-import collections
-
-warnings.warn("This extension has been accepted to Sphinx upstream. "
- "Use the version from there (Sphinx >= 1.2) "
- "https://bitbucket.org/birkenfeld/sphinx/pull-request/47/sphinxextlinkcode",
- FutureWarning, stacklevel=1)
-
-
-from docutils import nodes
-
-from sphinx import addnodes
-from sphinx.locale import _
-from sphinx.errors import SphinxError
-
-class LinkcodeError(SphinxError):
- category = "linkcode error"
-
-def doctree_read(app, doctree):
- env = app.builder.env
-
- resolve_target = getattr(env.config, 'linkcode_resolve', None)
- if not isinstance(env.config.linkcode_resolve, collections.Callable):
- raise LinkcodeError(
- "Function `linkcode_resolve` is not given in conf.py")
-
- domain_keys = dict(
- py=['module', 'fullname'],
- c=['names'],
- cpp=['names'],
- js=['object', 'fullname'],
- )
-
- for objnode in doctree.traverse(addnodes.desc):
- domain = objnode.get('domain')
- uris = set()
- for signode in objnode:
- if not isinstance(signode, addnodes.desc_signature):
- continue
-
- # Convert signode to a specified format
- info = {}
- for key in domain_keys.get(domain, []):
- value = signode.get(key)
- if not value:
- value = ''
- info[key] = value
- if not info:
- continue
-
- # Call user code to resolve the link
- uri = resolve_target(domain, info)
- if not uri:
- # no source
- continue
-
- if uri in uris or not uri:
- # only one link per name, please
- continue
- uris.add(uri)
-
- onlynode = addnodes.only(expr='html')
- onlynode += nodes.reference('', '', internal=False, refuri=uri)
- onlynode[0] += nodes.inline('', _('[source]'),
- classes=['viewcode-link'])
- signode += onlynode
-
-def setup(app):
- app.connect('doctree-read', doctree_read)
- app.add_config_value('linkcode_resolve', None, '')
diff --git a/doc/sphinxext/numpydoc/numpydoc.py b/doc/sphinxext/numpydoc/numpydoc.py
old mode 100755
new mode 100644
index 4861aa90edce1..dc20b3f828eb2
--- a/doc/sphinxext/numpydoc/numpydoc.py
+++ b/doc/sphinxext/numpydoc/numpydoc.py
@@ -10,14 +10,17 @@
- Convert Parameters etc. sections to field lists.
- Convert See Also section to a See also entry.
- Renumber references.
-- Extract the signature from the docstring, if it can't be determined otherwise.
+- Extract the signature from the docstring, if it can't be determined
+ otherwise.
.. [1] https://github.com/numpy/numpy/blob/master/doc/HOWTO_DOCUMENT.rst.txt
"""
from __future__ import division, absolute_import, print_function
-import os, sys, re, pydoc
+import sys
+import re
+import pydoc
import sphinx
import inspect
import collections
@@ -26,6 +29,7 @@
raise RuntimeError("Sphinx 1.0.1 or newer is required")
from .docscrape_sphinx import get_doc_object, SphinxDocString
+from . import __version__
if sys.version_info[0] >= 3:
sixu = lambda s: s
@@ -33,29 +37,66 @@
sixu = lambda s: unicode(s, 'unicode_escape')
-def mangle_docstrings(app, what, name, obj, options, lines,
+def rename_references(app, what, name, obj, options, lines,
reference_offset=[0]):
+ # replace reference numbers so that there are no duplicates
+ references = set()
+ for line in lines:
+ line = line.strip()
+ m = re.match(sixu('^.. \\[(%s)\\]') % app.config.numpydoc_citation_re,
+ line, re.I)
+ if m:
+ references.add(m.group(1))
- cfg = dict(use_plots=app.config.numpydoc_use_plots,
- show_class_members=app.config.numpydoc_show_class_members,
- class_members_toctree=app.config.numpydoc_class_members_toctree,
- )
+ if references:
+ for r in references:
+ if r.isdigit():
+ new_r = sixu("R%d") % (reference_offset[0] + int(r))
+ else:
+ new_r = sixu("%s%d") % (r, reference_offset[0])
+ for i, line in enumerate(lines):
+ lines[i] = lines[i].replace(sixu('[%s]_') % r,
+ sixu('[%s]_') % new_r)
+ lines[i] = lines[i].replace(sixu('.. [%s]') % r,
+ sixu('.. [%s]') % new_r)
+
+ reference_offset[0] += len(references)
+
+
+DEDUPLICATION_TAG = ' !! processed by numpydoc !!'
+
+
+def mangle_docstrings(app, what, name, obj, options, lines):
+ if DEDUPLICATION_TAG in lines:
+ return
+
+ cfg = {'use_plots': app.config.numpydoc_use_plots,
+ 'use_blockquotes': app.config.numpydoc_use_blockquotes,
+ 'show_class_members': app.config.numpydoc_show_class_members,
+ 'show_inherited_class_members':
+ app.config.numpydoc_show_inherited_class_members,
+ 'class_members_toctree': app.config.numpydoc_class_members_toctree,
+ 'attributes_as_param_list':
+ app.config.numpydoc_attributes_as_param_list}
+
+ u_NL = sixu('\n')
if what == 'module':
# Strip top title
- title_re = re.compile(sixu('^\\s*[#*=]{4,}\\n[a-z0-9 -]+\\n[#*=]{4,}\\s*'),
- re.I|re.S)
- lines[:] = title_re.sub(sixu(''), sixu("\n").join(lines)).split(sixu("\n"))
+ pattern = '^\\s*[#*=]{4,}\\n[a-z0-9 -]+\\n[#*=]{4,}\\s*'
+ title_re = re.compile(sixu(pattern), re.I | re.S)
+ lines[:] = title_re.sub(sixu(''), u_NL.join(lines)).split(u_NL)
else:
- doc = get_doc_object(obj, what, sixu("\n").join(lines), config=cfg)
+ doc = get_doc_object(obj, what, u_NL.join(lines), config=cfg,
+ builder=app.builder)
if sys.version_info[0] >= 3:
doc = str(doc)
else:
doc = unicode(doc)
- lines[:] = doc.split(sixu("\n"))
+ lines[:] = doc.split(u_NL)
- if app.config.numpydoc_edit_link and hasattr(obj, '__name__') and \
- obj.__name__:
+ if (app.config.numpydoc_edit_link and hasattr(obj, '__name__') and
+ obj.__name__):
if hasattr(obj, '__module__'):
v = dict(full_name=sixu("%s.%s") % (obj.__module__, obj.__name__))
else:
@@ -64,48 +105,36 @@ def mangle_docstrings(app, what, name, obj, options, lines,
lines += [sixu(' %s') % x for x in
(app.config.numpydoc_edit_link % v).split("\n")]
- # replace reference numbers so that there are no duplicates
- references = []
- for line in lines:
- line = line.strip()
- m = re.match(sixu('^.. \\[([a-z0-9_.-])\\]'), line, re.I)
- if m:
- references.append(m.group(1))
+ # call function to replace reference numbers so that there are no
+ # duplicates
+ rename_references(app, what, name, obj, options, lines)
- # start renaming from the longest string, to avoid overwriting parts
- references.sort(key=lambda x: -len(x))
- if references:
- for i, line in enumerate(lines):
- for r in references:
- if re.match(sixu('^\\d+$'), r):
- new_r = sixu("R%d") % (reference_offset[0] + int(r))
- else:
- new_r = sixu("%s%d") % (r, reference_offset[0])
- lines[i] = lines[i].replace(sixu('[%s]_') % r,
- sixu('[%s]_') % new_r)
- lines[i] = lines[i].replace(sixu('.. [%s]') % r,
- sixu('.. [%s]') % new_r)
+ lines += ['..', DEDUPLICATION_TAG]
- reference_offset[0] += len(references)
def mangle_signature(app, what, name, obj, options, sig, retann):
# Do not try to inspect classes that don't define `__init__`
if (inspect.isclass(obj) and
(not hasattr(obj, '__init__') or
- 'initializes x; see ' in pydoc.getdoc(obj.__init__))):
+ 'initializes x; see ' in pydoc.getdoc(obj.__init__))):
return '', ''
- if not (isinstance(obj, collections.Callable) or hasattr(obj, '__argspec_is_invalid_')): return
- if not hasattr(obj, '__doc__'): return
+ if not (isinstance(obj, collections.Callable) or
+ hasattr(obj, '__argspec_is_invalid_')):
+ return
+ if not hasattr(obj, '__doc__'):
+ return
doc = SphinxDocString(pydoc.getdoc(obj))
- if doc['Signature']:
- sig = re.sub(sixu("^[^(]*"), sixu(""), doc['Signature'])
+ sig = doc['Signature'] or getattr(obj, '__text_signature__', None)
+ if sig:
+ sig = re.sub(sixu("^[^(]*"), sixu(""), sig)
return sig, sixu('')
+
def setup(app, get_doc_object_=get_doc_object):
if not hasattr(app, 'add_config_value'):
- return # probably called by nose, better bail out
+ return # probably called by nose, better bail out
global get_doc_object
get_doc_object = get_doc_object_
@@ -114,21 +143,32 @@ def setup(app, get_doc_object_=get_doc_object):
app.connect('autodoc-process-signature', mangle_signature)
app.add_config_value('numpydoc_edit_link', None, False)
app.add_config_value('numpydoc_use_plots', None, False)
+ app.add_config_value('numpydoc_use_blockquotes', None, False)
app.add_config_value('numpydoc_show_class_members', True, True)
+ app.add_config_value('numpydoc_show_inherited_class_members', True, True)
app.add_config_value('numpydoc_class_members_toctree', True, True)
+ app.add_config_value('numpydoc_citation_re', '[a-z0-9_.-]+', True)
+ app.add_config_value('numpydoc_attributes_as_param_list', True, True)
# Extra mangling domains
app.add_domain(NumpyPythonDomain)
app.add_domain(NumpyCDomain)
-#------------------------------------------------------------------------------
+ app.setup_extension('sphinx.ext.autosummary')
+
+ metadata = {'version': __version__,
+ 'parallel_read_safe': True}
+ return metadata
+
+# ------------------------------------------------------------------------------
# Docstring-mangling domains
-#------------------------------------------------------------------------------
+# ------------------------------------------------------------------------------
from docutils.statemachine import ViewList
from sphinx.domains.c import CDomain
from sphinx.domains.python import PythonDomain
+
class ManglingDomainBase(object):
directive_mangling_map = {}
@@ -141,6 +181,7 @@ def wrap_mangling_directives(self):
self.directives[name] = wrap_mangling_directive(
self.directives[name], objtype)
+
class NumpyPythonDomain(ManglingDomainBase, PythonDomain):
name = 'np'
directive_mangling_map = {
@@ -154,6 +195,7 @@ class NumpyPythonDomain(ManglingDomainBase, PythonDomain):
}
indices = []
+
class NumpyCDomain(ManglingDomainBase, CDomain):
name = 'np-c'
directive_mangling_map = {
@@ -164,6 +206,63 @@ class NumpyCDomain(ManglingDomainBase, CDomain):
'var': 'object',
}
+
+def match_items(lines, content_old):
+ """Create items for mangled lines.
+
+ This function tries to match the lines in ``lines`` with the items (source
+ file references and line numbers) in ``content_old``. The
+ ``mangle_docstrings`` function changes the actual docstrings, but doesn't
+ keep track of where each line came from. The mangling does many operations
+ on the original lines, which are hard to track afterwards.
+
+ Many of the line changes come from deleting or inserting blank lines. This
+ function tries to match lines by ignoring blank lines. All other changes
+ (such as inserting figures or changes in the references) are completely
+ ignored, so the generated line numbers will be off if ``mangle_docstrings``
+ does anything non-trivial.
+
+ This is a best-effort function and the real fix would be to make
+ ``mangle_docstrings`` actually keep track of the ``items`` together with
+ the ``lines``.
+
+ Examples
+ --------
+ >>> lines = ['', 'A', '', 'B', ' ', '', 'C', 'D']
+ >>> lines_old = ['a', '', '', 'b', '', 'c']
+ >>> items_old = [('file1.py', 0), ('file1.py', 1), ('file1.py', 2),
+ ... ('file2.py', 0), ('file2.py', 1), ('file2.py', 2)]
+ >>> content_old = ViewList(lines_old, items=items_old)
+ >>> match_items(lines, content_old) # doctest: +NORMALIZE_WHITESPACE
+ [('file1.py', 0), ('file1.py', 0), ('file2.py', 0), ('file2.py', 0),
+ ('file2.py', 2), ('file2.py', 2), ('file2.py', 2), ('file2.py', 2)]
+ >>> # first 2 ``lines`` are matched to 'a', second 2 to 'b', rest to 'c'
+ >>> # actual content is completely ignored.
+
+ Notes
+ -----
+ The algorithm tries to match any line in ``lines`` with one in
+ ``lines_old``. It skips over all empty lines in ``lines_old`` and assigns
+ this line number to all lines in ``lines``, unless a non-empty line is
+ found in ``lines`` in which case it goes to the next line in ``lines_old``.
+
+ """
+ items_new = []
+ lines_old = content_old.data
+ items_old = content_old.items
+ j = 0
+ for i, line in enumerate(lines):
+ # go to next non-empty line in old:
+ # line.strip() checks whether the string is all whitespace
+ while j < len(lines_old) - 1 and not lines_old[j].strip():
+ j += 1
+ items_new.append(items_old[j])
+ if line.strip() and j < len(lines_old) - 1:
+ j += 1
+ assert(len(items_new) == len(lines))
+ return items_new
+
+
def wrap_mangling_directive(base_directive, objtype):
class directive(base_directive):
def run(self):
@@ -179,7 +278,10 @@ def run(self):
lines = list(self.content)
mangle_docstrings(env.app, objtype, name, None, None, lines)
- self.content = ViewList(lines, self.content.parent)
+ if self.content:
+ items = match_items(lines, self.content)
+ self.content = ViewList(lines, items=items,
+ parent=self.content.parent)
return base_directive.run(self)
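The best-effort line matching added in this hunk can be sketched standalone. This is a simplified rendering of the `match_items` logic above (the original takes a docutils `ViewList`; here the old lines and items are passed as plain lists for illustration):

```python
def match_items(lines, lines_old, items_old):
    """Assign each mangled line the (source, lineno) item of the nearest
    non-blank old line, skipping over blank lines in the old content."""
    items_new = []
    j = 0
    for line in lines:
        # advance past blank lines in the old content
        while j < len(lines_old) - 1 and not lines_old[j].strip():
            j += 1
        items_new.append(items_old[j])
        # only consume an old line when the new line has content
        if line.strip() and j < len(lines_old) - 1:
            j += 1
    assert len(items_new) == len(lines)
    return items_new

lines = ['', 'A', '', 'B', ' ', '', 'C', 'D']
lines_old = ['a', '', '', 'b', '', 'c']
items_old = [('file1.py', 0), ('file1.py', 1), ('file1.py', 2),
             ('file2.py', 0), ('file2.py', 1), ('file2.py', 2)]
print(match_items(lines, lines_old, items_old))
# [('file1.py', 0), ('file1.py', 0), ('file2.py', 0), ('file2.py', 0),
#  ('file2.py', 2), ('file2.py', 2), ('file2.py', 2), ('file2.py', 2)]
```

As the docstring warns, content is ignored entirely: lines are matched purely by position and blankness, so the result is only an approximation when the mangling does anything non-trivial.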
diff --git a/doc/sphinxext/numpydoc/phantom_import.py b/doc/sphinxext/numpydoc/phantom_import.py
deleted file mode 100755
index f33dd838e8bb3..0000000000000
--- a/doc/sphinxext/numpydoc/phantom_import.py
+++ /dev/null
@@ -1,167 +0,0 @@
-"""
-==============
-phantom_import
-==============
-
-Sphinx extension to make directives from ``sphinx.ext.autodoc`` and similar
-extensions to use docstrings loaded from an XML file.
-
-This extension loads an XML file in the Pydocweb format [1] and
-creates a dummy module that contains the specified docstrings. This
-can be used to get the current docstrings from a Pydocweb instance
-without needing to rebuild the documented module.
-
-.. [1] https://github.com/pv/pydocweb
-
-"""
-from __future__ import division, absolute_import, print_function
-
-import imp, sys, compiler, types, os, inspect, re
-
-def setup(app):
- app.connect('builder-inited', initialize)
- app.add_config_value('phantom_import_file', None, True)
-
-def initialize(app):
- fn = app.config.phantom_import_file
- if (fn and os.path.isfile(fn)):
- print("[numpydoc] Phantom importing modules from", fn, "...")
- import_phantom_module(fn)
-
-#------------------------------------------------------------------------------
-# Creating 'phantom' modules from an XML description
-#------------------------------------------------------------------------------
-def import_phantom_module(xml_file):
- """
- Insert a fake Python module to sys.modules, based on a XML file.
-
- The XML file is expected to conform to Pydocweb DTD. The fake
- module will contain dummy objects, which guarantee the following:
-
- - Docstrings are correct.
- - Class inheritance relationships are correct (if present in XML).
- - Function argspec is *NOT* correct (even if present in XML).
- Instead, the function signature is prepended to the function docstring.
- - Class attributes are *NOT* correct; instead, they are dummy objects.
-
- Parameters
- ----------
- xml_file : str
- Name of an XML file to read
-
- """
- import lxml.etree as etree
-
- object_cache = {}
-
- tree = etree.parse(xml_file)
- root = tree.getroot()
-
- # Sort items so that
- # - Base classes come before classes inherited from them
- # - Modules come before their contents
- all_nodes = {n.attrib['id']: n for n in root}
-
- def _get_bases(node, recurse=False):
- bases = [x.attrib['ref'] for x in node.findall('base')]
- if recurse:
- j = 0
- while True:
- try:
- b = bases[j]
- except IndexError: break
- if b in all_nodes:
- bases.extend(_get_bases(all_nodes[b]))
- j += 1
- return bases
-
- type_index = ['module', 'class', 'callable', 'object']
-
- def base_cmp(a, b):
- x = cmp(type_index.index(a.tag), type_index.index(b.tag))
- if x != 0: return x
-
- if a.tag == 'class' and b.tag == 'class':
- a_bases = _get_bases(a, recurse=True)
- b_bases = _get_bases(b, recurse=True)
- x = cmp(len(a_bases), len(b_bases))
- if x != 0: return x
- if a.attrib['id'] in b_bases: return -1
- if b.attrib['id'] in a_bases: return 1
-
- return cmp(a.attrib['id'].count('.'), b.attrib['id'].count('.'))
-
- nodes = root.getchildren()
- nodes.sort(base_cmp)
-
- # Create phantom items
- for node in nodes:
- name = node.attrib['id']
- doc = (node.text or '').decode('string-escape') + "\n"
- if doc == "\n": doc = ""
-
- # create parent, if missing
- parent = name
- while True:
- parent = '.'.join(parent.split('.')[:-1])
- if not parent: break
- if parent in object_cache: break
- obj = imp.new_module(parent)
- object_cache[parent] = obj
- sys.modules[parent] = obj
-
- # create object
- if node.tag == 'module':
- obj = imp.new_module(name)
- obj.__doc__ = doc
- sys.modules[name] = obj
- elif node.tag == 'class':
- bases = [object_cache[b] for b in _get_bases(node)
- if b in object_cache]
- bases.append(object)
- init = lambda self: None
- init.__doc__ = doc
- obj = type(name, tuple(bases), {'__doc__': doc, '__init__': init})
- obj.__name__ = name.split('.')[-1]
- elif node.tag == 'callable':
- funcname = node.attrib['id'].split('.')[-1]
- argspec = node.attrib.get('argspec')
- if argspec:
- argspec = re.sub('^[^(]*', '', argspec)
- doc = "%s%s\n\n%s" % (funcname, argspec, doc)
- obj = lambda: 0
- obj.__argspec_is_invalid_ = True
- if sys.version_info[0] >= 3:
- obj.__name__ = funcname
- else:
- obj.func_name = funcname
- obj.__name__ = name
- obj.__doc__ = doc
- if inspect.isclass(object_cache[parent]):
- obj.__objclass__ = object_cache[parent]
- else:
- class Dummy(object): pass
- obj = Dummy()
- obj.__name__ = name
- obj.__doc__ = doc
- if inspect.isclass(object_cache[parent]):
- obj.__get__ = lambda: None
- object_cache[name] = obj
-
- if parent:
- if inspect.ismodule(object_cache[parent]):
- obj.__module__ = parent
- setattr(object_cache[parent], name.split('.')[-1], obj)
-
- # Populate items
- for node in root:
- obj = object_cache.get(node.attrib['id'])
- if obj is None: continue
- for ref in node.findall('ref'):
- if node.tag == 'class':
- if ref.attrib['ref'].startswith(node.attrib['id'] + '.'):
- setattr(obj, ref.attrib['name'],
- object_cache.get(ref.attrib['ref']))
- else:
- setattr(obj, ref.attrib['name'],
- object_cache.get(ref.attrib['ref']))
diff --git a/doc/sphinxext/numpydoc/plot_directive.py b/doc/sphinxext/numpydoc/plot_directive.py
deleted file mode 100755
index 2014f857076c1..0000000000000
--- a/doc/sphinxext/numpydoc/plot_directive.py
+++ /dev/null
@@ -1,642 +0,0 @@
-"""
-A special directive for generating a matplotlib plot.
-
-.. warning::
-
- This is a hacked version of plot_directive.py from Matplotlib.
- It's very much subject to change!
-
-
-Usage
------
-
-Can be used like this::
-
- .. plot:: examples/example.py
-
- .. plot::
-
- import matplotlib.pyplot as plt
- plt.plot([1,2,3], [4,5,6])
-
- .. plot::
-
- A plotting example:
-
- >>> import matplotlib.pyplot as plt
- >>> plt.plot([1,2,3], [4,5,6])
-
-The content is interpreted as doctest formatted if it has a line starting
-with ``>>>``.
-
-The ``plot`` directive supports the options
-
- format : {'python', 'doctest'}
- Specify the format of the input
-
- include-source : bool
- Whether to display the source code. Default can be changed in conf.py
-
-and the ``image`` directive options ``alt``, ``height``, ``width``,
-``scale``, ``align``, ``class``.
-
-Configuration options
----------------------
-
-The plot directive has the following configuration options:
-
- plot_include_source
- Default value for the include-source option
-
- plot_pre_code
- Code that should be executed before each plot.
-
- plot_basedir
- Base directory, to which plot:: file names are relative to.
- (If None or empty, file names are relative to the directoly where
- the file containing the directive is.)
-
- plot_formats
- File formats to generate. List of tuples or strings::
-
- [(suffix, dpi), suffix, ...]
-
- that determine the file format and the DPI. For entries whose
- DPI was omitted, sensible defaults are chosen.
-
- plot_html_show_formats
- Whether to show links to the files in HTML.
-
-TODO
-----
-
-* Refactor Latex output; now it's plain images, but it would be nice
- to make them appear side-by-side, or in floats.
-
-"""
-from __future__ import division, absolute_import, print_function
-
-import sys, os, glob, shutil, imp, warnings, re, textwrap, traceback
-import sphinx
-
-if sys.version_info[0] >= 3:
- from io import StringIO
-else:
- from io import StringIO
-
-import warnings
-warnings.warn("A plot_directive module is also available under "
- "matplotlib.sphinxext; expect this numpydoc.plot_directive "
- "module to be deprecated after relevant features have been "
- "integrated there.",
- FutureWarning, stacklevel=2)
-
-
-#------------------------------------------------------------------------------
-# Registration hook
-#------------------------------------------------------------------------------
-
-def setup(app):
- setup.app = app
- setup.config = app.config
- setup.confdir = app.confdir
-
- app.add_config_value('plot_pre_code', '', True)
- app.add_config_value('plot_include_source', False, True)
- app.add_config_value('plot_formats', ['png', 'hires.png', 'pdf'], True)
- app.add_config_value('plot_basedir', None, True)
- app.add_config_value('plot_html_show_formats', True, True)
-
- app.add_directive('plot', plot_directive, True, (0, 1, False),
- **plot_directive_options)
-
-#------------------------------------------------------------------------------
-# plot:: directive
-#------------------------------------------------------------------------------
-from docutils.parsers.rst import directives
-from docutils import nodes
-
-def plot_directive(name, arguments, options, content, lineno,
- content_offset, block_text, state, state_machine):
- return run(arguments, content, options, state_machine, state, lineno)
-plot_directive.__doc__ = __doc__
-
-def _option_boolean(arg):
- if not arg or not arg.strip():
- # no argument given, assume used as a flag
- return True
- elif arg.strip().lower() in ('no', '0', 'false'):
- return False
- elif arg.strip().lower() in ('yes', '1', 'true'):
- return True
- else:
- raise ValueError('"%s" unknown boolean' % arg)
-
-def _option_format(arg):
- return directives.choice(arg, ('python', 'lisp'))
-
-def _option_align(arg):
- return directives.choice(arg, ("top", "middle", "bottom", "left", "center",
- "right"))
-
-plot_directive_options = {'alt': directives.unchanged,
- 'height': directives.length_or_unitless,
- 'width': directives.length_or_percentage_or_unitless,
- 'scale': directives.nonnegative_int,
- 'align': _option_align,
- 'class': directives.class_option,
- 'include-source': _option_boolean,
- 'format': _option_format,
- }
-
-#------------------------------------------------------------------------------
-# Generating output
-#------------------------------------------------------------------------------
-
-from docutils import nodes, utils
-
-try:
- # Sphinx depends on either Jinja or Jinja2
- import jinja2
- def format_template(template, **kw):
- return jinja2.Template(template).render(**kw)
-except ImportError:
- import jinja
- def format_template(template, **kw):
- return jinja.from_string(template, **kw)
-
-TEMPLATE = """
-{{ source_code }}
-
-{{ only_html }}
-
- {% if source_link or (html_show_formats and not multi_image) %}
- (
- {%- if source_link -%}
- `Source code <{{ source_link }}>`__
- {%- endif -%}
- {%- if html_show_formats and not multi_image -%}
- {%- for img in images -%}
- {%- for fmt in img.formats -%}
- {%- if source_link or not loop.first -%}, {% endif -%}
- `{{ fmt }} <{{ dest_dir }}/{{ img.basename }}.{{ fmt }}>`__
- {%- endfor -%}
- {%- endfor -%}
- {%- endif -%}
- )
- {% endif %}
-
- {% for img in images %}
- .. figure:: {{ build_dir }}/{{ img.basename }}.png
- {%- for option in options %}
- {{ option }}
- {% endfor %}
-
- {% if html_show_formats and multi_image -%}
- (
- {%- for fmt in img.formats -%}
- {%- if not loop.first -%}, {% endif -%}
- `{{ fmt }} <{{ dest_dir }}/{{ img.basename }}.{{ fmt }}>`__
- {%- endfor -%}
- )
- {%- endif -%}
- {% endfor %}
-
-{{ only_latex }}
-
- {% for img in images %}
- .. image:: {{ build_dir }}/{{ img.basename }}.pdf
- {% endfor %}
-
-"""
-
-class ImageFile(object):
- def __init__(self, basename, dirname):
- self.basename = basename
- self.dirname = dirname
- self.formats = []
-
- def filename(self, format):
- return os.path.join(self.dirname, "%s.%s" % (self.basename, format))
-
- def filenames(self):
- return [self.filename(fmt) for fmt in self.formats]
-
-def run(arguments, content, options, state_machine, state, lineno):
- if arguments and content:
- raise RuntimeError("plot:: directive can't have both args and content")
-
- document = state_machine.document
- config = document.settings.env.config
-
- options.setdefault('include-source', config.plot_include_source)
-
- # determine input
- rst_file = document.attributes['source']
- rst_dir = os.path.dirname(rst_file)
-
- if arguments:
- if not config.plot_basedir:
- source_file_name = os.path.join(rst_dir,
- directives.uri(arguments[0]))
- else:
- source_file_name = os.path.join(setup.confdir, config.plot_basedir,
- directives.uri(arguments[0]))
- code = open(source_file_name, 'r').read()
- output_base = os.path.basename(source_file_name)
- else:
- source_file_name = rst_file
- code = textwrap.dedent("\n".join(map(str, content)))
- counter = document.attributes.get('_plot_counter', 0) + 1
- document.attributes['_plot_counter'] = counter
- base, ext = os.path.splitext(os.path.basename(source_file_name))
- output_base = '%s-%d.py' % (base, counter)
-
- base, source_ext = os.path.splitext(output_base)
- if source_ext in ('.py', '.rst', '.txt'):
- output_base = base
- else:
- source_ext = ''
-
- # ensure that LaTeX includegraphics doesn't choke in foo.bar.pdf filenames
- output_base = output_base.replace('.', '-')
-
- # is it in doctest format?
- is_doctest = contains_doctest(code)
- if 'format' in options:
- if options['format'] == 'python':
- is_doctest = False
- else:
- is_doctest = True
-
- # determine output directory name fragment
- source_rel_name = relpath(source_file_name, setup.confdir)
- source_rel_dir = os.path.dirname(source_rel_name)
- while source_rel_dir.startswith(os.path.sep):
- source_rel_dir = source_rel_dir[1:]
-
- # build_dir: where to place output files (temporarily)
- build_dir = os.path.join(os.path.dirname(setup.app.doctreedir),
- 'plot_directive',
- source_rel_dir)
- if not os.path.exists(build_dir):
- os.makedirs(build_dir)
-
- # output_dir: final location in the builder's directory
- dest_dir = os.path.abspath(os.path.join(setup.app.builder.outdir,
- source_rel_dir))
-
- # how to link to files from the RST file
- dest_dir_link = os.path.join(relpath(setup.confdir, rst_dir),
- source_rel_dir).replace(os.path.sep, '/')
- build_dir_link = relpath(build_dir, rst_dir).replace(os.path.sep, '/')
- source_link = dest_dir_link + '/' + output_base + source_ext
-
- # make figures
- try:
- results = makefig(code, source_file_name, build_dir, output_base,
- config)
- errors = []
- except PlotError as err:
- reporter = state.memo.reporter
- sm = reporter.system_message(
- 2, "Exception occurred in plotting %s: %s" % (output_base, err),
- line=lineno)
- results = [(code, [])]
- errors = [sm]
-
- # generate output restructuredtext
- total_lines = []
- for j, (code_piece, images) in enumerate(results):
- if options['include-source']:
- if is_doctest:
- lines = ['']
- lines += [row.rstrip() for row in code_piece.split('\n')]
- else:
- lines = ['.. code-block:: python', '']
- lines += [' %s' % row.rstrip()
- for row in code_piece.split('\n')]
- source_code = "\n".join(lines)
- else:
- source_code = ""
-
- opts = [':%s: %s' % (key, val) for key, val in list(options.items())
- if key in ('alt', 'height', 'width', 'scale', 'align', 'class')]
-
- only_html = ".. only:: html"
- only_latex = ".. only:: latex"
-
- if j == 0:
- src_link = source_link
- else:
- src_link = None
-
- result = format_template(
- TEMPLATE,
- dest_dir=dest_dir_link,
- build_dir=build_dir_link,
- source_link=src_link,
- multi_image=len(images) > 1,
- only_html=only_html,
- only_latex=only_latex,
- options=opts,
- images=images,
- source_code=source_code,
- html_show_formats=config.plot_html_show_formats)
-
- total_lines.extend(result.split("\n"))
- total_lines.extend("\n")
-
- if total_lines:
- state_machine.insert_input(total_lines, source=source_file_name)
-
- # copy image files to builder's output directory
- if not os.path.exists(dest_dir):
- os.makedirs(dest_dir)
-
- for code_piece, images in results:
- for img in images:
- for fn in img.filenames():
- shutil.copyfile(fn, os.path.join(dest_dir,
- os.path.basename(fn)))
-
- # copy script (if necessary)
- if source_file_name == rst_file:
- target_name = os.path.join(dest_dir, output_base + source_ext)
- f = open(target_name, 'w')
- f.write(unescape_doctest(code))
- f.close()
-
- return errors
-
-
-#------------------------------------------------------------------------------
-# Run code and capture figures
-#------------------------------------------------------------------------------
-
-import matplotlib
-matplotlib.use('Agg')
-import matplotlib.pyplot as plt
-import matplotlib.image as image
-from matplotlib import _pylab_helpers
-
-import exceptions
-
-def contains_doctest(text):
- try:
- # check if it's valid Python as-is
- compile(text, '<string>', 'exec')
- return False
- except SyntaxError:
- pass
- r = re.compile(r'^\s*>>>', re.M)
- m = r.search(text)
- return bool(m)
-
-def unescape_doctest(text):
- """
- Extract code from a piece of text, which contains either Python code
- or doctests.
-
- """
- if not contains_doctest(text):
- return text
-
- code = ""
- for line in text.split("\n"):
- m = re.match(r'^\s*(>>>|\.\.\.) (.*)$', line)
- if m:
- code += m.group(2) + "\n"
- elif line.strip():
- code += "# " + line.strip() + "\n"
- else:
- code += "\n"
- return code
-
-def split_code_at_show(text):
- """
- Split code at plt.show()
-
- """
-
- parts = []
- is_doctest = contains_doctest(text)
-
- part = []
- for line in text.split("\n"):
- if (not is_doctest and line.strip() == 'plt.show()') or \
- (is_doctest and line.strip() == '>>> plt.show()'):
- part.append(line)
- parts.append("\n".join(part))
- part = []
- else:
- part.append(line)
- if "\n".join(part).strip():
- parts.append("\n".join(part))
- return parts
-
-class PlotError(RuntimeError):
- pass
-
-def run_code(code, code_path, ns=None):
- # Change the working directory to the directory of the example, so
- # it can get at its data files, if any.
- pwd = os.getcwd()
- old_sys_path = list(sys.path)
- if code_path is not None:
- dirname = os.path.abspath(os.path.dirname(code_path))
- os.chdir(dirname)
- sys.path.insert(0, dirname)
-
- # Redirect stdout
- stdout = sys.stdout
- sys.stdout = StringIO()
-
- # Reset sys.argv
- old_sys_argv = sys.argv
- sys.argv = [code_path]
-
- try:
- try:
- code = unescape_doctest(code)
- if ns is None:
- ns = {}
- if not ns:
- exec(setup.config.plot_pre_code, ns)
- exec(code, ns)
- except (Exception, SystemExit) as err:
- raise PlotError(traceback.format_exc())
- finally:
- os.chdir(pwd)
- sys.argv = old_sys_argv
- sys.path[:] = old_sys_path
- sys.stdout = stdout
- return ns
-
-
-#------------------------------------------------------------------------------
-# Generating figures
-#------------------------------------------------------------------------------
-
-def out_of_date(original, derived):
- """
- Returns True if derivative is out-of-date wrt original,
- both of which are full file paths.
- """
- return (not os.path.exists(derived)
- or os.stat(derived).st_mtime < os.stat(original).st_mtime)
-
-
-def makefig(code, code_path, output_dir, output_base, config):
- """
- Run a pyplot script *code* and save the images under *output_dir*
- with file names derived from *output_base*
-
- """
-
- # -- Parse format list
- default_dpi = {'png': 80, 'hires.png': 200, 'pdf': 50}
- formats = []
- for fmt in config.plot_formats:
- if isinstance(fmt, str):
- formats.append((fmt, default_dpi.get(fmt, 80)))
- elif type(fmt) in (tuple, list) and len(fmt)==2:
- formats.append((str(fmt[0]), int(fmt[1])))
- else:
- raise PlotError('invalid image format "%r" in plot_formats' % fmt)
-
- # -- Try to determine if all images already exist
-
- code_pieces = split_code_at_show(code)
-
- # Look for single-figure output files first
- all_exists = True
- img = ImageFile(output_base, output_dir)
- for format, dpi in formats:
- if out_of_date(code_path, img.filename(format)):
- all_exists = False
- break
- img.formats.append(format)
-
- if all_exists:
- return [(code, [img])]
-
- # Then look for multi-figure output files
- results = []
- all_exists = True
- for i, code_piece in enumerate(code_pieces):
- images = []
- for j in range(1000):
- img = ImageFile('%s_%02d_%02d' % (output_base, i, j), output_dir)
- for format, dpi in formats:
- if out_of_date(code_path, img.filename(format)):
- all_exists = False
- break
- img.formats.append(format)
-
- # assume that if we have one, we have them all
- if not all_exists:
- all_exists = (j > 0)
- break
- images.append(img)
- if not all_exists:
- break
- results.append((code_piece, images))
-
- if all_exists:
- return results
-
- # -- We didn't find the files, so build them
-
- results = []
- ns = {}
-
- for i, code_piece in enumerate(code_pieces):
- # Clear between runs
- plt.close('all')
-
- # Run code
- run_code(code_piece, code_path, ns)
-
- # Collect images
- images = []
- fig_managers = _pylab_helpers.Gcf.get_all_fig_managers()
- for j, figman in enumerate(fig_managers):
- if len(fig_managers) == 1 and len(code_pieces) == 1:
- img = ImageFile(output_base, output_dir)
- else:
- img = ImageFile("%s_%02d_%02d" % (output_base, i, j),
- output_dir)
- images.append(img)
- for format, dpi in formats:
- try:
- figman.canvas.figure.savefig(img.filename(format), dpi=dpi)
- except exceptions.BaseException as err:
- raise PlotError(traceback.format_exc())
- img.formats.append(format)
-
- # Results
- results.append((code_piece, images))
-
- return results
-
-
-#------------------------------------------------------------------------------
-# Relative pathnames
-#------------------------------------------------------------------------------
-
-try:
- from os.path import relpath
-except ImportError:
- # Copied from Python 2.7
- if 'posix' in sys.builtin_module_names:
- def relpath(path, start=os.path.curdir):
- """Return a relative version of a path"""
- from os.path import sep, curdir, join, abspath, commonprefix, \
- pardir
-
- if not path:
- raise ValueError("no path specified")
-
- start_list = abspath(start).split(sep)
- path_list = abspath(path).split(sep)
-
- # Work out how much of the filepath is shared by start and path.
- i = len(commonprefix([start_list, path_list]))
-
- rel_list = [pardir] * (len(start_list)-i) + path_list[i:]
- if not rel_list:
- return curdir
- return join(*rel_list)
- elif 'nt' in sys.builtin_module_names:
- def relpath(path, start=os.path.curdir):
- """Return a relative version of a path"""
- from os.path import sep, curdir, join, abspath, commonprefix, \
- pardir, splitunc
-
- if not path:
- raise ValueError("no path specified")
- start_list = abspath(start).split(sep)
- path_list = abspath(path).split(sep)
- if start_list[0].lower() != path_list[0].lower():
- unc_path, rest = splitunc(path)
- unc_start, rest = splitunc(start)
- if bool(unc_path) ^ bool(unc_start):
- raise ValueError("Cannot mix UNC and non-UNC paths (%s and %s)"
- % (path, start))
- else:
- raise ValueError("path is on drive %s, start on drive %s"
- % (path_list[0], start_list[0]))
- # Work out how much of the filepath is shared by start and path.
- for i in range(min(len(start_list), len(path_list))):
- if start_list[i].lower() != path_list[i].lower():
- break
- else:
- i += 1
-
- rel_list = [pardir] * (len(start_list)-i) + path_list[i:]
- if not rel_list:
- return curdir
- return join(*rel_list)
- else:
- raise RuntimeError("Unsupported platform (no relpath available!)")
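The removed `plot_directive` decided between "python" and "doctest" input with a simple heuristic, worth preserving as a standalone sketch: if the text compiles as plain Python it is treated as code; otherwise any line beginning with `>>>` marks it as doctest-formatted:

```python
import re

def contains_doctest(text):
    """Heuristic from the removed plot_directive: valid Python means
    plain code; otherwise a leading '>>>' anywhere means doctest."""
    try:
        compile(text, '<string>', 'exec')
        return False  # compiles as-is, so it is plain Python
    except SyntaxError:
        pass
    return bool(re.search(r'^\s*>>>', text, re.M))

assert contains_doctest(">>> plt.plot([1, 2, 3])") is True
assert contains_doctest("plt.plot([1, 2, 3])") is False
```

The directive only fell back to this detection when no explicit `:format:` option was given, as the `run()` function above shows.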
diff --git a/doc/sphinxext/numpydoc/templates/numpydoc_docstring.rst b/doc/sphinxext/numpydoc/templates/numpydoc_docstring.rst
new file mode 100644
index 0000000000000..1900db53cee47
--- /dev/null
+++ b/doc/sphinxext/numpydoc/templates/numpydoc_docstring.rst
@@ -0,0 +1,16 @@
+{{index}}
+{{summary}}
+{{extended_summary}}
+{{parameters}}
+{{returns}}
+{{yields}}
+{{other_parameters}}
+{{raises}}
+{{warns}}
+{{warnings}}
+{{see_also}}
+{{notes}}
+{{references}}
+{{examples}}
+{{attributes}}
+{{methods}}
diff --git a/doc/sphinxext/numpydoc/tests/test_docscrape.py b/doc/sphinxext/numpydoc/tests/test_docscrape.py
old mode 100755
new mode 100644
index b412124d774bb..2fb4eb5ab277e
--- a/doc/sphinxext/numpydoc/tests/test_docscrape.py
+++ b/doc/sphinxext/numpydoc/tests/test_docscrape.py
@@ -1,11 +1,25 @@
# -*- encoding:utf-8 -*-
from __future__ import division, absolute_import, print_function
-import sys, textwrap
-
-from numpydoc.docscrape import NumpyDocString, FunctionDoc, ClassDoc
-from numpydoc.docscrape_sphinx import SphinxDocString, SphinxClassDoc
-from nose.tools import *
+import re
+import sys
+import textwrap
+import warnings
+
+import jinja2
+
+from numpydoc.docscrape import (
+ NumpyDocString,
+ FunctionDoc,
+ ClassDoc,
+ ParseError
+)
+from numpydoc.docscrape_sphinx import (SphinxDocString, SphinxClassDoc,
+ SphinxFunctionDoc)
+from nose.tools import (assert_equal, assert_raises, assert_list_equal,
+ assert_true)
+
+assert_list_equal.__self__.maxDiff = None
if sys.version_info[0] >= 3:
sixu = lambda s: s
@@ -50,6 +64,7 @@
list of str
This is not a real return value. It exists to test
anonymous return values.
+ no_description
Other Parameters
----------------
@@ -122,18 +137,35 @@
'''
doc = NumpyDocString(doc_txt)
+doc_yields_txt = """
+Test generator
+
+Yields
+------
+a : int
+ The number of apples.
+b : int
+ The number of bananas.
+int
+ The number of unknowns.
+"""
+doc_yields = NumpyDocString(doc_yields_txt)
+
def test_signature():
assert doc['Signature'].startswith('numpy.multivariate_normal(')
assert doc['Signature'].endswith('spam=None)')
+
def test_summary():
assert doc['Summary'][0].startswith('Draw values')
assert doc['Summary'][-1].endswith('covariance.')
+
def test_extended_summary():
assert doc['Extended Summary'][0].startswith('The multivariate normal')
+
def test_parameters():
assert_equal(len(doc['Parameters']), 3)
assert_equal([n for n,_,_ in doc['Parameters']], ['mean','cov','shape'])
@@ -141,7 +173,8 @@ def test_parameters():
arg, arg_type, desc = doc['Parameters'][1]
assert_equal(arg_type, '(N, N) ndarray')
assert desc[0].startswith('Covariance matrix')
- assert doc['Parameters'][0][-1][-2] == ' (1+2+3)/3'
+ assert doc['Parameters'][0][-1][-1] == ' (1+2+3)/3'
+
def test_other_parameters():
assert_equal(len(doc['Other Parameters']), 1)
@@ -150,8 +183,9 @@ def test_other_parameters():
assert_equal(arg_type, 'parrot')
assert desc[0].startswith('A parrot off its mortal coil')
+
def test_returns():
- assert_equal(len(doc['Returns']), 2)
+ assert_equal(len(doc['Returns']), 3)
arg, arg_type, desc = doc['Returns'][0]
assert_equal(arg, 'out')
assert_equal(arg_type, 'ndarray')
@@ -164,36 +198,152 @@ def test_returns():
assert desc[0].startswith('This is not a real')
assert desc[-1].endswith('anonymous return values.')
+ arg, arg_type, desc = doc['Returns'][2]
+ assert_equal(arg, 'no_description')
+ assert_equal(arg_type, '')
+ assert not ''.join(desc).strip()
+
+
+def test_yields():
+ section = doc_yields['Yields']
+ assert_equal(len(section), 3)
+ truth = [('a', 'int', 'apples.'),
+ ('b', 'int', 'bananas.'),
+ ('int', '', 'unknowns.')]
+ for (arg, arg_type, desc), (arg_, arg_type_, end) in zip(section, truth):
+ assert_equal(arg, arg_)
+ assert_equal(arg_type, arg_type_)
+ assert desc[0].startswith('The number of')
+ assert desc[0].endswith(end)
+
+
+def test_returnyield():
+ doc_text = """
+Test having returns and yields.
+
+Returns
+-------
+int
+ The number of apples.
+
+Yields
+------
+a : int
+ The number of apples.
+b : int
+ The number of bananas.
+
+"""
+ assert_raises(ValueError, NumpyDocString, doc_text)
+
+
+def test_section_twice():
+ doc_text = """
+Test having a section Notes twice
+
+Notes
+-----
+See the next note for more information
+
+Notes
+-----
+That should break...
+"""
+ assert_raises(ValueError, NumpyDocString, doc_text)
+
+ # if we have a numpydoc object, we know where the error came from
+ class Dummy(object):
+ """
+ Dummy class.
+
+ Notes
+ -----
+ First note.
+
+ Notes
+ -----
+ Second note.
+
+ """
+ def spam(self, a, b):
+ """Spam\n\nSpam spam."""
+ pass
+
+ def ham(self, c, d):
+ """Cheese\n\nNo cheese."""
+ pass
+
+ def dummy_func(arg):
+ """
+ Dummy function.
+
+ Notes
+ -----
+ First note.
+
+ Notes
+ -----
+ Second note.
+ """
+
+ try:
+ SphinxClassDoc(Dummy)
+ except ValueError as e:
+ # python 3 version or python 2 version
+ assert_true("test_section_twice.<locals>.Dummy" in str(e)
+ or 'test_docscrape.Dummy' in str(e))
+
+ try:
+ SphinxFunctionDoc(dummy_func)
+ except ValueError as e:
+ # python 3 version or python 2 version
+ assert_true("test_section_twice.<locals>.dummy_func" in str(e)
+ or 'function dummy_func' in str(e))
+
+
def test_notes():
assert doc['Notes'][0].startswith('Instead')
assert doc['Notes'][-1].endswith('definite.')
assert_equal(len(doc['Notes']), 17)
+
def test_references():
assert doc['References'][0].startswith('..')
assert doc['References'][-1].endswith('2001.')
+
def test_examples():
assert doc['Examples'][0].startswith('>>>')
assert doc['Examples'][-1].endswith('True]')
+
def test_index():
assert_equal(doc['index']['default'], 'random')
assert_equal(len(doc['index']), 2)
assert_equal(len(doc['index']['refguide']), 2)
-def non_blank_line_by_line_compare(a,b):
+
+def _strip_blank_lines(s):
+ "Remove leading, trailing and multiple blank lines"
+ s = re.sub(r'^\s*\n', '', s)
+ s = re.sub(r'\n\s*$', '', s)
+ s = re.sub(r'\n\s*\n', r'\n\n', s)
+ return s
+
+
+def line_by_line_compare(a, b):
a = textwrap.dedent(a)
b = textwrap.dedent(b)
- a = [l.rstrip() for l in a.split('\n') if l.strip()]
- b = [l.rstrip() for l in b.split('\n') if l.strip()]
- for n,line in enumerate(a):
- if not line == b[n]:
- raise AssertionError("Lines %s of a and b differ: "
- "\n>>> %s\n<<< %s\n" %
- (n,line,b[n]))
+ a = [l.rstrip() for l in _strip_blank_lines(a).split('\n')]
+ b = [l.rstrip() for l in _strip_blank_lines(b).split('\n')]
+ assert_list_equal(a, b)
+
+
def test_str():
- non_blank_line_by_line_compare(str(doc),
+ # doc_txt has the order of Notes and See Also sections flipped.
+ # This should be handled automatically, and so, one thing this test does
+ # is to make sure that See Also precedes Notes in the output.
+ line_by_line_compare(str(doc),
"""numpy.multivariate_normal(mean, cov, shape=None, spam=None)
Draw values from a multivariate normal distribution with specified
@@ -210,7 +360,6 @@ def test_str():
.. math::
(1+2+3)/3
-
cov : (N, N) ndarray
Covariance matrix of the distribution.
shape : tuple of ints
@@ -230,6 +379,7 @@ def test_str():
list of str
This is not a real return value. It exists to test
anonymous return values.
+no_description
Other Parameters
----------------
@@ -252,6 +402,7 @@ def test_str():
See Also
--------
+
`some`_, `other`_, `funcs`_
`otherfunc`_
@@ -302,9 +453,25 @@ def test_str():
:refguide: random;distributions, random;gauss""")
+def test_yield_str():
+ line_by_line_compare(str(doc_yields),
+"""Test generator
+
+Yields
+------
+a : int
+ The number of apples.
+b : int
+ The number of bananas.
+int
+ The number of unknowns.
+
+.. index:: """)
+
+
def test_sphinx_str():
sphinx_doc = SphinxDocString(doc_txt)
- non_blank_line_by_line_compare(str(sphinx_doc),
+ line_by_line_compare(str(sphinx_doc),
"""
.. index:: random
single: random;distributions, random;gauss
@@ -317,28 +484,24 @@ def test_sphinx_str():
:Parameters:
- **mean** : (N,) ndarray
-
+ mean : (N,) ndarray
Mean of the N-dimensional distribution.
.. math::
(1+2+3)/3
- **cov** : (N, N) ndarray
-
+ cov : (N, N) ndarray
Covariance matrix of the distribution.
- **shape** : tuple of ints
-
+ shape : tuple of ints
Given a shape of, for example, (m,n,k), m*n*k samples are
generated, and packed in an m-by-n-by-k arrangement. Because
each sample is N-dimensional, the output shape is (m,n,k,N).
:Returns:
- **out** : ndarray
-
+ out : ndarray
The drawn samples, arranged according to `shape`. If the
shape given is (m,n,...), then the shape of `out` is
(m,n,...,N).
@@ -347,26 +510,25 @@ def test_sphinx_str():
value drawn from the distribution.
list of str
-
This is not a real return value. It exists to test
anonymous return values.
-:Other Parameters:
+ no_description
+ ..
- **spam** : parrot
+:Other Parameters:
+ spam : parrot
A parrot off its mortal coil.
:Raises:
- **RuntimeError**
-
+ RuntimeError
Some error
:Warns:
- **RuntimeWarning**
-
+ RuntimeWarning
Some warning
.. warning::
@@ -427,6 +589,24 @@ def test_sphinx_str():
""")
+def test_sphinx_yields_str():
+ sphinx_doc = SphinxDocString(doc_yields_txt)
+ line_by_line_compare(str(sphinx_doc),
+"""Test generator
+
+:Yields:
+
+ a : int
+ The number of apples.
+
+ b : int
+ The number of bananas.
+
+ int
+ The number of unknowns.
+""")
+
+
doc2 = NumpyDocString("""
Returns array of indices of the maximum values of along the given axis.
@@ -438,27 +618,39 @@ def test_sphinx_str():
If None, the index is into the flattened array, otherwise along
the specified axis""")
+
def test_parameters_without_extended_description():
assert_equal(len(doc2['Parameters']), 2)
+
doc3 = NumpyDocString("""
my_signature(*params, **kwds)
Return this and that.
""")
+
def test_escape_stars():
signature = str(doc3).split('\n')[0]
assert_equal(signature, 'my_signature(\*params, \*\*kwds)')
+ def my_func(a, b, **kwargs):
+ pass
+
+ fdoc = FunctionDoc(func=my_func)
+ assert_equal(fdoc['Signature'], 'my_func(a, b, \*\*kwargs)')
+
+
doc4 = NumpyDocString(
"""a.conj()
Return an array with all complex-valued elements conjugated.""")
+
def test_empty_extended_summary():
assert_equal(doc4['Extended Summary'], [])
+
doc5 = NumpyDocString(
"""
a.something()
@@ -474,18 +666,21 @@ def test_empty_extended_summary():
If needed
""")
+
def test_raises():
assert_equal(len(doc5['Raises']), 1)
name,_,desc = doc5['Raises'][0]
assert_equal(name,'LinAlgException')
assert_equal(desc,['If array is singular.'])
+
def test_warns():
assert_equal(len(doc5['Warns']), 1)
name,_,desc = doc5['Warns'][0]
assert_equal(name,'SomeWarning')
assert_equal(desc,['If needed'])
+
def test_see_also():
doc6 = NumpyDocString(
"""
@@ -500,21 +695,23 @@ def test_see_also():
func_f, func_g, :meth:`func_h`, func_j,
func_k
:obj:`baz.obj_q`
+ :obj:`~baz.obj_r`
:class:`class_j`: fubar
foobar
""")
- assert len(doc6['See Also']) == 12
+ assert len(doc6['See Also']) == 13
for func, desc, role in doc6['See Also']:
if func in ('func_a', 'func_b', 'func_c', 'func_f',
- 'func_g', 'func_h', 'func_j', 'func_k', 'baz.obj_q'):
+ 'func_g', 'func_h', 'func_j', 'func_k', 'baz.obj_q',
+ '~baz.obj_r'):
assert(not desc)
else:
assert(desc)
if func == 'func_h':
assert role == 'meth'
- elif func == 'baz.obj_q':
+ elif func == 'baz.obj_q' or func == '~baz.obj_r':
assert role == 'obj'
elif func == 'class_j':
assert role == 'class'
@@ -528,6 +725,23 @@ def test_see_also():
elif func == 'class_j':
assert desc == ['fubar', 'foobar']
+
+def test_see_also_parse_error():
+ text = (
+ """
+ z(x,theta)
+
+ See Also
+ --------
+ :func:`~foo`
+ """)
+ with assert_raises(ParseError) as err:
+ NumpyDocString(text)
+ assert_equal(
+ str(r":func:`~foo` is not a item name in '\n z(x,theta)\n\n See Also\n --------\n :func:`~foo`\n '"),
+ str(err.exception)
+ )
+
def test_see_also_print():
class Dummy(object):
"""
@@ -546,12 +760,45 @@ class Dummy(object):
assert(' some relationship' in s)
assert(':func:`func_d`' in s)
+
+def test_unknown_section():
+ doc_text = """
+Test having an unknown section
+
+Mope
+----
+This should be ignored and warned about
+"""
+
+ class BadSection(object):
+ """Class with bad section.
+
+ Nope
+ ----
+ This class has a nope section.
+ """
+ pass
+
+ with warnings.catch_warnings(record=True) as w:
+ NumpyDocString(doc_text)
+ assert len(w) == 1
+ assert "Unknown section Mope" == str(w[0].message)
+
+ with warnings.catch_warnings(record=True) as w:
+ SphinxClassDoc(BadSection)
+ assert len(w) == 1
+ assert_true('test_docscrape.test_unknown_section.<locals>.BadSection'
+ in str(w[0].message)
+ or 'test_docscrape.BadSection' in str(w[0].message))
+
+
doc7 = NumpyDocString("""
Doc starts on second line.
""")
+
def test_empty_first_line():
assert doc7['Summary'][0].startswith('Doc starts')
@@ -582,6 +829,7 @@ def test_unicode():
assert isinstance(doc['Summary'][0], str)
assert doc['Summary'][0] == 'öäöäöäöäöåååå'
+
def test_plot_examples():
cfg = dict(use_plots=True)
@@ -594,6 +842,15 @@ def test_plot_examples():
""", config=cfg)
assert 'plot::' in str(doc), str(doc)
+ doc = SphinxDocString("""
+ Examples
+ --------
+ >>> from matplotlib import pyplot as plt
+ >>> plt.plot([1,2,3],[4,5,6])
+ >>> plt.show()
+ """, config=cfg)
+ assert 'plot::' in str(doc), str(doc)
+
doc = SphinxDocString("""
Examples
--------
@@ -605,6 +862,47 @@ def test_plot_examples():
""", config=cfg)
assert str(doc).count('plot::') == 1, str(doc)
+
+def test_use_blockquotes():
+ cfg = dict(use_blockquotes=True)
+ doc = SphinxDocString("""
+ Parameters
+ ----------
+ abc : def
+ ghi
+ jkl
+ mno
+
+ Returns
+ -------
+ ABC : DEF
+ GHI
+ JKL
+ MNO
+ """, config=cfg)
+ line_by_line_compare(str(doc), '''
+ :Parameters:
+
+ **abc** : def
+
+ ghi
+
+ **jkl**
+
+ mno
+
+ :Returns:
+
+ **ABC** : DEF
+
+ GHI
+
+ **JKL**
+
+ MNO
+ ''')
+
+
def test_class_members():
class Dummy(object):
@@ -646,6 +944,47 @@ class Ignorable(object):
else:
assert 'Spammity index' in str(doc), str(doc)
+ class SubDummy(Dummy):
+ """
+ Subclass of Dummy class.
+
+ """
+ def ham(self, c, d):
+ """Cheese\n\nNo cheese.\nOverloaded Dummy.ham"""
+ pass
+
+ def bar(self, a, b):
+ """Bar\n\nNo bar"""
+ pass
+
+ for cls in (ClassDoc, SphinxClassDoc):
+ doc = cls(SubDummy, config=dict(show_class_members=True,
+ show_inherited_class_members=False))
+ assert 'Methods' in str(doc), (cls, str(doc))
+ assert 'spam' not in str(doc), (cls, str(doc))
+ assert 'ham' in str(doc), (cls, str(doc))
+ assert 'bar' in str(doc), (cls, str(doc))
+ assert 'spammity' not in str(doc), (cls, str(doc))
+
+ if cls is SphinxClassDoc:
+ assert '.. autosummary::' in str(doc), str(doc)
+ else:
+ assert 'Spammity index' not in str(doc), str(doc)
+
+ doc = cls(SubDummy, config=dict(show_class_members=True,
+ show_inherited_class_members=True))
+ assert 'Methods' in str(doc), (cls, str(doc))
+ assert 'spam' in str(doc), (cls, str(doc))
+ assert 'ham' in str(doc), (cls, str(doc))
+ assert 'bar' in str(doc), (cls, str(doc))
+ assert 'spammity' in str(doc), (cls, str(doc))
+
+ if cls is SphinxClassDoc:
+ assert '.. autosummary::' in str(doc), str(doc)
+ else:
+ assert 'Spammity index' in str(doc), str(doc)
+
+
def test_duplicate_signature():
# Duplicate function signatures occur e.g. in ufuncs, when the
# automatic mechanism adds one, and a more detailed comes from the
@@ -669,6 +1008,7 @@ def test_duplicate_signature():
f : callable ``f(t, y, *f_args)``
Aaa.
jac : callable ``jac(t, y, *jac_args)``
+
Bbb.
Attributes
@@ -678,6 +1018,17 @@ def test_duplicate_signature():
y : ndarray
Current variable values.
+ * hello
+ * world
+ an_attribute : float
+ The docstring is printed instead
+ no_docstring : str
+ But a description
+ no_docstring2 : str
+ multiline_sentence
+ midword_period
+ no_period
+
Methods
-------
a
@@ -689,9 +1040,10 @@ def test_duplicate_signature():
For usage examples, see `ode`.
"""
+
def test_class_members_doc():
doc = ClassDoc(None, class_doc_txt)
- non_blank_line_by_line_compare(str(doc),
+ line_by_line_compare(str(doc),
"""
Foo
@@ -713,55 +1065,140 @@ def test_class_members_doc():
y : ndarray
Current variable values.
+ * hello
+ * world
+ an_attribute : float
+ The docstring is printed instead
+ no_docstring : str
+ But a description
+ no_docstring2 : str
+ multiline_sentence
+ midword_period
+ no_period
+
Methods
-------
a
-
b
-
c
.. index::
""")
+
def test_class_members_doc_sphinx():
- doc = SphinxClassDoc(None, class_doc_txt)
- non_blank_line_by_line_compare(str(doc),
+ class Foo:
+ @property
+ def an_attribute(self):
+ """Test attribute"""
+ return None
+
+ @property
+ def no_docstring(self):
+ return None
+
+ @property
+ def no_docstring2(self):
+ return None
+
+ @property
+ def multiline_sentence(self):
+ """This is a
+ sentence. It spans multiple lines."""
+ return None
+
+ @property
+ def midword_period(self):
+ """The sentence for numpy.org."""
+ return None
+
+ @property
+ def no_period(self):
+ """This does not have a period
+ so we truncate its summary to the first linebreak
+
+ Apparently.
+ """
+ return None
+
+ doc = SphinxClassDoc(Foo, class_doc_txt)
+ line_by_line_compare(str(doc),
"""
Foo
:Parameters:
- **f** : callable ``f(t, y, *f_args)``
-
+ f : callable ``f(t, y, *f_args)``
Aaa.
- **jac** : callable ``jac(t, y, *jac_args)``
-
+ jac : callable ``jac(t, y, *jac_args)``
Bbb.
.. rubric:: Examples
For usage examples, see `ode`.
- .. rubric:: Attributes
+ :Attributes:
+
+ t : float
+ Current time.
+
+ y : ndarray
+ Current variable values.
+
+ * hello
+ * world
- === ==========
- t (float) Current time.
- y (ndarray) Current variable values.
- === ==========
+ :obj:`an_attribute <an_attribute>` : float
+ Test attribute
+
+ no_docstring : str
+ But a description
+
+ no_docstring2 : str
+ ..
+
+ :obj:`multiline_sentence <multiline_sentence>`
+ This is a sentence.
+
+ :obj:`midword_period <midword_period>`
+ The sentence for numpy.org.
+
+ :obj:`no_period <no_period>`
+ This does not have a period
.. rubric:: Methods
- === ==========
- a
- b
- c
- === ==========
+ ===== ==========
+ **a**
+ **b**
+ **c**
+ ===== ==========
""")
+
+def test_templated_sections():
+ doc = SphinxClassDoc(None, class_doc_txt,
+ config={'template': jinja2.Template('{{examples}}\n{{parameters}}')})
+ line_by_line_compare(str(doc),
+ """
+ .. rubric:: Examples
+
+ For usage examples, see `ode`.
+
+ :Parameters:
+
+ f : callable ``f(t, y, *f_args)``
+ Aaa.
+
+ jac : callable ``jac(t, y, *jac_args)``
+ Bbb.
+
+ """)
+
+
if __name__ == "__main__":
import nose
nose.run()
diff --git a/doc/sphinxext/numpydoc/tests/test_linkcode.py b/doc/sphinxext/numpydoc/tests/test_linkcode.py
deleted file mode 100644
index 340166a485fcd..0000000000000
--- a/doc/sphinxext/numpydoc/tests/test_linkcode.py
+++ /dev/null
@@ -1,5 +0,0 @@
-from __future__ import division, absolute_import, print_function
-
-import numpydoc.linkcode
-
-# No tests at the moment...
diff --git a/doc/sphinxext/numpydoc/tests/test_phantom_import.py b/doc/sphinxext/numpydoc/tests/test_phantom_import.py
deleted file mode 100644
index 173b5662b8df7..0000000000000
--- a/doc/sphinxext/numpydoc/tests/test_phantom_import.py
+++ /dev/null
@@ -1,5 +0,0 @@
-from __future__ import division, absolute_import, print_function
-
-import numpydoc.phantom_import
-
-# No tests at the moment...
diff --git a/doc/sphinxext/numpydoc/tests/test_plot_directive.py b/doc/sphinxext/numpydoc/tests/test_plot_directive.py
deleted file mode 100644
index 0e511fcbc1428..0000000000000
--- a/doc/sphinxext/numpydoc/tests/test_plot_directive.py
+++ /dev/null
@@ -1,5 +0,0 @@
-from __future__ import division, absolute_import, print_function
-
-import numpydoc.plot_directive
-
-# No tests at the moment...
diff --git a/doc/sphinxext/numpydoc/tests/test_traitsdoc.py b/doc/sphinxext/numpydoc/tests/test_traitsdoc.py
deleted file mode 100644
index d36e5ddbd751f..0000000000000
--- a/doc/sphinxext/numpydoc/tests/test_traitsdoc.py
+++ /dev/null
@@ -1,5 +0,0 @@
-from __future__ import division, absolute_import, print_function
-
-import numpydoc.traitsdoc
-
-# No tests at the moment...
diff --git a/doc/sphinxext/numpydoc/traitsdoc.py b/doc/sphinxext/numpydoc/traitsdoc.py
deleted file mode 100755
index 596c54eb389a3..0000000000000
--- a/doc/sphinxext/numpydoc/traitsdoc.py
+++ /dev/null
@@ -1,142 +0,0 @@
-"""
-=========
-traitsdoc
-=========
-
-Sphinx extension that handles docstrings in the Numpy standard format, [1]
-and support Traits [2].
-
-This extension can be used as a replacement for ``numpydoc`` when support
-for Traits is required.
-
-.. [1] http://projects.scipy.org/numpy/wiki/CodingStyleGuidelines#docstring-standard
-.. [2] http://code.enthought.com/projects/traits/
-
-"""
-from __future__ import division, absolute_import, print_function
-
-import inspect
-import os
-import pydoc
-import collections
-
-from . import docscrape
-from . import docscrape_sphinx
-from .docscrape_sphinx import SphinxClassDoc, SphinxFunctionDoc, SphinxDocString
-
-from . import numpydoc
-
-from . import comment_eater
-
-class SphinxTraitsDoc(SphinxClassDoc):
- def __init__(self, cls, modulename='', func_doc=SphinxFunctionDoc):
- if not inspect.isclass(cls):
- raise ValueError("Initialise using a class. Got %r" % cls)
- self._cls = cls
-
- if modulename and not modulename.endswith('.'):
- modulename += '.'
- self._mod = modulename
- self._name = cls.__name__
- self._func_doc = func_doc
-
- docstring = pydoc.getdoc(cls)
- docstring = docstring.split('\n')
-
- # De-indent paragraph
- try:
- indent = min(len(s) - len(s.lstrip()) for s in docstring
- if s.strip())
- except ValueError:
- indent = 0
-
- for n,line in enumerate(docstring):
- docstring[n] = docstring[n][indent:]
-
- self._doc = docscrape.Reader(docstring)
- self._parsed_data = {
- 'Signature': '',
- 'Summary': '',
- 'Description': [],
- 'Extended Summary': [],
- 'Parameters': [],
- 'Returns': [],
- 'Raises': [],
- 'Warns': [],
- 'Other Parameters': [],
- 'Traits': [],
- 'Methods': [],
- 'See Also': [],
- 'Notes': [],
- 'References': '',
- 'Example': '',
- 'Examples': '',
- 'index': {}
- }
-
- self._parse()
-
- def _str_summary(self):
- return self['Summary'] + ['']
-
- def _str_extended_summary(self):
- return self['Description'] + self['Extended Summary'] + ['']
-
- def __str__(self, indent=0, func_role="func"):
- out = []
- out += self._str_signature()
- out += self._str_index() + ['']
- out += self._str_summary()
- out += self._str_extended_summary()
- for param_list in ('Parameters', 'Traits', 'Methods',
- 'Returns','Raises'):
- out += self._str_param_list(param_list)
- out += self._str_see_also("obj")
- out += self._str_section('Notes')
- out += self._str_references()
- out += self._str_section('Example')
- out += self._str_section('Examples')
- out = self._str_indent(out,indent)
- return '\n'.join(out)
-
-def looks_like_issubclass(obj, classname):
- """ Return True if the object has a class or superclass with the given class
- name.
-
- Ignores old-style classes.
- """
- t = obj
- if t.__name__ == classname:
- return True
- for klass in t.__mro__:
- if klass.__name__ == classname:
- return True
- return False
-
-def get_doc_object(obj, what=None, config=None):
- if what is None:
- if inspect.isclass(obj):
- what = 'class'
- elif inspect.ismodule(obj):
- what = 'module'
- elif isinstance(obj, collections.Callable):
- what = 'function'
- else:
- what = 'object'
- if what == 'class':
- doc = SphinxTraitsDoc(obj, '', func_doc=SphinxFunctionDoc, config=config)
- if looks_like_issubclass(obj, 'HasTraits'):
- for name, trait, comment in comment_eater.get_class_traits(obj):
- # Exclude private traits.
- if not name.startswith('_'):
- doc['Traits'].append((name, trait, comment.splitlines()))
- return doc
- elif what in ('function', 'method'):
- return SphinxFunctionDoc(obj, '', config=config)
- else:
- return SphinxDocString(pydoc.getdoc(obj), config=config)
-
-def setup(app):
- # init numpydoc
- numpydoc.setup(app, get_doc_object)
-
| The first commit only updates our vendored version with the upstream one.
Additional commits make it work with our docs:
- edit numpydoc to allow member listing for attributes (this is the only change to the numpydoc source code that is not upstream; discussed in https://github.com/numpy/numpydoc/pull/106)
- numpydoc settings: use `numpydoc_use_blockquotes=False` for now for compatibility (it is deprecated). To fix this, we need to fix: css styling of description lists + warnings generated due to `**kwargs`
It is probably worth looking at the separate commits (and I would also like to keep them when merging) | https://api.github.com/repos/pandas-dev/pandas/pulls/20383 | 2018-03-16T16:22:14Z | 2018-03-27T19:45:20Z | 2018-03-27T19:45:20Z | 2018-03-27T19:45:29Z
BUG: Handle all-NA blocks in concat | diff --git a/pandas/core/internals.py b/pandas/core/internals.py
index bb6702b50ad3d..a0e122d390240 100644
--- a/pandas/core/internals.py
+++ b/pandas/core/internals.py
@@ -5391,7 +5391,8 @@ def is_uniform_join_units(join_units):
# all blocks need to have the same type
all(type(ju.block) is type(join_units[0].block) for ju in join_units) and # noqa
# no blocks that would get missing values (can lead to type upcasts)
- all(not ju.is_na for ju in join_units) and
+ # unless we're an extension dtype.
+ all(not ju.is_na or ju.block.is_extension for ju in join_units) and
# no blocks with indexers (as then the dimensions do not fit)
all(not ju.indexers for ju in join_units) and
# disregard Panels
diff --git a/pandas/tests/extension/base/reshaping.py b/pandas/tests/extension/base/reshaping.py
index cfb70f2291555..9b9a614889bef 100644
--- a/pandas/tests/extension/base/reshaping.py
+++ b/pandas/tests/extension/base/reshaping.py
@@ -25,6 +25,21 @@ def test_concat(self, data, in_frame):
assert dtype == data.dtype
assert isinstance(result._data.blocks[0], ExtensionBlock)
+ @pytest.mark.parametrize('in_frame', [True, False])
+ def test_concat_all_na_block(self, data_missing, in_frame):
+ valid_block = pd.Series(data_missing.take([1, 1]), index=[0, 1])
+ na_block = pd.Series(data_missing.take([0, 0]), index=[2, 3])
+ if in_frame:
+ valid_block = pd.DataFrame({"a": valid_block})
+ na_block = pd.DataFrame({"a": na_block})
+ result = pd.concat([valid_block, na_block])
+ if in_frame:
+ expected = pd.DataFrame({"a": data_missing.take([1, 1, 0, 0])})
+ self.assert_frame_equal(result, expected)
+ else:
+ expected = pd.Series(data_missing.take([1, 1, 0, 0]))
+ self.assert_series_equal(result, expected)
+
def test_align(self, data, na_value):
a = data[:3]
b = data[2:5]
diff --git a/pandas/tests/extension/decimal/test_decimal.py b/pandas/tests/extension/decimal/test_decimal.py
index 01ae092bc1521..4c6ef9b4d38c8 100644
--- a/pandas/tests/extension/decimal/test_decimal.py
+++ b/pandas/tests/extension/decimal/test_decimal.py
@@ -36,31 +36,22 @@ def na_value():
class BaseDecimal(object):
- @staticmethod
- def assert_series_equal(left, right, *args, **kwargs):
- # tm.assert_series_equal doesn't handle Decimal('NaN').
- # We will ensure that the NA values match, and then
- # drop those values before moving on.
+
+ def assert_series_equal(self, left, right, *args, **kwargs):
left_na = left.isna()
right_na = right.isna()
tm.assert_series_equal(left_na, right_na)
- tm.assert_series_equal(left[~left_na], right[~right_na],
- *args, **kwargs)
-
- @staticmethod
- def assert_frame_equal(left, right, *args, **kwargs):
- # TODO(EA): select_dtypes
- decimals = (left.dtypes == 'decimal').index
-
- for col in decimals:
- BaseDecimal.assert_series_equal(left[col], right[col],
- *args, **kwargs)
-
- left = left.drop(columns=decimals)
- right = right.drop(columns=decimals)
- tm.assert_frame_equal(left, right, *args, **kwargs)
+ return tm.assert_series_equal(left[~left_na],
+ right[~right_na],
+ *args, **kwargs)
+
+ def assert_frame_equal(self, left, right, *args, **kwargs):
+ self.assert_series_equal(left.dtypes, right.dtypes)
+ for col in left.columns:
+ self.assert_series_equal(left[col], right[col],
+ *args, **kwargs)
class TestDtype(BaseDecimal, base.BaseDtypeTests):
| Previously we special-cased all-NA blocks. We should only do that for
non-extension dtypes. | https://api.github.com/repos/pandas-dev/pandas/pulls/20382 | 2018-03-16T15:43:29Z | 2018-03-18T11:44:04Z | 2018-03-18T11:44:04Z | 2018-03-18T11:44:07Z |
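The one-line predicate change in this PR can be illustrated with hypothetical stand-ins for the internal objects (`JoinUnit`, `Block`, and `allows_fast_concat` are illustrative names for this sketch, not the actual pandas classes):

```python
# Toy model of the is_uniform_join_units condition changed above: an
# all-NA join unit no longer blocks the uniform (fast) concat path when
# its block holds an extension dtype.
class Block:
    def __init__(self, is_extension):
        self.is_extension = is_extension

class JoinUnit:
    def __init__(self, is_na, is_extension=False):
        self.is_na = is_na
        self.block = Block(is_extension)

def allows_fast_concat(join_units):
    # mirrors: all(not ju.is_na or ju.block.is_extension for ju in join_units)
    return all(not ju.is_na or ju.block.is_extension for ju in join_units)

print(allows_fast_concat([JoinUnit(False), JoinUnit(True)]))        # False
print(allows_fast_concat([JoinUnit(False), JoinUnit(True, True)]))  # True
```

With a plain (non-extension) all-NA unit the predicate is still false, so the upcasting path is taken; with an extension block it now stays on the uniform path, which is what keeps the dtype intact in the new tests.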
DOC: update pandas.core.resample.Resampler.nearest docstring | diff --git a/pandas/core/resample.py b/pandas/core/resample.py
index 36476a8ecb657..0dcc11aa2e193 100644
--- a/pandas/core/resample.py
+++ b/pandas/core/resample.py
@@ -418,23 +418,63 @@ def pad(self, limit=None):
def nearest(self, limit=None):
"""
- Fill values with nearest neighbor starting from center
+ Resample by using the nearest value.
+
+ When resampling data, missing values may appear (e.g., when the
+ resampling frequency is higher than the original frequency).
+ The `nearest` method will replace ``NaN`` values that appeared in
+ the resampled data with the value from the nearest member of the
+ sequence, based on the index value.
+ Missing values that existed in the original data will not be modified.
+ If `limit` is given, fill only this many values in each direction for
+ each of the original values.
Parameters
----------
- limit : integer, optional
- limit of how many values to fill
+ limit : int, optional
+ Limit of how many values to fill.
.. versionadded:: 0.21.0
Returns
-------
- an upsampled Series
+ Series or DataFrame
+ An upsampled Series or DataFrame with ``NaN`` values filled with
+ their nearest value.
See Also
--------
- Series.fillna
- DataFrame.fillna
+ backfill : Backward fill the new missing values in the resampled data.
+ pad : Forward fill ``NaN`` values.
+
+ Examples
+ --------
+ >>> s = pd.Series([1, 2],
+ ... index=pd.date_range('20180101',
+ ... periods=2,
+ ... freq='1h'))
+ >>> s
+ 2018-01-01 00:00:00 1
+ 2018-01-01 01:00:00 2
+ Freq: H, dtype: int64
+
+ >>> s.resample('15min').nearest()
+ 2018-01-01 00:00:00 1
+ 2018-01-01 00:15:00 1
+ 2018-01-01 00:30:00 2
+ 2018-01-01 00:45:00 2
+ 2018-01-01 01:00:00 2
+ Freq: 15T, dtype: int64
+
+ Limit the number of upsampled values imputed by the nearest:
+
+ >>> s.resample('15min').nearest(limit=1)
+ 2018-01-01 00:00:00 1.0
+ 2018-01-01 00:15:00 1.0
+ 2018-01-01 00:30:00 NaN
+ 2018-01-01 00:45:00 2.0
+ 2018-01-01 01:00:00 2.0
+ Freq: 15T, dtype: float64
"""
return self._upsample('nearest', limit=limit)
| Checklist for the pandas documentation sprint:
- [x] PR title is "DOC: update the <your-function-or-method> docstring"
- [x] The validation script passes: `scripts/validate_docstrings.py pandas.core.resample.Resampler.nearest`
- [x] The PEP8 style check passes: `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] The html version looks good: `python doc/make.py --single pandas.core.resample.Resampler.nearest`
- [x] It has been proofread on language by another sprint participant
Please include the output of the validation script below between the "```" ticks:
```
################################################################################
############## Docstring (pandas.core.resample.Resampler.nearest) ##############
################################################################################
Fill the new missing values with their nearest neighbor value, based
on index.
When resampling data, missing values may appear (e.g., when the
resampling frequency is higher than the original frequency).
The nearest fill will replace ``NaN`` values that appeared in
the resampled data with the value from the nearest member of the
sequence, based on the index value.
Missing values that existed in the original data will not be modified.
If `limit` is given, fill only `limit` values in each direction for
each of the original values.
Parameters
----------
limit : integer, optional
Limit of how many values to fill.
.. versionadded:: 0.21.0
Returns
-------
Series, DataFrame
An upsampled Series or DataFrame with ``NaN`` values filled with
their closest neighbor value.
See Also
--------
backfill: Backward fill the new missing values in the resampled data.
fillna : Fill ``NaN`` values using the specified method, which can be
'backfill'.
pad : Forward fill ``NaN`` values.
pandas.Series.fillna : Fill ``NaN`` values in the Series using the
specified method, which can be 'backfill'.
pandas.DataFrame.fillna : Fill ``NaN`` values in the DataFrame using
the specified method, which can be 'backfill'.
Examples
--------
Resampling a Series:
>>> s = pd.Series([1, 2, 3],
... index=pd.date_range('20180101', periods=3,
... freq='1h'))
>>> s
2018-01-01 00:00:00 1
2018-01-01 01:00:00 2
2018-01-01 02:00:00 3
Freq: H, dtype: int64
>>> s.resample('20min').nearest()
2018-01-01 00:00:00 1
2018-01-01 00:20:00 1
2018-01-01 00:40:00 2
2018-01-01 01:00:00 2
2018-01-01 01:20:00 2
2018-01-01 01:40:00 3
2018-01-01 02:00:00 3
Freq: 20T, dtype: int64
Resample in the middle:
>>> s.resample('30min').nearest()
2018-01-01 00:00:00 1
2018-01-01 00:30:00 2
2018-01-01 01:00:00 2
2018-01-01 01:30:00 3
2018-01-01 02:00:00 3
Freq: 30T, dtype: int64
Limited fill:
>>> s.resample('10min').nearest(limit=1)
2018-01-01 00:00:00 1.0
2018-01-01 00:10:00 1.0
2018-01-01 00:20:00 NaN
2018-01-01 00:30:00 NaN
2018-01-01 00:40:00 NaN
2018-01-01 00:50:00 2.0
2018-01-01 01:00:00 2.0
2018-01-01 01:10:00 2.0
2018-01-01 01:20:00 NaN
2018-01-01 01:30:00 NaN
2018-01-01 01:40:00 NaN
2018-01-01 01:50:00 3.0
2018-01-01 02:00:00 3.0
Freq: 10T, dtype: float64
Resampling a DataFrame that has missing values:
>>> df = pd.DataFrame({'a': [2, np.nan, 6], 'b': [1, 3, 5]},
... index=pd.date_range('20180101', periods=3,
... freq='h'))
>>> df
a b
2018-01-01 00:00:00 2.0 1
2018-01-01 01:00:00 NaN 3
2018-01-01 02:00:00 6.0 5
>>> df.resample('20min').nearest()
a b
2018-01-01 00:00:00 2.0 1
2018-01-01 00:20:00 2.0 1
2018-01-01 00:40:00 NaN 3
2018-01-01 01:00:00 NaN 3
2018-01-01 01:20:00 NaN 3
2018-01-01 01:40:00 6.0 5
2018-01-01 02:00:00 6.0 5
Resampling a DataFrame with shuffled indexes:
>>> df = pd.DataFrame({'a': [2, 6, 4]},
... index=pd.date_range('20180101', periods=3,
... freq='h'))
>>> df
a
2018-01-01 00:00:00 2
2018-01-01 01:00:00 6
2018-01-01 02:00:00 4
>>> sorted_df = df.sort_values(by=['a'])
>>> sorted_df
a
2018-01-01 00:00:00 2
2018-01-01 02:00:00 4
2018-01-01 01:00:00 6
>>> sorted_df.resample('20min').nearest()
a
2018-01-01 00:00:00 2
2018-01-01 00:20:00 2
2018-01-01 00:40:00 6
2018-01-01 01:00:00 6
2018-01-01 01:20:00 6
2018-01-01 01:40:00 4
2018-01-01 02:00:00 4
################################################################################
################################## Validation ##################################
################################################################################
Errors found:
Errors in parameters section
Parameter "limit" description should finish with "."
```
The error is caused by `.. versionadded:: 0.21.0`, which has no period at the end.
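The nearest-fill behavior described in the docstring above can be sketched in plain Python (a toy model, not the pandas implementation; `nearest_fill` and the integer minute-based index are illustrative assumptions):

```python
# Toy sketch of nearest-value fill on an upsampled index. Original
# observations are a dict of index -> value; new_index lists the
# positions of the upsampled result.
def nearest_fill(orig, new_index):
    keys = sorted(orig)
    out = {}
    for pos in new_index:
        # pick the original index with the smallest absolute distance;
        # on a tie, prefer the later index, matching the docstring
        # example above (00:30 takes the value from 01:00)
        nearest = min(keys, key=lambda k: (abs(k - pos), -k))
        out[pos] = orig[nearest]
    return out

# Values at 0 and 60 minutes, resampled every 15 minutes:
print(nearest_fill({0: 1, 60: 2}, [0, 15, 30, 45, 60]))
# {0: 1, 15: 1, 30: 2, 45: 2, 60: 2}
```

This reproduces the shape of the `s.resample('15min').nearest()` example in the docstring, where each new slot takes the value of its closest original observation.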
| https://api.github.com/repos/pandas-dev/pandas/pulls/20381 | 2018-03-16T15:18:01Z | 2018-11-20T15:21:11Z | 2018-11-20T15:21:11Z | 2018-11-20T15:21:14Z |
DOC: update the pandas.core.resample.Resampler.fillna docstring | diff --git a/pandas/core/resample.py b/pandas/core/resample.py
index 004d572375234..b3ab90fd67de4 100644
--- a/pandas/core/resample.py
+++ b/pandas/core/resample.py
@@ -624,18 +624,160 @@ def backfill(self, limit=None):
def fillna(self, method, limit=None):
"""
- Fill missing values
+ Fill missing values introduced by upsampling.
+
+ In statistics, imputation is the process of replacing missing data with
+ substituted values [1]_. When resampling data, missing values may
+ appear (e.g., when the resampling frequency is higher than the original
+ frequency).
+
+ Missing values that existed in the original data will
+ not be modified.
Parameters
----------
- method : str, method of resampling ('ffill', 'bfill')
+ method : {'pad', 'backfill', 'ffill', 'bfill', 'nearest'}
+ Method to use for filling holes in resampled data
+
+ * 'pad' or 'ffill': use previous valid observation to fill gap
+ (forward fill).
+ * 'backfill' or 'bfill': use next valid observation to fill gap.
+ * 'nearest': use nearest valid observation to fill gap.
+
limit : integer, optional
- limit of how many values to fill
+ Limit of how many consecutive missing values to fill.
+
+ Returns
+ -------
+ Series or DataFrame
+ An upsampled Series or DataFrame with missing values filled.
See Also
--------
- Series.fillna
- DataFrame.fillna
+ backfill : Backward fill NaN values in the resampled data.
+ pad : Forward fill NaN values in the resampled data.
+ nearest : Fill NaN values in the resampled data
+ with nearest neighbor starting from center.
+ pandas.Series.fillna : Fill NaN values in the Series using the
+ specified method, which can be 'bfill' and 'ffill'.
+ pandas.DataFrame.fillna : Fill NaN values in the DataFrame using the
+ specified method, which can be 'bfill' and 'ffill'.
+
+ Examples
+ --------
+ Resampling a Series:
+
+ >>> s = pd.Series([1, 2, 3],
+ ... index=pd.date_range('20180101', periods=3, freq='h'))
+ >>> s
+ 2018-01-01 00:00:00 1
+ 2018-01-01 01:00:00 2
+ 2018-01-01 02:00:00 3
+ Freq: H, dtype: int64
+
+ Without filling the missing values you get:
+
+ >>> s.resample("30min").asfreq()
+ 2018-01-01 00:00:00 1.0
+ 2018-01-01 00:30:00 NaN
+ 2018-01-01 01:00:00 2.0
+ 2018-01-01 01:30:00 NaN
+ 2018-01-01 02:00:00 3.0
+ Freq: 30T, dtype: float64
+
+ >>> s.resample('30min').fillna("backfill")
+ 2018-01-01 00:00:00 1
+ 2018-01-01 00:30:00 2
+ 2018-01-01 01:00:00 2
+ 2018-01-01 01:30:00 3
+ 2018-01-01 02:00:00 3
+ Freq: 30T, dtype: int64
+
+ >>> s.resample('15min').fillna("backfill", limit=2)
+ 2018-01-01 00:00:00 1.0
+ 2018-01-01 00:15:00 NaN
+ 2018-01-01 00:30:00 2.0
+ 2018-01-01 00:45:00 2.0
+ 2018-01-01 01:00:00 2.0
+ 2018-01-01 01:15:00 NaN
+ 2018-01-01 01:30:00 3.0
+ 2018-01-01 01:45:00 3.0
+ 2018-01-01 02:00:00 3.0
+ Freq: 15T, dtype: float64
+
+ >>> s.resample('30min').fillna("pad")
+ 2018-01-01 00:00:00 1
+ 2018-01-01 00:30:00 1
+ 2018-01-01 01:00:00 2
+ 2018-01-01 01:30:00 2
+ 2018-01-01 02:00:00 3
+ Freq: 30T, dtype: int64
+
+ >>> s.resample('30min').fillna("nearest")
+ 2018-01-01 00:00:00 1
+ 2018-01-01 00:30:00 2
+ 2018-01-01 01:00:00 2
+ 2018-01-01 01:30:00 3
+ 2018-01-01 02:00:00 3
+ Freq: 30T, dtype: int64
+
+ Missing values present before the upsampling are not affected.
+
+ >>> sm = pd.Series([1, None, 3],
+ ... index=pd.date_range('20180101', periods=3, freq='h'))
+ >>> sm
+ 2018-01-01 00:00:00 1.0
+ 2018-01-01 01:00:00 NaN
+ 2018-01-01 02:00:00 3.0
+ Freq: H, dtype: float64
+
+ >>> sm.resample('30min').fillna('backfill')
+ 2018-01-01 00:00:00 1.0
+ 2018-01-01 00:30:00 NaN
+ 2018-01-01 01:00:00 NaN
+ 2018-01-01 01:30:00 3.0
+ 2018-01-01 02:00:00 3.0
+ Freq: 30T, dtype: float64
+
+ >>> sm.resample('30min').fillna('pad')
+ 2018-01-01 00:00:00 1.0
+ 2018-01-01 00:30:00 1.0
+ 2018-01-01 01:00:00 NaN
+ 2018-01-01 01:30:00 NaN
+ 2018-01-01 02:00:00 3.0
+ Freq: 30T, dtype: float64
+
+ >>> sm.resample('30min').fillna('nearest')
+ 2018-01-01 00:00:00 1.0
+ 2018-01-01 00:30:00 NaN
+ 2018-01-01 01:00:00 NaN
+ 2018-01-01 01:30:00 3.0
+ 2018-01-01 02:00:00 3.0
+ Freq: 30T, dtype: float64
+
+ DataFrame resampling is done column-wise. All the same options are
+ available.
+
+ >>> df = pd.DataFrame({'a': [2, np.nan, 6], 'b': [1, 3, 5]},
+ ... index=pd.date_range('20180101', periods=3,
+ ... freq='h'))
+ >>> df
+ a b
+ 2018-01-01 00:00:00 2.0 1
+ 2018-01-01 01:00:00 NaN 3
+ 2018-01-01 02:00:00 6.0 5
+
+ >>> df.resample('30min').fillna("bfill")
+ a b
+ 2018-01-01 00:00:00 2.0 1
+ 2018-01-01 00:30:00 NaN 3
+ 2018-01-01 01:00:00 NaN 3
+ 2018-01-01 01:30:00 6.0 5
+ 2018-01-01 02:00:00 6.0 5
+
+ References
+ ----------
+ .. [1] https://en.wikipedia.org/wiki/Imputation_(statistics)
"""
return self._upsample(method, limit=limit)
| - [x] PR title is "DOC: update the pandas.core.resample.Resampler.fillna docstring"
- [x] The validation script passes: `scripts/validate_docstrings.py pandas.core.resample.Resampler.fillna`
- [x] The PEP8 style check passes: `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] The html version looks good: `python doc/make.py --single pandas.core.resample.Resampler.fillna`
- [x] It has been proofread on language by another sprint participant
Please include the output of the validation script below between the "```" ticks:
```
################################################################################
############## Docstring (pandas.core.resample.Resampler.fillna) ##############
################################################################################
Fill the new missing values in the resampled data using different
methods.
In statistics, imputation is the process of replacing missing data with
substituted values [1]_. When resampling data, missing values may
appear (e.g., when the resampling frequency is higher than the original
frequency).
The backward fill ('bfill') will replace NaN values that appeared in
the resampled data with the next value in the original sequence. The
forward fill ('ffill'), on the other hand, will replace NaN values
that appeared in the resampled data with the previous value in the
original sequence. Missing values that existed in the original data will
not be modified.
Parameters
----------
method : str, method of resampling ('ffill', 'bfill')
Method to use for filling holes in resampled data
* ffill: use previous valid observation to fill gap (forward
fill).
* bfill: use next valid observation to fill gap (backward
fill).
limit : integer, optional
Limit of how many values to fill.
Returns
-------
Series, DataFrame
An upsampled Series or DataFrame with backward or forward filled
NaN values.
Examples
--------
Resampling a Series:
>>> s = pd.Series([1, 2, 3],
... index=pd.date_range('20180101', periods=3, freq='h'))
>>> s
2018-01-01 00:00:00 1
2018-01-01 01:00:00 2
2018-01-01 02:00:00 3
Freq: H, dtype: int64
>>> s.resample('30min').fillna("bfill")
2018-01-01 00:00:00 1
2018-01-01 00:30:00 2
2018-01-01 01:00:00 2
2018-01-01 01:30:00 3
2018-01-01 02:00:00 3
Freq: 30T, dtype: int64
>>> s.resample('15min').fillna("bfill", limit=2)
2018-01-01 00:00:00 1.0
2018-01-01 00:15:00 NaN
2018-01-01 00:30:00 2.0
2018-01-01 00:45:00 2.0
2018-01-01 01:00:00 2.0
2018-01-01 01:15:00 NaN
2018-01-01 01:30:00 3.0
2018-01-01 01:45:00 3.0
2018-01-01 02:00:00 3.0
Freq: 15T, dtype: float64
>>> s.resample('30min').fillna("ffill")
2018-01-01 00:00:00 1
2018-01-01 00:30:00 1
2018-01-01 01:00:00 2
2018-01-01 01:30:00 2
2018-01-01 02:00:00 3
Freq: 30T, dtype: int64
Resampling a DataFrame that has missing values:
>>> df = pd.DataFrame({'a': [2, np.nan, 6], 'b': [1, 3, 5]},
... index=pd.date_range('20180101', periods=3,
... freq='h'))
>>> df
a b
2018-01-01 00:00:00 2.0 1
2018-01-01 01:00:00 NaN 3
2018-01-01 02:00:00 6.0 5
>>> df.resample('30min').fillna("bfill")
a b
2018-01-01 00:00:00 2.0 1
2018-01-01 00:30:00 NaN 3
2018-01-01 01:00:00 NaN 3
2018-01-01 01:30:00 6.0 5
2018-01-01 02:00:00 6.0 5
>>> df.resample('15min').fillna("bfill", limit=2)
a b
2018-01-01 00:00:00 2.0 1.0
2018-01-01 00:15:00 NaN NaN
2018-01-01 00:30:00 NaN 3.0
2018-01-01 00:45:00 NaN 3.0
2018-01-01 01:00:00 NaN 3.0
2018-01-01 01:15:00 NaN NaN
2018-01-01 01:30:00 6.0 5.0
2018-01-01 01:45:00 6.0 5.0
2018-01-01 02:00:00 6.0 5.0
>>> df.resample('30min').fillna("ffill")
a b
2018-01-01 00:00:00 2.0 1
2018-01-01 00:30:00 2.0 1
2018-01-01 01:00:00 NaN 3
2018-01-01 01:30:00 NaN 3
2018-01-01 02:00:00 6.0 5
See Also
--------
backfill : Backward fill NaN values in the resampled data.
pad : Forward fill NaN values in the resampled data.
bfill : Alias of backfill.
ffill: Alias of pad.
nearest : Fill NaN values in the resampled data
with nearest neighbor starting from center.
pandas.Series.fillna : Fill NaN values in the Series using the
specified method, which can be 'bfill' and 'ffill'.
pandas.DataFrame.fillna : Fill NaN values in the DataFrame using the
specified method, which can be 'bfill' and 'ffill'.
References
----------
.. [1] https://en.wikipedia.org/wiki/Imputation_(statistics)
################################################################################
################################## Validation ##################################
################################################################################
Docstring for "pandas.core.resample.Resampler.fillna" correct. :)
``` | https://api.github.com/repos/pandas-dev/pandas/pulls/20379 | 2018-03-16T14:38:10Z | 2018-03-17T19:49:52Z | 2018-03-17T19:49:52Z | 2018-03-17T19:49:55Z |
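The fill behavior this record documents can be checked end to end. The sketch below uses the `bfill`/`ffill` resampler methods, which the docstring's See Also section names as the backfill/pad equivalents of `fillna("backfill")`/`fillna("pad")`:

```python
import pandas as pd

# Upsampling an hourly series to 30-minute bins introduces NaN slots;
# bfill fills each slot from the next original observation, ffill from
# the previous one, matching the docstring's backfill/pad examples.
s = pd.Series([1, 2, 3],
              index=pd.date_range("2018-01-01", periods=3, freq="h"))

bfilled = s.resample("30min").bfill()
ffilled = s.resample("30min").ffill()

print(bfilled.tolist())  # [1, 2, 2, 3, 3]
print(ffilled.tolist())  # [1, 1, 2, 2, 3]
```

Because every new slot gets filled, the result keeps the original integer dtype, as in the quoted `int64` outputs above.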
DOC: update pandas.core.groupby.DataFrameGroupBy.resample docstring. | diff --git a/pandas/core/groupby/groupby.py b/pandas/core/groupby/groupby.py
index 5041449b4d724..16033997f2f88 100644
--- a/pandas/core/groupby/groupby.py
+++ b/pandas/core/groupby/groupby.py
@@ -1307,12 +1307,111 @@ def describe(self, **kwargs):
return result.T
return result.unstack()
- @Substitution(name='groupby')
- @Appender(_doc_template)
def resample(self, rule, *args, **kwargs):
"""
- Provide resampling when using a TimeGrouper
- Return a new grouper with our resampler appended
+ Provide resampling when using a TimeGrouper.
+
+ Given a grouper, the function resamples it according to a string
+ "string" -> "frequency".
+
+ See the :ref:`frequency aliases <timeseries.offset-aliases>`
+ documentation for more details.
+
+ Parameters
+ ----------
+ rule : str or DateOffset
+ The offset string or object representing target grouper conversion.
+ *args, **kwargs
+ Possible arguments are `how`, `fill_method`, `limit`, `kind` and
+ `on`, and other arguments of `TimeGrouper`.
+
+ Returns
+ -------
+ Grouper
+ Return a new grouper with our resampler appended.
+
+ See Also
+ --------
+ pandas.Grouper : Specify a frequency to resample with when
+ grouping by a key.
+ DatetimeIndex.resample : Frequency conversion and resampling of
+ time series.
+
+ Examples
+ --------
+ >>> idx = pd.date_range('1/1/2000', periods=4, freq='T')
+ >>> df = pd.DataFrame(data=4 * [range(2)],
+ ... index=idx,
+ ... columns=['a', 'b'])
+ >>> df.iloc[2, 0] = 5
+ >>> df
+ a b
+ 2000-01-01 00:00:00 0 1
+ 2000-01-01 00:01:00 0 1
+ 2000-01-01 00:02:00 5 1
+ 2000-01-01 00:03:00 0 1
+
+ Downsample the DataFrame into 3 minute bins and sum the values of
+ the timestamps falling into a bin.
+
+ >>> df.groupby('a').resample('3T').sum()
+ a b
+ a
+ 0 2000-01-01 00:00:00 0 2
+ 2000-01-01 00:03:00 0 1
+ 5 2000-01-01 00:00:00 5 1
+
+ Upsample the series into 30 second bins.
+
+ >>> df.groupby('a').resample('30S').sum()
+ a b
+ a
+ 0 2000-01-01 00:00:00 0 1
+ 2000-01-01 00:00:30 0 0
+ 2000-01-01 00:01:00 0 1
+ 2000-01-01 00:01:30 0 0
+ 2000-01-01 00:02:00 0 0
+ 2000-01-01 00:02:30 0 0
+ 2000-01-01 00:03:00 0 1
+ 5 2000-01-01 00:02:00 5 1
+
+ Resample by month. Values are assigned to the month of the period.
+
+ >>> df.groupby('a').resample('M').sum()
+ a b
+ a
+ 0 2000-01-31 0 3
+ 5 2000-01-31 5 1
+
+ Downsample the series into 3 minute bins as above, but close the right
+ side of the bin interval.
+
+ >>> df.groupby('a').resample('3T', closed='right').sum()
+ a b
+ a
+ 0 1999-12-31 23:57:00 0 1
+ 2000-01-01 00:00:00 0 2
+ 5 2000-01-01 00:00:00 5 1
+
+ Downsample the series into 3 minute bins and close the right side of
+ the bin interval, but label each bin using the right edge instead of
+ the left.
+
+ >>> df.groupby('a').resample('3T', closed='right', label='right').sum()
+ a b
+ a
+ 0 2000-01-01 00:00:00 0 1
+ 2000-01-01 00:03:00 0 2
+ 5 2000-01-01 00:03:00 5 1
+
+ Add an offset of twenty seconds.
+
+ >>> df.groupby('a').resample('3T', loffset='20s').sum()
+ a b
+ a
+ 0 2000-01-01 00:00:20 0 2
+ 2000-01-01 00:03:20 0 1
+ 5 2000-01-01 00:00:20 5 1
"""
from pandas.core.resample import get_resampler_for_grouping
return get_resampler_for_grouping(self, rule, *args, **kwargs)
| Checklist for the pandas documentation sprint (ignore this if you are doing
an unrelated PR):
- [x] PR title is "DOC: update the <your-function-or-method> docstring"
- [x] The validation script passes: `scripts/validate_docstrings.py <your-function-or-method>`
- [x] The PEP8 style check passes: `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] The html version looks good: `python doc/make.py --single <your-function-or-method>`
- [x] It has been proofread on language by another sprint participant
Please include the output of the validation script below between the "```" ticks:
```
################################################################################
########## Docstring (pandas.core.groupby.DataFrameGroupBy.resample) ##########
################################################################################
Provide resampling when using a TimeGrouper.
Given a grouper the function resamples it according to a string
"string" -> "frequency".
See the :ref:`frequency aliases <timeseries.offset-aliases>`
documentation for more details.
Parameters
----------
rule : str or Offset
The offset string or object representing target grouper conversion.
*args, **kwargs : [closed, label, loffset]
For compatibility with other groupby methods. See below for some
example parameters.
closed : {‘right’, ‘left’}
Which side of bin interval is closed.
label : {‘right’, ‘left’}
Which bin edge label to label bucket with.
loffset : timedelta
Adjust the resampled time labels.
Returns
-------
Grouper
Return a new grouper with our resampler appended.
Examples
--------
Start by creating a length-4 DataFrame with minute frequency.
>>> idx = pd.date_range('1/1/2000', periods=4, freq='T')
>>> df = pd.DataFrame(data=4 * [range(2)],
... index=idx,
... columns=['a', 'b'])
>>> df.iloc[2, 0] = 5
>>> df
a b
2000-01-01 00:00:00 0 1
2000-01-01 00:01:00 0 1
2000-01-01 00:02:00 5 1
2000-01-01 00:03:00 0 1
Downsample the DataFrame into 3 minute bins and sum the values of
the timestamps falling into a bin.
>>> df.groupby('a').resample('3T').sum()
a b
a
0 2000-01-01 00:00:00 0 2
2000-01-01 00:03:00 0 1
5 2000-01-01 00:00:00 5 1
Upsample the series into 30 second bins.
>>> df.groupby('a').resample('30S').sum()
a b
a
0 2000-01-01 00:00:00 0 1
2000-01-01 00:00:30 0 0
2000-01-01 00:01:00 0 1
2000-01-01 00:01:30 0 0
2000-01-01 00:02:00 0 0
2000-01-01 00:02:30 0 0
2000-01-01 00:03:00 0 1
5 2000-01-01 00:02:00 5 1
Resample by month. Values are assigned to the month of the period.
>>> df.groupby('a').resample('M').sum()
a b
a
0 2000-01-31 0 3
5 2000-01-31 5 1
Downsample the series into 3 minute bins as above, but close the right
side of the bin interval.
>>> df.groupby('a').resample('3T', closed='right').sum()
a b
a
0 1999-12-31 23:57:00 0 1
2000-01-01 00:00:00 0 2
5 2000-01-01 00:00:00 5 1
Downsample the series into 3 minute bins and close the right side of
the bin interval, but label each bin using the right edge instead of
the left.
>>> df.groupby('a').resample('3T', closed='right', label='right').sum()
a b
a
0 2000-01-01 00:00:00 0 1
2000-01-01 00:03:00 0 2
5 2000-01-01 00:03:00 5 1
Add an offset of twenty seconds.
>>> df.groupby('a').resample('3T', loffset='20s').sum()
a b
a
0 2000-01-01 00:00:20 0 2
2000-01-01 00:03:20 0 1
5 2000-01-01 00:00:20 5 1
See also
--------
pandas.Series.groupby
pandas.DataFrame.groupby
pandas.Panel.groupby
################################################################################
################################## Validation ##################################
################################################################################
Errors found:
Use only one blank line to separate sections or paragraphs
Errors in parameters section
Parameters {'args', 'kwargs'} not documented
Unknown parameters {'*args, **kwargs', 'loffset', 'label', 'closed'}
```
args and kwargs are described with asterisks as requested, even if the validator does not recognize it.
`See also` complains of duplication if left as in the commit in the docstring. Not shown above. | https://api.github.com/repos/pandas-dev/pandas/pulls/20374 | 2018-03-15T23:18:12Z | 2018-11-14T20:55:15Z | 2018-11-14T20:55:15Z | 2018-11-14T23:18:31Z |
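The downsampling example from this docstring can be reproduced directly. The sketch below selects the non-grouping column explicitly, which sidesteps the ambiguity (flagged in later pandas versions) of summing the grouping key itself:

```python
import pandas as pd

# Four minutely rows; 'a' is the grouping key, 'b' is summed per
# 3-minute bin within each group, as in the docstring example.
idx = pd.date_range("2000-01-01", periods=4, freq="min")
df = pd.DataFrame({"a": [0, 0, 5, 0], "b": [1, 1, 1, 1]}, index=idx)

result = df.groupby("a")["b"].resample("3min").sum()
print(result)
```

Group 0 contributes two bins (sums 2 and 1) and group 5 one bin (sum 1), matching the `3T` output quoted in the validation block.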
DOC: update the pandas.DataFrame.plot.box docstring | diff --git a/pandas/plotting/_core.py b/pandas/plotting/_core.py
index fa2766bb63d55..2cc0944d29019 100644
--- a/pandas/plotting/_core.py
+++ b/pandas/plotting/_core.py
@@ -3031,19 +3031,51 @@ def barh(self, x=None, y=None, **kwds):
def box(self, by=None, **kwds):
r"""
- Boxplot
+ Make a box plot of the DataFrame columns.
+
+ A box plot is a method for graphically depicting groups of numerical
+ data through their quartiles.
+ The box extends from the Q1 to Q3 quartile values of the data,
+ with a line at the median (Q2). The whiskers extend from the edges
+ of box to show the range of the data. The position of the whiskers
+ is set by default to 1.5*IQR (IQR = Q3 - Q1) from the edges of the
+ box. Outlier points are those past the end of the whiskers.
+
+ For further details see Wikipedia's
+ entry for `boxplot <https://en.wikipedia.org/wiki/Box_plot>`__.
+
+ A consideration when using this chart is that the box and the whiskers
+ can overlap, which is very common when plotting small sets of data.
Parameters
----------
by : string or sequence
Column in the DataFrame to group by.
- `**kwds` : optional
- Additional keyword arguments are documented in
+ **kwds : optional
+ Additional keywords are documented in
:meth:`pandas.DataFrame.plot`.
Returns
-------
axes : :class:`matplotlib.axes.Axes` or numpy.ndarray of them
+
+ See Also
+ --------
+ pandas.DataFrame.boxplot: Another method to draw a box plot.
+ pandas.Series.plot.box: Draw a box plot from a Series object.
+ matplotlib.pyplot.boxplot: Draw a box plot in matplotlib.
+
+ Examples
+ --------
+ Draw a box plot from a DataFrame with four columns of randomly
+ generated data.
+
+ .. plot::
+ :context: close-figs
+
+ >>> data = np.random.randn(25, 4)
+ >>> df = pd.DataFrame(data, columns=list('ABCD'))
+ >>> ax = df.plot.box()
"""
return self(kind='box', by=by, **kwds)
| A bit late to the party, but I did it.
- [ ] PR title is "DOC: update the <your-function-or-method> docstring"
- [ ] The validation script passes: `scripts/validate_docstrings.py <your-function-or-method>`
- [ ] The PEP8 style check passes: `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] The html version looks good: `python doc/make.py --single <your-function-or-method>`
- [ ] It has been proofread on language by another sprint participant
```
################################################################################
#################### Docstring (pandas.DataFrame.plot.box) ####################
################################################################################
Make a box plot of the DataFrame columns.
A box plot is a method for graphically depicting
groups of numerical data through their quartiles.
The box extends from the Q1 to Q3 quartile values of the data,
with a line at the median (Q2). The whiskers extend from the edges
of box to show the range of the data. The position of the whiskers
is set by default to 1.5*IQR (IQR = Q3 - Q1) from the edges of the
box. Outlier points are those past the end of the whiskers.
For further details see Wikipedia's
entry for `boxplot <https://en.wikipedia.org/wiki/Box_plot>`_.
A consideration when using this chart is that the box and the whiskers
can overlap, which is very common when plotting small sets of data.
Parameters
----------
by : string or sequence
Column in the DataFrame to group by.
**kwds : optional
Additional keywords are documented in
:meth:`pandas.DataFrame.plot`.
Returns
-------
axes : :class:`matplotlib.axes.Axes` or numpy.ndarray of them
See Also
--------
pandas.DataFrame.boxplot: Another method to draw a box plot.
pandas.Series.plot.box: Draw a box plot from a Series object.
matplotlib.pyplot.boxplot: Draw a box plot in matplotlib.
Examples
--------
Draw a box plot from a DataFrame with four columns of randomly
generated data.
.. plot::
:context: close-figs
>>> data = np.random.randn(25, 4)
>>> df = pd.DataFrame(data, columns=list('ABCD'))
>>> plot = df.plot.box()
################################################################################
################################## Validation ##################################
################################################################################
Errors found:
Errors in parameters section
Parameters {'kwds'} not documented
Unknown parameters {'**kwds'}
```
Due to issue #15079 I wasn't sure how to document the `by` argument. I will follow up to add it later.
Changed kwds description according to #20348. | https://api.github.com/repos/pandas-dev/pandas/pulls/20373 | 2018-03-15T22:12:23Z | 2018-03-23T09:04:32Z | 2018-03-23T09:04:32Z | 2018-03-23T09:04:50Z |
remove NaN in categories checking | diff --git a/pandas/core/arrays/categorical.py b/pandas/core/arrays/categorical.py
index b37f88d8bfdce..c6c46956a6eaf 100644
--- a/pandas/core/arrays/categorical.py
+++ b/pandas/core/arrays/categorical.py
@@ -1258,7 +1258,7 @@ def isna(self):
"""
Detect missing values
- Both missing values (-1 in .codes) and NA as a category are detected.
+ Missing values (-1 in .codes) are detected.
Returns
-------
@@ -1273,13 +1273,6 @@ def isna(self):
"""
ret = self._codes == -1
-
- # String/object and float categories can hold np.nan
- if self.categories.dtype.kind in ['S', 'O', 'f']:
- if np.nan in self.categories:
- nan_pos = np.where(isna(self.categories))[0]
- # we only have one NA in categories
- ret = np.logical_or(ret, self._codes == nan_pos)
return ret
isnull = isna
@@ -1315,16 +1308,14 @@ def dropna(self):
"""
Return the Categorical without null values.
- Both missing values (-1 in .codes) and NA as a category are detected.
- NA is removed from the categories if present.
+ Missing values (-1 in .codes) are detected.
Returns
-------
valid : Categorical
"""
result = self[self.notna()]
- if isna(result.categories).any():
- result = result.remove_categories([np.nan])
+
return result
def value_counts(self, dropna=True):
@@ -1336,7 +1327,7 @@ def value_counts(self, dropna=True):
Parameters
----------
dropna : boolean, default True
- Don't include counts of NaN, even if NaN is a category.
+ Don't include counts of NaN.
Returns
-------
@@ -1348,11 +1339,9 @@ def value_counts(self, dropna=True):
"""
from numpy import bincount
- from pandas import isna, Series, CategoricalIndex
+ from pandas import Series, CategoricalIndex
- obj = (self.remove_categories([np.nan]) if dropna and
- isna(self.categories).any() else self)
- code, cat = obj._codes, obj.categories
+ code, cat = self._codes, self.categories
ncat, mask = len(cat), 0 <= code
ix, clean = np.arange(ncat), mask.all()
@@ -1627,14 +1616,6 @@ def fillna(self, value=None, method=None, limit=None):
values = self._codes
- # Make sure that we also get NA in categories
- if self.categories.dtype.kind in ['S', 'O', 'f']:
- if np.nan in self.categories:
- values = values.copy()
- nan_pos = np.where(isna(self.categories))[0]
- # we only have one NA in categories
- values[values == nan_pos] = -1
-
# pad / bfill
if method is not None:
@@ -1888,15 +1869,6 @@ def __setitem__(self, key, value):
key = np.asarray(key)
lindexer = self.categories.get_indexer(rvalue)
-
- # FIXME: the following can be removed after GH7820 is fixed:
- # https://github.com/pandas-dev/pandas/issues/7820
- # float categories do currently return -1 for np.nan, even if np.nan is
- # included in the index -> "repair" this here
- if isna(rvalue).any() and isna(self.categories).any():
- nan_pos = np.where(isna(self.categories))[0]
- lindexer[lindexer == -1] = nan_pos
-
lindexer = self._maybe_coerce_indexer(lindexer)
self._codes[key] = lindexer
|
- [ ] closes #20362
- [ ] tests added / passed
- [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/20372 | 2018-03-15T21:45:16Z | 2018-03-16T10:19:01Z | 2018-03-16T10:19:00Z | 2018-03-16T10:19:04Z |
API: Remove integer position args from xy for plotting | diff --git a/pandas/plotting/_core.py b/pandas/plotting/_core.py
index cf3ae3c0368d3..10a1553d74775 100644
--- a/pandas/plotting/_core.py
+++ b/pandas/plotting/_core.py
@@ -1728,6 +1728,11 @@ def _plot(data, x=None, y=None, subplots=False,
klass = _plot_klass[kind]
else:
raise ValueError("%r is not a valid plot kind" % kind)
+ xy_ints = is_integer(x) or is_integer(y)
+ if xy_ints and not data.columns.holds_integer():
+ raise ValueError(
+ "x and y must be labels that match column names in data"
+ )
if kind in _dataframe_kinds:
if isinstance(data, ABCDataFrame):
@@ -1743,8 +1748,6 @@ def _plot(data, x=None, y=None, subplots=False,
msg = "{0} requires either y column or 'subplots=True'"
raise ValueError(msg.format(kind))
elif y is not None:
- if is_integer(y) and not data.columns.holds_integer():
- y = data.columns[y]
# converted to series actually. copy to not modify
data = data[y].copy()
data.index.name = y
@@ -1752,17 +1755,13 @@ def _plot(data, x=None, y=None, subplots=False,
else:
if isinstance(data, ABCDataFrame):
if x is not None:
- if is_integer(x) and not data.columns.holds_integer():
- x = data.columns[x]
- elif not isinstance(data[x], ABCSeries):
- raise ValueError("x must be a label or position")
+ if not isinstance(data[x], ABCSeries):
+ raise ValueError("x must be a label")
data = data.set_index(x)
if y is not None:
- if is_integer(y) and not data.columns.holds_integer():
- y = data.columns[y]
- elif not isinstance(data[y], ABCSeries):
- raise ValueError("y must be a label or position")
+ if not isinstance(data[y], ABCSeries):
+ raise ValueError("y must be a label")
label = kwds['label'] if 'label' in kwds else y
series = data[y].copy() # Don't modify
series.name = label
@@ -1787,8 +1786,8 @@ def _plot(data, x=None, y=None, subplots=False,
- 'hexbin' : hexbin plot"""
series_kind = ""
-df_coord = """x : label or position, default None
- y : label or position, default None
+df_coord = """x : label, default None
+ y : label, default None
Allows plotting of one column versus another"""
series_coord = ""
diff --git a/pandas/tests/plotting/test_frame.py b/pandas/tests/plotting/test_frame.py
index b29afcb404ac6..a39e6921b9803 100644
--- a/pandas/tests/plotting/test_frame.py
+++ b/pandas/tests/plotting/test_frame.py
@@ -205,9 +205,6 @@ def test_donot_overwrite_index_name(self):
def test_plot_xy(self):
# columns.inferred_type == 'string'
df = self.tdf
- self._check_data(df.plot(x=0, y=1), df.set_index('A')['B'].plot())
- self._check_data(df.plot(x=0), df.set_index('A').plot())
- self._check_data(df.plot(y=0), df.B.plot())
self._check_data(df.plot(x='A', y='B'), df.set_index('A').B.plot())
self._check_data(df.plot(x='A'), df.set_index('A').plot())
self._check_data(df.plot(y='B'), df.B.plot())
@@ -1019,7 +1016,6 @@ def test_plot_scatter(self):
columns=['x', 'y', 'z', 'four'])
_check_plot_works(df.plot.scatter, x='x', y='y')
- _check_plot_works(df.plot.scatter, x=1, y=2)
with pytest.raises(TypeError):
df.plot.scatter(x='x')
@@ -1054,17 +1050,15 @@ def test_plot_scatter_with_c(self):
index=list(string.ascii_letters[:6]),
columns=['x', 'y', 'z', 'four'])
- axes = [df.plot.scatter(x='x', y='y', c='z'),
- df.plot.scatter(x=0, y=1, c=2)]
- for ax in axes:
- # default to Greys
- assert ax.collections[0].cmap.name == 'Greys'
+ axes = df.plot.scatter(x='x', y='y', c='z')
- if self.mpl_ge_1_3_1:
+ # default to Greys
+ assert axes.collections[0].cmap.name == 'Greys'
- # n.b. there appears to be no public method to get the colorbar
- # label
- assert ax.collections[0].colorbar._label == 'z'
+ if self.mpl_ge_1_3_1:
+ # n.b. there appears to be no public method to get the colorbar
+ # label
+ assert axes.collections[0].colorbar._label == 'z'
cm = 'cubehelix'
ax = df.plot.scatter(x='x', y='y', c='z', colormap=cm)
@@ -1075,7 +1069,7 @@ def test_plot_scatter_with_c(self):
assert ax.collections[0].colorbar is None
# verify that we can still plot a solid color
- ax = df.plot.scatter(x=0, y=1, c='red')
+ ax = df.plot.scatter(x='x', y='y', c='red')
assert ax.collections[0].colorbar is None
self._check_colors(ax.collections, facecolors=['r'])
@@ -2172,14 +2166,27 @@ def test_invalid_kind(self):
@pytest.mark.parametrize("x,y", [
(['B', 'C'], 'A'),
- ('A', ['B', 'C'])
+ ('A', ['B', 'C']),
+ (0, 'A'),
+ ('A', 0),
+ (0, 1)
])
def test_invalid_xy_args(self, x, y):
- # GH 18671
+ # GH 18671 and # GH 20056
df = DataFrame({"A": [1, 2], 'B': [3, 4], 'C': [5, 6]})
with pytest.raises(ValueError):
df.plot(x=x, y=y)
+ @pytest.mark.parametrize("x,y,colnames", [
+ (0, 1, [0, 1]),
+ (1, 'A', ['A', 1])
+ ])
+ def test_xy_args_ints(self, x, y, colnames):
+ # GH 20056
+ df = DataFrame({"A": [1, 2], 'B': [3, 4]})
+ df.columns = colnames
+ _check_plot_works(df.plot, x=x, y=y)
+
@pytest.mark.parametrize("x,y", [
('A', 'B'),
('B', 'A')
@@ -2255,9 +2262,6 @@ def test_pie_df(self):
ax = _check_plot_works(df.plot.pie, y='Y')
self._check_text_labels(ax.texts, df.index)
- ax = _check_plot_works(df.plot.pie, y=2)
- self._check_text_labels(ax.texts, df.index)
-
# _check_plot_works adds an ax so catch warning. see GH #13188
with tm.assert_produces_warning(UserWarning):
axes = _check_plot_works(df.plot.pie,
| - [x] closes #20056
- [x] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [?] whatsnew entry
This is based on discussion from #20000 where we decided it was confusing to allow `df.plot` to support both labels AND positions. I wasn't sure where to put this in whatsnew, so lmk and I'll update it. | https://api.github.com/repos/pandas-dev/pandas/pulls/20371 | 2018-03-15T21:44:49Z | 2018-03-22T14:41:42Z | null | 2018-03-22T14:41:52Z |
TST: Remove invalid dtype test | diff --git a/pandas/tests/extension/base/interface.py b/pandas/tests/extension/base/interface.py
index e1596f0675f32..2162552e9650d 100644
--- a/pandas/tests/extension/base/interface.py
+++ b/pandas/tests/extension/base/interface.py
@@ -32,9 +32,6 @@ def test_array_interface(self, data):
result = np.array(data)
assert result[0] == data[0]
- def test_as_ndarray_with_dtype_kind(self, data):
- np.array(data, dtype=data.dtype.kind)
-
def test_repr(self, data):
ser = pd.Series(data)
assert data.dtype.name in repr(ser)
| This test was not valid. Not every dtype kind is a valid dtype.
For example, the dtype may be 'uint64', but the kind is 'u'. | https://api.github.com/repos/pandas-dev/pandas/pulls/20370 | 2018-03-15T18:41:11Z | 2018-03-15T23:18:52Z | 2018-03-15T23:18:52Z | 2018-05-02T13:10:12Z |
DOC: update the pandas.DataFrame.clip docstring | diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 952e03c9e645a..728f5876acb5e 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -6141,48 +6141,61 @@ def clip(self, lower=None, upper=None, axis=None, inplace=False,
-------
Series or DataFrame
Same type as calling object with the values outside the
- clip boundaries replaced
+ clip boundaries replaced.
+
+ Notes
+ -----
+ .. [1] Tukey, John W. "The future of data analysis." The annals of
+ mathematical statistics 33.1 (1962): 1-67.
Examples
--------
- >>> data = {'col_0': [9, -3, 0, -1, 5], 'col_1': [-2, -7, 6, 8, -5]}
- >>> df = pd.DataFrame(data)
+ >>> df = pd.DataFrame({'a': [-1, -2, -100],
+ ... 'b': [1, 2, 100]},
+ ... index=['foo', 'bar', 'foobar'])
>>> df
- col_0 col_1
- 0 9 -2
- 1 -3 -7
- 2 0 6
- 3 -1 8
- 4 5 -5
-
- Clips per column using lower and upper thresholds:
-
- >>> df.clip(-4, 6)
- col_0 col_1
- 0 6 -2
- 1 -3 -4
- 2 0 6
- 3 -1 6
- 4 5 -4
-
- Clips using specific lower and upper thresholds per column element:
-
- >>> t = pd.Series([2, -4, -1, 6, 3])
- >>> t
- 0 2
- 1 -4
- 2 -1
- 3 6
- 4 3
- dtype: int64
-
- >>> df.clip(t, t + 4, axis=0)
- col_0 col_1
- 0 6 2
- 1 -3 -4
- 2 0 3
- 3 6 8
- 4 5 3
+ a b
+ foo -1 1
+ bar -2 2
+ foobar -100 100
+
+ >>> df.clip(lower=-10, upper=10)
+ a b
+ foo -1 1
+ bar -2 2
+ foobar -10 10
+
+ You can clip each column or row with different thresholds by passing
+ a ``Series`` to the lower/upper argument. Use the axis argument to clip
+ by column or rows.
+
+ >>> col_thresh = pd.Series({'a': -5, 'b': 5})
+ >>> df.clip(lower=col_thresh, axis='columns')
+ a b
+ foo -1 5
+ bar -2 5
+ foobar -5 100
+
+ Clip the foo, bar, and foobar rows with lower thresholds 5, 7, and 10.
+
+ >>> row_thresh = pd.Series({'foo': 0, 'bar': 1, 'foobar': 10})
+ >>> df.clip(lower=row_thresh, axis='index')
+ a b
+ foo 0 1
+ bar 1 2
+ foobar 10 100
+
+ Winsorizing [1]_ is a related method, whereby the data are clipped at
+ the 5th and 95th percentiles. The ``DataFrame.quantile`` method returns
+ a ``Series`` with column names as index and the quantiles as values.
+ Use ``axis='columns'`` to apply clipping to columns.
+
+ >>> lower, upper = df.quantile(0.05), df.quantile(0.95)
+ >>> df.clip(lower=lower, upper=upper, axis='columns')
+ a b
+ foo -1.1 1.1
+ bar -2.0 2.0
+ foobar -90.2 90.2
"""
if isinstance(self, ABCPanel):
raise NotImplementedError("clip is not supported yet for panels")
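The winsorizing example added in this diff can be run as-is; a compact check of the quantile-based clipping it describes:

```python
import pandas as pd

# Clip each column at its own 5th/95th percentiles, as in the
# docstring's winsorizing example; quantile() returns per-column
# thresholds (a Series indexed by column name) that clip() aligns
# on axis='columns'.
df = pd.DataFrame({"a": [-1, -2, -100], "b": [1, 2, 100]})
lower, upper = df.quantile(0.05), df.quantile(0.95)
clipped = df.clip(lower=lower, upper=upper, axis="columns")
print(clipped)
```

The clipped values round to the `-1.1 / -2.0 / -90.2` and `1.1 / 2.0 / 90.2` shown in the docstring's final example.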
| Added new examples and a reference to winsorization.
Checklist for the pandas documentation sprint (ignore this if you are doing
an unrelated PR):
- [x] PR title is "DOC: update the <your-function-or-method> docstring"
- [x] The validation script passes: `scripts/validate_docstrings.py <your-function-or-method>`
- [x] The PEP8 style check passes: `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] The html version looks good: `python doc/make.py --single <your-function-or-method>`
- [x] It has been proofread on language by another sprint participant
Please include the output of the validation script below between the "```" ticks:
```
################################################################################
###################### Docstring (pandas.DataFrame.clip) ######################
################################################################################
Trim values at input threshold(s).
Assigns values outside boundary to boundary values. Thresholds
can be singular values or array like, and in the latter case
the clipping is performed element-wise in the specified axis.
Parameters
----------
lower : float or array_like, default None
Minimum threshold value. All values below this
threshold will be set to it.
upper : float or array_like, default None
Maximum threshold value. All values above this
threshold will be set to it.
axis : int or string axis name, optional
Align object with lower and upper along the given axis.
inplace : boolean, default False
Whether to perform the operation in place on the data.
.. versionadded:: 0.21.0
*args, **kwargs
Additional keywords have no effect but might be accepted
for compatibility with numpy.
See Also
--------
clip_lower : Clip values below specified threshold(s).
clip_upper : Clip values above specified threshold(s).
Returns
-------
Series or DataFrame
Same type as calling object with the values outside the
clip boundaries replaced.
Notes
-----
.. [1] Tukey, John W. "The future of data analysis." The annals of
mathematical statistics 33.1 (1962): 1-67.
Examples
--------
>>> df = pd.DataFrame({'a': [-1, -2, -100],
... 'b': [1, 2, 100]},
... index=['foo', 'bar', 'foobar'])
>>> df
a b
foo -1 1
bar -2 2
foobar -100 100
>>> df.clip(lower=-10, upper=10)
a b
foo -1 1
bar -2 2
foobar -10 10
You can clip each column or row with different thresholds by passing
a ``Series`` to the lower/upper argument. Use the axis argument to clip
by column or rows.
>>> col_thresh = pd.Series({'a': -5, 'b': 5})
>>> df.clip(lower=col_thresh, axis='columns')
a b
foo -1 5
bar -2 5
foobar -5 100
Clip the foo, bar, and foobar rows with lower thresholds 5, 7, and 10.
>>> row_thresh = pd.Series({'foo': 0, 'bar': 1, 'foobar': 10})
>>> df.clip(lower=row_thresh, axis='index')
a b
foo 0 1
bar 1 2
foobar 10 100
Winsorizing [1]_ is a related method, whereby the data are clipped at
the 5th and 95th percentiles. The ``DataFrame.quantile`` method returns
a ``Series`` with column names as index and the quantiles as values.
Use ``axis='columns'`` to apply clipping to columns.
>>> lower, upper = df.quantile(0.05), df.quantile(0.95)
>>> df.clip(lower=lower, upper=upper, axis='columns')
a b
foo -1.1 1.1
bar -2.0 2.0
foobar -90.2 90.2
################################################################################
################################## Validation ##################################
################################################################################
Errors found:
Errors in parameters section
Parameters {'args', 'kwargs'} not documented
Unknown parameters {'*args, **kwargs'}
Parameter "inplace" description should finish with "."
Parameter "*args, **kwargs" has no type
```
If the validation script still gives errors, but you think there is a good reason
to deviate in this case (and there are certainly such cases), please state this
explicitly.
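As a quick sanity check outside the docstring, the winsorizing recipe shown in the examples can be reproduced with a minimal standalone sketch (assuming pandas is installed; same toy frame as in the docstring):

```python
import pandas as pd

# Same frame as in the docstring examples.
df = pd.DataFrame({'a': [-1, -2, -100],
                   'b': [1, 2, 100]},
                  index=['foo', 'bar', 'foobar'])

# Winsorize: clip each column at its 5th and 95th percentiles.
# df.quantile returns a Series indexed by column name, so
# axis='columns' aligns the thresholds column-by-column.
lower, upper = df.quantile(0.05), df.quantile(0.95)
clipped = df.clip(lower=lower, upper=upper, axis='columns')
print(clipped)
```

Note that `axis='columns'` is what makes the threshold `Series` align on column labels rather than on the row index.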
Checklist for other PRs (remove this part if you are doing a PR for the pandas documentation sprint):
- [x] closes #xxxx
- [x] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
This is a new pull request, but it has the same content as #20212. I was having some problems with git, so I just started fresh.
DOC: Update cheat sheet and create link in docs | diff --git a/doc/cheatsheet/Pandas_Cheat_Sheet.pdf b/doc/cheatsheet/Pandas_Cheat_Sheet.pdf
index 0492805a1408b..696ed288cf7a6 100644
Binary files a/doc/cheatsheet/Pandas_Cheat_Sheet.pdf and b/doc/cheatsheet/Pandas_Cheat_Sheet.pdf differ
diff --git a/doc/cheatsheet/Pandas_Cheat_Sheet.pptx b/doc/cheatsheet/Pandas_Cheat_Sheet.pptx
index 6cca9ac4647f7..f8b98a6f1f8e4 100644
Binary files a/doc/cheatsheet/Pandas_Cheat_Sheet.pptx and b/doc/cheatsheet/Pandas_Cheat_Sheet.pptx differ
diff --git a/doc/source/tutorials.rst b/doc/source/tutorials.rst
index 85e455de7d246..895fe595de205 100644
--- a/doc/source/tutorials.rst
+++ b/doc/source/tutorials.rst
@@ -13,6 +13,8 @@ pandas' own :ref:`10 Minutes to pandas<10min>`.
More complex recipes are in the :ref:`Cookbook<cookbook>`.
+A handy pandas `cheat sheet <http://pandas.pydata.org/Pandas_Cheat_Sheet.pdf>`_.
+
pandas Cookbook
---------------
| Update pandas cheat sheet to use `drop(columns=['colname'])`.
Provide link to cheat sheet from the tutorials page.
Based on a discussion a while back on gitter with @jorisvandenbossche, maintainers may want the link to the cheat sheet in tutorials.rst to not point to the GitHub repository as I've edited it here, but to a place within the `http://pandas.pydata.org/pandas-docs/stable/` tree, which means that when the docs are published, the PDF version of the cheat sheet has to be copied to the server hosting them.
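For reference, the keyword form the cheat sheet now recommends is equivalent to the older axis-based call; a minimal sketch (column names here are hypothetical):

```python
import pandas as pd

df = pd.DataFrame({'colname': [1, 2], 'other': [3, 4]})

# Newer, more readable keyword form (available since pandas 0.21).
new_style = df.drop(columns=['colname'])

# Older, equivalent axis-based form.
old_style = df.drop(['colname'], axis=1)

# Both drop the same column and produce identical frames.
assert new_style.equals(old_style)
```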
| https://api.github.com/repos/pandas-dev/pandas/pulls/20366 | 2018-03-15T15:29:33Z | 2018-03-16T20:46:55Z | 2018-03-16T20:46:55Z | 2019-02-07T06:50:39Z |
TST: fixtures for timezones | diff --git a/pandas/conftest.py b/pandas/conftest.py
index 81a039e484cf1..e78f565b0a9af 100644
--- a/pandas/conftest.py
+++ b/pandas/conftest.py
@@ -87,3 +87,24 @@ def join_type(request):
Fixture for trying all types of join operations
"""
return request.param
+
+
+TIMEZONES = [None, 'UTC', 'US/Eastern', 'Asia/Tokyo', 'dateutil/US/Pacific']
+
+
+@td.parametrize_fixture_doc(str(TIMEZONES))
+@pytest.fixture(params=TIMEZONES)
+def tz_naive_fixture(request):
+ """
+ Fixture for trying timezones including default (None): {0}
+ """
+ return request.param
+
+
+@td.parametrize_fixture_doc(str(TIMEZONES[1:]))
+@pytest.fixture(params=TIMEZONES[1:])
+def tz_aware_fixture(request):
+ """
+ Fixture for trying explicit timezones: {0}
+ """
+ return request.param
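The conftest hunk above parametrizes fixtures over a shared timezone list; the underlying pytest pattern can be sketched standalone as follows (the `parametrize_fixture_doc` decorator is omitted, and the test name is hypothetical):

```python
import pytest

TIMEZONES = [None, 'UTC', 'US/Eastern', 'Asia/Tokyo', 'dateutil/US/Pacific']


@pytest.fixture(params=TIMEZONES)
def tz_naive_fixture(request):
    """Fixture for trying timezones including default (None)."""
    return request.param


@pytest.fixture(params=TIMEZONES[1:])
def tz_aware_fixture(request):
    """Fixture for trying explicit timezones only."""
    return request.param


# Any test requesting one of these fixtures is collected once per
# parameter, replacing the old hand-written loops over timezones.
def test_tz_is_string_or_none(tz_naive_fixture):  # hypothetical test
    assert tz_naive_fixture is None or isinstance(tz_naive_fixture, str)
```

Slicing the list with `TIMEZONES[1:]` is what makes the aware fixture skip the `None` (naive) case while keeping both fixtures in sync.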
diff --git a/pandas/tests/dtypes/test_cast.py b/pandas/tests/dtypes/test_cast.py
index 96a9e3227b40b..590f28b275aec 100644
--- a/pandas/tests/dtypes/test_cast.py
+++ b/pandas/tests/dtypes/test_cast.py
@@ -144,16 +144,6 @@ def testinfer_dtype_from_scalar(self):
dtype, val = infer_dtype_from_scalar(data)
assert dtype == 'm8[ns]'
- for tz in ['UTC', 'US/Eastern', 'Asia/Tokyo']:
- dt = Timestamp(1, tz=tz)
- dtype, val = infer_dtype_from_scalar(dt, pandas_dtype=True)
- assert dtype == 'datetime64[ns, {0}]'.format(tz)
- assert val == dt.value
-
- dtype, val = infer_dtype_from_scalar(dt)
- assert dtype == np.object_
- assert val == dt
-
for freq in ['M', 'D']:
p = Period('2011-01-01', freq=freq)
dtype, val = infer_dtype_from_scalar(p, pandas_dtype=True)
@@ -171,6 +161,17 @@ def testinfer_dtype_from_scalar(self):
dtype, val = infer_dtype_from_scalar(data)
assert dtype == np.object_
+ @pytest.mark.parametrize('tz', ['UTC', 'US/Eastern', 'Asia/Tokyo'])
+ def testinfer_from_scalar_tz(self, tz):
+ dt = Timestamp(1, tz=tz)
+ dtype, val = infer_dtype_from_scalar(dt, pandas_dtype=True)
+ assert dtype == 'datetime64[ns, {0}]'.format(tz)
+ assert val == dt.value
+
+ dtype, val = infer_dtype_from_scalar(dt)
+ assert dtype == np.object_
+ assert val == dt
+
def testinfer_dtype_from_scalar_errors(self):
with pytest.raises(ValueError):
infer_dtype_from_scalar(np.array([1]))
diff --git a/pandas/tests/frame/test_alter_axes.py b/pandas/tests/frame/test_alter_axes.py
index 3e0ba26c20eb0..d38e3b2ad9c10 100644
--- a/pandas/tests/frame/test_alter_axes.py
+++ b/pandas/tests/frame/test_alter_axes.py
@@ -269,25 +269,26 @@ def test_set_index_cast_datetimeindex(self):
df.pop('ts')
assert_frame_equal(df, expected)
+ def test_reset_index_tz(self, tz_aware_fixture):
# GH 3950
# reset_index with single level
- for tz in ['UTC', 'Asia/Tokyo', 'US/Eastern']:
- idx = pd.date_range('1/1/2011', periods=5,
- freq='D', tz=tz, name='idx')
- df = pd.DataFrame(
- {'a': range(5), 'b': ['A', 'B', 'C', 'D', 'E']}, index=idx)
-
- expected = pd.DataFrame({'idx': [datetime(2011, 1, 1),
- datetime(2011, 1, 2),
- datetime(2011, 1, 3),
- datetime(2011, 1, 4),
- datetime(2011, 1, 5)],
- 'a': range(5),
- 'b': ['A', 'B', 'C', 'D', 'E']},
- columns=['idx', 'a', 'b'])
- expected['idx'] = expected['idx'].apply(
- lambda d: pd.Timestamp(d, tz=tz))
- assert_frame_equal(df.reset_index(), expected)
+ tz = tz_aware_fixture
+ idx = pd.date_range('1/1/2011', periods=5,
+ freq='D', tz=tz, name='idx')
+ df = pd.DataFrame(
+ {'a': range(5), 'b': ['A', 'B', 'C', 'D', 'E']}, index=idx)
+
+ expected = pd.DataFrame({'idx': [datetime(2011, 1, 1),
+ datetime(2011, 1, 2),
+ datetime(2011, 1, 3),
+ datetime(2011, 1, 4),
+ datetime(2011, 1, 5)],
+ 'a': range(5),
+ 'b': ['A', 'B', 'C', 'D', 'E']},
+ columns=['idx', 'a', 'b'])
+ expected['idx'] = expected['idx'].apply(
+ lambda d: pd.Timestamp(d, tz=tz))
+ assert_frame_equal(df.reset_index(), expected)
def test_set_index_timezone(self):
# GH 12358
diff --git a/pandas/tests/indexes/datetimes/test_arithmetic.py b/pandas/tests/indexes/datetimes/test_arithmetic.py
index 8f259a7e78897..4ef7997a53b85 100644
--- a/pandas/tests/indexes/datetimes/test_arithmetic.py
+++ b/pandas/tests/indexes/datetimes/test_arithmetic.py
@@ -876,23 +876,24 @@ def test_dti_with_offset_series(self, tz, names):
res3 = dti - other
tm.assert_series_equal(res3, expected_sub)
- def test_dti_add_offset_tzaware(self):
- dates = date_range('2012-11-01', periods=3, tz='US/Pacific')
- offset = dates + pd.offsets.Hour(5)
- assert dates[0] + pd.offsets.Hour(5) == offset[0]
+ def test_dti_add_offset_tzaware(self, tz_aware_fixture):
+ timezone = tz_aware_fixture
+ if timezone == 'US/Pacific':
+ dates = date_range('2012-11-01', periods=3, tz=timezone)
+ offset = dates + pd.offsets.Hour(5)
+ assert dates[0] + pd.offsets.Hour(5) == offset[0]
- # GH#6818
- for tz in ['UTC', 'US/Pacific', 'Asia/Tokyo']:
- dates = date_range('2010-11-01 00:00', periods=3, tz=tz, freq='H')
- expected = DatetimeIndex(['2010-11-01 05:00', '2010-11-01 06:00',
- '2010-11-01 07:00'], freq='H', tz=tz)
+ dates = date_range('2010-11-01 00:00',
+ periods=3, tz=timezone, freq='H')
+ expected = DatetimeIndex(['2010-11-01 05:00', '2010-11-01 06:00',
+ '2010-11-01 07:00'], freq='H', tz=timezone)
- offset = dates + pd.offsets.Hour(5)
- tm.assert_index_equal(offset, expected)
- offset = dates + np.timedelta64(5, 'h')
- tm.assert_index_equal(offset, expected)
- offset = dates + timedelta(hours=5)
- tm.assert_index_equal(offset, expected)
+ offset = dates + pd.offsets.Hour(5)
+ tm.assert_index_equal(offset, expected)
+ offset = dates + np.timedelta64(5, 'h')
+ tm.assert_index_equal(offset, expected)
+ offset = dates + timedelta(hours=5)
+ tm.assert_index_equal(offset, expected)
@pytest.mark.parametrize('klass,assert_func', [
diff --git a/pandas/tests/indexes/datetimes/test_construction.py b/pandas/tests/indexes/datetimes/test_construction.py
index 176f5bd0c1a2a..86030a5605395 100644
--- a/pandas/tests/indexes/datetimes/test_construction.py
+++ b/pandas/tests/indexes/datetimes/test_construction.py
@@ -450,9 +450,8 @@ def test_dti_constructor_preserve_dti_freq(self):
rng2 = DatetimeIndex(rng)
assert rng.freq == rng2.freq
- @pytest.mark.parametrize('tz', [None, 'UTC', 'Asia/Tokyo',
- 'dateutil/US/Pacific'])
- def test_dti_constructor_years_only(self, tz):
+ def test_dti_constructor_years_only(self, tz_naive_fixture):
+ tz = tz_naive_fixture
# GH 6961
rng1 = date_range('2014', '2015', freq='M', tz=tz)
expected1 = date_range('2014-01-31', '2014-12-31', freq='M', tz=tz)
diff --git a/pandas/tests/indexes/datetimes/test_datetime.py b/pandas/tests/indexes/datetimes/test_datetime.py
index 51788b3e25507..2d55dfff7a8f3 100644
--- a/pandas/tests/indexes/datetimes/test_datetime.py
+++ b/pandas/tests/indexes/datetimes/test_datetime.py
@@ -329,8 +329,8 @@ def test_factorize(self):
tm.assert_numpy_array_equal(arr, exp_arr)
tm.assert_index_equal(idx, idx3)
- @pytest.mark.parametrize('tz', [None, 'UTC', 'US/Eastern', 'Asia/Tokyo'])
- def test_factorize_tz(self, tz):
+ def test_factorize_tz(self, tz_naive_fixture):
+ tz = tz_naive_fixture
# GH#13750
base = pd.date_range('2016-11-05', freq='H', periods=100, tz=tz)
idx = base.repeat(5)
diff --git a/pandas/tests/indexes/datetimes/test_ops.py b/pandas/tests/indexes/datetimes/test_ops.py
index ed7e425924097..8986828399a98 100644
--- a/pandas/tests/indexes/datetimes/test_ops.py
+++ b/pandas/tests/indexes/datetimes/test_ops.py
@@ -13,12 +13,17 @@
from pandas.tests.test_base import Ops
+@pytest.fixture(params=[None, 'UTC', 'Asia/Tokyo', 'US/Eastern',
+ 'dateutil/Asia/Singapore',
+ 'dateutil/US/Pacific'])
+def tz_fixture(request):
+ return request.param
+
+
START, END = datetime(2009, 1, 1), datetime(2010, 1, 1)
class TestDatetimeIndexOps(Ops):
- tz = [None, 'UTC', 'Asia/Tokyo', 'US/Eastern', 'dateutil/Asia/Singapore',
- 'dateutil/US/Pacific']
def setup_method(self, method):
super(TestDatetimeIndexOps, self).setup_method(method)
@@ -47,34 +52,35 @@ def test_ops_properties_basic(self):
assert s.day == 10
pytest.raises(AttributeError, lambda: s.weekday)
- def test_minmax(self):
- for tz in self.tz:
- # monotonic
- idx1 = pd.DatetimeIndex(['2011-01-01', '2011-01-02',
- '2011-01-03'], tz=tz)
- assert idx1.is_monotonic
+ def test_minmax_tz(self, tz_fixture):
+ tz = tz_fixture
+ # monotonic
+ idx1 = pd.DatetimeIndex(['2011-01-01', '2011-01-02',
+ '2011-01-03'], tz=tz)
+ assert idx1.is_monotonic
- # non-monotonic
- idx2 = pd.DatetimeIndex(['2011-01-01', pd.NaT, '2011-01-03',
- '2011-01-02', pd.NaT], tz=tz)
- assert not idx2.is_monotonic
+ # non-monotonic
+ idx2 = pd.DatetimeIndex(['2011-01-01', pd.NaT, '2011-01-03',
+ '2011-01-02', pd.NaT], tz=tz)
+ assert not idx2.is_monotonic
- for idx in [idx1, idx2]:
- assert idx.min() == Timestamp('2011-01-01', tz=tz)
- assert idx.max() == Timestamp('2011-01-03', tz=tz)
- assert idx.argmin() == 0
- assert idx.argmax() == 2
+ for idx in [idx1, idx2]:
+ assert idx.min() == Timestamp('2011-01-01', tz=tz)
+ assert idx.max() == Timestamp('2011-01-03', tz=tz)
+ assert idx.argmin() == 0
+ assert idx.argmax() == 2
- for op in ['min', 'max']:
- # Return NaT
- obj = DatetimeIndex([])
- assert pd.isna(getattr(obj, op)())
+ @pytest.mark.parametrize('op', ['min', 'max'])
+ def test_minmax_nat(self, op):
+ # Return NaT
+ obj = DatetimeIndex([])
+ assert pd.isna(getattr(obj, op)())
- obj = DatetimeIndex([pd.NaT])
- assert pd.isna(getattr(obj, op)())
+ obj = DatetimeIndex([pd.NaT])
+ assert pd.isna(getattr(obj, op)())
- obj = DatetimeIndex([pd.NaT, pd.NaT, pd.NaT])
- assert pd.isna(getattr(obj, op)())
+ obj = DatetimeIndex([pd.NaT, pd.NaT, pd.NaT])
+ assert pd.isna(getattr(obj, op)())
def test_numpy_minmax(self):
dr = pd.date_range(start='2016-01-15', end='2016-01-20')
@@ -96,8 +102,8 @@ def test_numpy_minmax(self):
tm.assert_raises_regex(
ValueError, errmsg, np.argmax, dr, out=0)
- @pytest.mark.parametrize('tz', tz)
- def test_repeat_range(self, tz):
+ def test_repeat_range(self, tz_fixture):
+ tz = tz_fixture
rng = date_range('1/1/2000', '1/1/2001')
result = rng.repeat(5)
@@ -128,8 +134,8 @@ def test_repeat_range(self, tz):
tm.assert_index_equal(res, exp)
assert res.freq is None
- @pytest.mark.parametrize('tz', tz)
- def test_repeat(self, tz):
+ def test_repeat(self, tz_fixture):
+ tz = tz_fixture
reps = 2
msg = "the 'axis' parameter is not supported"
@@ -151,8 +157,8 @@ def test_repeat(self, tz):
tm.assert_raises_regex(ValueError, msg, np.repeat,
rng, reps, axis=1)
- @pytest.mark.parametrize('tz', tz)
- def test_resolution(self, tz):
+ def test_resolution(self, tz_fixture):
+ tz = tz_fixture
for freq, expected in zip(['A', 'Q', 'M', 'D', 'H', 'T',
'S', 'L', 'U'],
['day', 'day', 'day', 'day', 'hour',
@@ -162,8 +168,8 @@ def test_resolution(self, tz):
tz=tz)
assert idx.resolution == expected
- @pytest.mark.parametrize('tz', tz)
- def test_value_counts_unique(self, tz):
+ def test_value_counts_unique(self, tz_fixture):
+ tz = tz_fixture
# GH 7735
idx = pd.date_range('2011-01-01 09:00', freq='H', periods=10)
# create repeated values, 'n'th element is repeated by n+1 times
@@ -209,86 +215,89 @@ def test_nonunique_contains(self):
['2015', '2015', '2016'], ['2015', '2015', '2014'])):
assert idx[0] in idx
- def test_order(self):
- # with freq
- idx1 = DatetimeIndex(['2011-01-01', '2011-01-02',
- '2011-01-03'], freq='D', name='idx')
- idx2 = DatetimeIndex(['2011-01-01 09:00', '2011-01-01 10:00',
- '2011-01-01 11:00'], freq='H',
- tz='Asia/Tokyo', name='tzidx')
-
- for idx in [idx1, idx2]:
- ordered = idx.sort_values()
- tm.assert_index_equal(ordered, idx)
- assert ordered.freq == idx.freq
-
- ordered = idx.sort_values(ascending=False)
- expected = idx[::-1]
- tm.assert_index_equal(ordered, expected)
- assert ordered.freq == expected.freq
- assert ordered.freq.n == -1
-
- ordered, indexer = idx.sort_values(return_indexer=True)
- tm.assert_index_equal(ordered, idx)
- tm.assert_numpy_array_equal(indexer, np.array([0, 1, 2]),
- check_dtype=False)
- assert ordered.freq == idx.freq
-
- ordered, indexer = idx.sort_values(return_indexer=True,
- ascending=False)
- expected = idx[::-1]
- tm.assert_index_equal(ordered, expected)
- tm.assert_numpy_array_equal(indexer,
- np.array([2, 1, 0]),
- check_dtype=False)
- assert ordered.freq == expected.freq
- assert ordered.freq.n == -1
+ @pytest.mark.parametrize('idx',
+ [
+ DatetimeIndex(
+ ['2011-01-01',
+ '2011-01-02',
+ '2011-01-03'],
+ freq='D', name='idx'),
+ DatetimeIndex(
+ ['2011-01-01 09:00',
+ '2011-01-01 10:00',
+ '2011-01-01 11:00'],
+ freq='H', name='tzidx', tz='Asia/Tokyo')
+ ])
+ def test_order_with_freq(self, idx):
+ ordered = idx.sort_values()
+ tm.assert_index_equal(ordered, idx)
+ assert ordered.freq == idx.freq
+
+ ordered = idx.sort_values(ascending=False)
+ expected = idx[::-1]
+ tm.assert_index_equal(ordered, expected)
+ assert ordered.freq == expected.freq
+ assert ordered.freq.n == -1
+
+ ordered, indexer = idx.sort_values(return_indexer=True)
+ tm.assert_index_equal(ordered, idx)
+ tm.assert_numpy_array_equal(indexer, np.array([0, 1, 2]),
+ check_dtype=False)
+ assert ordered.freq == idx.freq
+
+ ordered, indexer = idx.sort_values(return_indexer=True,
+ ascending=False)
+ expected = idx[::-1]
+ tm.assert_index_equal(ordered, expected)
+ tm.assert_numpy_array_equal(indexer,
+ np.array([2, 1, 0]),
+ check_dtype=False)
+ assert ordered.freq == expected.freq
+ assert ordered.freq.n == -1
+
+ @pytest.mark.parametrize('index_dates,expected_dates', [
+ (['2011-01-01', '2011-01-03', '2011-01-05',
+ '2011-01-02', '2011-01-01'],
+ ['2011-01-01', '2011-01-01', '2011-01-02',
+ '2011-01-03', '2011-01-05']),
+ (['2011-01-01', '2011-01-03', '2011-01-05',
+ '2011-01-02', '2011-01-01'],
+ ['2011-01-01', '2011-01-01', '2011-01-02',
+ '2011-01-03', '2011-01-05']),
+ ([pd.NaT, '2011-01-03', '2011-01-05',
+ '2011-01-02', pd.NaT],
+ [pd.NaT, pd.NaT, '2011-01-02', '2011-01-03',
+ '2011-01-05'])
+ ])
+ def test_order_without_freq(self, index_dates, expected_dates, tz_fixture):
+ tz = tz_fixture
# without freq
- for tz in self.tz:
- idx1 = DatetimeIndex(['2011-01-01', '2011-01-03', '2011-01-05',
- '2011-01-02', '2011-01-01'],
- tz=tz, name='idx1')
- exp1 = DatetimeIndex(['2011-01-01', '2011-01-01', '2011-01-02',
- '2011-01-03', '2011-01-05'],
- tz=tz, name='idx1')
-
- idx2 = DatetimeIndex(['2011-01-01', '2011-01-03', '2011-01-05',
- '2011-01-02', '2011-01-01'],
- tz=tz, name='idx2')
-
- exp2 = DatetimeIndex(['2011-01-01', '2011-01-01', '2011-01-02',
- '2011-01-03', '2011-01-05'],
- tz=tz, name='idx2')
-
- idx3 = DatetimeIndex([pd.NaT, '2011-01-03', '2011-01-05',
- '2011-01-02', pd.NaT], tz=tz, name='idx3')
- exp3 = DatetimeIndex([pd.NaT, pd.NaT, '2011-01-02', '2011-01-03',
- '2011-01-05'], tz=tz, name='idx3')
-
- for idx, expected in [(idx1, exp1), (idx2, exp2), (idx3, exp3)]:
- ordered = idx.sort_values()
- tm.assert_index_equal(ordered, expected)
- assert ordered.freq is None
-
- ordered = idx.sort_values(ascending=False)
- tm.assert_index_equal(ordered, expected[::-1])
- assert ordered.freq is None
-
- ordered, indexer = idx.sort_values(return_indexer=True)
- tm.assert_index_equal(ordered, expected)
-
- exp = np.array([0, 4, 3, 1, 2])
- tm.assert_numpy_array_equal(indexer, exp, check_dtype=False)
- assert ordered.freq is None
-
- ordered, indexer = idx.sort_values(return_indexer=True,
- ascending=False)
- tm.assert_index_equal(ordered, expected[::-1])
-
- exp = np.array([2, 1, 3, 4, 0])
- tm.assert_numpy_array_equal(indexer, exp, check_dtype=False)
- assert ordered.freq is None
+ index = DatetimeIndex(index_dates, tz=tz, name='idx')
+ expected = DatetimeIndex(expected_dates, tz=tz, name='idx')
+
+ ordered = index.sort_values()
+ tm.assert_index_equal(ordered, expected)
+ assert ordered.freq is None
+
+ ordered = index.sort_values(ascending=False)
+ tm.assert_index_equal(ordered, expected[::-1])
+ assert ordered.freq is None
+
+ ordered, indexer = index.sort_values(return_indexer=True)
+ tm.assert_index_equal(ordered, expected)
+
+ exp = np.array([0, 4, 3, 1, 2])
+ tm.assert_numpy_array_equal(indexer, exp, check_dtype=False)
+ assert ordered.freq is None
+
+ ordered, indexer = index.sort_values(return_indexer=True,
+ ascending=False)
+ tm.assert_index_equal(ordered, expected[::-1])
+
+ exp = np.array([2, 1, 3, 4, 0])
+ tm.assert_numpy_array_equal(indexer, exp, check_dtype=False)
+ assert ordered.freq is None
def test_drop_duplicates_metadata(self):
# GH 10115
@@ -345,12 +354,12 @@ def test_nat_new(self):
exp = np.array([tslib.iNaT] * 5, dtype=np.int64)
tm.assert_numpy_array_equal(result, exp)
- @pytest.mark.parametrize('tz', [None, 'US/Eastern', 'UTC'])
- def test_nat(self, tz):
+ def test_nat(self, tz_naive_fixture):
+ timezone = tz_naive_fixture
assert pd.DatetimeIndex._na_value is pd.NaT
assert pd.DatetimeIndex([])._na_value is pd.NaT
- idx = pd.DatetimeIndex(['2011-01-01', '2011-01-02'], tz=tz)
+ idx = pd.DatetimeIndex(['2011-01-01', '2011-01-02'], tz=timezone)
assert idx._can_hold_na
tm.assert_numpy_array_equal(idx._isnan, np.array([False, False]))
@@ -358,7 +367,7 @@ def test_nat(self, tz):
tm.assert_numpy_array_equal(idx._nan_idxs,
np.array([], dtype=np.intp))
- idx = pd.DatetimeIndex(['2011-01-01', 'NaT'], tz=tz)
+ idx = pd.DatetimeIndex(['2011-01-01', 'NaT'], tz=timezone)
assert idx._can_hold_na
tm.assert_numpy_array_equal(idx._isnan, np.array([False, True]))
@@ -366,8 +375,7 @@ def test_nat(self, tz):
tm.assert_numpy_array_equal(idx._nan_idxs,
np.array([1], dtype=np.intp))
- @pytest.mark.parametrize('tz', [None, 'UTC', 'US/Eastern', 'Asia/Tokyo'])
- def test_equals(self, tz):
+ def test_equals(self):
# GH 13107
idx = pd.DatetimeIndex(['2011-01-01', '2011-01-02', 'NaT'])
assert idx.equals(idx)
diff --git a/pandas/tests/indexes/datetimes/test_timezones.py b/pandas/tests/indexes/datetimes/test_timezones.py
index a8191816238b1..09210d8b64d1b 100644
--- a/pandas/tests/indexes/datetimes/test_timezones.py
+++ b/pandas/tests/indexes/datetimes/test_timezones.py
@@ -241,9 +241,8 @@ def test_dti_tz_convert_dst(self):
idx = idx.tz_convert('UTC')
tm.assert_index_equal(idx.hour, Index([4, 4]))
- @pytest.mark.parametrize('tz', ['UTC', 'Asia/Tokyo', 'US/Eastern',
- 'dateutil/US/Pacific'])
- def test_tz_convert_roundtrip(self, tz):
+ def test_tz_convert_roundtrip(self, tz_aware_fixture):
+ tz = tz_aware_fixture
idx1 = date_range(start='2014-01-01', end='2014-12-31', freq='M',
tz='UTC')
exp1 = date_range(start='2014-01-01', end='2014-12-31', freq='M')
@@ -431,9 +430,9 @@ def test_dti_tz_localize_utc_conversion(self, tz):
with pytest.raises(pytz.NonExistentTimeError):
rng.tz_localize(tz)
- @pytest.mark.parametrize('tz', ['UTC', 'Asia/Tokyo', 'US/Eastern',
- 'dateutil/US/Pacific'])
- def test_dti_tz_localize_roundtrip(self, tz):
+ def test_dti_tz_localize_roundtrip(self, tz_aware_fixture):
+ tz = tz_aware_fixture
+
idx1 = date_range(start='2014-01-01', end='2014-12-31', freq='M')
idx2 = date_range(start='2014-01-01', end='2014-12-31', freq='D')
idx3 = date_range(start='2014-01-01', end='2014-03-01', freq='H')
@@ -443,7 +442,6 @@ def test_dti_tz_localize_roundtrip(self, tz):
expected = date_range(start=idx[0], end=idx[-1], freq=idx.freq,
tz=tz)
tm.assert_index_equal(localized, expected)
-
with pytest.raises(TypeError):
localized.tz_localize(tz)
diff --git a/pandas/tests/indexes/test_base.py b/pandas/tests/indexes/test_base.py
index ff9c86fbfe384..7e19de4cca292 100644
--- a/pandas/tests/indexes/test_base.py
+++ b/pandas/tests/indexes/test_base.py
@@ -376,28 +376,27 @@ def test_constructor_dtypes(self):
assert isinstance(idx, Index)
assert idx.dtype == object
- def test_constructor_dtypes_datetime(self):
-
- for tz in [None, 'UTC', 'US/Eastern', 'Asia/Tokyo']:
- idx = pd.date_range('2011-01-01', periods=5, tz=tz)
- dtype = idx.dtype
-
- # pass values without timezone, as DatetimeIndex localizes it
- for values in [pd.date_range('2011-01-01', periods=5).values,
- pd.date_range('2011-01-01', periods=5).asi8]:
-
- for res in [pd.Index(values, tz=tz),
- pd.Index(values, dtype=dtype),
- pd.Index(list(values), tz=tz),
- pd.Index(list(values), dtype=dtype)]:
- tm.assert_index_equal(res, idx)
-
- # check compat with DatetimeIndex
- for res in [pd.DatetimeIndex(values, tz=tz),
- pd.DatetimeIndex(values, dtype=dtype),
- pd.DatetimeIndex(list(values), tz=tz),
- pd.DatetimeIndex(list(values), dtype=dtype)]:
- tm.assert_index_equal(res, idx)
+ def test_constructor_dtypes_datetime(self, tz_naive_fixture):
+ tz = tz_naive_fixture
+ idx = pd.date_range('2011-01-01', periods=5, tz=tz)
+ dtype = idx.dtype
+
+ # pass values without timezone, as DatetimeIndex localizes it
+ for values in [pd.date_range('2011-01-01', periods=5).values,
+ pd.date_range('2011-01-01', periods=5).asi8]:
+
+ for res in [pd.Index(values, tz=tz),
+ pd.Index(values, dtype=dtype),
+ pd.Index(list(values), tz=tz),
+ pd.Index(list(values), dtype=dtype)]:
+ tm.assert_index_equal(res, idx)
+
+ # check compat with DatetimeIndex
+ for res in [pd.DatetimeIndex(values, tz=tz),
+ pd.DatetimeIndex(values, dtype=dtype),
+ pd.DatetimeIndex(list(values), tz=tz),
+ pd.DatetimeIndex(list(values), dtype=dtype)]:
+ tm.assert_index_equal(res, idx)
def test_constructor_dtypes_timedelta(self):
diff --git a/pandas/tests/reshape/test_concat.py b/pandas/tests/reshape/test_concat.py
index 437b4179c580a..ffd37dc4b2f59 100644
--- a/pandas/tests/reshape/test_concat.py
+++ b/pandas/tests/reshape/test_concat.py
@@ -293,88 +293,88 @@ def test_concatlike_common_coerce_to_pandas_object(self):
assert isinstance(res.iloc[0], pd.Timestamp)
assert isinstance(res.iloc[-1], pd.Timedelta)
- def test_concatlike_datetimetz(self):
+ def test_concatlike_datetimetz(self, tz_aware_fixture):
+ tz = tz_aware_fixture
# GH 7795
- for tz in ['UTC', 'US/Eastern', 'Asia/Tokyo']:
- dti1 = pd.DatetimeIndex(['2011-01-01', '2011-01-02'], tz=tz)
- dti2 = pd.DatetimeIndex(['2012-01-01', '2012-01-02'], tz=tz)
+ dti1 = pd.DatetimeIndex(['2011-01-01', '2011-01-02'], tz=tz)
+ dti2 = pd.DatetimeIndex(['2012-01-01', '2012-01-02'], tz=tz)
- exp = pd.DatetimeIndex(['2011-01-01', '2011-01-02',
- '2012-01-01', '2012-01-02'], tz=tz)
+ exp = pd.DatetimeIndex(['2011-01-01', '2011-01-02',
+ '2012-01-01', '2012-01-02'], tz=tz)
- res = dti1.append(dti2)
- tm.assert_index_equal(res, exp)
+ res = dti1.append(dti2)
+ tm.assert_index_equal(res, exp)
- dts1 = pd.Series(dti1)
- dts2 = pd.Series(dti2)
- res = dts1.append(dts2)
- tm.assert_series_equal(res, pd.Series(exp, index=[0, 1, 0, 1]))
+ dts1 = pd.Series(dti1)
+ dts2 = pd.Series(dti2)
+ res = dts1.append(dts2)
+ tm.assert_series_equal(res, pd.Series(exp, index=[0, 1, 0, 1]))
- res = pd.concat([dts1, dts2])
- tm.assert_series_equal(res, pd.Series(exp, index=[0, 1, 0, 1]))
+ res = pd.concat([dts1, dts2])
+ tm.assert_series_equal(res, pd.Series(exp, index=[0, 1, 0, 1]))
- def test_concatlike_datetimetz_short(self):
+ @pytest.mark.parametrize('tz',
+ ['UTC', 'US/Eastern', 'Asia/Tokyo', 'EST5EDT'])
+ def test_concatlike_datetimetz_short(self, tz):
# GH 7795
- for tz in ['UTC', 'US/Eastern', 'Asia/Tokyo', 'EST5EDT']:
-
- ix1 = pd.DatetimeIndex(start='2014-07-15', end='2014-07-17',
- freq='D', tz=tz)
- ix2 = pd.DatetimeIndex(['2014-07-11', '2014-07-21'], tz=tz)
- df1 = pd.DataFrame(0, index=ix1, columns=['A', 'B'])
- df2 = pd.DataFrame(0, index=ix2, columns=['A', 'B'])
-
- exp_idx = pd.DatetimeIndex(['2014-07-15', '2014-07-16',
- '2014-07-17', '2014-07-11',
- '2014-07-21'], tz=tz)
- exp = pd.DataFrame(0, index=exp_idx, columns=['A', 'B'])
-
- tm.assert_frame_equal(df1.append(df2), exp)
- tm.assert_frame_equal(pd.concat([df1, df2]), exp)
-
- def test_concatlike_datetimetz_to_object(self):
+ ix1 = pd.DatetimeIndex(start='2014-07-15', end='2014-07-17',
+ freq='D', tz=tz)
+ ix2 = pd.DatetimeIndex(['2014-07-11', '2014-07-21'], tz=tz)
+ df1 = pd.DataFrame(0, index=ix1, columns=['A', 'B'])
+ df2 = pd.DataFrame(0, index=ix2, columns=['A', 'B'])
+
+ exp_idx = pd.DatetimeIndex(['2014-07-15', '2014-07-16',
+ '2014-07-17', '2014-07-11',
+ '2014-07-21'], tz=tz)
+ exp = pd.DataFrame(0, index=exp_idx, columns=['A', 'B'])
+
+ tm.assert_frame_equal(df1.append(df2), exp)
+ tm.assert_frame_equal(pd.concat([df1, df2]), exp)
+
+ def test_concatlike_datetimetz_to_object(self, tz_aware_fixture):
+ tz = tz_aware_fixture
# GH 13660
# different tz coerces to object
- for tz in ['UTC', 'US/Eastern', 'Asia/Tokyo']:
- dti1 = pd.DatetimeIndex(['2011-01-01', '2011-01-02'], tz=tz)
- dti2 = pd.DatetimeIndex(['2012-01-01', '2012-01-02'])
+ dti1 = pd.DatetimeIndex(['2011-01-01', '2011-01-02'], tz=tz)
+ dti2 = pd.DatetimeIndex(['2012-01-01', '2012-01-02'])
- exp = pd.Index([pd.Timestamp('2011-01-01', tz=tz),
- pd.Timestamp('2011-01-02', tz=tz),
- pd.Timestamp('2012-01-01'),
- pd.Timestamp('2012-01-02')], dtype=object)
+ exp = pd.Index([pd.Timestamp('2011-01-01', tz=tz),
+ pd.Timestamp('2011-01-02', tz=tz),
+ pd.Timestamp('2012-01-01'),
+ pd.Timestamp('2012-01-02')], dtype=object)
- res = dti1.append(dti2)
- tm.assert_index_equal(res, exp)
+ res = dti1.append(dti2)
+ tm.assert_index_equal(res, exp)
- dts1 = pd.Series(dti1)
- dts2 = pd.Series(dti2)
- res = dts1.append(dts2)
- tm.assert_series_equal(res, pd.Series(exp, index=[0, 1, 0, 1]))
+ dts1 = pd.Series(dti1)
+ dts2 = pd.Series(dti2)
+ res = dts1.append(dts2)
+ tm.assert_series_equal(res, pd.Series(exp, index=[0, 1, 0, 1]))
- res = pd.concat([dts1, dts2])
- tm.assert_series_equal(res, pd.Series(exp, index=[0, 1, 0, 1]))
+ res = pd.concat([dts1, dts2])
+ tm.assert_series_equal(res, pd.Series(exp, index=[0, 1, 0, 1]))
- # different tz
- dti3 = pd.DatetimeIndex(['2012-01-01', '2012-01-02'],
- tz='US/Pacific')
+ # different tz
+ dti3 = pd.DatetimeIndex(['2012-01-01', '2012-01-02'],
+ tz='US/Pacific')
- exp = pd.Index([pd.Timestamp('2011-01-01', tz=tz),
- pd.Timestamp('2011-01-02', tz=tz),
- pd.Timestamp('2012-01-01', tz='US/Pacific'),
- pd.Timestamp('2012-01-02', tz='US/Pacific')],
- dtype=object)
+ exp = pd.Index([pd.Timestamp('2011-01-01', tz=tz),
+ pd.Timestamp('2011-01-02', tz=tz),
+ pd.Timestamp('2012-01-01', tz='US/Pacific'),
+ pd.Timestamp('2012-01-02', tz='US/Pacific')],
+ dtype=object)
- res = dti1.append(dti3)
- # tm.assert_index_equal(res, exp)
+ res = dti1.append(dti3)
+ # tm.assert_index_equal(res, exp)
- dts1 = pd.Series(dti1)
- dts3 = pd.Series(dti3)
- res = dts1.append(dts3)
- tm.assert_series_equal(res, pd.Series(exp, index=[0, 1, 0, 1]))
+ dts1 = pd.Series(dti1)
+ dts3 = pd.Series(dti3)
+ res = dts1.append(dts3)
+ tm.assert_series_equal(res, pd.Series(exp, index=[0, 1, 0, 1]))
- res = pd.concat([dts1, dts3])
- tm.assert_series_equal(res, pd.Series(exp, index=[0, 1, 0, 1]))
+ res = pd.concat([dts1, dts3])
+ tm.assert_series_equal(res, pd.Series(exp, index=[0, 1, 0, 1]))
def test_concatlike_common_period(self):
# GH 13660
diff --git a/pandas/tests/scalar/timestamp/test_timestamp.py b/pandas/tests/scalar/timestamp/test_timestamp.py
index cde5baf47c18e..fb989b2f19307 100644
--- a/pandas/tests/scalar/timestamp/test_timestamp.py
+++ b/pandas/tests/scalar/timestamp/test_timestamp.py
@@ -124,8 +124,8 @@ def test_names(self, data, time_locale):
assert np.isnan(nan_ts.day_name(time_locale))
assert np.isnan(nan_ts.month_name(time_locale))
- @pytest.mark.parametrize('tz', [None, 'UTC', 'US/Eastern', 'Asia/Tokyo'])
- def test_is_leap_year(self, tz):
+ def test_is_leap_year(self, tz_naive_fixture):
+ tz = tz_naive_fixture
# GH 13727
dt = Timestamp('2000-01-01 00:00:00', tz=tz)
assert dt.is_leap_year
diff --git a/pandas/tests/scalar/timestamp/test_timezones.py b/pandas/tests/scalar/timestamp/test_timezones.py
index f43651dc6f0db..cd0379e7af1a3 100644
--- a/pandas/tests/scalar/timestamp/test_timezones.py
+++ b/pandas/tests/scalar/timestamp/test_timezones.py
@@ -94,11 +94,10 @@ def test_tz_localize_errors_ambiguous(self):
with pytest.raises(AmbiguousTimeError):
ts.tz_localize('US/Pacific', errors='coerce')
- @pytest.mark.parametrize('tz', ['UTC', 'Asia/Tokyo',
- 'US/Eastern', 'dateutil/US/Pacific'])
@pytest.mark.parametrize('stamp', ['2014-02-01 09:00', '2014-07-08 09:00',
'2014-11-01 17:00', '2014-11-05 00:00'])
- def test_tz_localize_roundtrip(self, stamp, tz):
+ def test_tz_localize_roundtrip(self, stamp, tz_aware_fixture):
+ tz = tz_aware_fixture
ts = Timestamp(stamp)
localized = ts.tz_localize(tz)
assert localized == Timestamp(stamp, tz=tz)
@@ -162,11 +161,11 @@ def test_timestamp_tz_localize(self, tz):
# ------------------------------------------------------------------
# Timestamp.tz_convert
- @pytest.mark.parametrize('tz', ['UTC', 'Asia/Tokyo',
- 'US/Eastern', 'dateutil/US/Pacific'])
@pytest.mark.parametrize('stamp', ['2014-02-01 09:00', '2014-07-08 09:00',
'2014-11-01 17:00', '2014-11-05 00:00'])
- def test_tz_convert_roundtrip(self, stamp, tz):
+ def test_tz_convert_roundtrip(self, stamp, tz_aware_fixture):
+ tz = tz_aware_fixture
+
ts = Timestamp(stamp, tz='UTC')
converted = ts.tz_convert(tz)
diff --git a/pandas/tests/scalar/timestamp/test_unary_ops.py b/pandas/tests/scalar/timestamp/test_unary_ops.py
index 994ff86e6fdf9..aecddab8477fc 100644
--- a/pandas/tests/scalar/timestamp/test_unary_ops.py
+++ b/pandas/tests/scalar/timestamp/test_unary_ops.py
@@ -132,7 +132,6 @@ def test_floor(self):
# --------------------------------------------------------------
# Timestamp.replace
- timezones = ['UTC', 'Asia/Tokyo', 'US/Eastern', 'dateutil/US/Pacific']
def test_replace_naive(self):
# GH#14621, GH#7825
@@ -141,8 +140,8 @@ def test_replace_naive(self):
expected = Timestamp('2016-01-01 00:00:00')
assert result == expected
- @pytest.mark.parametrize('tz', timezones)
- def test_replace_aware(self, tz):
+ def test_replace_aware(self, tz_aware_fixture):
+ tz = tz_aware_fixture
# GH#14621, GH#7825
# replacing datetime components with and w/o presence of a timezone
ts = Timestamp('2016-01-01 09:00:00', tz=tz)
@@ -150,16 +149,16 @@ def test_replace_aware(self, tz):
expected = Timestamp('2016-01-01 00:00:00', tz=tz)
assert result == expected
- @pytest.mark.parametrize('tz', timezones)
- def test_replace_preserves_nanos(self, tz):
+ def test_replace_preserves_nanos(self, tz_aware_fixture):
+ tz = tz_aware_fixture
# GH#14621, GH#7825
ts = Timestamp('2016-01-01 09:00:00.000000123', tz=tz)
result = ts.replace(hour=0)
expected = Timestamp('2016-01-01 00:00:00.000000123', tz=tz)
assert result == expected
- @pytest.mark.parametrize('tz', timezones)
- def test_replace_multiple(self, tz):
+ def test_replace_multiple(self, tz_aware_fixture):
+ tz = tz_aware_fixture
# GH#14621, GH#7825
# replacing datetime components with and w/o presence of a timezone
# test all
@@ -169,15 +168,15 @@ def test_replace_multiple(self, tz):
expected = Timestamp('2015-02-02 00:05:05.000005005', tz=tz)
assert result == expected
- @pytest.mark.parametrize('tz', timezones)
- def test_replace_invalid_kwarg(self, tz):
+ def test_replace_invalid_kwarg(self, tz_aware_fixture):
+ tz = tz_aware_fixture
# GH#14621, GH#7825
ts = Timestamp('2016-01-01 09:00:00.000000123', tz=tz)
with pytest.raises(TypeError):
ts.replace(foo=5)
- @pytest.mark.parametrize('tz', timezones)
- def test_replace_integer_args(self, tz):
+ def test_replace_integer_args(self, tz_aware_fixture):
+ tz = tz_aware_fixture
# GH#14621, GH#7825
ts = Timestamp('2016-01-01 09:00:00.000000123', tz=tz)
with pytest.raises(ValueError):
diff --git a/pandas/util/_test_decorators.py b/pandas/util/_test_decorators.py
index 8ad73538fbec1..6c23b186ef1c4 100644
--- a/pandas/util/_test_decorators.py
+++ b/pandas/util/_test_decorators.py
@@ -187,3 +187,28 @@ def decorated_func(func):
"installed->{installed}".format(
enabled=_USE_NUMEXPR,
installed=_NUMEXPR_INSTALLED))
+
+
+def parametrize_fixture_doc(*args):
+ """
+ Intended for use as a decorator for a parametrized fixture,
+ this function formats the decorated fixture's docstring by
+ replacing placeholders like {0} and {1} with the positional
+ arguments passed to this decorator.
+
+ Parameters
+ ----------
+ args : iterable
+ Positional arguments for the docstring.
+
+ Returns
+ -------
+ documented_fixture : function
+ The decorated fixture whose docstring has been formatted
+ with the given arguments.
+ """
+ def documented_fixture(fixture):
+ fixture.__doc__ = fixture.__doc__.format(*args)
+ return fixture
+ return documented_fixture
| This is a follow up for #20287
Different tests use different lists of timezones and it's hard to unify all of them. In this PR I tried to create a fixture that covers the most frequent cases. Some tests use a naive timezone, while others are designed to use only explicit timezones (and fail if tz is not specified), so I created two fixtures to cover these two groups of tests. The list of timezones also includes one dateutil timezone.
I'm not sure if described approach is correct though so I would appreciate any feedback.
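Roughly, the two fixtures look like this (a sketch only; the exact timezone list and where the fixtures live are illustrative, not the final implementation):

```python
# Sketch of the two fixtures described above. The timezone list below is
# illustrative; the real list lives in the project's conftest.py.
import pytest

TIMEZONES = ['UTC', 'US/Eastern', 'Asia/Tokyo', 'dateutil/US/Pacific']


@pytest.fixture(params=[None] + TIMEZONES)
def tz_naive_fixture(request):
    """Timezone for tests that also accept tz-naive (None) input."""
    return request.param


@pytest.fixture(params=TIMEZONES)
def tz_aware_fixture(request):
    """Timezone for tests that require an explicit timezone."""
    return request.param
```

A test that takes `tz_aware_fixture` as an argument then runs once per timezone in the list.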
- [ ] tests added / passed
- [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
| https://api.github.com/repos/pandas-dev/pandas/pulls/20365 | 2018-03-15T15:09:59Z | 2018-04-12T19:49:09Z | 2018-04-12T19:49:09Z | 2018-04-12T19:49:12Z |
DOC: remove incorrect example from __add__ et al docstring | diff --git a/pandas/core/ops.py b/pandas/core/ops.py
index 037c9e31f7157..3dcfab868bf17 100644
--- a/pandas/core/ops.py
+++ b/pandas/core/ops.py
@@ -516,33 +516,6 @@ def _get_op_name(op, special):
Returns
-------
result : DataFrame
-
-Examples
---------
->>> a = pd.DataFrame([1, 1, 1, np.nan], index=['a', 'b', 'c', 'd'],
-... columns=['one'])
->>> a
- one
-a 1.0
-b 1.0
-c 1.0
-d NaN
->>> b = pd.DataFrame(dict(one=[1, np.nan, 1, np.nan],
-... two=[np.nan, 2, np.nan, 2]),
-... index=['a', 'b', 'd', 'e'])
->>> b
- one two
-a 1.0 NaN
-b NaN 2.0
-d 1.0 NaN
-e NaN 2.0
->>> a.add(b, fill_value=0)
- one two
-a 2.0 NaN
-b 1.0 2.0
-c 1.0 NaN
-d 1.0 NaN
-e NaN 2.0
"""
_flex_doc_FRAME = """
| Noticed while reviewing the doc PRs. We add the same example to the `__add__` and similar docstrings (based on the ``_arith_doc_FRAME`` template) as to `add` (which uses the `_flex_doc_FRAME` template), so the example is incorrect, and also not needed for that docstring I think (nobody should directly use `__add__`?)
So remove it to have less code to maintain.
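For reference, a minimal sketch of why the example only fits the flex method's docstring: ``fill_value`` is accepted by `DataFrame.add` but not by the ``+`` operator (data below chosen purely for illustration):

```python
# Difference between the operator (__add__) and the flex method:
# only the latter accepts fill_value.
import numpy as np
import pandas as pd

a = pd.DataFrame({'one': [1.0, 1.0, np.nan]}, index=['a', 'b', 'c'])
b = pd.DataFrame({'one': [1.0, np.nan, 1.0]}, index=['a', 'b', 'c'])

plain = a + b                      # NaN wherever either operand is NaN
flexible = a.add(b, fill_value=0)  # a missing operand is treated as 0
```

So `plain` has NaN in rows `b` and `c`, while `flexible` fills those with 1.0; the removed example relied on the ``fill_value`` behaviour, which `__add__` cannot express.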
| https://api.github.com/repos/pandas-dev/pandas/pulls/20364 | 2018-03-15T13:55:19Z | 2018-03-15T15:12:26Z | 2018-03-15T15:12:26Z | 2018-03-15T15:12:33Z |
REF: Changed ExtensionDtype inheritance | diff --git a/pandas/core/dtypes/base.py b/pandas/core/dtypes/base.py
index d54d980d02ffa..6dbed5f138d5d 100644
--- a/pandas/core/dtypes/base.py
+++ b/pandas/core/dtypes/base.py
@@ -5,26 +5,16 @@
from pandas.errors import AbstractMethodError
-class ExtensionDtype(object):
- """A custom data type, to be paired with an ExtensionArray.
-
- Notes
- -----
- The interface includes the following abstract methods that must
- be implemented by subclasses:
-
- * type
- * name
- * construct_from_string
-
- This class does not inherit from 'abc.ABCMeta' for performance reasons.
- Methods and properties required by the interface raise
- ``pandas.errors.AbstractMethodError`` and no ``register`` method is
- provided for registering virtual subclasses.
- """
-
- def __str__(self):
- return self.name
+class _DtypeOpsMixin(object):
+ # Not all of pandas' extension dtypes are compatible with
+ # the new ExtensionArray interface. This means PandasExtensionDtype
+ # can't subclass ExtensionDtype yet, as is_extension_array_dtype would
+ # incorrectly say that these types are extension types.
+ #
+ # In the interim, we put methods that are shared between the two base
+ # classes ExtensionDtype and PandasExtensionDtype here. Both those base
+ # classes will inherit from this Mixin. Once everything is compatible, this
+ # class's methods can be moved to ExtensionDtype and removed.
def __eq__(self, other):
"""Check whether 'other' is equal to self.
@@ -52,6 +42,74 @@ def __eq__(self, other):
def __ne__(self, other):
return not self.__eq__(other)
+ @property
+ def names(self):
+ # type: () -> Optional[List[str]]
+ """Ordered list of field names, or None if there are no fields.
+
+ This is for compatibility with NumPy arrays, and may be removed in the
+ future.
+ """
+ return None
+
+ @classmethod
+ def is_dtype(cls, dtype):
+ """Check if we match 'dtype'.
+
+ Parameters
+ ----------
+ dtype : object
+ The object to check.
+
+ Returns
+ -------
+ is_dtype : bool
+
+ Notes
+ -----
+ The default implementation is True if
+
+ 1. ``cls.construct_from_string(dtype)`` is an instance
+ of ``cls``.
+ 2. ``dtype`` is an object and is an instance of ``cls``
+ 3. ``dtype`` has a ``dtype`` attribute, and any of the above
+ conditions is true for ``dtype.dtype``.
+ """
+ dtype = getattr(dtype, 'dtype', dtype)
+
+ if isinstance(dtype, np.dtype):
+ return False
+ elif dtype is None:
+ return False
+ elif isinstance(dtype, cls):
+ return True
+ try:
+ return cls.construct_from_string(dtype) is not None
+ except TypeError:
+ return False
+
+
+class ExtensionDtype(_DtypeOpsMixin):
+ """A custom data type, to be paired with an ExtensionArray.
+
+ Notes
+ -----
+ The interface includes the following abstract methods that must
+ be implemented by subclasses:
+
+ * type
+ * name
+ * construct_from_string
+
+ This class does not inherit from 'abc.ABCMeta' for performance reasons.
+ Methods and properties required by the interface raise
+ ``pandas.errors.AbstractMethodError`` and no ``register`` method is
+ provided for registering virtual subclasses.
+ """
+
+ def __str__(self):
+ return self.name
+
@property
def type(self):
# type: () -> type
@@ -87,16 +145,6 @@ def name(self):
"""
raise AbstractMethodError(self)
- @property
- def names(self):
- # type: () -> Optional[List[str]]
- """Ordered list of field names, or None if there are no fields.
-
- This is for compatibility with NumPy arrays, and may be removed in the
- future.
- """
- return None
-
@classmethod
def construct_from_string(cls, string):
"""Attempt to construct this type from a string.
@@ -128,39 +176,3 @@ def construct_from_string(cls, string):
... "'{}'".format(cls, string))
"""
raise AbstractMethodError(cls)
-
- @classmethod
- def is_dtype(cls, dtype):
- """Check if we match 'dtype'.
-
- Parameters
- ----------
- dtype : object
- The object to check.
-
- Returns
- -------
- is_dtype : bool
-
- Notes
- -----
- The default implementation is True if
-
- 1. ``cls.construct_from_string(dtype)`` is an instance
- of ``cls``.
- 2. ``dtype`` is an object and is an instance of ``cls``
- 3. ``dtype`` has a ``dtype`` attribute, and any of the above
- conditions is true for ``dtype.dtype``.
- """
- dtype = getattr(dtype, 'dtype', dtype)
-
- if isinstance(dtype, np.dtype):
- return False
- elif dtype is None:
- return False
- elif isinstance(dtype, cls):
- return True
- try:
- return cls.construct_from_string(dtype) is not None
- except TypeError:
- return False
diff --git a/pandas/core/dtypes/cast.py b/pandas/core/dtypes/cast.py
index b1d0dc2a2442e..74aaa2c4f00aa 100644
--- a/pandas/core/dtypes/cast.py
+++ b/pandas/core/dtypes/cast.py
@@ -26,7 +26,8 @@
_ensure_int32, _ensure_int64,
_NS_DTYPE, _TD_DTYPE, _INT64_DTYPE,
_POSSIBLY_CAST_DTYPES)
-from .dtypes import ExtensionDtype, DatetimeTZDtype, PeriodDtype
+from .dtypes import (ExtensionDtype, PandasExtensionDtype, DatetimeTZDtype,
+ PeriodDtype)
from .generic import (ABCDatetimeIndex, ABCPeriodIndex,
ABCSeries)
from .missing import isna, notna
@@ -1114,7 +1115,8 @@ def find_common_type(types):
if all(is_dtype_equal(first, t) for t in types[1:]):
return first
- if any(isinstance(t, ExtensionDtype) for t in types):
+ if any(isinstance(t, (PandasExtensionDtype, ExtensionDtype))
+ for t in types):
return np.object
# take lowest unit
diff --git a/pandas/core/dtypes/common.py b/pandas/core/dtypes/common.py
index 197b35de88896..3a90feb7ccd7d 100644
--- a/pandas/core/dtypes/common.py
+++ b/pandas/core/dtypes/common.py
@@ -9,7 +9,7 @@
DatetimeTZDtype, DatetimeTZDtypeType,
PeriodDtype, PeriodDtypeType,
IntervalDtype, IntervalDtypeType,
- ExtensionDtype)
+ ExtensionDtype, PandasExtensionDtype)
from .generic import (ABCCategorical, ABCPeriodIndex,
ABCDatetimeIndex, ABCSeries,
ABCSparseArray, ABCSparseSeries, ABCCategoricalIndex,
@@ -2006,7 +2006,7 @@ def pandas_dtype(dtype):
return CategoricalDtype.construct_from_string(dtype)
except TypeError:
pass
- elif isinstance(dtype, ExtensionDtype):
+ elif isinstance(dtype, (PandasExtensionDtype, ExtensionDtype)):
return dtype
try:
diff --git a/pandas/core/dtypes/dtypes.py b/pandas/core/dtypes/dtypes.py
index d262a71933915..708f54f5ca75b 100644
--- a/pandas/core/dtypes/dtypes.py
+++ b/pandas/core/dtypes/dtypes.py
@@ -5,10 +5,10 @@
from pandas import compat
from pandas.core.dtypes.generic import ABCIndexClass, ABCCategoricalIndex
-from .base import ExtensionDtype
+from .base import ExtensionDtype, _DtypeOpsMixin
-class PandasExtensionDtype(ExtensionDtype):
+class PandasExtensionDtype(_DtypeOpsMixin):
"""
A np.dtype duck-typed class, suitable for holding a custom dtype.
@@ -83,7 +83,7 @@ class CategoricalDtypeType(type):
pass
-class CategoricalDtype(PandasExtensionDtype):
+class CategoricalDtype(PandasExtensionDtype, ExtensionDtype):
"""
Type for categorical data with the categories and orderedness
diff --git a/pandas/core/internals.py b/pandas/core/internals.py
index 240c9b1f3377c..47db1b0d5383a 100644
--- a/pandas/core/internals.py
+++ b/pandas/core/internals.py
@@ -17,6 +17,7 @@
from pandas.core.dtypes.dtypes import (
ExtensionDtype, DatetimeTZDtype,
+ PandasExtensionDtype,
CategoricalDtype)
from pandas.core.dtypes.common import (
_TD_DTYPE, _NS_DTYPE,
@@ -598,7 +599,8 @@ def _astype(self, dtype, copy=False, errors='raise', values=None,
list(errors_legal_values), errors))
raise ValueError(invalid_arg)
- if inspect.isclass(dtype) and issubclass(dtype, ExtensionDtype):
+ if (inspect.isclass(dtype) and
+ issubclass(dtype, (PandasExtensionDtype, ExtensionDtype))):
msg = ("Expected an instance of {}, but got the class instead. "
"Try instantiating 'dtype'.".format(dtype.__name__))
raise TypeError(msg)
@@ -5005,7 +5007,7 @@ def _interleaved_dtype(blocks):
dtype = find_common_type([b.dtype for b in blocks])
# only numpy compat
- if isinstance(dtype, ExtensionDtype):
+ if isinstance(dtype, (PandasExtensionDtype, ExtensionDtype)):
dtype = np.object
return dtype
diff --git a/pandas/tests/extension/test_common.py b/pandas/tests/extension/test_common.py
index 1f4582f687415..589134632c7e9 100644
--- a/pandas/tests/extension/test_common.py
+++ b/pandas/tests/extension/test_common.py
@@ -5,10 +5,10 @@
import pandas.util.testing as tm
from pandas.core.arrays import ExtensionArray
from pandas.core.dtypes.common import is_extension_array_dtype
-from pandas.core.dtypes.dtypes import ExtensionDtype
+from pandas.core.dtypes import dtypes
-class DummyDtype(ExtensionDtype):
+class DummyDtype(dtypes.ExtensionDtype):
pass
@@ -65,3 +65,21 @@ def test_astype_no_copy():
result = arr.astype(arr.dtype)
assert arr.data is not result
+
+
+@pytest.mark.parametrize('dtype', [
+ dtypes.DatetimeTZDtype('ns', 'US/Central'),
+ dtypes.PeriodDtype("D"),
+ dtypes.IntervalDtype(),
+])
+def test_is_not_extension_array_dtype(dtype):
+ assert not isinstance(dtype, dtypes.ExtensionDtype)
+ assert not is_extension_array_dtype(dtype)
+
+
+@pytest.mark.parametrize('dtype', [
+ dtypes.CategoricalDtype(),
+])
+def test_is_extension_array_dtype(dtype):
+ assert isinstance(dtype, dtypes.ExtensionDtype)
+ assert is_extension_array_dtype(dtype)
| `is_extension_array_dtype(dtype)` was incorrect for dtypes that haven't
implemented the new interface yet. This is because they indirectly subclassed
ExtensionDtype.
This PR changes the hierarchy so that PandasExtensionDtype doesn't subclass
ExtensionDtype. As we implement the interface, like Categorical, we'll add
ExtensionDtype as a base class.
Before:
```
DatetimeTZDtype  <- PandasExtensionDtype <- ExtensionDtype   (wrong)
CategoricalDtype <- PandasExtensionDtype <- ExtensionDtype   (right)
```

After:

```
DatetimeTZDtype  <- PandasExtensionDtype <- _DtypeOpsMixin
                        ExtensionDtype   <- _DtypeOpsMixin

CategoricalDtype <- PandasExtensionDtype <- _DtypeOpsMixin
                 <- ExtensionDtype       <- _DtypeOpsMixin
```
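A toy model of the new layout (stand-in classes mirroring the PR, not the real pandas implementations) shows the resulting `isinstance` behaviour:

```python
# Toy stand-ins for the hierarchy above: shared behaviour lives on the
# mixin, and only migrated dtypes are instances of ExtensionDtype.
class _DtypeOpsMixin(object):
    @property
    def names(self):
        # shared default, as in the real mixin
        return None


class ExtensionDtype(_DtypeOpsMixin):
    pass


class PandasExtensionDtype(_DtypeOpsMixin):
    pass


class DatetimeTZDtype(PandasExtensionDtype):
    # not yet migrated to the new interface
    pass


class CategoricalDtype(PandasExtensionDtype, ExtensionDtype):
    # fully migrated: gains ExtensionDtype as a base
    pass
```

Here `isinstance(DatetimeTZDtype(), ExtensionDtype)` is False while `isinstance(CategoricalDtype(), ExtensionDtype)` is True, which is exactly what `is_extension_array_dtype` needs.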
Once all our extension dtypes have implemented the interface we can go back
to the simple, linear inheritance structure. | https://api.github.com/repos/pandas-dev/pandas/pulls/20363 | 2018-03-15T13:54:06Z | 2018-03-16T14:10:12Z | 2018-03-16T14:10:12Z | 2018-03-16T14:10:15Z |
ENH/API: ExtensionArray.factorize | diff --git a/pandas/core/algorithms.py b/pandas/core/algorithms.py
index 45f86f044a4b2..065a5782aced1 100644
--- a/pandas/core/algorithms.py
+++ b/pandas/core/algorithms.py
@@ -4,6 +4,8 @@
"""
from __future__ import division
from warnings import warn, catch_warnings
+from textwrap import dedent
+
import numpy as np
from pandas.core.dtypes.cast import (
@@ -34,7 +36,10 @@
from pandas.core import common as com
from pandas._libs import algos, lib, hashtable as htable
from pandas._libs.tslib import iNaT
-from pandas.util._decorators import deprecate_kwarg
+from pandas.util._decorators import (Appender, Substitution,
+ deprecate_kwarg)
+
+_shared_docs = {}
# --------------- #
@@ -146,10 +151,9 @@ def _reconstruct_data(values, dtype, original):
Returns
-------
Index for extension types, otherwise ndarray casted to dtype
-
"""
from pandas import Index
- if is_categorical_dtype(dtype):
+ if is_extension_array_dtype(dtype):
pass
elif is_datetime64tz_dtype(dtype) or is_period_dtype(dtype):
values = Index(original)._shallow_copy(values, name=None)
@@ -469,32 +473,124 @@ def _factorize_array(values, na_sentinel=-1, size_hint=None,
return labels, uniques
-@deprecate_kwarg(old_arg_name='order', new_arg_name=None)
-def factorize(values, sort=False, order=None, na_sentinel=-1, size_hint=None):
- """
- Encode input values as an enumerated type or categorical variable
+_shared_docs['factorize'] = """
+ Encode the object as an enumerated type or categorical variable.
+
+ This method is useful for obtaining a numeric representation of an
+ array when all that matters is identifying distinct values. `factorize`
+ is available as both a top-level function :func:`pandas.factorize`,
+ and as a method :meth:`Series.factorize` and :meth:`Index.factorize`.
Parameters
----------
- values : Sequence
- ndarrays must be 1-D. Sequences that aren't pandas objects are
- coereced to ndarrays before factorization.
- sort : boolean, default False
- Sort by values
+ %(values)s%(sort)s%(order)s
na_sentinel : int, default -1
- Value to mark "not found"
- size_hint : hint to the hashtable sizer
+ Value to mark "not found".
+ %(size_hint)s\
Returns
-------
- labels : the indexer to the original array
- uniques : ndarray (1-d) or Index
- the unique values. Index is returned when passed values is Index or
- Series
+ labels : ndarray
+ An integer ndarray that's an indexer into `uniques`.
+ ``uniques.take(labels)`` will have the same values as `values`.
+ uniques : ndarray, Index, or Categorical
+ The unique valid values. When `values` is Categorical, `uniques`
+ is a Categorical. When `values` is some other pandas object, an
+ `Index` is returned. Otherwise, a 1-D ndarray is returned.
+
+ .. note ::
+
+ Even if there's a missing value in `values`, `uniques` will
+ *not* contain an entry for it.
+
+ See Also
+ --------
+ pandas.cut : Discretize continuous-valued array.
+ pandas.unique : Find the unique values in an array.
+
+ Examples
+ --------
+ These examples all show factorize as a top-level function like
+ ``pd.factorize(values)``. The results are identical for methods like
+ :meth:`Series.factorize`.
+
+ >>> labels, uniques = pd.factorize(['b', 'b', 'a', 'c', 'b'])
+ >>> labels
+ array([0, 0, 1, 2, 0])
+ >>> uniques
+ array(['b', 'a', 'c'], dtype=object)
+
+ With ``sort=True``, the `uniques` will be sorted, and `labels` will be
+ shuffled so that the relationship is maintained.
+
+ >>> labels, uniques = pd.factorize(['b', 'b', 'a', 'c', 'b'], sort=True)
+ >>> labels
+ array([1, 1, 0, 2, 1])
+ >>> uniques
+ array(['a', 'b', 'c'], dtype=object)
+
+ Missing values are indicated in `labels` with `na_sentinel`
+ (``-1`` by default). Note that missing values are never
+ included in `uniques`.
+
+ >>> labels, uniques = pd.factorize(['b', None, 'a', 'c', 'b'])
+ >>> labels
+ array([ 0, -1, 1, 2, 0])
+ >>> uniques
+ array(['b', 'a', 'c'], dtype=object)
- note: an array of Periods will ignore sort as it returns an always sorted
- PeriodIndex.
+ Thus far, we've only factorized lists (which are internally coerced to
+ NumPy arrays). When factorizing pandas objects, the type of `uniques`
+ will differ. For Categoricals, a `Categorical` is returned.
+
+ >>> cat = pd.Categorical(['a', 'a', 'c'], categories=['a', 'b', 'c'])
+ >>> labels, uniques = pd.factorize(cat)
+ >>> labels
+ array([0, 0, 1])
+ >>> uniques
+ [a, c]
+ Categories (3, object): [a, b, c]
+
+ Notice that ``'b'`` is in ``uniques.categories``, despite not being
+ present in ``cat.values``.
+
+ For all other pandas objects, an Index of the appropriate type is
+ returned.
+
+ >>> cat = pd.Series(['a', 'a', 'c'])
+ >>> labels, uniques = pd.factorize(cat)
+ >>> labels
+ array([0, 0, 1])
+ >>> uniques
+ Index(['a', 'c'], dtype='object')
"""
+
+
+@Substitution(
+ values=dedent("""\
+ values : sequence
+ A 1-D seqeunce. Sequences that aren't pandas objects are
+ coereced to ndarrays before factorization.
+ """),
+ order=dedent("""\
+ order
+ .. deprecated:: 0.23.0
+
+ This parameter has no effect and is deprecated.
+ """),
+ sort=dedent("""\
+ sort : bool, default False
+ Sort `uniques` and shuffle `labels` to maintain the
+ relationship.
+ """),
+ size_hint=dedent("""\
+ size_hint : int, optional
+ Hint to the hashtable sizer.
+ """),
+)
+@Appender(_shared_docs['factorize'])
+@deprecate_kwarg(old_arg_name='order', new_arg_name=None)
+def factorize(values, sort=False, order=None, na_sentinel=-1, size_hint=None):
# Implementation notes: This method is responsible for 3 things
# 1.) coercing data to array-like (ndarray, Index, extension array)
# 2.) factorizing labels and uniques
@@ -507,9 +603,9 @@ def factorize(values, sort=False, order=None, na_sentinel=-1, size_hint=None):
values = _ensure_arraylike(values)
original = values
- if is_categorical_dtype(values):
+ if is_extension_array_dtype(values):
values = getattr(values, '_values', values)
- labels, uniques = values.factorize()
+ labels, uniques = values.factorize(na_sentinel=na_sentinel)
dtype = original.dtype
else:
values, dtype, _ = _ensure_data(values)
diff --git a/pandas/core/arrays/base.py b/pandas/core/arrays/base.py
index d53caa265b9b3..c281bd80cb274 100644
--- a/pandas/core/arrays/base.py
+++ b/pandas/core/arrays/base.py
@@ -77,6 +77,24 @@ def _constructor_from_sequence(cls, scalars):
"""
raise AbstractMethodError(cls)
+ @classmethod
+ def _from_factorized(cls, values, original):
+ """Reconstruct an ExtensionArray after factorization.
+
+ Parameters
+ ----------
+ values : ndarray
+ An integer ndarray with the factorized values.
+ original : ExtensionArray
+ The original ExtensionArray that factorize was called on.
+
+ See Also
+ --------
+ pandas.factorize
+ ExtensionArray.factorize
+ """
+ raise AbstractMethodError(cls)
+
# ------------------------------------------------------------------------
# Must be a Sequence
# ------------------------------------------------------------------------
@@ -353,6 +371,73 @@ def unique(self):
uniques = unique(self.astype(object))
return self._constructor_from_sequence(uniques)
+ def _values_for_factorize(self):
+ # type: () -> Tuple[ndarray, Any]
+ """Return an array and missing value suitable for factorization.
+
+ Returns
+ -------
+ values : ndarray
+ An array suitable for factorization. This should maintain order
+ and be a supported dtype (Float64, Int64, UInt64, String, Object).
+ By default, the extension array is cast to object dtype.
+ na_value : object
+ The value in `values` to consider missing. This will be treated
+ as NA in the factorization routines, so it will be coded as
+ `na_sentinel` and not included in `uniques`. By default,
+ ``np.nan`` is used.
+ """
+ return self.astype(object), np.nan
+
+ def factorize(self, na_sentinel=-1):
+ # type: (int) -> Tuple[ndarray, ExtensionArray]
+ """Encode the extension array as an enumerated type.
+
+ Parameters
+ ----------
+ na_sentinel : int, default -1
+ Value to use in the `labels` array to indicate missing values.
+
+ Returns
+ -------
+ labels : ndarray
+ An integer NumPy array that's an indexer into the original
+ ExtensionArray.
+ uniques : ExtensionArray
+ An ExtensionArray containing the unique values of `self`.
+
+ .. note::
+
+ uniques will *not* contain an entry for the NA value of
+ the ExtensionArray if there are any missing values present
+ in `self`.
+
+ See Also
+ --------
+ pandas.factorize : Top-level factorize method that dispatches here.
+
+ Notes
+ -----
+ :meth:`pandas.factorize` offers a `sort` keyword as well.
+ """
+ # Implementer note: There are two ways to override the behavior of
+ # pandas.factorize
+ # 1. _values_for_factorize and _from_factorized.
+ # Specify the values passed to pandas' internal factorization
+ # routines, and how to convert from those values back to the
+ # original ExtensionArray.
+ # 2. ExtensionArray.factorize.
+ # Complete control over factorization.
+ from pandas.core.algorithms import _factorize_array
+
+ arr, na_value = self._values_for_factorize()
+
+ labels, uniques = _factorize_array(arr, na_sentinel=na_sentinel,
+ na_value=na_value)
+
+ uniques = self._from_factorized(uniques, self)
+ return labels, uniques
+
# ------------------------------------------------------------------------
# Indexing methods
# ------------------------------------------------------------------------
diff --git a/pandas/core/arrays/categorical.py b/pandas/core/arrays/categorical.py
index ac57660300be4..b5a4785fd98a6 100644
--- a/pandas/core/arrays/categorical.py
+++ b/pandas/core/arrays/categorical.py
@@ -2118,58 +2118,15 @@ def unique(self):
take_codes = sorted(take_codes)
return cat.set_categories(cat.categories.take(take_codes))
- def factorize(self, na_sentinel=-1):
- """Encode the Categorical as an enumerated type.
-
- Parameters
- ----------
- sort : boolean, default False
- Sort by values
- na_sentinel: int, default -1
- Value to mark "not found"
-
- Returns
- -------
- labels : ndarray
- An integer NumPy array that's an indexer into the original
- Categorical
- uniques : Categorical
- A Categorical whose values are the unique values and
- whose dtype matches the original CategoricalDtype. Note that if
- there any unobserved categories in ``self`` will not be present
- in ``uniques.values``. They will be present in
- ``uniques.categories``
-
- Examples
- --------
- >>> cat = pd.Categorical(['a', 'a', 'c'], categories=['a', 'b', 'c'])
- >>> labels, uniques = cat.factorize()
- >>> labels
- (array([0, 0, 1]),
- >>> uniques
- [a, c]
- Categories (3, object): [a, b, c])
-
- Missing values are handled
-
- >>> labels, uniques = pd.factorize(pd.Categorical(['a', 'b', None]))
- >>> labels
- array([ 0, 1, -1])
- >>> uniques
- [a, b]
- Categories (2, object): [a, b]
- """
- from pandas.core.algorithms import _factorize_array
-
+ def _values_for_factorize(self):
codes = self.codes.astype('int64')
- # We set missing codes, normally -1, to iNaT so that the
- # Int64HashTable treats them as missing values.
- labels, uniques = _factorize_array(codes, na_sentinel=na_sentinel,
- na_value=-1)
- uniques = self._constructor(self.categories.take(uniques),
- categories=self.categories,
- ordered=self.ordered)
- return labels, uniques
+ return codes, -1
+
+ @classmethod
+ def _from_factorized(cls, uniques, original):
+ return original._constructor(original.categories.take(uniques),
+ categories=original.categories,
+ ordered=original.ordered)
def equals(self, other):
"""
diff --git a/pandas/core/base.py b/pandas/core/base.py
index b3eb9a0ae7530..99e2af9fb3aeb 100644
--- a/pandas/core/base.py
+++ b/pandas/core/base.py
@@ -2,6 +2,7 @@
Base and utility classes for pandas objects.
"""
import warnings
+import textwrap
from pandas import compat
from pandas.compat import builtins
import numpy as np
@@ -1151,24 +1152,16 @@ def memory_usage(self, deep=False):
v += lib.memory_usage_of_objects(self.values)
return v
+ @Substitution(
+ values='', order='', size_hint='',
+ sort=textwrap.dedent("""\
+ sort : boolean, default False
+ Sort `uniques` and shuffle `labels` to maintain the
+ relationship.
+ """))
+ @Appender(algorithms._shared_docs['factorize'])
def factorize(self, sort=False, na_sentinel=-1):
- """
- Encode the object as an enumerated type or categorical variable
-
- Parameters
- ----------
- sort : boolean, default False
- Sort by values
- na_sentinel: int, default -1
- Value to mark "not found"
-
- Returns
- -------
- labels : the indexer to the original array
- uniques : the unique Index
- """
- from pandas.core.algorithms import factorize
- return factorize(self, sort=sort, na_sentinel=na_sentinel)
+ return algorithms.factorize(self, sort=sort, na_sentinel=na_sentinel)
_shared_docs['searchsorted'] = (
"""Find indices where elements should be inserted to maintain order.
diff --git a/pandas/tests/extension/base/methods.py b/pandas/tests/extension/base/methods.py
index 4d467d62d0a56..f9f079cb21858 100644
--- a/pandas/tests/extension/base/methods.py
+++ b/pandas/tests/extension/base/methods.py
@@ -2,6 +2,7 @@
import numpy as np
import pandas as pd
+import pandas.util.testing as tm
from .base import BaseExtensionTests
@@ -82,3 +83,23 @@ def test_unique(self, data, box, method):
assert len(result) == 1
assert isinstance(result, type(data))
assert result[0] == duplicated[0]
+
+ @pytest.mark.parametrize('na_sentinel', [-1, -2])
+ def test_factorize(self, data_for_grouping, na_sentinel):
+ labels, uniques = pd.factorize(data_for_grouping,
+ na_sentinel=na_sentinel)
+ expected_labels = np.array([0, 0, na_sentinel,
+ na_sentinel, 1, 1, 0, 2],
+ dtype='int64')
+ expected_uniques = data_for_grouping.take([0, 4, 7])
+
+ tm.assert_numpy_array_equal(labels, expected_labels)
+ self.assert_extension_array_equal(uniques, expected_uniques)
+
+ @pytest.mark.parametrize('na_sentinel', [-1, -2])
+ def test_factorize_equivalence(self, data_for_grouping, na_sentinel):
+ l1, u1 = pd.factorize(data_for_grouping, na_sentinel=na_sentinel)
+ l2, u2 = data_for_grouping.factorize(na_sentinel=na_sentinel)
+
+ tm.assert_numpy_array_equal(l1, l2)
+ self.assert_extension_array_equal(u1, u2)
diff --git a/pandas/tests/extension/category/test_categorical.py b/pandas/tests/extension/category/test_categorical.py
index b602d9ee78e2a..7528299578326 100644
--- a/pandas/tests/extension/category/test_categorical.py
+++ b/pandas/tests/extension/category/test_categorical.py
@@ -46,6 +46,11 @@ def na_value():
return np.nan
+@pytest.fixture
+def data_for_grouping():
+ return Categorical(['a', 'a', None, None, 'b', 'b', 'a', 'c'])
+
+
class TestDtype(base.BaseDtypeTests):
pass
diff --git a/pandas/tests/extension/conftest.py b/pandas/tests/extension/conftest.py
index 04dfb408fc378..4cb4ea21d9be3 100644
--- a/pandas/tests/extension/conftest.py
+++ b/pandas/tests/extension/conftest.py
@@ -66,3 +66,14 @@ def na_cmp():
def na_value():
"""The scalar missing value for this type. Default 'None'"""
return None
+
+
+@pytest.fixture
+def data_for_grouping():
+ """Data for factorization, grouping, and unique tests.
+
+ Expected to be like [B, B, NA, NA, A, A, B, C]
+
+ Where A < B < C and NA is missing
+ """
+ raise NotImplementedError
diff --git a/pandas/tests/extension/decimal/array.py b/pandas/tests/extension/decimal/array.py
index f1852542088ff..b66a14c77a059 100644
--- a/pandas/tests/extension/decimal/array.py
+++ b/pandas/tests/extension/decimal/array.py
@@ -36,6 +36,10 @@ def __init__(self, values):
def _constructor_from_sequence(cls, scalars):
return cls(scalars)
+ @classmethod
+ def _from_factorized(cls, values, original):
+ return cls(values)
+
def __getitem__(self, item):
if isinstance(item, numbers.Integral):
return self.values[item]
diff --git a/pandas/tests/extension/decimal/test_decimal.py b/pandas/tests/extension/decimal/test_decimal.py
index b6303ededd0dc..22c1a67a0d60d 100644
--- a/pandas/tests/extension/decimal/test_decimal.py
+++ b/pandas/tests/extension/decimal/test_decimal.py
@@ -49,6 +49,15 @@ def na_value():
return decimal.Decimal("NaN")
+@pytest.fixture
+def data_for_grouping():
+ b = decimal.Decimal('1.0')
+ a = decimal.Decimal('0.0')
+ c = decimal.Decimal('2.0')
+ na = decimal.Decimal('NaN')
+ return DecimalArray([b, b, na, na, a, a, b, c])
+
+
class BaseDecimal(object):
def assert_series_equal(self, left, right, *args, **kwargs):
diff --git a/pandas/tests/extension/json/array.py b/pandas/tests/extension/json/array.py
index ee0951812b8f0..51a68a3701046 100644
--- a/pandas/tests/extension/json/array.py
+++ b/pandas/tests/extension/json/array.py
@@ -37,6 +37,10 @@ def __init__(self, values):
def _constructor_from_sequence(cls, scalars):
return cls(scalars)
+ @classmethod
+ def _from_factorized(cls, values, original):
+ return cls([collections.UserDict(x) for x in values if x != ()])
+
def __getitem__(self, item):
if isinstance(item, numbers.Integral):
return self.data[item]
@@ -108,6 +112,10 @@ def _concat_same_type(cls, to_concat):
data = list(itertools.chain.from_iterable([x.data for x in to_concat]))
return cls(data)
+ def _values_for_factorize(self):
+ frozen = tuple(tuple(x.items()) for x in self)
+ return np.array(frozen, dtype=object), ()
+
def _values_for_argsort(self):
# Disable NumPy's shape inference by including an empty tuple...
        # If all the elements of self are the same size P, NumPy will
diff --git a/pandas/tests/extension/json/test_json.py b/pandas/tests/extension/json/test_json.py
index 8083a1ce69092..63d97d5e7a2c5 100644
--- a/pandas/tests/extension/json/test_json.py
+++ b/pandas/tests/extension/json/test_json.py
@@ -48,6 +48,17 @@ def na_cmp():
return operator.eq
+@pytest.fixture
+def data_for_grouping():
+ return JSONArray([
+ {'b': 1}, {'b': 1},
+ {}, {},
+ {'a': 0, 'c': 2}, {'a': 0, 'c': 2},
+ {'b': 1},
+ {'c': 2},
+ ])
+
+
class TestDtype(base.BaseDtypeTests):
pass
| Adds factorize to the interface for ExtensionArray, with a default
implementation.
This is a stepping stone to groupby.
Categorical already has a custom implementation.
Decimal reuses the default.
JSON can't use the default since an array of dictionaries isn't hashable, so we use a custom implementation. | https://api.github.com/repos/pandas-dev/pandas/pulls/20361 | 2018-03-15T12:10:41Z | 2018-03-27T15:40:50Z | 2018-03-27T15:40:50Z | 2018-03-27T16:06:59Z |
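The codes/uniques contract this PR adds can be sketched with a built-in extension type. Categorical already ships its own implementation, so this is only an illustration of the expected behavior (the `-1` NA sentinel mirrors the default in the tests above), not the PR's new code path:

```python
import pandas as pd

# Same shape as the test fixture: [B, B, NA, NA, A, A, B, C]
cat = pd.Categorical(['a', 'a', None, None, 'b', 'b', 'a', 'c'])

codes, uniques = pd.factorize(cat)
print(codes)    # missing values map to the default sentinel, -1
print(uniques)  # the distinct values in order of first appearance
```

`codes` is an integer ndarray and `uniques` preserves the extension type, which is what `_from_factorized` reconstructs in the array subclasses above.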
DOC: update the DataFrame.sub docstring | diff --git a/pandas/core/ops.py b/pandas/core/ops.py
index 3dcfab868bf17..e14f82906cd06 100644
--- a/pandas/core/ops.py
+++ b/pandas/core/ops.py
@@ -370,6 +370,33 @@ def _get_op_name(op, special):
e NaN 2.0
"""
+_sub_example_FRAME = """
+>>> a = pd.DataFrame([2, 1, 1, np.nan], index=['a', 'b', 'c', 'd'],
+... columns=['one'])
+>>> a
+ one
+a 2.0
+b 1.0
+c 1.0
+d NaN
+>>> b = pd.DataFrame(dict(one=[1, np.nan, 1, np.nan],
+... two=[3, 2, np.nan, 2]),
+... index=['a', 'b', 'd', 'e'])
+>>> b
+ one two
+a 1.0 3.0
+b NaN 2.0
+d 1.0 NaN
+e NaN 2.0
+>>> a.sub(b, fill_value=0)
+ one two
+a 1.0 -3.0
+b 1.0 -2.0
+c 1.0 NaN
+d -1.0 NaN
+e NaN -2.0
+"""
+
_op_descriptions = {
# Arithmetic Operators
'add': {'op': '+',
@@ -379,7 +406,7 @@ def _get_op_name(op, special):
'sub': {'op': '-',
'desc': 'Subtraction',
'reverse': 'rsub',
- 'df_examples': None},
+ 'df_examples': _sub_example_FRAME},
'mul': {'op': '*',
'desc': 'Multiplication',
'reverse': 'rmul',
Checklist for the pandas documentation sprint:
- [x] PR title is "DOC: update the <your-function-or-method> docstring"
- [x] The validation script passes: `scripts/validate_docstrings.py <your-function-or-method>`
- [x] The PEP8 style check passes: `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] The html version looks good: `python doc/make.py --single <your-function-or-method>`
- [x] It has been proofread for language by another sprint participant
Please include the output of the validation script below between the "```" ticks:
```
################################################################################
################################## Validation ##################################
################################################################################
Errors found:
Use only one blank line to separate sections or paragraphs
Errors in parameters section
Wrong parameters order. Actual: ('other', 'axis', 'level', 'fill_value'). Documented: ('other', 'axis', 'fill_value', 'level')
Parameter "other" has no description
Parameter "axis" description should finish with "."
Parameter "fill_value" description should finish with "."
Parameter "level" description should finish with "."
Missing description for See Also "DataFrame.rsub" reference
```
Validation script returns docstring style errors.
| https://api.github.com/repos/pandas-dev/pandas/pulls/20358 | 2018-03-15T05:32:22Z | 2018-03-16T08:49:14Z | 2018-03-16T08:49:14Z | 2018-03-17T19:19:01Z |
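The `fill_value` example added to the docstring above can be exercised directly; a self-contained version (the frame names `a` and `b` follow the docstring, and the comment states the semantics the example relies on):

```python
import numpy as np
import pandas as pd

a = pd.DataFrame({'one': [2.0, 1.0, 1.0, np.nan]},
                 index=['a', 'b', 'c', 'd'])
b = pd.DataFrame({'one': [1.0, np.nan, 1.0, np.nan],
                  'two': [3.0, 2.0, np.nan, 2.0]},
                 index=['a', 'b', 'd', 'e'])

# fill_value=0 substitutes 0 when exactly one side is missing;
# positions missing on *both* sides remain NaN.
result = a.sub(b, fill_value=0)
print(result)
```

For instance, row `c` exists only in `a`, so `result.loc['c', 'one']` is `1.0 - 0 = 1.0`, while `result.loc['c', 'two']` is NaN because neither operand has a value there.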
DOC: make deprecation warning more visible with red box | diff --git a/doc/source/themes/nature_with_gtoc/static/nature.css_t b/doc/source/themes/nature_with_gtoc/static/nature.css_t
index b61068ee28bef..4571d97ec50ba 100644
--- a/doc/source/themes/nature_with_gtoc/static/nature.css_t
+++ b/doc/source/themes/nature_with_gtoc/static/nature.css_t
@@ -198,10 +198,18 @@ div.body p, div.body dd, div.body li {
line-height: 1.5em;
}
-div.admonition p.admonition-title + p {
+div.admonition p.admonition-title + p, div.deprecated p {
display: inline;
}
+div.deprecated {
+ margin-bottom: 10px;
+ margin-top: 10px;
+ padding: 7px;
+ background-color: #ffe4e4;
+ border: 1px solid #f66;
+}
+
div.highlight{
background-color: white;
}
- [x] closes #20349
- [ ] tests added / passed
- [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/20357 | 2018-03-15T01:05:19Z | 2018-03-23T09:24:14Z | 2018-03-23T09:24:14Z | 2018-03-23T10:30:18Z |
ENH: Allow for join between two multi-index dataframe instances | diff --git a/doc/source/whatsnew/v0.24.0.rst b/doc/source/whatsnew/v0.24.0.rst
index 44c467795d1ed..9d030a31c8c83 100644
--- a/doc/source/whatsnew/v0.24.0.rst
+++ b/doc/source/whatsnew/v0.24.0.rst
@@ -183,6 +183,47 @@ array, but rather an ``ExtensionArray``:
This is the same behavior as ``Series.values`` for categorical data. See
:ref:`whatsnew_0240.api_breaking.interval_values` for more.
+.. _whatsnew_0240.enhancements.join_with_two_multiindexes:
+
+Joining with two multi-indexes
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+:func:`DataFrame.merge` and :func:`DataFrame.join` can now be used to join multi-indexed ``DataFrame`` instances on the overlapping index levels (:issue:`6360`)
+
+See the :ref:`Merge, join, and concatenate
+<merging.Join_with_two_multi_indexes>` documentation section.
+
+.. ipython:: python
+
+ index_left = pd.MultiIndex.from_tuples([('K0', 'X0'), ('K0', 'X1'),
+ ('K1', 'X2')],
+ names=['key', 'X'])
+
+
+ left = pd.DataFrame({'A': ['A0', 'A1', 'A2'],
+ 'B': ['B0', 'B1', 'B2']},
+ index=index_left)
+
+
+ index_right = pd.MultiIndex.from_tuples([('K0', 'Y0'), ('K1', 'Y1'),
+ ('K2', 'Y2'), ('K2', 'Y3')],
+ names=['key', 'Y'])
+
+
+ right = pd.DataFrame({'C': ['C0', 'C1', 'C2', 'C3'],
+ 'D': ['D0', 'D1', 'D2', 'D3']},
+ index=index_right)
+
+
+ left.join(right)
+
+For earlier versions, this can be done as follows.
+
+.. ipython:: python
+
+ pd.merge(left.reset_index(), right.reset_index(),
+ on=['key'], how='inner').set_index(['key', 'X', 'Y'])
+
.. _whatsnew_0240.enhancements.rename_axis:
Renaming names in a MultiIndex
diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index 2f449b4b33d8d..e5fb7b463c3e7 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -3162,8 +3162,8 @@ def get_value(self, series, key):
iloc = self.get_loc(key)
return s[iloc]
except KeyError:
- if (len(self) > 0
- and (self.holds_integer() or self.is_boolean())):
+ if (len(self) > 0 and
+ (self.holds_integer() or self.is_boolean())):
raise
elif is_integer(key):
return s[key]
@@ -3951,46 +3951,72 @@ def join(self, other, how='left', level=None, return_indexers=False,
def _join_multi(self, other, how, return_indexers=True):
from .multi import MultiIndex
+ from pandas.core.reshape.merge import _restore_dropped_levels_multijoin
+
+ # figure out join names
+ self_names = set(com._not_none(*self.names))
+ other_names = set(com._not_none(*other.names))
+ overlap = self_names & other_names
+
+ # need at least 1 in common
+ if not overlap:
+ raise ValueError("cannot join with no overlapping index names")
+
self_is_mi = isinstance(self, MultiIndex)
other_is_mi = isinstance(other, MultiIndex)
- # figure out join names
- self_names = com._not_none(*self.names)
- other_names = com._not_none(*other.names)
- overlap = list(set(self_names) & set(other_names))
-
- # need at least 1 in common, but not more than 1
- if not len(overlap):
- raise ValueError("cannot join with no level specified and no "
- "overlapping names")
- if len(overlap) > 1:
- raise NotImplementedError("merging with more than one level "
- "overlap on a multi-index is not "
- "implemented")
- jl = overlap[0]
+ if self_is_mi and other_is_mi:
+
+ # Drop the non-matching levels from left and right respectively
+ ldrop_names = list(self_names - overlap)
+ rdrop_names = list(other_names - overlap)
+
+ self_jnlevels = self.droplevel(ldrop_names)
+ other_jnlevels = other.droplevel(rdrop_names)
+
+ # Join left and right
+ # Join on same leveled multi-index frames is supported
+ join_idx, lidx, ridx = self_jnlevels.join(other_jnlevels, how,
+ return_indexers=True)
+
+ # Restore the dropped levels
+ # Returned index level order is
+ # common levels, ldrop_names, rdrop_names
+ dropped_names = ldrop_names + rdrop_names
+
+ levels, labels, names = (
+ _restore_dropped_levels_multijoin(self, other,
+ dropped_names,
+ join_idx,
+ lidx, ridx))
+ # Re-create the multi-index
+ multi_join_idx = MultiIndex(levels=levels, labels=labels,
+ names=names, verify_integrity=False)
+
+ multi_join_idx = multi_join_idx.remove_unused_levels()
+
+ return multi_join_idx, lidx, ridx
+
+ jl = list(overlap)[0]
+
+ # Case where only one index is multi
# make the indices into mi's that match
- if not (self_is_mi and other_is_mi):
-
- flip_order = False
- if self_is_mi:
- self, other = other, self
- flip_order = True
- # flip if join method is right or left
- how = {'right': 'left', 'left': 'right'}.get(how, how)
-
- level = other.names.index(jl)
- result = self._join_level(other, level, how=how,
- return_indexers=return_indexers)
-
- if flip_order:
- if isinstance(result, tuple):
- return result[0], result[2], result[1]
- return result
+ flip_order = False
+ if self_is_mi:
+ self, other = other, self
+ flip_order = True
+ # flip if join method is right or left
+ how = {'right': 'left', 'left': 'right'}.get(how, how)
+
+ level = other.names.index(jl)
+ result = self._join_level(other, level, how=how,
+ return_indexers=return_indexers)
- # 2 multi-indexes
- raise NotImplementedError("merging with both multi-indexes is not "
- "implemented")
+ if flip_order:
+ if isinstance(result, tuple):
+ return result[0], result[2], result[1]
+ return result
def _join_non_unique(self, other, how='left', return_indexers=False):
from pandas.core.reshape.merge import _get_join_indexers
diff --git a/pandas/core/reshape/merge.py b/pandas/core/reshape/merge.py
index 3d6f55c907269..93a6e4538cbc1 100644
--- a/pandas/core/reshape/merge.py
+++ b/pandas/core/reshape/merge.py
@@ -1122,6 +1122,95 @@ def _get_join_indexers(left_keys, right_keys, sort=False, how='inner',
return join_func(lkey, rkey, count, **kwargs)
+def _restore_dropped_levels_multijoin(left, right, dropped_level_names,
+ join_index, lindexer, rindexer):
+ """
+ *this is an internal non-public method*
+
+ Returns the levels, labels and names of a multi-index to multi-index join.
+ Depending on the type of join, this method restores the appropriate
+ dropped levels of the joined multi-index.
+    The method relies on lindexer, rindexer which hold the index positions of
+    left and right, where a join was feasible.
+
+ Parameters
+ ----------
+ left : MultiIndex
+ left index
+ right : MultiIndex
+ right index
+ dropped_level_names : str array
+ list of non-common level names
+ join_index : MultiIndex
+ the index of the join between the
+ common levels of left and right
+ lindexer : intp array
+ left indexer
+ rindexer : intp array
+ right indexer
+
+ Returns
+ -------
+ levels : list of Index
+ levels of combined multiindexes
+ labels : intp array
+ labels of combined multiindexes
+ names : str array
+ names of combined multiindexes
+
+ """
+
+    def _convert_to_multiindex(index):
+        if isinstance(index, MultiIndex):
+            return index
+        else:
+            return MultiIndex.from_arrays([index.values],
+                                          names=[index.name])
+
+    # For multi-multi joins with one overlapping level,
+    # the returned index is of type Index
+    # Ensure that join_index is of type MultiIndex
+    # so that dropped levels can be appended
+    join_index = _convert_to_multiindex(join_index)
+
+ join_levels = join_index.levels
+ join_labels = join_index.labels
+ join_names = join_index.names
+
+ # lindexer and rindexer hold the indexes where the join occurred
+ # for left and right respectively. If left/right is None then
+ # the join occurred on all indices of left/right
+ if lindexer is None:
+ lindexer = range(left.size)
+
+ if rindexer is None:
+ rindexer = range(right.size)
+
+ # Iterate through the levels that must be restored
+ for dropped_level_name in dropped_level_names:
+ if dropped_level_name in left.names:
+ idx = left
+ indexer = lindexer
+ else:
+ idx = right
+ indexer = rindexer
+
+ # The index of the level name to be restored
+ name_idx = idx.names.index(dropped_level_name)
+
+ restore_levels = idx.levels[name_idx]
+ # Inject -1 in the labels list where a join was not possible
+ # IOW indexer[i]=-1
+ labels = idx.labels[name_idx]
+ restore_labels = algos.take_nd(labels, indexer, fill_value=-1)
+
+ join_levels = join_levels + [restore_levels]
+ join_labels = join_labels + [restore_labels]
+ join_names = join_names + [dropped_level_name]
+
+ return join_levels, join_labels, join_names
+
+
class _OrderedMerge(_MergeOperation):
_merge_type = 'ordered_merge'
diff --git a/pandas/tests/reshape/merge/test_merge.py b/pandas/tests/reshape/merge/test_merge.py
index d9297cdc5ab3e..7ee88f223cd95 100644
--- a/pandas/tests/reshape/merge/test_merge.py
+++ b/pandas/tests/reshape/merge/test_merge.py
@@ -8,15 +8,14 @@
import numpy as np
import pytest
from numpy import nan
-from numpy.random import randn
import pandas as pd
import pandas.util.testing as tm
from pandas import (Categorical, CategoricalIndex, DataFrame, DatetimeIndex,
- Float64Index, Index, Int64Index, MultiIndex, RangeIndex,
+ Float64Index, Int64Index, MultiIndex, RangeIndex,
Series, UInt64Index)
from pandas.api.types import CategoricalDtype as CDT
-from pandas.compat import lrange, lzip
+from pandas.compat import lrange
from pandas.core.dtypes.common import is_categorical_dtype, is_object_dtype
from pandas.core.dtypes.dtypes import CategoricalDtype
from pandas.core.reshape.concat import concat
@@ -920,521 +919,6 @@ def _check_merge(x, y):
assert_frame_equal(result, expected, check_names=False)
-class TestMergeMulti(object):
-
- def setup_method(self, method):
- self.index = MultiIndex(levels=[['foo', 'bar', 'baz', 'qux'],
- ['one', 'two', 'three']],
- labels=[[0, 0, 0, 1, 1, 2, 2, 3, 3, 3],
- [0, 1, 2, 0, 1, 1, 2, 0, 1, 2]],
- names=['first', 'second'])
- self.to_join = DataFrame(np.random.randn(10, 3), index=self.index,
- columns=['j_one', 'j_two', 'j_three'])
-
- # a little relevant example with NAs
- key1 = ['bar', 'bar', 'bar', 'foo', 'foo', 'baz', 'baz', 'qux',
- 'qux', 'snap']
- key2 = ['two', 'one', 'three', 'one', 'two', 'one', 'two', 'two',
- 'three', 'one']
-
- data = np.random.randn(len(key1))
- self.data = DataFrame({'key1': key1, 'key2': key2,
- 'data': data})
-
- def test_merge_on_multikey(self):
- joined = self.data.join(self.to_join, on=['key1', 'key2'])
-
- join_key = Index(lzip(self.data['key1'], self.data['key2']))
- indexer = self.to_join.index.get_indexer(join_key)
- ex_values = self.to_join.values.take(indexer, axis=0)
- ex_values[indexer == -1] = np.nan
- expected = self.data.join(DataFrame(ex_values,
- columns=self.to_join.columns))
-
- # TODO: columns aren't in the same order yet
- assert_frame_equal(joined, expected.loc[:, joined.columns])
-
- left = self.data.join(self.to_join, on=['key1', 'key2'], sort=True)
- right = expected.loc[:, joined.columns].sort_values(['key1', 'key2'],
- kind='mergesort')
- assert_frame_equal(left, right)
-
- def test_left_join_multi_index(self):
- icols = ['1st', '2nd', '3rd']
-
- def bind_cols(df):
- iord = lambda a: 0 if a != a else ord(a)
- f = lambda ts: ts.map(iord) - ord('a')
- return (f(df['1st']) + f(df['3rd']) * 1e2 +
- df['2nd'].fillna(0) * 1e4)
-
- def run_asserts(left, right):
- for sort in [False, True]:
- res = left.join(right, on=icols, how='left', sort=sort)
-
- assert len(left) < len(res) + 1
- assert not res['4th'].isna().any()
- assert not res['5th'].isna().any()
-
- tm.assert_series_equal(
- res['4th'], - res['5th'], check_names=False)
- result = bind_cols(res.iloc[:, :-2])
- tm.assert_series_equal(res['4th'], result, check_names=False)
- assert result.name is None
-
- if sort:
- tm.assert_frame_equal(
- res, res.sort_values(icols, kind='mergesort'))
-
- out = merge(left, right.reset_index(), on=icols,
- sort=sort, how='left')
-
- res.index = np.arange(len(res))
- tm.assert_frame_equal(out, res)
-
- lc = list(map(chr, np.arange(ord('a'), ord('z') + 1)))
- left = DataFrame(np.random.choice(lc, (5000, 2)),
- columns=['1st', '3rd'])
- left.insert(1, '2nd', np.random.randint(0, 1000, len(left)))
-
- i = np.random.permutation(len(left))
- right = left.iloc[i].copy()
-
- left['4th'] = bind_cols(left)
- right['5th'] = - bind_cols(right)
- right.set_index(icols, inplace=True)
-
- run_asserts(left, right)
-
- # inject some nulls
- left.loc[1::23, '1st'] = np.nan
- left.loc[2::37, '2nd'] = np.nan
- left.loc[3::43, '3rd'] = np.nan
- left['4th'] = bind_cols(left)
-
- i = np.random.permutation(len(left))
- right = left.iloc[i, :-1]
- right['5th'] = - bind_cols(right)
- right.set_index(icols, inplace=True)
-
- run_asserts(left, right)
-
- def test_merge_right_vs_left(self):
- # compare left vs right merge with multikey
- for sort in [False, True]:
- merged1 = self.data.merge(self.to_join, left_on=['key1', 'key2'],
- right_index=True, how='left', sort=sort)
-
- merged2 = self.to_join.merge(self.data, right_on=['key1', 'key2'],
- left_index=True, how='right',
- sort=sort)
-
- merged2 = merged2.loc[:, merged1.columns]
- assert_frame_equal(merged1, merged2)
-
- def test_compress_group_combinations(self):
-
- # ~ 40000000 possible unique groups
- key1 = tm.rands_array(10, 10000)
- key1 = np.tile(key1, 2)
- key2 = key1[::-1]
-
- df = DataFrame({'key1': key1, 'key2': key2,
- 'value1': np.random.randn(20000)})
-
- df2 = DataFrame({'key1': key1[::2], 'key2': key2[::2],
- 'value2': np.random.randn(10000)})
-
- # just to hit the label compression code path
- merge(df, df2, how='outer')
-
- def test_left_join_index_preserve_order(self):
-
- left = DataFrame({'k1': [0, 1, 2] * 8,
- 'k2': ['foo', 'bar'] * 12,
- 'v': np.array(np.arange(24), dtype=np.int64)})
-
- index = MultiIndex.from_tuples([(2, 'bar'), (1, 'foo')])
- right = DataFrame({'v2': [5, 7]}, index=index)
-
- result = left.join(right, on=['k1', 'k2'])
-
- expected = left.copy()
- expected['v2'] = np.nan
- expected.loc[(expected.k1 == 2) & (expected.k2 == 'bar'), 'v2'] = 5
- expected.loc[(expected.k1 == 1) & (expected.k2 == 'foo'), 'v2'] = 7
-
- tm.assert_frame_equal(result, expected)
- tm.assert_frame_equal(
- result.sort_values(['k1', 'k2'], kind='mergesort'),
- left.join(right, on=['k1', 'k2'], sort=True))
-
- # test join with multi dtypes blocks
- left = DataFrame({'k1': [0, 1, 2] * 8,
- 'k2': ['foo', 'bar'] * 12,
- 'k3': np.array([0, 1, 2] * 8, dtype=np.float32),
- 'v': np.array(np.arange(24), dtype=np.int32)})
-
- index = MultiIndex.from_tuples([(2, 'bar'), (1, 'foo')])
- right = DataFrame({'v2': [5, 7]}, index=index)
-
- result = left.join(right, on=['k1', 'k2'])
-
- expected = left.copy()
- expected['v2'] = np.nan
- expected.loc[(expected.k1 == 2) & (expected.k2 == 'bar'), 'v2'] = 5
- expected.loc[(expected.k1 == 1) & (expected.k2 == 'foo'), 'v2'] = 7
-
- tm.assert_frame_equal(result, expected)
- tm.assert_frame_equal(
- result.sort_values(['k1', 'k2'], kind='mergesort'),
- left.join(right, on=['k1', 'k2'], sort=True))
-
- # do a right join for an extra test
- joined = merge(right, left, left_index=True,
- right_on=['k1', 'k2'], how='right')
- tm.assert_frame_equal(joined.loc[:, expected.columns], expected)
-
- def test_left_join_index_multi_match_multiindex(self):
- left = DataFrame([
- ['X', 'Y', 'C', 'a'],
- ['W', 'Y', 'C', 'e'],
- ['V', 'Q', 'A', 'h'],
- ['V', 'R', 'D', 'i'],
- ['X', 'Y', 'D', 'b'],
- ['X', 'Y', 'A', 'c'],
- ['W', 'Q', 'B', 'f'],
- ['W', 'R', 'C', 'g'],
- ['V', 'Y', 'C', 'j'],
- ['X', 'Y', 'B', 'd']],
- columns=['cola', 'colb', 'colc', 'tag'],
- index=[3, 2, 0, 1, 7, 6, 4, 5, 9, 8])
-
- right = DataFrame([
- ['W', 'R', 'C', 0],
- ['W', 'Q', 'B', 3],
- ['W', 'Q', 'B', 8],
- ['X', 'Y', 'A', 1],
- ['X', 'Y', 'A', 4],
- ['X', 'Y', 'B', 5],
- ['X', 'Y', 'C', 6],
- ['X', 'Y', 'C', 9],
- ['X', 'Q', 'C', -6],
- ['X', 'R', 'C', -9],
- ['V', 'Y', 'C', 7],
- ['V', 'R', 'D', 2],
- ['V', 'R', 'D', -1],
- ['V', 'Q', 'A', -3]],
- columns=['col1', 'col2', 'col3', 'val'])
-
- right.set_index(['col1', 'col2', 'col3'], inplace=True)
- result = left.join(right, on=['cola', 'colb', 'colc'], how='left')
-
- expected = DataFrame([
- ['X', 'Y', 'C', 'a', 6],
- ['X', 'Y', 'C', 'a', 9],
- ['W', 'Y', 'C', 'e', nan],
- ['V', 'Q', 'A', 'h', -3],
- ['V', 'R', 'D', 'i', 2],
- ['V', 'R', 'D', 'i', -1],
- ['X', 'Y', 'D', 'b', nan],
- ['X', 'Y', 'A', 'c', 1],
- ['X', 'Y', 'A', 'c', 4],
- ['W', 'Q', 'B', 'f', 3],
- ['W', 'Q', 'B', 'f', 8],
- ['W', 'R', 'C', 'g', 0],
- ['V', 'Y', 'C', 'j', 7],
- ['X', 'Y', 'B', 'd', 5]],
- columns=['cola', 'colb', 'colc', 'tag', 'val'],
- index=[3, 3, 2, 0, 1, 1, 7, 6, 6, 4, 4, 5, 9, 8])
-
- tm.assert_frame_equal(result, expected)
-
- result = left.join(right, on=['cola', 'colb', 'colc'],
- how='left', sort=True)
-
- tm.assert_frame_equal(
- result,
- expected.sort_values(['cola', 'colb', 'colc'], kind='mergesort'))
-
- # GH7331 - maintain left frame order in left merge
- right.reset_index(inplace=True)
- right.columns = left.columns[:3].tolist() + right.columns[-1:].tolist()
- result = merge(left, right, how='left', on=left.columns[:-1].tolist())
- expected.index = np.arange(len(expected))
- tm.assert_frame_equal(result, expected)
-
- def test_left_join_index_multi_match(self):
- left = DataFrame([
- ['c', 0],
- ['b', 1],
- ['a', 2],
- ['b', 3]],
- columns=['tag', 'val'],
- index=[2, 0, 1, 3])
-
- right = DataFrame([
- ['a', 'v'],
- ['c', 'w'],
- ['c', 'x'],
- ['d', 'y'],
- ['a', 'z'],
- ['c', 'r'],
- ['e', 'q'],
- ['c', 's']],
- columns=['tag', 'char'])
-
- right.set_index('tag', inplace=True)
- result = left.join(right, on='tag', how='left')
-
- expected = DataFrame([
- ['c', 0, 'w'],
- ['c', 0, 'x'],
- ['c', 0, 'r'],
- ['c', 0, 's'],
- ['b', 1, nan],
- ['a', 2, 'v'],
- ['a', 2, 'z'],
- ['b', 3, nan]],
- columns=['tag', 'val', 'char'],
- index=[2, 2, 2, 2, 0, 1, 1, 3])
-
- tm.assert_frame_equal(result, expected)
-
- result = left.join(right, on='tag', how='left', sort=True)
- tm.assert_frame_equal(
- result, expected.sort_values('tag', kind='mergesort'))
-
- # GH7331 - maintain left frame order in left merge
- result = merge(left, right.reset_index(), how='left', on='tag')
- expected.index = np.arange(len(expected))
- tm.assert_frame_equal(result, expected)
-
- def test_left_merge_na_buglet(self):
- left = DataFrame({'id': list('abcde'), 'v1': randn(5),
- 'v2': randn(5), 'dummy': list('abcde'),
- 'v3': randn(5)},
- columns=['id', 'v1', 'v2', 'dummy', 'v3'])
- right = DataFrame({'id': ['a', 'b', np.nan, np.nan, np.nan],
- 'sv3': [1.234, 5.678, np.nan, np.nan, np.nan]})
-
- merged = merge(left, right, on='id', how='left')
-
- rdf = right.drop(['id'], axis=1)
- expected = left.join(rdf)
- tm.assert_frame_equal(merged, expected)
-
- def test_merge_na_keys(self):
- data = [[1950, "A", 1.5],
- [1950, "B", 1.5],
- [1955, "B", 1.5],
- [1960, "B", np.nan],
- [1970, "B", 4.],
- [1950, "C", 4.],
- [1960, "C", np.nan],
- [1965, "C", 3.],
- [1970, "C", 4.]]
-
- frame = DataFrame(data, columns=["year", "panel", "data"])
-
- other_data = [[1960, 'A', np.nan],
- [1970, 'A', np.nan],
- [1955, 'A', np.nan],
- [1965, 'A', np.nan],
- [1965, 'B', np.nan],
- [1955, 'C', np.nan]]
- other = DataFrame(other_data, columns=['year', 'panel', 'data'])
-
- result = frame.merge(other, how='outer')
-
- expected = frame.fillna(-999).merge(other.fillna(-999), how='outer')
- expected = expected.replace(-999, np.nan)
-
- tm.assert_frame_equal(result, expected)
-
- def test_join_multi_levels(self):
-
- # GH 3662
- # merge multi-levels
- household = (
- DataFrame(
- dict(household_id=[1, 2, 3],
- male=[0, 1, 0],
- wealth=[196087.3, 316478.7, 294750]),
- columns=['household_id', 'male', 'wealth'])
- .set_index('household_id'))
- portfolio = (
- DataFrame(
- dict(household_id=[1, 2, 2, 3, 3, 3, 4],
- asset_id=["nl0000301109", "nl0000289783", "gb00b03mlx29",
- "gb00b03mlx29", "lu0197800237", "nl0000289965",
- np.nan],
- name=["ABN Amro", "Robeco", "Royal Dutch Shell",
- "Royal Dutch Shell",
- "AAB Eastern Europe Equity Fund",
- "Postbank BioTech Fonds", np.nan],
- share=[1.0, 0.4, 0.6, 0.15, 0.6, 0.25, 1.0]),
- columns=['household_id', 'asset_id', 'name', 'share'])
- .set_index(['household_id', 'asset_id']))
- result = household.join(portfolio, how='inner')
- expected = (
- DataFrame(
- dict(male=[0, 1, 1, 0, 0, 0],
- wealth=[196087.3, 316478.7, 316478.7,
- 294750.0, 294750.0, 294750.0],
- name=['ABN Amro', 'Robeco', 'Royal Dutch Shell',
- 'Royal Dutch Shell',
- 'AAB Eastern Europe Equity Fund',
- 'Postbank BioTech Fonds'],
- share=[1.00, 0.40, 0.60, 0.15, 0.60, 0.25],
- household_id=[1, 2, 2, 3, 3, 3],
- asset_id=['nl0000301109', 'nl0000289783', 'gb00b03mlx29',
- 'gb00b03mlx29', 'lu0197800237',
- 'nl0000289965']))
- .set_index(['household_id', 'asset_id'])
- .reindex(columns=['male', 'wealth', 'name', 'share']))
- assert_frame_equal(result, expected)
-
- assert_frame_equal(result, expected)
-
- # equivalency
- result2 = (merge(household.reset_index(), portfolio.reset_index(),
- on=['household_id'], how='inner')
- .set_index(['household_id', 'asset_id']))
- assert_frame_equal(result2, expected)
-
- result = household.join(portfolio, how='outer')
- expected = (concat([
- expected,
- (DataFrame(
- dict(share=[1.00]),
- index=MultiIndex.from_tuples(
- [(4, np.nan)],
- names=['household_id', 'asset_id'])))
- ], axis=0, sort=True).reindex(columns=expected.columns))
- assert_frame_equal(result, expected)
-
- # invalid cases
- household.index.name = 'foo'
-
- def f():
- household.join(portfolio, how='inner')
-
- pytest.raises(ValueError, f)
-
- portfolio2 = portfolio.copy()
- portfolio2.index.set_names(['household_id', 'foo'])
-
- def f():
- portfolio2.join(portfolio, how='inner')
-
- pytest.raises(ValueError, f)
-
- def test_join_multi_levels2(self):
-
- # some more advanced merges
- # GH6360
- household = (
- DataFrame(
- dict(household_id=[1, 2, 2, 3, 3, 3, 4],
- asset_id=["nl0000301109", "nl0000301109", "gb00b03mlx29",
- "gb00b03mlx29", "lu0197800237", "nl0000289965",
- np.nan],
- share=[1.0, 0.4, 0.6, 0.15, 0.6, 0.25, 1.0]),
- columns=['household_id', 'asset_id', 'share'])
- .set_index(['household_id', 'asset_id']))
-
- log_return = DataFrame(dict(
- asset_id=["gb00b03mlx29", "gb00b03mlx29",
- "gb00b03mlx29", "lu0197800237", "lu0197800237"],
- t=[233, 234, 235, 180, 181],
- log_return=[.09604978, -.06524096, .03532373, .03025441, .036997]
- )).set_index(["asset_id", "t"])
-
- expected = (
- DataFrame(dict(
- household_id=[2, 2, 2, 3, 3, 3, 3, 3],
- asset_id=["gb00b03mlx29", "gb00b03mlx29",
- "gb00b03mlx29", "gb00b03mlx29",
- "gb00b03mlx29", "gb00b03mlx29",
- "lu0197800237", "lu0197800237"],
- t=[233, 234, 235, 233, 234, 235, 180, 181],
- share=[0.6, 0.6, 0.6, 0.15, 0.15, 0.15, 0.6, 0.6],
- log_return=[.09604978, -.06524096, .03532373,
- .09604978, -.06524096, .03532373,
- .03025441, .036997]
- ))
- .set_index(["household_id", "asset_id", "t"])
- .reindex(columns=['share', 'log_return']))
-
- def f():
- household.join(log_return, how='inner')
-
- pytest.raises(NotImplementedError, f)
-
- # this is the equivalency
- result = (merge(household.reset_index(), log_return.reset_index(),
- on=['asset_id'], how='inner')
- .set_index(['household_id', 'asset_id', 't']))
- assert_frame_equal(result, expected)
-
- expected = (
- DataFrame(dict(
- household_id=[1, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 4],
- asset_id=["nl0000301109", "nl0000289783", "gb00b03mlx29",
- "gb00b03mlx29", "gb00b03mlx29",
- "gb00b03mlx29", "gb00b03mlx29", "gb00b03mlx29",
- "lu0197800237", "lu0197800237",
- "nl0000289965", None],
- t=[None, None, 233, 234, 235, 233, 234,
- 235, 180, 181, None, None],
- share=[1.0, 0.4, 0.6, 0.6, 0.6, 0.15,
- 0.15, 0.15, 0.6, 0.6, 0.25, 1.0],
- log_return=[None, None, .09604978, -.06524096, .03532373,
- .09604978, -.06524096, .03532373,
- .03025441, .036997, None, None]
- ))
- .set_index(["household_id", "asset_id", "t"]))
-
- def f():
- household.join(log_return, how='outer')
-
- pytest.raises(NotImplementedError, f)
-
- @pytest.mark.parametrize("klass", [None, np.asarray, Series, Index])
- def test_merge_datetime_index(self, klass):
- # see gh-19038
- df = DataFrame([1, 2, 3],
- ["2016-01-01", "2017-01-01", "2018-01-01"],
- columns=["a"])
- df.index = pd.to_datetime(df.index)
- on_vector = df.index.year
-
- if klass is not None:
- on_vector = klass(on_vector)
-
- expected = DataFrame(
- OrderedDict([
- ("a", [1, 2, 3]),
- ("key_1", [2016, 2017, 2018]),
- ])
- )
-
- result = df.merge(df, on=["a", on_vector], how="inner")
- tm.assert_frame_equal(result, expected)
-
- expected = DataFrame(
- OrderedDict([
- ("key_0", [2016, 2017, 2018]),
- ("a_x", [1, 2, 3]),
- ("a_y", [1, 2, 3]),
- ])
- )
-
- result = df.merge(df, on=[df.index.year], how="inner")
- tm.assert_frame_equal(result, expected)
-
-
class TestMergeDtypes(object):
@pytest.mark.parametrize('right_vals', [
diff --git a/pandas/tests/reshape/merge/test_multi.py b/pandas/tests/reshape/merge/test_multi.py
new file mode 100644
index 0000000000000..a1158201844b0
--- /dev/null
+++ b/pandas/tests/reshape/merge/test_multi.py
@@ -0,0 +1,672 @@
+# pylint: disable=E1103
+
+from collections import OrderedDict
+
+import numpy as np
+from numpy import nan
+from numpy.random import randn
+import pytest
+
+import pandas as pd
+from pandas import DataFrame, Index, MultiIndex, Series
+from pandas.core.reshape.concat import concat
+from pandas.core.reshape.merge import merge
+import pandas.util.testing as tm
+
+
+@pytest.fixture
+def left():
+ """left dataframe (not multi-indexed) for multi-index join tests"""
+ # a little relevant example with NAs
+ key1 = ['bar', 'bar', 'bar', 'foo', 'foo', 'baz', 'baz', 'qux',
+ 'qux', 'snap']
+ key2 = ['two', 'one', 'three', 'one', 'two', 'one', 'two', 'two',
+ 'three', 'one']
+
+ data = np.random.randn(len(key1))
+ return DataFrame({'key1': key1, 'key2': key2, 'data': data})
+
+
+@pytest.fixture
+def right():
+ """right dataframe (multi-indexed) for multi-index join tests"""
+ index = MultiIndex(levels=[['foo', 'bar', 'baz', 'qux'],
+ ['one', 'two', 'three']],
+ labels=[[0, 0, 0, 1, 1, 2, 2, 3, 3, 3],
+ [0, 1, 2, 0, 1, 1, 2, 0, 1, 2]],
+ names=['key1', 'key2'])
+
+ return DataFrame(np.random.randn(10, 3), index=index,
+ columns=['j_one', 'j_two', 'j_three'])
+
+
+@pytest.fixture
+def left_multi():
+ return (
+ DataFrame(
+ dict(Origin=['A', 'A', 'B', 'B', 'C'],
+ Destination=['A', 'B', 'A', 'C', 'A'],
+ Period=['AM', 'AM', 'IP', 'AM', 'OP'],
+ TripPurp=['hbw', 'nhb', 'hbo', 'nhb', 'hbw'],
+ Trips=[1987, 3647, 2470, 4296, 4444]),
+ columns=['Origin', 'Destination', 'Period',
+ 'TripPurp', 'Trips'])
+ .set_index(['Origin', 'Destination', 'Period', 'TripPurp']))
+
+
+@pytest.fixture
+def right_multi():
+ return (
+ DataFrame(
+ dict(Origin=['A', 'A', 'B', 'B', 'C', 'C', 'E'],
+ Destination=['A', 'B', 'A', 'B', 'A', 'B', 'F'],
+ Period=['AM', 'AM', 'IP', 'AM', 'OP', 'IP', 'AM'],
+ LinkType=['a', 'b', 'c', 'b', 'a', 'b', 'a'],
+ Distance=[100, 80, 90, 80, 75, 35, 55]),
+ columns=['Origin', 'Destination', 'Period',
+ 'LinkType', 'Distance'])
+ .set_index(['Origin', 'Destination', 'Period', 'LinkType']))
+
+
+@pytest.fixture
+def on_cols_multi():
+ return ['Origin', 'Destination', 'Period']
+
+
+@pytest.fixture
+def idx_cols_multi():
+ return ['Origin', 'Destination', 'Period', 'TripPurp', 'LinkType']
+
+
+class TestMergeMulti(object):
+
+ def setup_method(self):
+ self.index = MultiIndex(levels=[['foo', 'bar', 'baz', 'qux'],
+ ['one', 'two', 'three']],
+ labels=[[0, 0, 0, 1, 1, 2, 2, 3, 3, 3],
+ [0, 1, 2, 0, 1, 1, 2, 0, 1, 2]],
+ names=['first', 'second'])
+ self.to_join = DataFrame(np.random.randn(10, 3), index=self.index,
+ columns=['j_one', 'j_two', 'j_three'])
+
+ # a little relevant example with NAs
+ key1 = ['bar', 'bar', 'bar', 'foo', 'foo', 'baz', 'baz', 'qux',
+ 'qux', 'snap']
+ key2 = ['two', 'one', 'three', 'one', 'two', 'one', 'two', 'two',
+ 'three', 'one']
+
+ data = np.random.randn(len(key1))
+ self.data = DataFrame({'key1': key1, 'key2': key2,
+ 'data': data})
+
+ def test_merge_on_multikey(self, left, right, join_type):
+ on_cols = ['key1', 'key2']
+ result = (left.join(right, on=on_cols, how=join_type)
+ .reset_index(drop=True))
+
+ expected = pd.merge(left, right.reset_index(),
+ on=on_cols, how=join_type)
+
+ tm.assert_frame_equal(result, expected)
+
+ result = (left.join(right, on=on_cols, how=join_type, sort=True)
+ .reset_index(drop=True))
+
+ expected = pd.merge(left, right.reset_index(),
+ on=on_cols, how=join_type, sort=True)
+
+ tm.assert_frame_equal(result, expected)
+
+ @pytest.mark.parametrize("sort", [False, True])
+ def test_left_join_multi_index(self, left, right, sort):
+ icols = ['1st', '2nd', '3rd']
+
+ def bind_cols(df):
+ iord = lambda a: 0 if a != a else ord(a)
+ f = lambda ts: ts.map(iord) - ord('a')
+ return (f(df['1st']) + f(df['3rd']) * 1e2 +
+ df['2nd'].fillna(0) * 1e4)
+
+ def run_asserts(left, right, sort):
+ res = left.join(right, on=icols, how='left', sort=sort)
+
+ assert len(left) < len(res) + 1
+ assert not res['4th'].isna().any()
+ assert not res['5th'].isna().any()
+
+ tm.assert_series_equal(
+ res['4th'], - res['5th'], check_names=False)
+ result = bind_cols(res.iloc[:, :-2])
+ tm.assert_series_equal(res['4th'], result, check_names=False)
+ assert result.name is None
+
+ if sort:
+ tm.assert_frame_equal(
+ res, res.sort_values(icols, kind='mergesort'))
+
+ out = merge(left, right.reset_index(), on=icols,
+ sort=sort, how='left')
+
+ res.index = np.arange(len(res))
+ tm.assert_frame_equal(out, res)
+
+ lc = list(map(chr, np.arange(ord('a'), ord('z') + 1)))
+ left = DataFrame(np.random.choice(lc, (5000, 2)),
+ columns=['1st', '3rd'])
+ left.insert(1, '2nd', np.random.randint(0, 1000, len(left)))
+
+ i = np.random.permutation(len(left))
+ right = left.iloc[i].copy()
+
+ left['4th'] = bind_cols(left)
+ right['5th'] = - bind_cols(right)
+ right.set_index(icols, inplace=True)
+
+ run_asserts(left, right, sort)
+
+ # inject some nulls
+ left.loc[1::23, '1st'] = np.nan
+ left.loc[2::37, '2nd'] = np.nan
+ left.loc[3::43, '3rd'] = np.nan
+ left['4th'] = bind_cols(left)
+
+ i = np.random.permutation(len(left))
+ right = left.iloc[i, :-1]
+ right['5th'] = - bind_cols(right)
+ right.set_index(icols, inplace=True)
+
+ run_asserts(left, right, sort)
+
+ @pytest.mark.parametrize("sort", [False, True])
+ def test_merge_right_vs_left(self, left, right, sort):
+ # compare left vs right merge with multikey
+ on_cols = ['key1', 'key2']
+ merged_left_right = left.merge(right,
+ left_on=on_cols, right_index=True,
+ how='left', sort=sort)
+
+ merge_right_left = right.merge(left,
+ right_on=on_cols, left_index=True,
+ how='right', sort=sort)
+
+ # Reorder columns
+ merge_right_left = merge_right_left[merged_left_right.columns]
+
+ tm.assert_frame_equal(merged_left_right, merge_right_left)
+
+ def test_compress_group_combinations(self):
+
+ # ~ 40000000 possible unique groups
+ key1 = tm.rands_array(10, 10000)
+ key1 = np.tile(key1, 2)
+ key2 = key1[::-1]
+
+ df = DataFrame({'key1': key1, 'key2': key2,
+ 'value1': np.random.randn(20000)})
+
+ df2 = DataFrame({'key1': key1[::2], 'key2': key2[::2],
+ 'value2': np.random.randn(10000)})
+
+ # just to hit the label compression code path
+ merge(df, df2, how='outer')
+
+ def test_left_join_index_preserve_order(self):
+
+ on_cols = ['k1', 'k2']
+ left = DataFrame({'k1': [0, 1, 2] * 8,
+ 'k2': ['foo', 'bar'] * 12,
+ 'v': np.array(np.arange(24), dtype=np.int64)})
+
+ index = MultiIndex.from_tuples([(2, 'bar'), (1, 'foo')])
+ right = DataFrame({'v2': [5, 7]}, index=index)
+
+ result = left.join(right, on=on_cols)
+
+ expected = left.copy()
+ expected['v2'] = np.nan
+ expected.loc[(expected.k1 == 2) & (expected.k2 == 'bar'), 'v2'] = 5
+ expected.loc[(expected.k1 == 1) & (expected.k2 == 'foo'), 'v2'] = 7
+
+ tm.assert_frame_equal(result, expected)
+
+ result.sort_values(on_cols, kind='mergesort', inplace=True)
+ expected = left.join(right, on=on_cols, sort=True)
+
+ tm.assert_frame_equal(result, expected)
+
+ # test join with multi dtypes blocks
+ left = DataFrame({'k1': [0, 1, 2] * 8,
+ 'k2': ['foo', 'bar'] * 12,
+ 'k3': np.array([0, 1, 2] * 8, dtype=np.float32),
+ 'v': np.array(np.arange(24), dtype=np.int32)})
+
+ index = MultiIndex.from_tuples([(2, 'bar'), (1, 'foo')])
+ right = DataFrame({'v2': [5, 7]}, index=index)
+
+ result = left.join(right, on=on_cols)
+
+ expected = left.copy()
+ expected['v2'] = np.nan
+ expected.loc[(expected.k1 == 2) & (expected.k2 == 'bar'), 'v2'] = 5
+ expected.loc[(expected.k1 == 1) & (expected.k2 == 'foo'), 'v2'] = 7
+
+ tm.assert_frame_equal(result, expected)
+
+ result = result.sort_values(on_cols, kind='mergesort')
+ expected = left.join(right, on=on_cols, sort=True)
+
+ tm.assert_frame_equal(result, expected)
+
+ def test_left_join_index_multi_match_multiindex(self):
+ left = DataFrame([
+ ['X', 'Y', 'C', 'a'],
+ ['W', 'Y', 'C', 'e'],
+ ['V', 'Q', 'A', 'h'],
+ ['V', 'R', 'D', 'i'],
+ ['X', 'Y', 'D', 'b'],
+ ['X', 'Y', 'A', 'c'],
+ ['W', 'Q', 'B', 'f'],
+ ['W', 'R', 'C', 'g'],
+ ['V', 'Y', 'C', 'j'],
+ ['X', 'Y', 'B', 'd']],
+ columns=['cola', 'colb', 'colc', 'tag'],
+ index=[3, 2, 0, 1, 7, 6, 4, 5, 9, 8])
+
+ right = (DataFrame([
+ ['W', 'R', 'C', 0],
+ ['W', 'Q', 'B', 3],
+ ['W', 'Q', 'B', 8],
+ ['X', 'Y', 'A', 1],
+ ['X', 'Y', 'A', 4],
+ ['X', 'Y', 'B', 5],
+ ['X', 'Y', 'C', 6],
+ ['X', 'Y', 'C', 9],
+ ['X', 'Q', 'C', -6],
+ ['X', 'R', 'C', -9],
+ ['V', 'Y', 'C', 7],
+ ['V', 'R', 'D', 2],
+ ['V', 'R', 'D', -1],
+ ['V', 'Q', 'A', -3]],
+ columns=['col1', 'col2', 'col3', 'val'])
+ .set_index(['col1', 'col2', 'col3']))
+
+ result = left.join(right, on=['cola', 'colb', 'colc'], how='left')
+
+ expected = DataFrame([
+ ['X', 'Y', 'C', 'a', 6],
+ ['X', 'Y', 'C', 'a', 9],
+ ['W', 'Y', 'C', 'e', nan],
+ ['V', 'Q', 'A', 'h', -3],
+ ['V', 'R', 'D', 'i', 2],
+ ['V', 'R', 'D', 'i', -1],
+ ['X', 'Y', 'D', 'b', nan],
+ ['X', 'Y', 'A', 'c', 1],
+ ['X', 'Y', 'A', 'c', 4],
+ ['W', 'Q', 'B', 'f', 3],
+ ['W', 'Q', 'B', 'f', 8],
+ ['W', 'R', 'C', 'g', 0],
+ ['V', 'Y', 'C', 'j', 7],
+ ['X', 'Y', 'B', 'd', 5]],
+ columns=['cola', 'colb', 'colc', 'tag', 'val'],
+ index=[3, 3, 2, 0, 1, 1, 7, 6, 6, 4, 4, 5, 9, 8])
+
+ tm.assert_frame_equal(result, expected)
+
+ result = left.join(right, on=['cola', 'colb', 'colc'],
+ how='left', sort=True)
+
+ expected = expected.sort_values(['cola', 'colb', 'colc'],
+ kind='mergesort')
+
+ tm.assert_frame_equal(result, expected)
+
+ def test_left_join_index_multi_match(self):
+ left = DataFrame([
+ ['c', 0],
+ ['b', 1],
+ ['a', 2],
+ ['b', 3]],
+ columns=['tag', 'val'],
+ index=[2, 0, 1, 3])
+
+ right = (DataFrame([
+ ['a', 'v'],
+ ['c', 'w'],
+ ['c', 'x'],
+ ['d', 'y'],
+ ['a', 'z'],
+ ['c', 'r'],
+ ['e', 'q'],
+ ['c', 's']],
+ columns=['tag', 'char'])
+ .set_index('tag'))
+
+ result = left.join(right, on='tag', how='left')
+
+ expected = DataFrame([
+ ['c', 0, 'w'],
+ ['c', 0, 'x'],
+ ['c', 0, 'r'],
+ ['c', 0, 's'],
+ ['b', 1, nan],
+ ['a', 2, 'v'],
+ ['a', 2, 'z'],
+ ['b', 3, nan]],
+ columns=['tag', 'val', 'char'],
+ index=[2, 2, 2, 2, 0, 1, 1, 3])
+
+ tm.assert_frame_equal(result, expected)
+
+ result = left.join(right, on='tag', how='left', sort=True)
+ expected2 = expected.sort_values('tag', kind='mergesort')
+
+ tm.assert_frame_equal(result, expected2)
+
+ # GH7331 - maintain left frame order in left merge
+ result = merge(left, right.reset_index(), how='left', on='tag')
+ expected.index = np.arange(len(expected))
+ tm.assert_frame_equal(result, expected)
+
+ def test_left_merge_na_buglet(self):
+ left = DataFrame({'id': list('abcde'), 'v1': randn(5),
+ 'v2': randn(5), 'dummy': list('abcde'),
+ 'v3': randn(5)},
+ columns=['id', 'v1', 'v2', 'dummy', 'v3'])
+ right = DataFrame({'id': ['a', 'b', np.nan, np.nan, np.nan],
+ 'sv3': [1.234, 5.678, np.nan, np.nan, np.nan]})
+
+ result = merge(left, right, on='id', how='left')
+
+ rdf = right.drop(['id'], axis=1)
+ expected = left.join(rdf)
+ tm.assert_frame_equal(result, expected)
+
+ def test_merge_na_keys(self):
+ data = [[1950, "A", 1.5],
+ [1950, "B", 1.5],
+ [1955, "B", 1.5],
+ [1960, "B", np.nan],
+ [1970, "B", 4.],
+ [1950, "C", 4.],
+ [1960, "C", np.nan],
+ [1965, "C", 3.],
+ [1970, "C", 4.]]
+
+ frame = DataFrame(data, columns=["year", "panel", "data"])
+
+ other_data = [[1960, 'A', np.nan],
+ [1970, 'A', np.nan],
+ [1955, 'A', np.nan],
+ [1965, 'A', np.nan],
+ [1965, 'B', np.nan],
+ [1955, 'C', np.nan]]
+ other = DataFrame(other_data, columns=['year', 'panel', 'data'])
+
+ result = frame.merge(other, how='outer')
+
+ expected = frame.fillna(-999).merge(other.fillna(-999), how='outer')
+ expected = expected.replace(-999, np.nan)
+
+ tm.assert_frame_equal(result, expected)
+
+ @pytest.mark.parametrize("klass", [None, np.asarray, Series, Index])
+ def test_merge_datetime_index(self, klass):
+ # see gh-19038
+ df = DataFrame([1, 2, 3],
+ ["2016-01-01", "2017-01-01", "2018-01-01"],
+ columns=["a"])
+ df.index = pd.to_datetime(df.index)
+ on_vector = df.index.year
+
+ if klass is not None:
+ on_vector = klass(on_vector)
+
+ expected = DataFrame(
+ OrderedDict([
+ ("a", [1, 2, 3]),
+ ("key_1", [2016, 2017, 2018]),
+ ])
+ )
+
+ result = df.merge(df, on=["a", on_vector], how="inner")
+ tm.assert_frame_equal(result, expected)
+
+ expected = DataFrame(
+ OrderedDict([
+ ("key_0", [2016, 2017, 2018]),
+ ("a_x", [1, 2, 3]),
+ ("a_y", [1, 2, 3]),
+ ])
+ )
+
+ result = df.merge(df, on=[df.index.year], how="inner")
+ tm.assert_frame_equal(result, expected)
+
+ def test_join_multi_levels(self):
+
+ # GH 3662
+ # merge multi-levels
+ household = (
+ DataFrame(
+ dict(household_id=[1, 2, 3],
+ male=[0, 1, 0],
+ wealth=[196087.3, 316478.7, 294750]),
+ columns=['household_id', 'male', 'wealth'])
+ .set_index('household_id'))
+ portfolio = (
+ DataFrame(
+ dict(household_id=[1, 2, 2, 3, 3, 3, 4],
+ asset_id=["nl0000301109", "nl0000289783", "gb00b03mlx29",
+ "gb00b03mlx29", "lu0197800237", "nl0000289965",
+ np.nan],
+ name=["ABN Amro", "Robeco", "Royal Dutch Shell",
+ "Royal Dutch Shell",
+ "AAB Eastern Europe Equity Fund",
+ "Postbank BioTech Fonds", np.nan],
+ share=[1.0, 0.4, 0.6, 0.15, 0.6, 0.25, 1.0]),
+ columns=['household_id', 'asset_id', 'name', 'share'])
+ .set_index(['household_id', 'asset_id']))
+ result = household.join(portfolio, how='inner')
+ expected = (
+ DataFrame(
+ dict(male=[0, 1, 1, 0, 0, 0],
+ wealth=[196087.3, 316478.7, 316478.7,
+ 294750.0, 294750.0, 294750.0],
+ name=['ABN Amro', 'Robeco', 'Royal Dutch Shell',
+ 'Royal Dutch Shell',
+ 'AAB Eastern Europe Equity Fund',
+ 'Postbank BioTech Fonds'],
+ share=[1.00, 0.40, 0.60, 0.15, 0.60, 0.25],
+ household_id=[1, 2, 2, 3, 3, 3],
+ asset_id=['nl0000301109', 'nl0000289783', 'gb00b03mlx29',
+ 'gb00b03mlx29', 'lu0197800237',
+ 'nl0000289965']))
+ .set_index(['household_id', 'asset_id'])
+ .reindex(columns=['male', 'wealth', 'name', 'share']))
+ tm.assert_frame_equal(result, expected)
+
+ # equivalency
+ result = (merge(household.reset_index(), portfolio.reset_index(),
+ on=['household_id'], how='inner')
+ .set_index(['household_id', 'asset_id']))
+ tm.assert_frame_equal(result, expected)
+
+ result = household.join(portfolio, how='outer')
+ expected = (concat([
+ expected,
+ (DataFrame(
+ dict(share=[1.00]),
+ index=MultiIndex.from_tuples(
+ [(4, np.nan)],
+ names=['household_id', 'asset_id'])))
+ ], axis=0, sort=True).reindex(columns=expected.columns))
+ tm.assert_frame_equal(result, expected)
+
+ # invalid cases
+ household.index.name = 'foo'
+
+ def f():
+ household.join(portfolio, how='inner')
+
+ pytest.raises(ValueError, f)
+
+ portfolio2 = portfolio.copy()
+ portfolio2.index.set_names(['household_id', 'foo'])
+
+ def f():
+ portfolio2.join(portfolio, how='inner')
+
+ pytest.raises(ValueError, f)
+
+ def test_join_multi_levels2(self):
+
+ # some more advanced merges
+ # GH6360
+ household = (
+ DataFrame(
+ dict(household_id=[1, 2, 2, 3, 3, 3, 4],
+ asset_id=["nl0000301109", "nl0000301109", "gb00b03mlx29",
+ "gb00b03mlx29", "lu0197800237", "nl0000289965",
+ np.nan],
+ share=[1.0, 0.4, 0.6, 0.15, 0.6, 0.25, 1.0]),
+ columns=['household_id', 'asset_id', 'share'])
+ .set_index(['household_id', 'asset_id']))
+
+ log_return = DataFrame(dict(
+ asset_id=["gb00b03mlx29", "gb00b03mlx29",
+ "gb00b03mlx29", "lu0197800237", "lu0197800237"],
+ t=[233, 234, 235, 180, 181],
+ log_return=[.09604978, -.06524096, .03532373, .03025441, .036997]
+ )).set_index(["asset_id", "t"])
+
+ expected = (
+ DataFrame(dict(
+ household_id=[2, 2, 2, 3, 3, 3, 3, 3],
+ asset_id=["gb00b03mlx29", "gb00b03mlx29",
+ "gb00b03mlx29", "gb00b03mlx29",
+ "gb00b03mlx29", "gb00b03mlx29",
+ "lu0197800237", "lu0197800237"],
+ t=[233, 234, 235, 233, 234, 235, 180, 181],
+ share=[0.6, 0.6, 0.6, 0.15, 0.15, 0.15, 0.6, 0.6],
+ log_return=[.09604978, -.06524096, .03532373,
+ .09604978, -.06524096, .03532373,
+ .03025441, .036997]
+ ))
+ .set_index(["household_id", "asset_id", "t"])
+ .reindex(columns=['share', 'log_return']))
+
+ # this is the equivalency
+ result = (merge(household.reset_index(), log_return.reset_index(),
+ on=['asset_id'], how='inner')
+ .set_index(['household_id', 'asset_id', 't']))
+ tm.assert_frame_equal(result, expected)
+
+ expected = (
+ DataFrame(dict(
+ household_id=[1, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 4],
+ asset_id=["nl0000301109", "nl0000301109", "gb00b03mlx29",
+ "gb00b03mlx29", "gb00b03mlx29",
+ "gb00b03mlx29", "gb00b03mlx29", "gb00b03mlx29",
+ "lu0197800237", "lu0197800237",
+ "nl0000289965", None],
+ t=[None, None, 233, 234, 235, 233, 234,
+ 235, 180, 181, None, None],
+ share=[1.0, 0.4, 0.6, 0.6, 0.6, 0.15,
+ 0.15, 0.15, 0.6, 0.6, 0.25, 1.0],
+ log_return=[None, None, .09604978, -.06524096, .03532373,
+ .09604978, -.06524096, .03532373,
+ .03025441, .036997, None, None]
+ ))
+ .set_index(["household_id", "asset_id", "t"])
+ .reindex(columns=['share', 'log_return']))
+
+ result = (merge(household.reset_index(), log_return.reset_index(),
+ on=['asset_id'], how='outer')
+ .set_index(['household_id', 'asset_id', 't']))
+
+ tm.assert_frame_equal(result, expected)
+
+
+class TestJoinMultiMulti(object):
+
+ def test_join_multi_multi(self, left_multi, right_multi, join_type,
+ on_cols_multi, idx_cols_multi):
+ # Multi-index join tests
+ expected = (pd.merge(left_multi.reset_index(),
+ right_multi.reset_index(),
+ how=join_type, on=on_cols_multi).
+ set_index(idx_cols_multi).sort_index())
+
+ result = left_multi.join(right_multi, how=join_type).sort_index()
+ tm.assert_frame_equal(result, expected)
+
+ def test_join_multi_empty_frames(self, left_multi, right_multi, join_type,
+ on_cols_multi, idx_cols_multi):
+
+ left_multi = left_multi.drop(columns=left_multi.columns)
+ right_multi = right_multi.drop(columns=right_multi.columns)
+
+ expected = (pd.merge(left_multi.reset_index(),
+ right_multi.reset_index(),
+ how=join_type, on=on_cols_multi)
+ .set_index(idx_cols_multi).sort_index())
+
+ result = left_multi.join(right_multi, how=join_type).sort_index()
+ tm.assert_frame_equal(result, expected)
+
+ @pytest.mark.parametrize("box", [None, np.asarray, Series, Index])
+ def test_merge_datetime_index(self, box):
+ # see gh-19038
+ df = DataFrame([1, 2, 3],
+ ["2016-01-01", "2017-01-01", "2018-01-01"],
+ columns=["a"])
+ df.index = pd.to_datetime(df.index)
+ on_vector = df.index.year
+
+ if box is not None:
+ on_vector = box(on_vector)
+
+ expected = DataFrame(
+ OrderedDict([
+ ("a", [1, 2, 3]),
+ ("key_1", [2016, 2017, 2018]),
+ ])
+ )
+
+ result = df.merge(df, on=["a", on_vector], how="inner")
+ tm.assert_frame_equal(result, expected)
+
+ expected = DataFrame(
+ OrderedDict([
+ ("key_0", [2016, 2017, 2018]),
+ ("a_x", [1, 2, 3]),
+ ("a_y", [1, 2, 3]),
+ ])
+ )
+
+ result = df.merge(df, on=[df.index.year], how="inner")
+ tm.assert_frame_equal(result, expected)
+
+ def test_single_common_level(self):
+ index_left = pd.MultiIndex.from_tuples([('K0', 'X0'), ('K0', 'X1'),
+ ('K1', 'X2')],
+ names=['key', 'X'])
+
+ left = pd.DataFrame({'A': ['A0', 'A1', 'A2'],
+ 'B': ['B0', 'B1', 'B2']},
+ index=index_left)
+
+ index_right = pd.MultiIndex.from_tuples([('K0', 'Y0'), ('K1', 'Y1'),
+ ('K2', 'Y2'), ('K2', 'Y3')],
+ names=['key', 'Y'])
+
+ right = pd.DataFrame({'C': ['C0', 'C1', 'C2', 'C3'],
+ 'D': ['D0', 'D1', 'D2', 'D3']},
+ index=index_right)
+
+ result = left.join(right)
+ expected = (pd.merge(left.reset_index(), right.reset_index(),
+ on=['key'], how='inner')
+ .set_index(['key', 'X', 'Y']))
+
+ tm.assert_frame_equal(result, expected)
| closes #16162
closes #6360
Allow to join on multiple levels for multi-indexed dataframe instances | https://api.github.com/repos/pandas-dev/pandas/pulls/20356 | 2018-03-15T01:04:14Z | 2018-11-15T13:08:29Z | 2018-11-15T13:08:29Z | 2018-11-15T13:13:23Z |
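For context, the feature described in this PR (joining multi-indexed DataFrames on their common index levels) can be sketched as follows. This is a minimal illustration assuming pandas >= 0.24, where this change landed; the frames and level names here are made up for the example, not taken from the PR's test fixtures.

```python
import pandas as pd

# Two frames whose MultiIndexes overlap on some, but not all, levels.
# The join keys are the common levels ('k1', 'k2'); the non-shared
# levels ('a' from the left, 'b' from the right) are carried into the
# result's index.
left = pd.DataFrame(
    {"v1": [1, 2]},
    index=pd.MultiIndex.from_tuples(
        [("A", "x", "p"), ("B", "y", "q")], names=["k1", "k2", "a"]
    ),
)
right = pd.DataFrame(
    {"v2": [10, 30]},
    index=pd.MultiIndex.from_tuples(
        [("A", "x", "r"), ("C", "z", "s")], names=["k1", "k2", "b"]
    ),
)

# Only ('A', 'x') appears in both frames, so the inner join keeps one row.
result = left.join(right, how="inner")
print(result)
```

As in the PR's own tests, the resulting index contains the common levels first, followed by the levels unique to each side.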
CI: Fix linting on master | diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index de4ea5fcfaefa..d5daece62cba8 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -47,7 +47,6 @@
from pandas.core.base import PandasObject, IndexOpsMixin
import pandas.core.common as com
-import pandas.core.base as base
from pandas.core import ops
from pandas.util._decorators import (
Appender, Substitution, cache_readonly, deprecate_kwarg)
diff --git a/pandas/core/indexes/category.py b/pandas/core/indexes/category.py
index 332c6613a230c..7b902b92d44a4 100644
--- a/pandas/core/indexes/category.py
+++ b/pandas/core/indexes/category.py
@@ -22,7 +22,6 @@
from pandas.core.indexes.base import Index, _index_shared_docs
from pandas.core import accessor
import pandas.core.common as com
-import pandas.core.base as base
import pandas.core.missing as missing
import pandas.core.indexes.base as ibase
diff --git a/pandas/core/indexes/multi.py b/pandas/core/indexes/multi.py
index 56f1f3c0bdd67..60eda70714da5 100644
--- a/pandas/core/indexes/multi.py
+++ b/pandas/core/indexes/multi.py
@@ -23,7 +23,6 @@
from pandas.core.dtypes.missing import isna, array_equivalent
from pandas.errors import PerformanceWarning, UnsortedIndexError
-import pandas.core.base as base
from pandas.util._decorators import Appender, cache_readonly, deprecate_kwarg
import pandas.core.common as com
import pandas.core.missing as missing
| https://github.com/pandas-dev/pandas/commit/92c29106d651c2690878827b07c29d88403c6f23
removed some shared-doc things, but we missed the now unused imports.
Testing locally. Merging when that finishes. | https://api.github.com/repos/pandas-dev/pandas/pulls/20352 | 2018-03-14T19:58:37Z | 2018-03-14T20:06:32Z | 2018-03-14T20:06:32Z | 2018-03-14T20:06:35Z |
CI/CLN: Fix linting | diff --git a/pandas/core/indexes/datetimes.py b/pandas/core/indexes/datetimes.py
index b82cc3980bae8..1f387dadfb9ae 100644
--- a/pandas/core/indexes/datetimes.py
+++ b/pandas/core/indexes/datetimes.py
@@ -1790,7 +1790,7 @@ def freq(self, value):
See Also
--------
quarter : Return the quarter of the date.
- is_quarter_end : Similar property for indicating the start of a quarter.
+ is_quarter_end : Similar property for indicating the quarter start.
Examples
--------
| Breaking master right now. My bad. | https://api.github.com/repos/pandas-dev/pandas/pulls/20351 | 2018-03-14T19:41:38Z | 2018-03-14T19:42:13Z | 2018-03-14T19:42:13Z | 2018-03-14T19:42:16Z |
DOC: fix typo in Dataframe.loc[] docstring | diff --git a/pandas/core/indexing.py b/pandas/core/indexing.py
index 39b8aa635f35e..9736bba78ab72 100755
--- a/pandas/core/indexing.py
+++ b/pandas/core/indexing.py
@@ -1439,8 +1439,8 @@ class _LocIndexer(_LocationIndexer):
See Also
--------
- DateFrame.at : Access a single value for a row/column label pair
- DateFrame.iloc : Access group of rows and columns by integer position(s)
+ DataFrame.at : Access a single value for a row/column label pair
+ DataFrame.iloc : Access group of rows and columns by integer position(s)
DataFrame.xs : Returns a cross-section (row(s) or column(s)) from the
Series/DataFrame.
Series.loc : Access group of values using labels
| I apologize, but I realized I had a typo in the PR merged in earlier today.
| https://api.github.com/repos/pandas-dev/pandas/pulls/20350 | 2018-03-14T19:00:06Z | 2018-03-14T20:05:57Z | 2018-03-14T20:05:57Z | 2018-03-14T20:06:08Z |
API: str.cat will align on Series | diff --git a/doc/source/text.rst b/doc/source/text.rst
index bcbf7f0ef78d7..02fa2d882f8b1 100644
--- a/doc/source/text.rst
+++ b/doc/source/text.rst
@@ -247,27 +247,87 @@ Missing values on either side will result in missing values in the result as wel
s.str.cat(t)
s.str.cat(t, na_rep='-')
-Series are *not* aligned on their index before concatenation:
+Concatenating a Series and something array-like into a Series
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. versionadded:: 0.23.0
+
+The parameter ``others`` can also be two-dimensional. In this case, the number of rows must match the length of the calling ``Series`` (or ``Index``).
.. ipython:: python
- u = pd.Series(['b', 'd', 'e', 'c'], index=[1, 3, 4, 2])
- # without alignment
+ d = pd.concat([t, s], axis=1)
+ s
+ d
+ s.str.cat(d, na_rep='-')
+
+Concatenating a Series and an indexed object into a Series, with alignment
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. versionadded:: 0.23.0
+
+For concatenation with a ``Series`` or ``DataFrame``, it is possible to align the indexes before concatenation by setting
+the ``join``-keyword.
+
+.. ipython:: python
+
+ u = pd.Series(['b', 'd', 'a', 'c'], index=[1, 3, 0, 2])
+ s
+ u
s.str.cat(u)
- # with separate alignment
- v, w = s.align(u)
- v.str.cat(w, na_rep='-')
+ s.str.cat(u, join='left')
+
+.. warning::
+
+ If the ``join`` keyword is not passed, the method :meth:`~Series.str.cat` will currently fall back to the behavior before version 0.23.0 (i.e. no alignment),
+ but a ``FutureWarning`` will be raised if any of the involved indexes differ, since this default will change to ``join='left'`` in a future version.
+
+The usual options are available for ``join`` (one of ``'left', 'outer', 'inner', 'right'``).
+In particular, alignment also means that the different lengths do not need to coincide anymore.
+
+.. ipython:: python
+
+ v = pd.Series(['z', 'a', 'b', 'd', 'e'], index=[-1, 0, 1, 3, 4])
+ s
+ v
+ s.str.cat(v, join='left', na_rep='-')
+ s.str.cat(v, join='outer', na_rep='-')
+
+The same alignment can be used when ``others`` is a ``DataFrame``:
+
+.. ipython:: python
+
+ f = d.loc[[3, 2, 1, 0], :]
+ s
+ f
+ s.str.cat(f, join='left', na_rep='-')
Concatenating a Series and many objects into a Series
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-List-likes (excluding iterators, ``dict``-views, etc.) can be arbitrarily combined in a list.
-All elements of the list must match in length to the calling ``Series`` (resp. ``Index``):
+All one-dimensional list-likes can be arbitrarily combined in a list-like container (including iterators, ``dict``-views, etc.):
+
+.. ipython:: python
+
+ s
+ u
+ s.str.cat([u, pd.Index(u.values), ['A', 'B', 'C', 'D'], map(int, u.index)], na_rep='-')
+
+All elements must match in length to the calling ``Series`` (or ``Index``), except those having an index if ``join`` is not None:
+
+.. ipython:: python
+
+ v
+ s.str.cat([u, v, ['A', 'B', 'C', 'D']], join='outer', na_rep='-')
+
+If using ``join='right'`` on a list of ``others`` that contains different indexes,
+the union of these indexes will be used as the basis for the final concatenation:
.. ipython:: python
- x = pd.Series([1, 2, 3, 4], index=['A', 'B', 'C', 'D'])
- s.str.cat([['A', 'B', 'C', 'D'], s, s.values, x.index])
+ u.loc[[3]]
+ v.loc[[-1, 0]]
+ s.str.cat([u.loc[[3]], v.loc[[-1, 0]]], join='right', na_rep='-')
Indexing with ``.str``
----------------------
diff --git a/doc/source/whatsnew/v0.23.0.txt b/doc/source/whatsnew/v0.23.0.txt
index c194d98a89789..d3746d9e0b61e 100644
--- a/doc/source/whatsnew/v0.23.0.txt
+++ b/doc/source/whatsnew/v0.23.0.txt
@@ -308,6 +308,24 @@ The :func:`DataFrame.assign` now accepts dependent keyword arguments for python
df.assign(A=df.A+1, C= lambda df: df.A* -1)
+.. _whatsnew_0230.enhancements.str_cat_align:
+
+``Series.str.cat`` has gained the ``join`` kwarg
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Previously, :meth:`Series.str.cat` did not -- in contrast to most of ``pandas`` -- align :class:`Series` on their index before concatenation (see :issue:`18657`).
+The method has now gained a keyword ``join`` to control the manner of alignment, see examples below and in :ref:`here <text.concatenate>`.
+
+In v.0.23 `join` will default to None (meaning no alignment), but this default will change to ``'left'`` in a future version of pandas.
+
+.. ipython:: python
+
+ s = pd.Series(['a', 'b', 'c', 'd'])
+ t = pd.Series(['b', 'd', 'e', 'c'], index=[1, 3, 4, 2])
+ s.str.cat(t)
+ s.str.cat(t, join='left', na_rep='-')
+
+Furthermore, :meth:`Series.str.cat` now works for ``CategoricalIndex`` as well (previously raised a ``ValueError``; see :issue:`20842`).
.. _whatsnew_0230.enhancements.astype_category:
diff --git a/pandas/core/strings.py b/pandas/core/strings.py
index c6d45ce5413ac..da4845fcdfc8e 100644
--- a/pandas/core/strings.py
+++ b/pandas/core/strings.py
@@ -65,7 +65,7 @@ def _get_array_list(arr, others):
def str_cat(arr, others=None, sep=None, na_rep=None):
"""
- Concatenate strings in the Series/Index with given separator.
+ Auxiliary function for :meth:`str.cat`
If `others` is specified, this function concatenates the Series/Index
and elements of `others` element-wise.
@@ -84,42 +84,9 @@ def str_cat(arr, others=None, sep=None, na_rep=None):
Returns
-------
- concat : Series/Index of objects or str
-
- See Also
- --------
- split : Split each string in the Series/Index
-
- Examples
- --------
- When not passing `other`, all values are concatenated into a single
- string:
-
- >>> s = pd.Series(['a', 'b', np.nan, 'c'])
- >>> s.str.cat(sep=' ')
- 'a b c'
-
- By default, NA values in the Series are ignored. Using `na_rep`, they
- can be given a representation:
-
- >>> pd.Series(['a', 'b', np.nan, 'c']).str.cat(sep=' ', na_rep='?')
- 'a b ? c'
-
- If `others` is specified, corresponding values are
- concatenated with the separator. Result will be a Series of strings.
-
- >>> pd.Series(['a', 'b', 'c']).str.cat(['A', 'B', 'C'], sep=',')
- 0 a,A
- 1 b,B
- 2 c,C
- dtype: object
-
- Also, you can pass a list of list-likes.
-
- >>> pd.Series(['a', 'b']).str.cat([['x', 'y'], ['1', '2']], sep=',')
- 0 a,x,1
- 1 b,y,2
- dtype: object
+ concat
+ ndarray containing concatenated results (if `others is not None`)
+ or str (if `others is None`)
"""
if sep is None:
sep = ''
@@ -174,7 +141,7 @@ def _length_check(others):
elif len(x) != n:
raise ValueError('All arrays must be same length')
except TypeError:
- raise ValueError("Did you mean to supply a `sep` keyword?")
+ raise ValueError('Must pass arrays containing strings to str_cat')
return n
@@ -1833,7 +1800,9 @@ class StringMethods(NoNewAttributesMixin):
def __init__(self, data):
self._validate(data)
self._is_categorical = is_categorical_dtype(data)
- self._data = data.cat.categories if self._is_categorical else data
+
+ # .values.categories works for both Series/Index
+ self._data = data.values.categories if self._is_categorical else data
# save orig to blow up categoricals to the right type
self._orig = data
self._freeze()
@@ -1859,7 +1828,11 @@ def _validate(data):
# see src/inference.pyx which can contain string values
allowed_types = ('string', 'unicode', 'mixed', 'mixed-integer')
- if data.inferred_type not in allowed_types:
+ if is_categorical_dtype(data.dtype):
+ inf_type = data.categories.inferred_type
+ else:
+ inf_type = data.inferred_type
+ if inf_type not in allowed_types:
message = ("Can only use .str accessor with string values "
"(i.e. inferred_type is 'string', 'unicode' or "
"'mixed')")
@@ -1962,11 +1935,298 @@ def cons_row(x):
cons = self._orig._constructor
return cons(result, name=name, index=index)
- @copy(str_cat)
- def cat(self, others=None, sep=None, na_rep=None):
- data = self._orig if self._is_categorical else self._data
- result = str_cat(data, others=others, sep=sep, na_rep=na_rep)
- return self._wrap_result(result, use_codes=(not self._is_categorical))
+ def _get_series_list(self, others, ignore_index=False):
+ """
+ Auxiliary function for :meth:`str.cat`. Turn potentially mixed input
+ into a list of Series (elements without an index must match the length
+ of the calling Series/Index).
+
+ Parameters
+ ----------
+ input : Series, DataFrame, np.ndarray, list-like or list-like of
+ objects that are either Series, np.ndarray (1-dim) or list-like
+ ignore_index : boolean, default False
+ Determines whether to forcefully align with index of the caller
+
+ Returns
+ -------
+ tuple : (input transformed into list of Series,
+ Boolean whether FutureWarning should be raised)
+ """
+
+ # once str.cat defaults to alignment, this function can be simplified;
+ # will not need `ignore_index` and the second boolean output anymore
+
+ from pandas import Index, Series, DataFrame, isnull
+
+ # self._orig is either Series or Index
+ idx = self._orig if isinstance(self._orig, Index) else self._orig.index
+
+ err_msg = ('others must be Series, Index, DataFrame, np.ndarray or '
+ 'list-like (either containing only strings or containing '
+ 'only objects of type Series/Index/list-like/np.ndarray)')
+
+ if isinstance(others, Series):
+ fu_wrn = not others.index.equals(idx)
+ los = [Series(others.values, index=idx)
+ if ignore_index and fu_wrn else others]
+ return (los, fu_wrn)
+ elif isinstance(others, Index):
+ fu_wrn = not others.equals(idx)
+ los = [Series(others.values,
+ index=(idx if ignore_index else others))]
+ return (los, fu_wrn)
+ elif isinstance(others, DataFrame):
+ fu_wrn = not others.index.equals(idx)
+ if ignore_index and fu_wrn:
+ # without copy, this could change "others"
+ # that was passed to str.cat
+ others = others.copy()
+ others.index = idx
+ return ([others[x] for x in others], fu_wrn)
+ elif isinstance(others, np.ndarray) and others.ndim == 2:
+ others = DataFrame(others, index=idx)
+ return ([others[x] for x in others], False)
+ elif is_list_like(others):
+ others = list(others) # ensure iterators do not get read twice etc
+ if all(is_list_like(x) for x in others):
+ los = []
+ fu_wrn = False
+ while others:
+ nxt = others.pop(0) # list-like as per check above
+ # safety for iterators and other non-persistent list-likes
+ # do not map indexed/typed objects; would lose information
+ if not isinstance(nxt, (DataFrame, Series,
+ Index, np.ndarray)):
+ nxt = list(nxt)
+
+ # known types without deep inspection
+ no_deep = ((isinstance(nxt, np.ndarray) and nxt.ndim == 1)
+ or isinstance(nxt, (Series, Index)))
+ # Nested list-likes are forbidden - elements of nxt must be
+ # strings/NaN/None. Need to robustify NaN-check against
+ # x in nxt being list-like (otherwise ambiguous boolean)
+ is_legal = ((no_deep and nxt.dtype == object)
+ or all((isinstance(x, compat.string_types)
+ or (not is_list_like(x) and isnull(x))
+ or x is None)
+ for x in nxt))
+ # DataFrame is false positive of is_legal
+ # because "x in df" returns column names
+ if not is_legal or isinstance(nxt, DataFrame):
+ raise TypeError(err_msg)
+
+ nxt, fwn = self._get_series_list(nxt,
+ ignore_index=ignore_index)
+ los = los + nxt
+ fu_wrn = fu_wrn or fwn
+ return (los, fu_wrn)
+ # test if there is a mix of list-like and non-list-like (e.g. str)
+ elif (any(is_list_like(x) for x in others)
+ and any(not is_list_like(x) for x in others)):
+ raise TypeError(err_msg)
+ else: # all elements in others are _not_ list-like
+ return ([Series(others, index=idx)], False)
+ raise TypeError(err_msg)
+
+ def cat(self, others=None, sep=None, na_rep=None, join=None):
+ """
+ Concatenate strings in the Series/Index with given separator.
+
+ If `others` is specified, this function concatenates the Series/Index
+ and elements of `others` element-wise.
+ If `others` is not passed, then all values in the Series/Index are
+ concatenated into a single string with a given `sep`.
+
+ Parameters
+ ----------
+ others : Series, Index, DataFrame, np.ndarray or list-like
+ Series, Index, DataFrame, np.ndarray (one- or two-dimensional) and
+ other list-likes of strings must have the same length as the
+ calling Series/Index, with the exception of indexed objects (i.e.
+ Series/Index/DataFrame) if `join` is not None.
+
+ If others is a list-like that contains a combination of Series,
+ np.ndarray (1-dim) or list-like, then all elements will be unpacked
+ and must satisfy the above criteria individually.
+
+ If others is None, the method returns the concatenation of all
+ strings in the calling Series/Index.
+ sep : string or None, default None
+ If None, concatenates without any separator.
+ na_rep : string or None, default None
+ Representation that is inserted for all missing values:
+
+ - If `na_rep` is None, and `others` is None, missing values in the
+ Series/Index are omitted from the result.
+ - If `na_rep` is None, and `others` is not None, a row containing a
+ missing value in any of the columns (before concatenation) will
+ have a missing value in the result.
+ join : {'left', 'right', 'outer', 'inner'}, default None
+ Determines the join-style between the calling Series/Index and any
+ Series/Index/DataFrame in `others` (objects without an index need
+ to match the length of the calling Series/Index). If None,
+ alignment is disabled, but this option will be removed in a future
+ version of pandas and replaced with a default of `'left'`. To
+ disable alignment, use `.values` on any Series/Index/DataFrame in
+ `others`.
+
+ .. versionadded:: 0.23.0
+
+ Returns
+ -------
+ concat : str if `others is None`, Series/Index of objects if `others is
+ not None`. In the latter case, the result will remain categorical
+ if the calling Series/Index is categorical.
+
+ See Also
+ --------
+ split : Split each string in the Series/Index
+
+ Examples
+ --------
+ When not passing `others`, all values are concatenated into a single
+ string:
+
+ >>> s = pd.Series(['a', 'b', np.nan, 'd'])
+ >>> s.str.cat(sep=' ')
+ 'a b d'
+
+ By default, NA values in the Series are ignored. Using `na_rep`, they
+ can be given a representation:
+
+ >>> s.str.cat(sep=' ', na_rep='?')
+ 'a b ? d'
+
+ If `others` is specified, corresponding values are concatenated with
+ the separator. The result will be a Series of strings.
+
+ >>> s.str.cat(['A', 'B', 'C', 'D'], sep=',')
+ 0 a,A
+ 1 b,B
+ 2 NaN
+ 3 d,D
+ dtype: object
+
+ Missing values will remain missing in the result, but can again be
+ represented using `na_rep`:
+
+ >>> s.str.cat(['A', 'B', 'C', 'D'], sep=',', na_rep='-')
+ 0 a,A
+ 1 b,B
+ 2 -,C
+ 3 d,D
+ dtype: object
+
+ If `sep` is not specified, the values are concatenated without
+ separation.
+
+ >>> s.str.cat(['A', 'B', 'C', 'D'], na_rep='-')
+ 0 aA
+ 1 bB
+ 2 -C
+ 3 dD
+ dtype: object
+
+ Series with different indexes can be aligned before concatenation. The
+ `join` keyword works as in other methods.
+
+ >>> t = pd.Series(['d', 'a', 'e', 'c'], index=[3, 0, 4, 2])
+ >>> s.str.cat(t, join=None, na_rep='-')
+ 0 ad
+ 1 ba
+ 2 -e
+ 3 dc
+ dtype: object
+ >>>
+ >>> s.str.cat(t, join='left', na_rep='-')
+ 0 aa
+ 1 b-
+ 2 -c
+ 3 dd
+ dtype: object
+ >>>
+ >>> s.str.cat(t, join='outer', na_rep='-')
+ 0 aa
+ 1 b-
+ 2 -c
+ 3 dd
+ 4 -e
+ dtype: object
+ >>>
+ >>> s.str.cat(t, join='inner', na_rep='-')
+ 0 aa
+ 2 -c
+ 3 dd
+ dtype: object
+ >>>
+ >>> s.str.cat(t, join='right', na_rep='-')
+ 3 dd
+ 0 aa
+ 4 -e
+ 2 -c
+ dtype: object
+
+ For more examples, see :ref:`here <text.concatenate>`.
+ """
+ from pandas import Index, Series, concat
+
+ if isinstance(others, compat.string_types):
+ raise ValueError("Did you mean to supply a `sep` keyword?")
+
+ if isinstance(self._orig, Index):
+ data = Series(self._orig, index=self._orig)
+ else: # Series
+ data = self._orig
+
+ # concatenate Series/Index with itself if no "others"
+ if others is None:
+ result = str_cat(data, others=others, sep=sep, na_rep=na_rep)
+ return self._wrap_result(result,
+ use_codes=(not self._is_categorical))
+
+ try:
+ # turn anything in "others" into lists of Series
+ others, fu_wrn = self._get_series_list(others,
+ ignore_index=(join is None))
+ except ValueError: # do not catch TypeError raised by _get_series_list
+ if join is None:
+ raise ValueError('All arrays must be same length, except '
+ 'those having an index if `join` is not None')
+ else:
+ raise ValueError('If `others` contains arrays or lists (or '
+ 'other list-likes without an index), these '
+ 'must all be of the same length as the '
+ 'calling Series/Index.')
+
+ if join is None and fu_wrn:
+ warnings.warn("A future version of pandas will perform index "
+ "alignment when `others` is a Series/Index/"
+ "DataFrame (or a list-like containing one). To "
+ "disable alignment (the behavior before v.0.23) and "
+ "silence this warning, use `.values` on any Series/"
+ "Index/DataFrame in `others`. To enable alignment "
+ "and silence this warning, pass `join='left'|"
+ "'outer'|'inner'|'right'`. The future default will "
+ "be `join='left'`.", FutureWarning, stacklevel=2)
+
+ # align if required
+ if join is not None:
+ # Need to add keys for uniqueness in case of duplicate columns
+ others = concat(others, axis=1,
+ join=(join if join == 'inner' else 'outer'),
+ keys=range(len(others)))
+ data, others = data.align(others, join=join)
+ others = [others[x] for x in others] # again list of Series
+
+ # str_cat discards index
+ res = str_cat(data, others=others, sep=sep, na_rep=na_rep)
+
+ if isinstance(self._orig, Index):
+ res = Index(res)
+ else: # Series
+ res = Series(res, index=data.index)
+ return res
@copy(str_split)
def split(self, pat=None, n=-1, expand=False):
diff --git a/pandas/tests/test_strings.py b/pandas/tests/test_strings.py
index ac8d269c75f52..1a978cbf6363f 100644
--- a/pandas/tests/test_strings.py
+++ b/pandas/tests/test_strings.py
@@ -11,14 +11,21 @@
from pandas.compat import range, u
import pandas.compat as compat
-from pandas import Index, Series, DataFrame, isna, MultiIndex, notna
+from pandas import Index, Series, DataFrame, isna, MultiIndex, notna, concat
-from pandas.util.testing import assert_series_equal
+from pandas.util.testing import assert_series_equal, assert_index_equal
import pandas.util.testing as tm
import pandas.core.strings as strings
+def assert_series_or_index_equal(left, right):
+ if isinstance(left, Series):
+ assert_series_equal(left, right)
+ else: # Index
+ assert_index_equal(left, right)
+
+
class TestStringMethods(object):
def test_api(self):
@@ -125,6 +132,330 @@ def test_cat(self):
exp = np.array(['aa', NA, 'bb', 'bd', 'cfoo', NA], dtype=np.object_)
tm.assert_almost_equal(result, exp)
+ # error for incorrect lengths
+ rgx = 'All arrays must be same length'
+ three = Series(['1', '2', '3'])
+
+ with tm.assert_raises_regex(ValueError, rgx):
+ strings.str_cat(one, three)
+
+ # error for incorrect type
+ rgx = "Must pass arrays containing strings to str_cat"
+ with tm.assert_raises_regex(ValueError, rgx):
+ strings.str_cat(one, 'three')
+
+ @pytest.mark.parametrize('series_or_index', ['series', 'index'])
+ def test_str_cat(self, series_or_index):
+ # test_cat above tests "str_cat" from ndarray to ndarray;
+ # here testing "str.cat" from Series/Index to Series/Index/ndarray/list
+ s = Index(['a', 'a', 'b', 'b', 'c', np.nan])
+ if series_or_index == 'series':
+ s = Series(s)
+ t = Index(['a', np.nan, 'b', 'd', 'foo', np.nan])
+
+ # single array
+ result = s.str.cat()
+ exp = 'aabbc'
+ assert result == exp
+
+ result = s.str.cat(na_rep='-')
+ exp = 'aabbc-'
+ assert result == exp
+
+ result = s.str.cat(sep='_', na_rep='NA')
+ exp = 'a_a_b_b_c_NA'
+ assert result == exp
+
+ # Series/Index with Index
+ exp = Index(['aa', 'a-', 'bb', 'bd', 'cfoo', '--'])
+ if series_or_index == 'series':
+ exp = Series(exp)
+ # s.index / s is different from t (as Index) -> warning
+ with tm.assert_produces_warning(expected_warning=FutureWarning):
+ # FutureWarning to switch to alignment by default
+ assert_series_or_index_equal(s.str.cat(t, na_rep='-'), exp)
+
+ # Series/Index with Series
+ t = Series(t)
+ # s as Series has same index as t -> no warning
+ # s as Index is different from t.index -> warning (tested below)
+ if series_or_index == 'series':
+ assert_series_equal(s.str.cat(t, na_rep='-'), exp)
+
+ # Series/Index with Series: warning if different indexes
+ t.index = t.index + 1
+ with tm.assert_produces_warning(expected_warning=FutureWarning):
+ # FutureWarning to switch to alignment by default
+ assert_series_or_index_equal(s.str.cat(t, na_rep='-'), exp)
+
+ # Series/Index with array
+ assert_series_or_index_equal(s.str.cat(t.values, na_rep='-'), exp)
+
+ # Series/Index with list
+ assert_series_or_index_equal(s.str.cat(list(t), na_rep='-'), exp)
+
+ # errors for incorrect lengths
+ rgx = 'All arrays must be same length, except.*'
+ z = Series(['1', '2', '3'])
+
+ with tm.assert_raises_regex(ValueError, rgx):
+ s.str.cat(z)
+
+ with tm.assert_raises_regex(ValueError, rgx):
+ s.str.cat(z.values)
+
+ with tm.assert_raises_regex(ValueError, rgx):
+ s.str.cat(list(z))
+
+ @pytest.mark.parametrize('series_or_index', ['series', 'index'])
+ def test_str_cat_raises_intuitive_error(self, series_or_index):
+ # https://github.com/pandas-dev/pandas/issues/11334
+ s = Index(['a', 'b', 'c', 'd'])
+ if series_or_index == 'series':
+ s = Series(s)
+ message = "Did you mean to supply a `sep` keyword?"
+ with tm.assert_raises_regex(ValueError, message):
+ s.str.cat('|')
+ with tm.assert_raises_regex(ValueError, message):
+ s.str.cat(' ')
+
+ @pytest.mark.parametrize('series_or_index, dtype_caller, dtype_target', [
+ ('series', 'object', 'object'),
+ ('series', 'object', 'category'),
+ ('series', 'category', 'object'),
+ ('series', 'category', 'category'),
+ ('index', 'object', 'object'),
+ ('index', 'object', 'category'),
+ ('index', 'category', 'object'),
+ ('index', 'category', 'category')
+ ])
+ def test_str_cat_categorical(self, series_or_index,
+ dtype_caller, dtype_target):
+ s = Index(['a', 'a', 'b', 'a'], dtype=dtype_caller)
+ if series_or_index == 'series':
+ s = Series(s)
+ t = Index(['b', 'a', 'b', 'c'], dtype=dtype_target)
+
+ exp = Index(['ab', 'aa', 'bb', 'ac'])
+ if series_or_index == 'series':
+ exp = Series(exp)
+
+ # Series/Index with Index
+ # s.index / s is different from t (as Index) -> warning
+ with tm.assert_produces_warning(expected_warning=FutureWarning):
+ # FutureWarning to switch to alignment by default
+ assert_series_or_index_equal(s.str.cat(t), exp)
+
+ # Series/Index with Series
+ t = Series(t)
+ # s as Series has same index as t -> no warning
+ # s as Index is different from t.index -> warning (tested below)
+ if series_or_index == 'series':
+ assert_series_equal(s.str.cat(t), exp)
+
+ # Series/Index with Series: warning if different indexes
+ t.index = t.index + 1
+ with tm.assert_produces_warning(expected_warning=FutureWarning):
+ # FutureWarning to switch to alignment by default
+ assert_series_or_index_equal(s.str.cat(t, na_rep='-'), exp)
+
+ @pytest.mark.parametrize('series_or_index', ['series', 'index'])
+ def test_str_cat_mixed_inputs(self, series_or_index):
+ s = Index(['a', 'b', 'c', 'd'])
+ if series_or_index == 'series':
+ s = Series(s)
+ t = Series(['A', 'B', 'C', 'D'])
+ d = concat([t, Series(s)], axis=1)
+
+ exp = Index(['aAa', 'bBb', 'cCc', 'dDd'])
+ if series_or_index == 'series':
+ exp = Series(exp)
+
+ # Series/Index with DataFrame
+ # s as Series has same index as d -> no warning
+ # s as Index is different from d.index -> warning (tested below)
+ if series_or_index == 'series':
+ assert_series_equal(s.str.cat(d), exp)
+
+ # Series/Index with DataFrame: warning if different indexes
+ d.index = d.index + 1
+ with tm.assert_produces_warning(expected_warning=FutureWarning):
+ # FutureWarning to switch to alignment by default
+ assert_series_or_index_equal(s.str.cat(d), exp)
+
+ # Series/Index with two-dimensional ndarray
+ assert_series_or_index_equal(s.str.cat(d.values), exp)
+
+ # Series/Index with list of Series
+ # s as Series has same index as t, s -> no warning
+ # s as Index is different from t.index -> warning (tested below)
+ if series_or_index == 'series':
+ assert_series_equal(s.str.cat([t, s]), exp)
+
+ # Series/Index with list of Series: warning if different indexes
+ tt = t.copy()
+ tt.index = tt.index + 1
+ with tm.assert_produces_warning(expected_warning=FutureWarning):
+ # FutureWarning to switch to alignment by default
+ assert_series_or_index_equal(s.str.cat([tt, s]), exp)
+
+ # Series/Index with list of list-likes
+ assert_series_or_index_equal(s.str.cat([t.values, list(s)]), exp)
+
+ # Series/Index with mixed list of Series/list-like
+ # s as Series has same index as t -> no warning
+ # s as Index is different from t.index -> warning (tested below)
+ if series_or_index == 'series':
+ assert_series_equal(s.str.cat([t, s.values]), exp)
+
+ # Series/Index with mixed list: warning if different indexes
+ with tm.assert_produces_warning(expected_warning=FutureWarning):
+ # FutureWarning to switch to alignment by default
+ assert_series_or_index_equal(s.str.cat([tt, s.values]), exp)
+
+ # Series/Index with iterator of list-likes
+ assert_series_or_index_equal(s.str.cat(iter([t.values, list(s)])), exp)
+
+ # errors for incorrect lengths
+ rgx = 'All arrays must be same length, except.*'
+ z = Series(['1', '2', '3'])
+ e = concat([z, z], axis=1)
+
+ # DataFrame
+ with tm.assert_raises_regex(ValueError, rgx):
+ s.str.cat(e)
+
+ # two-dimensional ndarray
+ with tm.assert_raises_regex(ValueError, rgx):
+ s.str.cat(e.values)
+
+ # list of Series
+ with tm.assert_raises_regex(ValueError, rgx):
+ s.str.cat([z, s])
+
+ # list of list-likes
+ with tm.assert_raises_regex(ValueError, rgx):
+ s.str.cat([z.values, list(s)])
+
+ # mixed list of Series/list-like
+ with tm.assert_raises_regex(ValueError, rgx):
+ s.str.cat([z, list(s)])
+
+ # errors for incorrect arguments in list-like
+ rgx = 'others must be Series, Index, DataFrame,.*'
+ # make sure None/NaN do not crash checks in _get_series_list
+ u = Series(['a', np.nan, 'c', None])
+
+ # mix of string and Series
+ with tm.assert_raises_regex(TypeError, rgx):
+ s.str.cat([u, 'u'])
+
+ # DataFrame in list
+ with tm.assert_raises_regex(TypeError, rgx):
+ s.str.cat([u, d])
+
+ # 2-dim ndarray in list
+ with tm.assert_raises_regex(TypeError, rgx):
+ s.str.cat([u, d.values])
+
+ # nested lists
+ with tm.assert_raises_regex(TypeError, rgx):
+ s.str.cat([u, [u, d]])
+
+ # forbidden input type, e.g. int
+ with tm.assert_raises_regex(TypeError, rgx):
+ s.str.cat(1)
+
+ @pytest.mark.parametrize('series_or_index, join', [
+ ('series', 'left'), ('series', 'outer'),
+ ('series', 'inner'), ('series', 'right'),
+ ('index', 'left'), ('index', 'outer'),
+ ('index', 'inner'), ('index', 'right')
+ ])
+ def test_str_cat_align_indexed(self, series_or_index, join):
+ # https://github.com/pandas-dev/pandas/issues/18657
+ s = Series(['a', 'b', 'c', 'd'], index=['a', 'b', 'c', 'd'])
+ t = Series(['D', 'A', 'E', 'B'], index=['d', 'a', 'e', 'b'])
+ sa, ta = s.align(t, join=join)
+ # result after manual alignment of inputs
+ exp = sa.str.cat(ta, na_rep='-')
+
+ if series_or_index == 'index':
+ s = Index(s)
+ sa = Index(sa)
+ exp = Index(exp)
+
+ assert_series_or_index_equal(s.str.cat(t, join=join, na_rep='-'), exp)
+
+ @pytest.mark.parametrize('join', ['left', 'outer', 'inner', 'right'])
+ def test_str_cat_align_mixed_inputs(self, join):
+ s = Series(['a', 'b', 'c', 'd'])
+ t = Series(['d', 'a', 'e', 'b'], index=[3, 0, 4, 1])
+ d = concat([t, t], axis=1)
+
+ exp_outer = Series(['aaa', 'bbb', 'c--', 'ddd', '-ee'])
+ sa, ta = s.align(t, join=join)
+ exp = exp_outer.loc[ta.index]
+
+ # list of Series
+ tm.assert_series_equal(s.str.cat([t, t], join=join, na_rep='-'), exp)
+
+ # DataFrame
+ tm.assert_series_equal(s.str.cat(d, join=join, na_rep='-'), exp)
+
+ # mixed list of indexed/unindexed
+ u = ['A', 'B', 'C', 'D']
+ exp_outer = Series(['aaA', 'bbB', 'c-C', 'ddD', '-e-'])
+ # u will be forced to have index of s -> use s here as placeholder
+ e = concat([t, s], axis=1, join=(join if join == 'inner' else 'outer'))
+ sa, ea = s.align(e, join=join)
+ exp = exp_outer.loc[ea.index]
+ tm.assert_series_equal(s.str.cat([t, u], join=join, na_rep='-'), exp)
+
+ # errors for incorrect lengths
+ rgx = 'If `others` contains arrays or lists.*'
+ z = ['1', '2', '3']
+
+ # unindexed object of wrong length
+ with tm.assert_raises_regex(ValueError, rgx):
+ s.str.cat(z, join=join)
+
+ # unindexed object of wrong length in list
+ with tm.assert_raises_regex(ValueError, rgx):
+ s.str.cat([t, z], join=join)
+
+ def test_str_cat_special_cases(self):
+ s = Series(['a', 'b', 'c', 'd'])
+ t = Series(['d', 'a', 'e', 'b'], index=[3, 0, 4, 1])
+
+ # iterator of elements with different types
+ exp = Series(['aaA', 'bbB', 'c-C', 'ddD', '-e-'])
+ tm.assert_series_equal(s.str.cat(iter([t, ['A', 'B', 'C', 'D']]),
+ join='outer', na_rep='-'), exp)
+
+ # right-align with different indexes in others
+ exp = Series(['aa-', 'd-d'], index=[0, 3])
+ tm.assert_series_equal(s.str.cat([t.loc[[0]], t.loc[[3]]],
+ join='right', na_rep='-'), exp)
+
+ def test_cat_on_filtered_index(self):
+ df = DataFrame(index=MultiIndex.from_product(
+ [[2011, 2012], [1, 2, 3]], names=['year', 'month']))
+
+ df = df.reset_index()
+ df = df[df.month > 1]
+
+ str_year = df.year.astype('str')
+ str_month = df.month.astype('str')
+ str_both = str_year.str.cat(str_month, sep=' ', join='left')
+
+ assert str_both.loc[1] == '2011 2'
+
+ str_multiple = str_year.str.cat([str_month, str_month],
+ sep=' ', join='left')
+
+ assert str_multiple.loc[1] == '2011 2 2'
+
def test_count(self):
values = np.array(['foo', 'foofoo', NA, 'foooofooofommmfoo'],
dtype=np.object_)
@@ -1263,7 +1594,7 @@ def test_empty_str_methods(self):
# GH7241
# (extract) on empty series
- tm.assert_series_equal(empty_str, empty.str.cat(empty))
+ tm.assert_series_equal(empty_str, empty.str.cat(empty, join='left'))
assert '' == empty.str.cat()
tm.assert_series_equal(empty_str, empty.str.title())
tm.assert_series_equal(empty_int, empty.str.count('a'))
@@ -2772,32 +3103,6 @@ def test_normalize(self):
result = s.str.normalize('NFKC')
tm.assert_index_equal(result, expected)
- def test_cat_on_filtered_index(self):
- df = DataFrame(index=MultiIndex.from_product(
- [[2011, 2012], [1, 2, 3]], names=['year', 'month']))
-
- df = df.reset_index()
- df = df[df.month > 1]
-
- str_year = df.year.astype('str')
- str_month = df.month.astype('str')
- str_both = str_year.str.cat(str_month, sep=' ')
-
- assert str_both.loc[1] == '2011 2'
-
- str_multiple = str_year.str.cat([str_month, str_month], sep=' ')
-
- assert str_multiple.loc[1] == '2011 2 2'
-
- def test_str_cat_raises_intuitive_error(self):
- # https://github.com/pandas-dev/pandas/issues/11334
- s = Series(['a', 'b', 'c', 'd'])
- message = "Did you mean to supply a `sep` keyword?"
- with tm.assert_raises_regex(ValueError, message):
- s.str.cat('|')
- with tm.assert_raises_regex(ValueError, message):
- s.str.cat(' ')
-
def test_index_str_accessor_visibility(self):
from pandas.core.strings import StringMethods
@@ -2857,9 +3162,9 @@ def test_method_on_bytes(self):
lhs = Series(np.array(list('abc'), 'S1').astype(object))
rhs = Series(np.array(list('def'), 'S1').astype(object))
if compat.PY3:
- pytest.raises(TypeError, lhs.str.cat, rhs)
+ pytest.raises(TypeError, lhs.str.cat, rhs, join='left')
else:
- result = lhs.str.cat(rhs)
+ result = lhs.str.cat(rhs, join='left')
expected = Series(np.array(
['ad', 'be', 'cf'], 'S2').astype(object))
tm.assert_series_equal(result, expected)
| Fixes issue #18657, fixed existing tests, added new test; all pass.
After I pushed everything and thought about it some more, I realised that one may argue about the default alignment-behavior, and whether it should be changed to `join=outer`. The behavior as implemented is compatible with the current requirement that everything be of the same length. To me, it is more intuitive that the concatenated `other` is added to the current series without enlarging it, but I can also see the argument why that restriction is unnecessary.
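To make the trade-off concrete, here is a minimal sketch of the two alignment behaviors (assuming pandas 0.23+ where `str.cat` accepts the `join` keyword; the variable names are just illustrative):

```python
import pandas as pd

s = pd.Series(['a', 'b', 'c'])                   # index 0, 1, 2
t = pd.Series(['x', 'y', 'z'], index=[1, 2, 3])  # overlapping but shifted index

# join='left' keeps the caller's index; position 0 has no match in t
left = s.str.cat(t, join='left', na_rep='-')     # ['a-', 'bx', 'cy']

# join='outer' enlarges the result to the union of both indexes
outer = s.str.cat(t, join='outer', na_rep='-')   # ['a-', 'bx', 'cy', '-z']
```

With `join='left'` the concatenated `others` is added without enlarging the calling Series, while `join='outer'` grows the result to the union of both indexes.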
PS. This is my first PR, tried to follow all the rules. Sorry if I overlooked something.
Edit: Also fixes #20842 | https://api.github.com/repos/pandas-dev/pandas/pulls/20347 | 2018-03-14T14:44:38Z | 2018-05-02T11:13:36Z | 2018-05-02T11:13:36Z | 2018-05-02T11:32:03Z |
DOC: ecosystem: Vaex, Pandas on Ray, alphabetization, pandas-datareader | diff --git a/doc/source/ecosystem.rst b/doc/source/ecosystem.rst
index 4e15f9069de67..82ca3821fc2ed 100644
--- a/doc/source/ecosystem.rst
+++ b/doc/source/ecosystem.rst
@@ -12,10 +12,13 @@ build powerful and more focused data tools.
The creation of libraries that complement pandas' functionality also allows pandas
development to remain focused around it's original requirements.
-This is an in-exhaustive list of projects that build on pandas in order to provide
-tools in the PyData space.
+This is an inexhaustive list of projects that build on pandas in order to provide
+tools in the PyData space. For a list of projects that depend on pandas,
+see the
+`libraries.io usage page for pandas <https://libraries.io/pypi/pandas/usage>`_
+or `search pypi for pandas <https://pypi.org/search/?q=pandas>`_.
-We'd like to make it easier for users to find these project, if you know of other
+We'd like to make it easier for users to find these projects. If you know of other
substantial projects that you feel should be on this list, please let us know.
@@ -48,6 +51,17 @@ Featuretools is a Python library for automated feature engineering built on top
Visualization
-------------
+`Altair <https://altair-viz.github.io/>`__
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Altair is a declarative statistical visualization library for Python.
+With Altair, you can spend more time understanding your data and its
+meaning. Altair's API is simple, friendly and consistent and built on
+top of the powerful Vega-Lite JSON specification. This elegant
+simplicity produces beautiful and effective visualizations with a
+minimal amount of code. Altair works with Pandas DataFrames.
+
+
`Bokeh <http://bokeh.pydata.org>`__
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -68,31 +82,22 @@ also goes beyond matplotlib and pandas with the option to perform statistical
estimation while plotting, aggregating across observations and visualizing the
fit of statistical models to emphasize patterns in a dataset.
-`yhat/ggplot <https://github.com/yhat/ggplot>`__
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+`yhat/ggpy <https://github.com/yhat/ggpy>`__
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Hadley Wickham's `ggplot2 <http://ggplot2.org/>`__ is a foundational exploratory visualization package for the R language.
Based on `"The Grammar of Graphics" <http://www.cs.uic.edu/~wilkinson/TheGrammarOfGraphics/GOG.html>`__ it
provides a powerful, declarative and extremely general way to generate bespoke plots of any kind of data.
It's really quite incredible. Various implementations to other languages are available,
but a faithful implementation for Python users has long been missing. Although still young
-(as of Jan-2014), the `yhat/ggplot <https://github.com/yhat/ggplot>`__ project has been
+(as of Jan-2014), the `yhat/ggpy <https://github.com/yhat/ggpy>`__ project has been
progressing quickly in that direction.
-`Vincent <https://github.com/wrobstory/vincent>`__
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-The `Vincent <https://github.com/wrobstory/vincent>`__ project leverages `Vega <https://github.com/trifacta/vega>`__
-(that in turn, leverages `d3 <http://d3js.org/>`__) to create
-plots. Although functional, as of Summer 2016 the Vincent project has not been updated
-in over two years and is `unlikely to receive further updates <https://github.com/wrobstory/vincent#2015-08-12-update>`__.
-
`IPython Vega <https://github.com/vega/ipyvega>`__
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-Like Vincent, the `IPython Vega <https://github.com/vega/ipyvega>`__ project leverages `Vega
-<https://github.com/trifacta/vega>`__ to create plots, but primarily
-targets the IPython Notebook environment.
+`IPython Vega <https://github.com/vega/ipyvega>`__ leverages `Vega
+<https://github.com/trifacta/vega>`__ to create plots within Jupyter Notebook.
`Plotly <https://plot.ly/python>`__
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -115,20 +120,28 @@ IDE
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
IPython is an interactive command shell and distributed computing
-environment.
-IPython Notebook is a web application for creating IPython notebooks.
-An IPython notebook is a JSON document containing an ordered list
+environment. IPython tab completion works with Pandas methods and also
+attributes like DataFrame columns.
+
+`Jupyter Notebook / Jupyter Lab <https://jupyter.org>`__
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Jupyter Notebook is a web application for creating Jupyter notebooks.
+A Jupyter notebook is a JSON document containing an ordered list
of input/output cells which can contain code, text, mathematics, plots
and rich media.
-IPython notebooks can be converted to a number of open standard output formats
+Jupyter notebooks can be converted to a number of open standard output formats
(HTML, HTML presentation slides, LaTeX, PDF, ReStructuredText, Markdown,
-Python) through 'Download As' in the web interface and ``ipython nbconvert``
+Python) through 'Download As' in the web interface and ``jupyter nbconvert``
in a shell.
-Pandas DataFrames implement ``_repr_html_`` methods
-which are utilized by IPython Notebook for displaying
-(abbreviated) HTML tables. (Note: HTML tables may or may not be
-compatible with non-HTML IPython output formats.)
+Pandas DataFrames implement ``_repr_html_`` and ``_repr_latex_`` methods
+which are utilized by Jupyter Notebook for displaying
+(abbreviated) HTML or LaTeX tables. LaTeX output is properly escaped.
+(Note: HTML tables may or may not be
+compatible with non-HTML Jupyter output formats.)
+
+See :ref:`Options and Settings <options>` and :ref:`Available Options <options.available>`
+for pandas ``display.`` settings.
`quantopian/qgrid <https://github.com/quantopian/qgrid>`__
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -144,11 +157,10 @@ editing, testing, debugging, and introspection features.
Spyder can now introspect and display Pandas DataFrames and show
both "column wise min/max and global min/max coloring."
-
.. _ecosystem.api:
API
------
+---
`pandas-datareader <https://github.com/pydata/pandas-datareader>`__
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -159,14 +171,22 @@ See more in the `pandas-datareader docs <https://pandas-datareader.readthedocs.
The following data feeds are available:
-* Yahoo! Finance
-* Google Finance
-* FRED
-* Fama/French
-* World Bank
-* OECD
-* Eurostat
-* EDGAR Index
+ * Google Finance
+ * Tiingo
+ * Morningstar
+ * IEX
+ * Robinhood
+ * Enigma
+ * Quandl
+ * FRED
+ * Fama/French
+ * World Bank
+ * OECD
+ * Eurostat
+ * TSP Fund Data
+ * Nasdaq Trader Symbol Definitions
+ * Stooq Index Data
+ * MOEX Data
`quandl/Python <https://github.com/quandl/Python>`__
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -227,25 +247,24 @@ dimensional arrays, rather than the tabular data for which pandas excels.
Out-of-core
-------------
+`Blaze <http://blaze.pydata.org/>`__
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Blaze provides a standard API for doing computations with various
+in-memory and on-disk backends: NumPy, Pandas, SQLAlchemy, MongoDB, PyTables,
+PySpark.
+
`Dask <https://dask.readthedocs.io/en/latest/>`__
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Dask is a flexible parallel computing library for analytics. Dask
provides a familiar ``DataFrame`` interface for out-of-core, parallel and distributed computing.
`Dask-ML <https://dask-ml.readthedocs.io/en/latest/>`__
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Dask-ML enables parallel and distributed machine learning using Dask alongside existing machine learning libraries like Scikit-Learn, XGBoost, and TensorFlow.
-
-`Blaze <http://blaze.pydata.org/>`__
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-Blaze provides a standard API for doing computations with various
-in-memory and on-disk backends: NumPy, Pandas, SQLAlchemy, MongoDB, PyTables,
-PySpark.
-
`Odo <http://odo.pydata.org>`__
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -255,6 +274,26 @@ PyTables, h5py, and pymongo to move data between non pandas formats. Its graph
based approach is also extensible by end users for custom formats that may be
too specific for the core of odo.
+`Ray <https://ray.readthedocs.io/en/latest/pandas_on_ray.html>`__
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Pandas on Ray is an early stage DataFrame library that wraps Pandas and
+transparently distributes the data and computation. The user does not need to
+know how many cores their system has, nor do they need to specify how to
+distribute the data. In fact, users can continue using their previous Pandas
+notebooks while experiencing a considerable speedup from Pandas on Ray, even
+on a single machine. Only a modification of the import statement is needed, as
+we demonstrate below. Once you’ve changed your import statement, you’re ready
+to use Pandas on Ray just like you would Pandas.
+
+.. code:: python
+
+ # import pandas as pd
+ import ray.dataframe as pd
+
+
+`Vaex <https://docs.vaex.io/>`__
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Increasingly, packages are being built on top of pandas to address specific
+needs in data preparation, analysis and visualization. Vaex is a Python
+library for Out-of-Core DataFrames (similar to Pandas), to visualize and
+explore big tabular datasets. It can calculate statistics such as mean, sum,
+count, standard deviation etc., on an N-dimensional grid up to a billion
+(10\ :sup:`9`) objects/rows per second. Visualization is done using
+histograms, density plots and 3d volume rendering, allowing interactive
+exploration of big data. Vaex uses memory mapping, zero memory copy policy
+and lazy computations for best performance (no memory wasted).
+
+ * vaex.from_pandas
+ * vaex.to_pandas_df
+
+
.. _ecosystem.data_validation:
Data validation
| closes #20334 - Pandas on Ray
closes #20335 - Vaex | https://api.github.com/repos/pandas-dev/pandas/pulls/20345 | 2018-03-14T14:32:48Z | 2018-07-08T15:15:02Z | 2018-07-08T15:15:01Z | 2018-07-08T15:15:10Z |
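The Pandas on Ray section above says only the import statement needs to change. One hedged way to sketch that swap without requiring Ray to be installed is to probe for the package first; the function name below is hypothetical and uses only the standard library:

```python
import importlib.util


def preferred_pandas_backend():
    """Return the module name to import as ``pd``, preferring Ray.

    Mirrors the single-line import swap the Pandas on Ray docs
    describe (``import ray.dataframe as pd``), falling back to
    plain pandas when the ``ray`` package is not importable.
    """
    # find_spec returns None (rather than raising) when the
    # top-level package is absent, so this is safe everywhere.
    if importlib.util.find_spec("ray") is not None:
        return "ray.dataframe"
    return "pandas"
```

A notebook could then do `pd = importlib.import_module(preferred_pandas_backend())` and keep the rest of its code unchanged, which is the point the PR's doc addition makes.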
DOC: update the pd.Index.asof docstring | diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index f69777af31c9c..1cc3c288da479 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -2378,12 +2378,59 @@ def identical(self, other):
def asof(self, label):
"""
- For a sorted index, return the most recent label up to and including
- the passed label. Return NaN if not found.
+ Return the label from the index, or, if not present, the previous one.
- See also
+ Assuming that the index is sorted, return the passed index label if it
+ is in the index, or return the previous index label if the passed one
+ is not in the index.
+
+ Parameters
+ ----------
+ label : object
+ The label up to which the method returns the latest index label.
+
+ Returns
+ -------
+ object
+ The passed label if it is in the index. The previous label if the
+ passed label is not in the sorted index or `NaN` if there is no
+ such label.
+
+ See Also
--------
- get_loc : asof is a thin wrapper around get_loc with method='pad'
+ Series.asof : Return the latest value in a Series up to the
+ passed index.
+ merge_asof : Perform an asof merge (similar to left join but it
+ matches on nearest key rather than equal key).
+ Index.get_loc : `asof` is a thin wrapper around `get_loc`
+ with method='pad'.
+
+ Examples
+ --------
+ `Index.asof` returns the latest index label up to the passed label.
+
+ >>> idx = pd.Index(['2013-12-31', '2014-01-02', '2014-01-03'])
+ >>> idx.asof('2014-01-01')
+ '2013-12-31'
+
+ If the label is in the index, the method returns the passed label.
+
+ >>> idx.asof('2014-01-02')
+ '2014-01-02'
+
+ If all of the labels in the index are later than the passed label,
+ NaN is returned.
+
+ >>> idx.asof('1999-01-02')
+ nan
+
+ If the index is not sorted, an error is raised.
+
+ >>> idx_not_sorted = pd.Index(['2013-12-31', '2015-01-02',
+ ... '2014-01-03'])
+ >>> idx_not_sorted.asof('2013-12-31')
+ Traceback (most recent call last):
+ ValueError: index must be monotonic increasing or decreasing
"""
try:
loc = self.get_loc(label, method='pad')
| Checklist for the pandas documentation sprint (ignore this if you are doing
an unrelated PR):
- [x] PR title is "DOC: update the <your-function-or-method> docstring"
- [x] The validation script passes: `scripts/validate_docstrings.py <your-function-or-method>`
- [x] The PEP8 style check passes: `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] The html version looks good: `python doc/make.py --single <your-function-or-method>`
- [x] It has been proofread on language by another sprint participant
Please include the output of the validation script below between the "```" ticks:
```
################################################################################
######################## Docstring (pandas.Index.asof) ########################
################################################################################
Return the latest index label up to and including the passed label.
For sorted indexes, return the index label that is the latest among
the labels that are not later than the passed index label.
Parameters
----------
label : object
The label up to which the method returns the latest index label.
Returns
-------
object : The index label that is the latest as of the passed label,
or NaN if there is no such label.
See also
--------
Index.get_loc : `asof` is a thin wrapper around `get_loc`
with method='pad'.
Examples
--------
13 is the latest index label up to 14.
>>> pd.Index([13, 18, 20]).asof(14)
13
If the label is in the index, the method returns the passed label.
>>> pd.Index([13, 18, 20]).asof(18)
18
If all of the labels in the index are later than the passed label,
NaN is returned.
>>> pd.Index([13, 18, 20]).asof(1)
nan
If the index is not sorted, an error is raised.
>>> pd.Index([13, 20, 18]).asof(18)
Traceback (most recent call last):
ValueError: index must be monotonic increasing or decreasing
################################################################################
################################## Validation ##################################
################################################################################
Docstring for "pandas.Index.asof" correct. :)
```
If the validation script still gives errors, but you think there is a good reason
to deviate in this case (and there are certainly such cases), please state this
explicitly.
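The docstring notes that `asof` is a thin wrapper around `get_loc` with `method='pad'`. A minimal stand-alone sketch of that "pad" lookup, using only the standard library (`asof_sketch` is a hypothetical name, and for brevity it handles only monotonic increasing indexes, whereas the real method also accepts decreasing ones):

```python
import bisect


def asof_sketch(index, label):
    """Sketch of Index.asof semantics on a sorted sequence.

    Return `label` if it is in `index`, otherwise the closest
    earlier entry, otherwise NaN. Pandas itself defers to
    Index.get_loc(label, method='pad').
    """
    if list(index) != sorted(index):
        raise ValueError("index must be monotonic increasing or decreasing")
    # bisect_right returns the insertion point after any equal
    # element, so pos - 1 is the latest entry <= label ("pad").
    pos = bisect.bisect_right(list(index), label)
    if pos == 0:
        return float("nan")  # every entry is later than `label`
    return index[pos - 1]
```

On the docstring's own data, `asof_sketch(['2013-12-31', '2014-01-02', '2014-01-03'], '2014-01-01')` gives `'2013-12-31'`, matching the first example.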
| https://api.github.com/repos/pandas-dev/pandas/pulls/20344 | 2018-03-14T14:08:33Z | 2018-07-07T20:41:33Z | 2018-07-07T20:41:33Z | 2018-07-07T20:42:18Z |
CI: builds docs on every commit | diff --git a/ci/build_docs.sh b/ci/build_docs.sh
index a038304fe0f7a..5de9e158bcdb6 100755
--- a/ci/build_docs.sh
+++ b/ci/build_docs.sh
@@ -10,11 +10,11 @@ echo "inside $0"
git show --pretty="format:" --name-only HEAD~5.. --first-parent | grep -P "rst|txt|doc"
-if [ "$?" != "0" ]; then
- echo "Skipping doc build, none were modified"
- # nope, skip docs build
- exit 0
-fi
+# if [ "$?" != "0" ]; then
+# echo "Skipping doc build, none were modified"
+# # nope, skip docs build
+# exit 0
+# fi
if [ "$DOC" ]; then
| As I refer to the dev docs on PRs, let's build them for each commit temporarily.
Will clean it up when I update https://github.com/pandas-dev/pandas/pull/19952 | https://api.github.com/repos/pandas-dev/pandas/pulls/20343 | 2018-03-14T12:48:07Z | 2018-03-14T12:48:30Z | 2018-03-14T12:48:30Z | 2018-04-09T13:15:07Z |
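The check that this PR comments out skips the doc build by testing grep's exit status (`$?` is 0 when any recently changed filename matches rst/txt/doc). A stand-alone sketch of that logic with a hard-coded file list instead of the real `git show` call (and `-E` in place of the script's `-P`, which is enough for this simple alternation):

```shell
# grep exits 0 when any changed filename matches, non-zero otherwise;
# ci/build_docs.sh uses that status to decide whether to skip the build.
changed_files='doc/source/io.rst
pandas/core/frame.py'
if printf '%s\n' "$changed_files" | grep -qE 'rst|txt|doc'; then
    echo "doc files modified - building docs"
else
    echo "Skipping doc build, none were modified"
fi
```

With the check commented out, the script falls through to the build unconditionally, which is the temporary behavior this PR wants.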
Preliminary format refactor | diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index eaab17513aaf4..687705640a467 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -92,8 +92,8 @@
import pandas.core.common as com
import pandas.core.nanops as nanops
import pandas.core.ops as ops
-import pandas.io.formats.format as fmt
import pandas.io.formats.console as console
+import pandas.io.formats.format as fmt
from pandas.io.formats.printing import pprint_thing
import pandas.plotting._core as gfx
@@ -1695,18 +1695,19 @@ def to_csv(self, path_or_buf=None, sep=",", na_rep='', float_format=None,
else:
tupleize_cols = False
- formatter = fmt.CSVFormatter(self, path_or_buf,
- line_terminator=line_terminator, sep=sep,
- encoding=encoding,
- compression=compression, quoting=quoting,
- na_rep=na_rep, float_format=float_format,
- cols=columns, header=header, index=index,
- index_label=index_label, mode=mode,
- chunksize=chunksize, quotechar=quotechar,
- tupleize_cols=tupleize_cols,
- date_format=date_format,
- doublequote=doublequote,
- escapechar=escapechar, decimal=decimal)
+ from pandas.io.formats.csvs import CSVFormatter
+ formatter = CSVFormatter(self, path_or_buf,
+ line_terminator=line_terminator, sep=sep,
+ encoding=encoding,
+ compression=compression, quoting=quoting,
+ na_rep=na_rep, float_format=float_format,
+ cols=columns, header=header, index=index,
+ index_label=index_label, mode=mode,
+ chunksize=chunksize, quotechar=quotechar,
+ tupleize_cols=tupleize_cols,
+ date_format=date_format,
+ doublequote=doublequote,
+ escapechar=escapechar, decimal=decimal)
formatter.save()
if path_or_buf is None:
@@ -1997,7 +1998,6 @@ def info(self, verbose=None, buf=None, max_cols=None, memory_usage=None,
- If False, never show counts.
"""
- from pandas.io.formats.format import _put_lines
if buf is None: # pragma: no cover
buf = sys.stdout
@@ -2009,7 +2009,7 @@ def info(self, verbose=None, buf=None, max_cols=None, memory_usage=None,
if len(self.columns) == 0:
lines.append('Empty %s' % type(self).__name__)
- _put_lines(buf, lines)
+ fmt.buffer_put_lines(buf, lines)
return
cols = self.columns
@@ -2096,7 +2096,7 @@ def _sizeof_fmt(num, size_qualifier):
mem_usage = self.memory_usage(index=True, deep=deep).sum()
lines.append("memory usage: %s\n" %
_sizeof_fmt(mem_usage, size_qualifier))
- _put_lines(buf, lines)
+ fmt.buffer_put_lines(buf, lines)
def memory_usage(self, index=True, deep=False):
"""Memory usage of DataFrame columns.
diff --git a/pandas/io/formats/common.py b/pandas/io/formats/common.py
deleted file mode 100644
index 5cfdf58403cc0..0000000000000
--- a/pandas/io/formats/common.py
+++ /dev/null
@@ -1,44 +0,0 @@
-# -*- coding: utf-8 -*-
-"""
-Common helper methods used in different submodules of pandas.io.formats
-"""
-
-
-def get_level_lengths(levels, sentinel=''):
- """For each index in each level the function returns lengths of indexes.
-
- Parameters
- ----------
- levels : list of lists
- List of values on for level.
- sentinel : string, optional
- Value which states that no new index starts on there.
-
- Returns
- ----------
- Returns list of maps. For each level returns map of indexes (key is index
- in row and value is length of index).
- """
- if len(levels) == 0:
- return []
-
- control = [True for x in levels[0]]
-
- result = []
- for level in levels:
- last_index = 0
-
- lengths = {}
- for i, key in enumerate(level):
- if control[i] and key == sentinel:
- pass
- else:
- control[i] = False
- lengths[last_index] = i - last_index
- last_index = i
-
- lengths[last_index] = len(level) - last_index
-
- result.append(lengths)
-
- return result
diff --git a/pandas/io/formats/csvs.py b/pandas/io/formats/csvs.py
new file mode 100644
index 0000000000000..4e2021bcba72b
--- /dev/null
+++ b/pandas/io/formats/csvs.py
@@ -0,0 +1,280 @@
+# -*- coding: utf-8 -*-
+"""
+Module for formatting output data into CSV files.
+"""
+
+from __future__ import print_function
+
+import csv as csvlib
+import numpy as np
+
+from pandas.core.dtypes.missing import notna
+from pandas.core.index import Index, MultiIndex
+from pandas import compat
+from pandas.compat import (StringIO, range, zip)
+
+from pandas.io.common import (_get_handle, UnicodeWriter, _expand_user,
+ _stringify_path)
+from pandas._libs import writers as libwriters
+from pandas.core.indexes.datetimes import DatetimeIndex
+from pandas.core.indexes.period import PeriodIndex
+
+
+class CSVFormatter(object):
+
+ def __init__(self, obj, path_or_buf=None, sep=",", na_rep='',
+ float_format=None, cols=None, header=True, index=True,
+ index_label=None, mode='w', nanRep=None, encoding=None,
+ compression=None, quoting=None, line_terminator='\n',
+ chunksize=None, tupleize_cols=False, quotechar='"',
+ date_format=None, doublequote=True, escapechar=None,
+ decimal='.'):
+
+ self.obj = obj
+
+ if path_or_buf is None:
+ path_or_buf = StringIO()
+
+ self.path_or_buf = _expand_user(_stringify_path(path_or_buf))
+ self.sep = sep
+ self.na_rep = na_rep
+ self.float_format = float_format
+ self.decimal = decimal
+
+ self.header = header
+ self.index = index
+ self.index_label = index_label
+ self.mode = mode
+ self.encoding = encoding
+ self.compression = compression
+
+ if quoting is None:
+ quoting = csvlib.QUOTE_MINIMAL
+ self.quoting = quoting
+
+ if quoting == csvlib.QUOTE_NONE:
+ # prevents crash in _csv
+ quotechar = None
+ self.quotechar = quotechar
+
+ self.doublequote = doublequote
+ self.escapechar = escapechar
+
+ self.line_terminator = line_terminator
+
+ self.date_format = date_format
+
+ self.tupleize_cols = tupleize_cols
+ self.has_mi_columns = (isinstance(obj.columns, MultiIndex) and
+ not self.tupleize_cols)
+
+ # validate mi options
+ if self.has_mi_columns:
+ if cols is not None:
+ raise TypeError("cannot specify cols with a MultiIndex on the "
+ "columns")
+
+ if cols is not None:
+ if isinstance(cols, Index):
+ cols = cols.to_native_types(na_rep=na_rep,
+ float_format=float_format,
+ date_format=date_format,
+ quoting=self.quoting)
+ else:
+ cols = list(cols)
+ self.obj = self.obj.loc[:, cols]
+
+ # update columns to include possible multiplicity of dupes
+ # and make sure cols is just a list of labels
+ cols = self.obj.columns
+ if isinstance(cols, Index):
+ cols = cols.to_native_types(na_rep=na_rep,
+ float_format=float_format,
+ date_format=date_format,
+ quoting=self.quoting)
+ else:
+ cols = list(cols)
+
+ # save it
+ self.cols = cols
+
+ # preallocate data 2d list
+ self.blocks = self.obj._data.blocks
+ ncols = sum(b.shape[0] for b in self.blocks)
+ self.data = [None] * ncols
+
+ if chunksize is None:
+ chunksize = (100000 // (len(self.cols) or 1)) or 1
+ self.chunksize = int(chunksize)
+
+ self.data_index = obj.index
+ if (isinstance(self.data_index, (DatetimeIndex, PeriodIndex)) and
+ date_format is not None):
+ self.data_index = Index([x.strftime(date_format) if notna(x) else
+ '' for x in self.data_index])
+
+ self.nlevels = getattr(self.data_index, 'nlevels', 1)
+ if not index:
+ self.nlevels = 0
+
+ def save(self):
+ # create the writer & save
+ if self.encoding is None:
+ if compat.PY2:
+ encoding = 'ascii'
+ else:
+ encoding = 'utf-8'
+ else:
+ encoding = self.encoding
+
+ if hasattr(self.path_or_buf, 'write'):
+ f = self.path_or_buf
+ close = False
+ else:
+ f, handles = _get_handle(self.path_or_buf, self.mode,
+ encoding=encoding,
+ compression=self.compression)
+ close = True
+
+ try:
+ writer_kwargs = dict(lineterminator=self.line_terminator,
+ delimiter=self.sep, quoting=self.quoting,
+ doublequote=self.doublequote,
+ escapechar=self.escapechar,
+ quotechar=self.quotechar)
+ if encoding == 'ascii':
+ self.writer = csvlib.writer(f, **writer_kwargs)
+ else:
+ writer_kwargs['encoding'] = encoding
+ self.writer = UnicodeWriter(f, **writer_kwargs)
+
+ self._save()
+
+ finally:
+ if close:
+ f.close()
+
+ def _save_header(self):
+
+ writer = self.writer
+ obj = self.obj
+ index_label = self.index_label
+ cols = self.cols
+ has_mi_columns = self.has_mi_columns
+ header = self.header
+ encoded_labels = []
+
+ has_aliases = isinstance(header, (tuple, list, np.ndarray, Index))
+ if not (has_aliases or self.header):
+ return
+ if has_aliases:
+ if len(header) != len(cols):
+ raise ValueError(('Writing {ncols} cols but got {nalias} '
+ 'aliases'.format(ncols=len(cols),
+ nalias=len(header))))
+ else:
+ write_cols = header
+ else:
+ write_cols = cols
+
+ if self.index:
+ # should write something for index label
+ if index_label is not False:
+ if index_label is None:
+ if isinstance(obj.index, MultiIndex):
+ index_label = []
+ for i, name in enumerate(obj.index.names):
+ if name is None:
+ name = ''
+ index_label.append(name)
+ else:
+ index_label = obj.index.name
+ if index_label is None:
+ index_label = ['']
+ else:
+ index_label = [index_label]
+ elif not isinstance(index_label,
+ (list, tuple, np.ndarray, Index)):
+ # given a string for a DF with Index
+ index_label = [index_label]
+
+ encoded_labels = list(index_label)
+ else:
+ encoded_labels = []
+
+ if not has_mi_columns or has_aliases:
+ encoded_labels += list(write_cols)
+ writer.writerow(encoded_labels)
+ else:
+ # write out the mi
+ columns = obj.columns
+
+ # write out the names for each level, then ALL of the values for
+ # each level
+ for i in range(columns.nlevels):
+
+ # we need at least 1 index column to write our col names
+ col_line = []
+ if self.index:
+
+ # name is the first column
+ col_line.append(columns.names[i])
+
+ if isinstance(index_label, list) and len(index_label) > 1:
+ col_line.extend([''] * (len(index_label) - 1))
+
+ col_line.extend(columns._get_level_values(i))
+
+ writer.writerow(col_line)
+
+ # Write out the index line if it's not empty.
+ # Otherwise, we will print out an extraneous
+ # blank line between the mi and the data rows.
+ if encoded_labels and set(encoded_labels) != set(['']):
+ encoded_labels.extend([''] * len(columns))
+ writer.writerow(encoded_labels)
+
+ def _save(self):
+
+ self._save_header()
+
+ nrows = len(self.data_index)
+
+ # write in chunksize bites
+ chunksize = self.chunksize
+ chunks = int(nrows / chunksize) + 1
+
+ for i in range(chunks):
+ start_i = i * chunksize
+ end_i = min((i + 1) * chunksize, nrows)
+ if start_i >= end_i:
+ break
+
+ self._save_chunk(start_i, end_i)
+
+ def _save_chunk(self, start_i, end_i):
+
+ data_index = self.data_index
+
+ # create the data for a chunk
+ slicer = slice(start_i, end_i)
+ for i in range(len(self.blocks)):
+ b = self.blocks[i]
+ d = b.to_native_types(slicer=slicer, na_rep=self.na_rep,
+ float_format=self.float_format,
+ decimal=self.decimal,
+ date_format=self.date_format,
+ quoting=self.quoting)
+
+ for col_loc, col in zip(b.mgr_locs, d):
+ # self.data is a preallocated list
+ self.data[col_loc] = col
+
+ ix = data_index.to_native_types(slicer=slicer, na_rep=self.na_rep,
+ float_format=self.float_format,
+ decimal=self.decimal,
+ date_format=self.date_format,
+ quoting=self.quoting)
+
+ libwriters.write_csv_rows(self.data, ix, self.nlevels,
+ self.cols, self.writer)
diff --git a/pandas/io/formats/excel.py b/pandas/io/formats/excel.py
index 81e8881f3f06b..76ffd41f93090 100644
--- a/pandas/io/formats/excel.py
+++ b/pandas/io/formats/excel.py
@@ -14,7 +14,7 @@
from pandas.core.dtypes.common import is_float, is_scalar
from pandas.core.dtypes import missing
from pandas import Index, MultiIndex, PeriodIndex
-from pandas.io.formats.common import get_level_lengths
+from pandas.io.formats.format import get_level_lengths
class ExcelCell(object):
diff --git a/pandas/io/formats/format.py b/pandas/io/formats/format.py
index 50b4f11634b78..1731dbb3ac68d 100644
--- a/pandas/io/formats/format.py
+++ b/pandas/io/formats/format.py
@@ -5,11 +5,8 @@
"""
from __future__ import print_function
-from distutils.version import LooseVersion
# pylint: disable=W0141
-from textwrap import dedent
-
from pandas.core.dtypes.missing import isna, notna
from pandas.core.dtypes.common import (
is_categorical_dtype,
@@ -30,15 +27,14 @@
import pandas.core.common as com
from pandas.core.index import Index, MultiIndex, _ensure_index
from pandas import compat
-from pandas.compat import (StringIO, lzip, range, map, zip, u,
- OrderedDict, unichr)
+from pandas.compat import (StringIO, lzip, map, zip, u)
+
from pandas.io.formats.terminal import get_terminal_size
from pandas.core.config import get_option, set_option
-from pandas.io.common import (_get_handle, UnicodeWriter, _expand_user,
- _stringify_path)
+from pandas.io.common import (_expand_user, _stringify_path)
from pandas.io.formats.printing import adjoin, justify, pprint_thing
-from pandas.io.formats.common import get_level_lengths
-from pandas._libs import lib, writers as libwriters
+from pandas._libs import lib
+
from pandas._libs.tslib import (iNaT, Timestamp, Timedelta,
format_array_from_datetime)
from pandas.core.indexes.datetimes import DatetimeIndex
@@ -46,7 +42,6 @@
import pandas as pd
import numpy as np
-import csv
from functools import partial
common_docstring = """
@@ -354,6 +349,7 @@ def _get_adjustment():
class TableFormatter(object):
+
is_truncated = False
show_dimensions = None
@@ -698,6 +694,7 @@ def to_latex(self, column_format=None, longtable=False, encoding=None,
Render a DataFrame to a LaTeX tabular/longtable environment output.
"""
+ from pandas.io.formats.latex import LatexFormatter
latex_renderer = LatexFormatter(self, column_format=column_format,
longtable=longtable,
multicolumn=multicolumn,
@@ -742,6 +739,7 @@ def to_html(self, classes=None, notebook=False, border=None):
.. versionadded:: 0.19.0
"""
+ from pandas.io.formats.html import HTMLFormatter
html_renderer = HTMLFormatter(self, classes=classes,
max_rows=self.max_rows,
max_cols=self.max_cols,
@@ -851,964 +849,6 @@ def _get_column_name_list(self):
names.append('' if columns.name is None else columns.name)
return names
-
-class LatexFormatter(TableFormatter):
- """ Used to render a DataFrame to a LaTeX tabular/longtable environment
- output.
-
- Parameters
- ----------
- formatter : `DataFrameFormatter`
- column_format : str, default None
- The columns format as specified in `LaTeX table format
- <https://en.wikibooks.org/wiki/LaTeX/Tables>`__ e.g 'rcl' for 3 columns
- longtable : boolean, default False
- Use a longtable environment instead of tabular.
-
- See also
- --------
- HTMLFormatter
- """
-
- def __init__(self, formatter, column_format=None, longtable=False,
- multicolumn=False, multicolumn_format=None, multirow=False):
- self.fmt = formatter
- self.frame = self.fmt.frame
- self.bold_rows = self.fmt.kwds.get('bold_rows', False)
- self.column_format = column_format
- self.longtable = longtable
- self.multicolumn = multicolumn
- self.multicolumn_format = multicolumn_format
- self.multirow = multirow
-
- def write_result(self, buf):
- """
- Render a DataFrame to a LaTeX tabular/longtable environment output.
- """
-
- # string representation of the columns
- if len(self.frame.columns) == 0 or len(self.frame.index) == 0:
- info_line = (u('Empty {name}\nColumns: {col}\nIndex: {idx}')
- .format(name=type(self.frame).__name__,
- col=self.frame.columns,
- idx=self.frame.index))
- strcols = [[info_line]]
- else:
- strcols = self.fmt._to_str_columns()
-
- def get_col_type(dtype):
- if issubclass(dtype.type, np.number):
- return 'r'
- else:
- return 'l'
-
- # reestablish the MultiIndex that has been joined by _to_str_column
- if self.fmt.index and isinstance(self.frame.index, MultiIndex):
- clevels = self.frame.columns.nlevels
- strcols.pop(0)
- name = any(self.frame.index.names)
- cname = any(self.frame.columns.names)
- lastcol = self.frame.index.nlevels - 1
- previous_lev3 = None
- for i, lev in enumerate(self.frame.index.levels):
- lev2 = lev.format()
- blank = ' ' * len(lev2[0])
- # display column names in last index-column
- if cname and i == lastcol:
- lev3 = [x if x else '{}' for x in self.frame.columns.names]
- else:
- lev3 = [blank] * clevels
- if name:
- lev3.append(lev.name)
- current_idx_val = None
- for level_idx in self.frame.index.labels[i]:
- if ((previous_lev3 is None or
- previous_lev3[len(lev3)].isspace()) and
- lev2[level_idx] == current_idx_val):
- # same index as above row and left index was the same
- lev3.append(blank)
- else:
- # different value than above or left index different
- lev3.append(lev2[level_idx])
- current_idx_val = lev2[level_idx]
- strcols.insert(i, lev3)
- previous_lev3 = lev3
-
- column_format = self.column_format
- if column_format is None:
- dtypes = self.frame.dtypes._values
- column_format = ''.join(map(get_col_type, dtypes))
- if self.fmt.index:
- index_format = 'l' * self.frame.index.nlevels
- column_format = index_format + column_format
- elif not isinstance(column_format,
- compat.string_types): # pragma: no cover
- raise AssertionError('column_format must be str or unicode, '
- 'not {typ}'.format(typ=type(column_format)))
-
- if not self.longtable:
- buf.write('\\begin{{tabular}}{{{fmt}}}\n'
- .format(fmt=column_format))
- buf.write('\\toprule\n')
- else:
- buf.write('\\begin{{longtable}}{{{fmt}}}\n'
- .format(fmt=column_format))
- buf.write('\\toprule\n')
-
- ilevels = self.frame.index.nlevels
- clevels = self.frame.columns.nlevels
- nlevels = clevels
- if any(self.frame.index.names):
- nlevels += 1
- strrows = list(zip(*strcols))
- self.clinebuf = []
-
- for i, row in enumerate(strrows):
- if i == nlevels and self.fmt.header:
- buf.write('\\midrule\n') # End of header
- if self.longtable:
- buf.write('\\endhead\n')
- buf.write('\\midrule\n')
- buf.write('\\multicolumn{{{n}}}{{r}}{{{{Continued on next '
- 'page}}}} \\\\\n'.format(n=len(row)))
- buf.write('\\midrule\n')
- buf.write('\\endfoot\n\n')
- buf.write('\\bottomrule\n')
- buf.write('\\endlastfoot\n')
- if self.fmt.kwds.get('escape', True):
- # escape backslashes first
- crow = [(x.replace('\\', '\\textbackslash').replace('_', '\\_')
- .replace('%', '\\%').replace('$', '\\$')
- .replace('#', '\\#').replace('{', '\\{')
- .replace('}', '\\}').replace('~', '\\textasciitilde')
- .replace('^', '\\textasciicircum').replace('&', '\\&')
- if (x and x != '{}') else '{}') for x in row]
- else:
- crow = [x if x else '{}' for x in row]
- if self.bold_rows and self.fmt.index:
- # bold row labels
- crow = ['\\textbf{{{x}}}'.format(x=x)
- if j < ilevels and x.strip() not in ['', '{}'] else x
- for j, x in enumerate(crow)]
- if i < clevels and self.fmt.header and self.multicolumn:
- # sum up columns to multicolumns
- crow = self._format_multicolumn(crow, ilevels)
- if (i >= nlevels and self.fmt.index and self.multirow and
- ilevels > 1):
- # sum up rows to multirows
- crow = self._format_multirow(crow, ilevels, i, strrows)
- buf.write(' & '.join(crow))
- buf.write(' \\\\\n')
- if self.multirow and i < len(strrows) - 1:
- self._print_cline(buf, i, len(strcols))
-
- if not self.longtable:
- buf.write('\\bottomrule\n')
- buf.write('\\end{tabular}\n')
- else:
- buf.write('\\end{longtable}\n')
-
- def _format_multicolumn(self, row, ilevels):
- r"""
- Combine columns belonging to a group to a single multicolumn entry
- according to self.multicolumn_format
-
- e.g.:
- a & & & b & c &
- will become
- \multicolumn{3}{l}{a} & b & \multicolumn{2}{l}{c}
- """
- row2 = list(row[:ilevels])
- ncol = 1
- coltext = ''
-
- def append_col():
- # write multicolumn if needed
- if ncol > 1:
- row2.append('\\multicolumn{{{ncol:d}}}{{{fmt:s}}}{{{txt:s}}}'
- .format(ncol=ncol, fmt=self.multicolumn_format,
- txt=coltext.strip()))
- # don't modify where not needed
- else:
- row2.append(coltext)
- for c in row[ilevels:]:
- # if next col has text, write the previous
- if c.strip():
- if coltext:
- append_col()
- coltext = c
- ncol = 1
- # if not, add it to the previous multicolumn
- else:
- ncol += 1
- # write last column name
- if coltext:
- append_col()
- return row2
-
- def _format_multirow(self, row, ilevels, i, rows):
- r"""
- Check following rows, whether row should be a multirow
-
- e.g.: becomes:
- a & 0 & \multirow{2}{*}{a} & 0 &
- & 1 & & 1 &
- b & 0 & \cline{1-2}
- b & 0 &
- """
- for j in range(ilevels):
- if row[j].strip():
- nrow = 1
- for r in rows[i + 1:]:
- if not r[j].strip():
- nrow += 1
- else:
- break
- if nrow > 1:
- # overwrite non-multirow entry
- row[j] = '\\multirow{{{nrow:d}}}{{*}}{{{row:s}}}'.format(
- nrow=nrow, row=row[j].strip())
- # save when to end the current block with \cline
- self.clinebuf.append([i + nrow - 1, j + 1])
- return row
-
- def _print_cline(self, buf, i, icol):
- """
- Print clines after multirow-blocks are finished
- """
- for cl in self.clinebuf:
- if cl[0] == i:
- buf.write('\\cline{{{cl:d}-{icol:d}}}\n'
- .format(cl=cl[1], icol=icol))
- # remove entries that have been written to buffer
- self.clinebuf = [x for x in self.clinebuf if x[0] != i]
-
-
-class HTMLFormatter(TableFormatter):
-
- indent_delta = 2
-
- def __init__(self, formatter, classes=None, max_rows=None, max_cols=None,
- notebook=False, border=None, table_id=None):
- self.fmt = formatter
- self.classes = classes
-
- self.frame = self.fmt.frame
- self.columns = self.fmt.tr_frame.columns
- self.elements = []
- self.bold_rows = self.fmt.kwds.get('bold_rows', False)
- self.escape = self.fmt.kwds.get('escape', True)
-
- self.max_rows = max_rows or len(self.fmt.frame)
- self.max_cols = max_cols or len(self.fmt.columns)
- self.show_dimensions = self.fmt.show_dimensions
- self.is_truncated = (self.max_rows < len(self.fmt.frame) or
- self.max_cols < len(self.fmt.columns))
- self.notebook = notebook
- if border is None:
- border = get_option('display.html.border')
- self.border = border
- self.table_id = table_id
-
- def write(self, s, indent=0):
- rs = pprint_thing(s)
- self.elements.append(' ' * indent + rs)
-
- def write_th(self, s, indent=0, tags=None):
- if self.fmt.col_space is not None and self.fmt.col_space > 0:
- tags = (tags or "")
- tags += ('style="min-width: {colspace};"'
- .format(colspace=self.fmt.col_space))
-
- return self._write_cell(s, kind='th', indent=indent, tags=tags)
-
- def write_td(self, s, indent=0, tags=None):
- return self._write_cell(s, kind='td', indent=indent, tags=tags)
-
- def _write_cell(self, s, kind='td', indent=0, tags=None):
- if tags is not None:
- start_tag = '<{kind} {tags}>'.format(kind=kind, tags=tags)
- else:
- start_tag = '<{kind}>'.format(kind=kind)
-
- if self.escape:
- # escape & first to prevent double escaping of &
- esc = OrderedDict([('&', r'&'), ('<', r'<'),
- ('>', r'>')])
- else:
- esc = {}
- rs = pprint_thing(s, escape_chars=esc).strip()
- self.write(u'{start}{rs}</{kind}>'
- .format(start=start_tag, rs=rs, kind=kind), indent)
-
- def write_tr(self, line, indent=0, indent_delta=4, header=False,
- align=None, tags=None, nindex_levels=0):
- if tags is None:
- tags = {}
-
- if align is None:
- self.write('<tr>', indent)
- else:
- self.write('<tr style="text-align: {align};">'
- .format(align=align), indent)
- indent += indent_delta
-
- for i, s in enumerate(line):
- val_tag = tags.get(i, None)
- if header or (self.bold_rows and i < nindex_levels):
- self.write_th(s, indent, tags=val_tag)
- else:
- self.write_td(s, indent, tags=val_tag)
-
- indent -= indent_delta
- self.write('</tr>', indent)
-
- def write_style(self):
- # We use the "scoped" attribute here so that the desired
- # style properties for the data frame are not then applied
- # throughout the entire notebook.
- template_first = """\
- <style scoped>"""
- template_last = """\
- </style>"""
- template_select = """\
- .dataframe %s {
- %s: %s;
- }"""
- element_props = [('tbody tr th:only-of-type',
- 'vertical-align',
- 'middle'),
- ('tbody tr th',
- 'vertical-align',
- 'top')]
- if isinstance(self.columns, MultiIndex):
- element_props.append(('thead tr th',
- 'text-align',
- 'left'))
- if all((self.fmt.has_index_names,
- self.fmt.index,
- self.fmt.show_index_names)):
- element_props.append(('thead tr:last-of-type th',
- 'text-align',
- 'right'))
- else:
- element_props.append(('thead th',
- 'text-align',
- 'right'))
- template_mid = '\n\n'.join(map(lambda t: template_select % t,
- element_props))
- template = dedent('\n'.join((template_first,
- template_mid,
- template_last)))
- if self.notebook:
- self.write(template)
-
- def write_result(self, buf):
- indent = 0
- id_section = ""
- frame = self.frame
-
- _classes = ['dataframe'] # Default class.
- use_mathjax = get_option("display.html.use_mathjax")
- if not use_mathjax:
- _classes.append('tex2jax_ignore')
- if self.classes is not None:
- if isinstance(self.classes, str):
- self.classes = self.classes.split()
- if not isinstance(self.classes, (list, tuple)):
- raise AssertionError('classes must be list or tuple, not {typ}'
- .format(typ=type(self.classes)))
- _classes.extend(self.classes)
-
- if self.notebook:
- div_style = ''
- try:
- import IPython
- if IPython.__version__ < LooseVersion('3.0.0'):
- div_style = ' style="max-width:1500px;overflow:auto;"'
- except (ImportError, AttributeError):
- pass
-
- self.write('<div{style}>'.format(style=div_style))
-
- self.write_style()
-
- if self.table_id is not None:
- id_section = ' id="{table_id}"'.format(table_id=self.table_id)
- self.write('<table border="{border}" class="{cls}"{id_section}>'
- .format(border=self.border, cls=' '.join(_classes),
- id_section=id_section), indent)
-
- indent += self.indent_delta
- indent = self._write_header(indent)
- indent = self._write_body(indent)
-
- self.write('</table>', indent)
- if self.should_show_dimensions:
- by = chr(215) if compat.PY3 else unichr(215) # ×
- self.write(u('<p>{rows} rows {by} {cols} columns</p>')
- .format(rows=len(frame),
- by=by,
- cols=len(frame.columns)))
-
- if self.notebook:
- self.write('</div>')
-
- _put_lines(buf, self.elements)
-
- def _write_header(self, indent):
- truncate_h = self.fmt.truncate_h
- row_levels = self.frame.index.nlevels
- if not self.fmt.header:
- # write nothing
- return indent
-
- def _column_header():
- if self.fmt.index:
- row = [''] * (self.frame.index.nlevels - 1)
- else:
- row = []
-
- if isinstance(self.columns, MultiIndex):
- if self.fmt.has_column_names and self.fmt.index:
- row.append(single_column_table(self.columns.names))
- else:
- row.append('')
- style = "text-align: {just};".format(just=self.fmt.justify)
- row.extend([single_column_table(c, self.fmt.justify, style)
- for c in self.columns])
- else:
- if self.fmt.index:
- row.append(self.columns.name or '')
- row.extend(self.columns)
- return row
-
- self.write('<thead>', indent)
- row = []
-
- indent += self.indent_delta
-
- if isinstance(self.columns, MultiIndex):
- template = 'colspan="{span:d}" halign="left"'
-
- if self.fmt.sparsify:
- # GH3547
- sentinel = com.sentinel_factory()
- else:
- sentinel = None
- levels = self.columns.format(sparsify=sentinel, adjoin=False,
- names=False)
- level_lengths = get_level_lengths(levels, sentinel)
- inner_lvl = len(level_lengths) - 1
- for lnum, (records, values) in enumerate(zip(level_lengths,
- levels)):
- if truncate_h:
- # modify the header lines
- ins_col = self.fmt.tr_col_num
- if self.fmt.sparsify:
- recs_new = {}
- # Increment tags after ... col.
- for tag, span in list(records.items()):
- if tag >= ins_col:
- recs_new[tag + 1] = span
- elif tag + span > ins_col:
- recs_new[tag] = span + 1
- if lnum == inner_lvl:
- values = (values[:ins_col] + (u('...'),) +
- values[ins_col:])
- else:
- # sparse col headers do not receive a ...
- values = (values[:ins_col] +
- (values[ins_col - 1], ) +
- values[ins_col:])
- else:
- recs_new[tag] = span
- # if ins_col lies between tags, all col headers
- # get ...
- if tag + span == ins_col:
- recs_new[ins_col] = 1
- values = (values[:ins_col] + (u('...'),) +
- values[ins_col:])
- records = recs_new
- inner_lvl = len(level_lengths) - 1
- if lnum == inner_lvl:
- records[ins_col] = 1
- else:
- recs_new = {}
- for tag, span in list(records.items()):
- if tag >= ins_col:
- recs_new[tag + 1] = span
- else:
- recs_new[tag] = span
- recs_new[ins_col] = 1
- records = recs_new
- values = (values[:ins_col] + [u('...')] +
- values[ins_col:])
-
- name = self.columns.names[lnum]
- row = [''] * (row_levels - 1) + ['' if name is None else
- pprint_thing(name)]
-
- if row == [""] and self.fmt.index is False:
- row = []
-
- tags = {}
- j = len(row)
- for i, v in enumerate(values):
- if i in records:
- if records[i] > 1:
- tags[j] = template.format(span=records[i])
- else:
- continue
- j += 1
- row.append(v)
- self.write_tr(row, indent, self.indent_delta, tags=tags,
- header=True)
- else:
- col_row = _column_header()
- align = self.fmt.justify
-
- if truncate_h:
- ins_col = row_levels + self.fmt.tr_col_num
- col_row.insert(ins_col, '...')
-
- self.write_tr(col_row, indent, self.indent_delta, header=True,
- align=align)
-
- if all((self.fmt.has_index_names,
- self.fmt.index,
- self.fmt.show_index_names)):
- row = ([x if x is not None else ''
- for x in self.frame.index.names] +
- [''] * min(len(self.columns), self.max_cols))
- if truncate_h:
- ins_col = row_levels + self.fmt.tr_col_num
- row.insert(ins_col, '')
- self.write_tr(row, indent, self.indent_delta, header=True)
-
- indent -= self.indent_delta
- self.write('</thead>', indent)
-
- return indent
-
- def _write_body(self, indent):
- self.write('<tbody>', indent)
- indent += self.indent_delta
-
- fmt_values = {}
- for i in range(min(len(self.columns), self.max_cols)):
- fmt_values[i] = self.fmt._format_col(i)
-
- # write values
- if self.fmt.index:
- if isinstance(self.frame.index, MultiIndex):
- self._write_hierarchical_rows(fmt_values, indent)
- else:
- self._write_regular_rows(fmt_values, indent)
- else:
- for i in range(min(len(self.frame), self.max_rows)):
- row = [fmt_values[j][i] for j in range(len(self.columns))]
- self.write_tr(row, indent, self.indent_delta, tags=None)
-
- indent -= self.indent_delta
- self.write('</tbody>', indent)
- indent -= self.indent_delta
-
- return indent
-
- def _write_regular_rows(self, fmt_values, indent):
- truncate_h = self.fmt.truncate_h
- truncate_v = self.fmt.truncate_v
-
- ncols = len(self.fmt.tr_frame.columns)
- nrows = len(self.fmt.tr_frame)
- fmt = self.fmt._get_formatter('__index__')
- if fmt is not None:
- index_values = self.fmt.tr_frame.index.map(fmt)
- else:
- index_values = self.fmt.tr_frame.index.format()
-
- row = []
- for i in range(nrows):
-
- if truncate_v and i == (self.fmt.tr_row_num):
- str_sep_row = ['...' for ele in row]
- self.write_tr(str_sep_row, indent, self.indent_delta,
- tags=None, nindex_levels=1)
-
- row = []
- row.append(index_values[i])
- row.extend(fmt_values[j][i] for j in range(ncols))
-
- if truncate_h:
- dot_col_ix = self.fmt.tr_col_num + 1
- row.insert(dot_col_ix, '...')
- self.write_tr(row, indent, self.indent_delta, tags=None,
- nindex_levels=1)
-
- def _write_hierarchical_rows(self, fmt_values, indent):
- template = 'rowspan="{span}" valign="top"'
-
- truncate_h = self.fmt.truncate_h
- truncate_v = self.fmt.truncate_v
- frame = self.fmt.tr_frame
- ncols = len(frame.columns)
- nrows = len(frame)
- row_levels = self.frame.index.nlevels
-
- idx_values = frame.index.format(sparsify=False, adjoin=False,
- names=False)
- idx_values = lzip(*idx_values)
-
- if self.fmt.sparsify:
- # GH3547
- sentinel = com.sentinel_factory()
- levels = frame.index.format(sparsify=sentinel, adjoin=False,
- names=False)
-
- level_lengths = get_level_lengths(levels, sentinel)
- inner_lvl = len(level_lengths) - 1
- if truncate_v:
- # Insert ... row and adjust idx_values and
- # level_lengths to take this into account.
- ins_row = self.fmt.tr_row_num
- inserted = False
- for lnum, records in enumerate(level_lengths):
- rec_new = {}
- for tag, span in list(records.items()):
- if tag >= ins_row:
- rec_new[tag + 1] = span
- elif tag + span > ins_row:
- rec_new[tag] = span + 1
-
- # GH 14882 - Make sure insertion done once
- if not inserted:
- dot_row = list(idx_values[ins_row - 1])
- dot_row[-1] = u('...')
- idx_values.insert(ins_row, tuple(dot_row))
- inserted = True
- else:
- dot_row = list(idx_values[ins_row])
- dot_row[inner_lvl - lnum] = u('...')
- idx_values[ins_row] = tuple(dot_row)
- else:
- rec_new[tag] = span
- # If ins_row lies between tags, all cols idx cols
- # receive ...
- if tag + span == ins_row:
- rec_new[ins_row] = 1
- if lnum == 0:
- idx_values.insert(ins_row, tuple(
- [u('...')] * len(level_lengths)))
-
- # GH 14882 - Place ... in correct level
- elif inserted:
- dot_row = list(idx_values[ins_row])
- dot_row[inner_lvl - lnum] = u('...')
- idx_values[ins_row] = tuple(dot_row)
- level_lengths[lnum] = rec_new
-
- level_lengths[inner_lvl][ins_row] = 1
- for ix_col in range(len(fmt_values)):
- fmt_values[ix_col].insert(ins_row, '...')
- nrows += 1
-
- for i in range(nrows):
- row = []
- tags = {}
-
- sparse_offset = 0
- j = 0
- for records, v in zip(level_lengths, idx_values[i]):
- if i in records:
- if records[i] > 1:
- tags[j] = template.format(span=records[i])
- else:
- sparse_offset += 1
- continue
-
- j += 1
- row.append(v)
-
- row.extend(fmt_values[j][i] for j in range(ncols))
- if truncate_h:
- row.insert(row_levels - sparse_offset +
- self.fmt.tr_col_num, '...')
- self.write_tr(row, indent, self.indent_delta, tags=tags,
- nindex_levels=len(levels) - sparse_offset)
- else:
- for i in range(len(frame)):
- idx_values = list(zip(*frame.index.format(
- sparsify=False, adjoin=False, names=False)))
- row = []
- row.extend(idx_values[i])
- row.extend(fmt_values[j][i] for j in range(ncols))
- if truncate_h:
- row.insert(row_levels + self.fmt.tr_col_num, '...')
- self.write_tr(row, indent, self.indent_delta, tags=None,
- nindex_levels=frame.index.nlevels)
-
-
-class CSVFormatter(object):
-
- def __init__(self, obj, path_or_buf=None, sep=",", na_rep='',
- float_format=None, cols=None, header=True, index=True,
- index_label=None, mode='w', nanRep=None, encoding=None,
- compression=None, quoting=None, line_terminator='\n',
- chunksize=None, tupleize_cols=False, quotechar='"',
- date_format=None, doublequote=True, escapechar=None,
- decimal='.'):
-
- self.obj = obj
-
- if path_or_buf is None:
- path_or_buf = StringIO()
-
- self.path_or_buf = _expand_user(_stringify_path(path_or_buf))
- self.sep = sep
- self.na_rep = na_rep
- self.float_format = float_format
- self.decimal = decimal
-
- self.header = header
- self.index = index
- self.index_label = index_label
- self.mode = mode
- self.encoding = encoding
- self.compression = compression
-
- if quoting is None:
- quoting = csv.QUOTE_MINIMAL
- self.quoting = quoting
-
- if quoting == csv.QUOTE_NONE:
- # prevents crash in _csv
- quotechar = None
- self.quotechar = quotechar
-
- self.doublequote = doublequote
- self.escapechar = escapechar
-
- self.line_terminator = line_terminator
-
- self.date_format = date_format
-
- self.tupleize_cols = tupleize_cols
- self.has_mi_columns = (isinstance(obj.columns, MultiIndex) and
- not self.tupleize_cols)
-
- # validate mi options
- if self.has_mi_columns:
- if cols is not None:
- raise TypeError("cannot specify cols with a MultiIndex on the "
- "columns")
-
- if cols is not None:
- if isinstance(cols, Index):
- cols = cols.to_native_types(na_rep=na_rep,
- float_format=float_format,
- date_format=date_format,
- quoting=self.quoting)
- else:
- cols = list(cols)
- self.obj = self.obj.loc[:, cols]
-
- # update columns to include possible multiplicity of dupes
- # and make sure sure cols is just a list of labels
- cols = self.obj.columns
- if isinstance(cols, Index):
- cols = cols.to_native_types(na_rep=na_rep,
- float_format=float_format,
- date_format=date_format,
- quoting=self.quoting)
- else:
- cols = list(cols)
-
- # save it
- self.cols = cols
-
- # preallocate data 2d list
- self.blocks = self.obj._data.blocks
- ncols = sum(b.shape[0] for b in self.blocks)
- self.data = [None] * ncols
-
- if chunksize is None:
- chunksize = (100000 // (len(self.cols) or 1)) or 1
- self.chunksize = int(chunksize)
-
- self.data_index = obj.index
- if (isinstance(self.data_index, (DatetimeIndex, PeriodIndex)) and
- date_format is not None):
- self.data_index = Index([x.strftime(date_format) if notna(x) else
- '' for x in self.data_index])
-
- self.nlevels = getattr(self.data_index, 'nlevels', 1)
- if not index:
- self.nlevels = 0
-
- def save(self):
- # create the writer & save
- if self.encoding is None:
- if compat.PY2:
- encoding = 'ascii'
- else:
- encoding = 'utf-8'
- else:
- encoding = self.encoding
-
- if hasattr(self.path_or_buf, 'write'):
- f = self.path_or_buf
- close = False
- else:
- f, handles = _get_handle(self.path_or_buf, self.mode,
- encoding=encoding,
- compression=self.compression)
- close = True
-
- try:
- writer_kwargs = dict(lineterminator=self.line_terminator,
- delimiter=self.sep, quoting=self.quoting,
- doublequote=self.doublequote,
- escapechar=self.escapechar,
- quotechar=self.quotechar)
- if encoding == 'ascii':
- self.writer = csv.writer(f, **writer_kwargs)
- else:
- writer_kwargs['encoding'] = encoding
- self.writer = UnicodeWriter(f, **writer_kwargs)
-
- self._save()
-
- finally:
- if close:
- f.close()
-
- def _save_header(self):
-
- writer = self.writer
- obj = self.obj
- index_label = self.index_label
- cols = self.cols
- has_mi_columns = self.has_mi_columns
- header = self.header
- encoded_labels = []
-
- has_aliases = isinstance(header, (tuple, list, np.ndarray, Index))
- if not (has_aliases or self.header):
- return
- if has_aliases:
- if len(header) != len(cols):
- raise ValueError(('Writing {ncols} cols but got {nalias} '
- 'aliases'.format(ncols=len(cols),
- nalias=len(header))))
- else:
- write_cols = header
- else:
- write_cols = cols
-
- if self.index:
- # should write something for index label
- if index_label is not False:
- if index_label is None:
- if isinstance(obj.index, MultiIndex):
- index_label = []
- for i, name in enumerate(obj.index.names):
- if name is None:
- name = ''
- index_label.append(name)
- else:
- index_label = obj.index.name
- if index_label is None:
- index_label = ['']
- else:
- index_label = [index_label]
- elif not isinstance(index_label,
- (list, tuple, np.ndarray, Index)):
- # given a string for a DF with Index
- index_label = [index_label]
-
- encoded_labels = list(index_label)
- else:
- encoded_labels = []
-
- if not has_mi_columns or has_aliases:
- encoded_labels += list(write_cols)
- writer.writerow(encoded_labels)
- else:
- # write out the mi
- columns = obj.columns
-
- # write out the names for each level, then ALL of the values for
- # each level
- for i in range(columns.nlevels):
-
- # we need at least 1 index column to write our col names
- col_line = []
- if self.index:
-
- # name is the first column
- col_line.append(columns.names[i])
-
- if isinstance(index_label, list) and len(index_label) > 1:
- col_line.extend([''] * (len(index_label) - 1))
-
- col_line.extend(columns._get_level_values(i))
-
- writer.writerow(col_line)
-
- # Write out the index line if it's not empty.
- # Otherwise, we will print out an extraneous
- # blank line between the mi and the data rows.
- if encoded_labels and set(encoded_labels) != set(['']):
- encoded_labels.extend([''] * len(columns))
- writer.writerow(encoded_labels)
-
- def _save(self):
-
- self._save_header()
-
- nrows = len(self.data_index)
-
- # write in chunksize bites
- chunksize = self.chunksize
- chunks = int(nrows / chunksize) + 1
-
- for i in range(chunks):
- start_i = i * chunksize
- end_i = min((i + 1) * chunksize, nrows)
- if start_i >= end_i:
- break
-
- self._save_chunk(start_i, end_i)
-
- def _save_chunk(self, start_i, end_i):
-
- data_index = self.data_index
-
- # create the data for a chunk
- slicer = slice(start_i, end_i)
- for i in range(len(self.blocks)):
- b = self.blocks[i]
- d = b.to_native_types(slicer=slicer, na_rep=self.na_rep,
- float_format=self.float_format,
- decimal=self.decimal,
- date_format=self.date_format,
- quoting=self.quoting)
-
- for col_loc, col in zip(b.mgr_locs, d):
- # self.data is a preallocated list
- self.data[col_loc] = col
-
- ix = data_index.to_native_types(slicer=slicer, na_rep=self.na_rep,
- float_format=self.float_format,
- decimal=self.decimal,
- date_format=self.date_format,
- quoting=self.quoting)
-
- libwriters.write_csv_rows(self.data, ix, self.nlevels,
- self.cols, self.writer)
-
-
# ----------------------------------------------------------------------
# Array formatters
@@ -2366,27 +1406,6 @@ def _cond(values):
return [x + "0" if x.endswith('.') and x != na_rep else x for x in trimmed]
-def single_column_table(column, align=None, style=None):
- table = '<table'
- if align is not None:
- table += (' align="{align}"'.format(align=align))
- if style is not None:
- table += (' style="{style}"'.format(style=style))
- table += '><tbody>'
- for i in column:
- table += ('<tr><td>{i!s}</td></tr>'.format(i=i))
- table += '</tbody></table>'
- return table
-
-
-def single_row_table(row): # pragma: no cover
- table = '<table><tbody><tr>'
- for i in row:
- table += ('<td>{i!s}</td>'.format(i=i))
- table += '</tr></tbody></table>'
- return table
-
-
def _has_names(index):
if isinstance(index, MultiIndex):
return com._any_not_none(*index.names)
@@ -2506,12 +1525,6 @@ def set_eng_float_format(accuracy=3, use_eng_prefix=False):
set_option("display.column_space", max(12, accuracy + 9))
-def _put_lines(buf, lines):
- if any(isinstance(x, compat.text_type) for x in lines):
- lines = [compat.text_type(x) for x in lines]
- buf.write('\n'.join(lines))
-
-
def _binify(cols, line_width):
adjoin_width = 1
bins = []
@@ -2530,3 +1543,59 @@ def _binify(cols, line_width):
bins.append(len(cols))
return bins
+
+
+def get_level_lengths(levels, sentinel=''):
+    """For each index in each level, return the length of that index's run.
+
+    Parameters
+    ----------
+    levels : list of lists
+        List of values for each level.
+    sentinel : string, optional
+        Value indicating that no new index starts at this position.
+
+    Returns
+    -------
+    list of dicts
+        One dict per level, mapping the starting position of each index run
+        to its length.
+    """
+ if len(levels) == 0:
+ return []
+
+ control = [True for x in levels[0]]
+
+ result = []
+ for level in levels:
+ last_index = 0
+
+ lengths = {}
+ for i, key in enumerate(level):
+ if control[i] and key == sentinel:
+ pass
+ else:
+ control[i] = False
+ lengths[last_index] = i - last_index
+ last_index = i
+
+ lengths[last_index] = len(level) - last_index
+
+ result.append(lengths)
+
+ return result
+
+
+def buffer_put_lines(buf, lines):
+ """
+    Append lines to a buffer.
+
+    Parameters
+    ----------
+    buf : buffer
+        The buffer to write to.
+    lines : list of strings
+        The lines to append.
+ """
+ if any(isinstance(x, compat.text_type) for x in lines):
+ lines = [compat.text_type(x) for x in lines]
+ buf.write('\n'.join(lines))
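The sparsified colspan/rowspan logic in `HTMLFormatter` hinges on the contract of `get_level_lengths`. As a quick illustration of that contract, the standalone sketch below (not part of the patch) mirrors the function's logic and shows the span map it produces for a sparsified outer level:

```python
def get_level_lengths(levels, sentinel=''):
    """Map each run's starting position to its length, per level."""
    if len(levels) == 0:
        return []
    # control[i] stays True while position i may still extend a run
    control = [True] * len(levels[0])
    result = []
    for level in levels:
        last_index = 0
        lengths = {}
        for i, key in enumerate(level):
            if control[i] and key == sentinel:
                continue  # still inside the current run
            control[i] = False
            lengths[last_index] = i - last_index
            last_index = i
        lengths[last_index] = len(level) - last_index
        result.append(lengths)
    return result

# Outer level sparsified ('' marks a repeat), inner level fully labelled.
spans = get_level_lengths([['a', '', 'b', ''], ['x', 'y', 'x', 'y']])
# -> [{0: 2, 2: 2}, {0: 1, 1: 1, 2: 1, 3: 1}]
```

`HTMLFormatter._write_header` and `_write_hierarchical_rows` then emit a `colspan`/`rowspan` tag only for entries whose span is greater than 1.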
diff --git a/pandas/io/formats/html.py b/pandas/io/formats/html.py
new file mode 100644
index 0000000000000..a43c55a220292
--- /dev/null
+++ b/pandas/io/formats/html.py
@@ -0,0 +1,506 @@
+# -*- coding: utf-8 -*-
+"""
+Module for formatting output data in HTML.
+"""
+
+from __future__ import print_function
+from distutils.version import LooseVersion
+
+from textwrap import dedent
+
+import pandas.core.common as com
+from pandas.core.index import MultiIndex
+from pandas import compat
+from pandas.compat import (lzip, range, map, zip, u,
+ OrderedDict, unichr)
+from pandas.core.config import get_option
+from pandas.io.formats.printing import pprint_thing
+from pandas.io.formats.format import (get_level_lengths,
+ buffer_put_lines)
+from pandas.io.formats.format import TableFormatter
+
+
+class HTMLFormatter(TableFormatter):
+
+ indent_delta = 2
+
+ def __init__(self, formatter, classes=None, max_rows=None, max_cols=None,
+ notebook=False, border=None, table_id=None):
+ self.fmt = formatter
+ self.classes = classes
+
+ self.frame = self.fmt.frame
+ self.columns = self.fmt.tr_frame.columns
+ self.elements = []
+ self.bold_rows = self.fmt.kwds.get('bold_rows', False)
+ self.escape = self.fmt.kwds.get('escape', True)
+
+ self.max_rows = max_rows or len(self.fmt.frame)
+ self.max_cols = max_cols or len(self.fmt.columns)
+ self.show_dimensions = self.fmt.show_dimensions
+ self.is_truncated = (self.max_rows < len(self.fmt.frame) or
+ self.max_cols < len(self.fmt.columns))
+ self.notebook = notebook
+ if border is None:
+ border = get_option('display.html.border')
+ self.border = border
+ self.table_id = table_id
+
+ def write(self, s, indent=0):
+ rs = pprint_thing(s)
+ self.elements.append(' ' * indent + rs)
+
+ def write_th(self, s, indent=0, tags=None):
+ if self.fmt.col_space is not None and self.fmt.col_space > 0:
+ tags = (tags or "")
+ tags += ('style="min-width: {colspace};"'
+ .format(colspace=self.fmt.col_space))
+
+ return self._write_cell(s, kind='th', indent=indent, tags=tags)
+
+ def write_td(self, s, indent=0, tags=None):
+ return self._write_cell(s, kind='td', indent=indent, tags=tags)
+
+ def _write_cell(self, s, kind='td', indent=0, tags=None):
+ if tags is not None:
+ start_tag = '<{kind} {tags}>'.format(kind=kind, tags=tags)
+ else:
+ start_tag = '<{kind}>'.format(kind=kind)
+
+ if self.escape:
+ # escape & first to prevent double escaping of &
+            esc = OrderedDict([('&', r'&amp;'), ('<', r'&lt;'),
+                               ('>', r'&gt;')])
+ else:
+ esc = {}
+ rs = pprint_thing(s, escape_chars=esc).strip()
+ self.write(u'{start}{rs}</{kind}>'
+ .format(start=start_tag, rs=rs, kind=kind), indent)
+
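The ordering in `_write_cell`'s escape dict matters: `&` is replaced before `<` and `>`, since the reverse order would rewrite the `&` inside the freshly produced entities. A standalone sketch of that ordering (illustrative, not part of the patch):

```python
def escape_html(s):
    # Replace '&' first: doing it after '<'/'>' would double-escape the
    # '&' inside the '&lt;'/'&gt;' entities just produced.
    for char, entity in (('&', '&amp;'), ('<', '&lt;'), ('>', '&gt;')):
        s = s.replace(char, entity)
    return s

escaped = escape_html('<a & b>')
# -> &lt;a &amp; b&gt;
```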
+ def write_tr(self, line, indent=0, indent_delta=4, header=False,
+ align=None, tags=None, nindex_levels=0):
+ if tags is None:
+ tags = {}
+
+ if align is None:
+ self.write('<tr>', indent)
+ else:
+ self.write('<tr style="text-align: {align};">'
+ .format(align=align), indent)
+ indent += indent_delta
+
+ for i, s in enumerate(line):
+ val_tag = tags.get(i, None)
+ if header or (self.bold_rows and i < nindex_levels):
+ self.write_th(s, indent, tags=val_tag)
+ else:
+ self.write_td(s, indent, tags=val_tag)
+
+ indent -= indent_delta
+ self.write('</tr>', indent)
+
+ def write_style(self):
+ # We use the "scoped" attribute here so that the desired
+ # style properties for the data frame are not then applied
+ # throughout the entire notebook.
+ template_first = """\
+ <style scoped>"""
+ template_last = """\
+ </style>"""
+ template_select = """\
+ .dataframe %s {
+ %s: %s;
+ }"""
+ element_props = [('tbody tr th:only-of-type',
+ 'vertical-align',
+ 'middle'),
+ ('tbody tr th',
+ 'vertical-align',
+ 'top')]
+ if isinstance(self.columns, MultiIndex):
+ element_props.append(('thead tr th',
+ 'text-align',
+ 'left'))
+ if all((self.fmt.has_index_names,
+ self.fmt.index,
+ self.fmt.show_index_names)):
+ element_props.append(('thead tr:last-of-type th',
+ 'text-align',
+ 'right'))
+ else:
+ element_props.append(('thead th',
+ 'text-align',
+ 'right'))
+ template_mid = '\n\n'.join(map(lambda t: template_select % t,
+ element_props))
+ template = dedent('\n'.join((template_first,
+ template_mid,
+ template_last)))
+ if self.notebook:
+ self.write(template)
+
+ def write_result(self, buf):
+ indent = 0
+ id_section = ""
+ frame = self.frame
+
+ _classes = ['dataframe'] # Default class.
+ use_mathjax = get_option("display.html.use_mathjax")
+ if not use_mathjax:
+ _classes.append('tex2jax_ignore')
+ if self.classes is not None:
+ if isinstance(self.classes, str):
+ self.classes = self.classes.split()
+ if not isinstance(self.classes, (list, tuple)):
+ raise AssertionError('classes must be list or tuple, not {typ}'
+ .format(typ=type(self.classes)))
+ _classes.extend(self.classes)
+
+ if self.notebook:
+ div_style = ''
+ try:
+ import IPython
+ if IPython.__version__ < LooseVersion('3.0.0'):
+ div_style = ' style="max-width:1500px;overflow:auto;"'
+ except (ImportError, AttributeError):
+ pass
+
+ self.write('<div{style}>'.format(style=div_style))
+
+ self.write_style()
+
+ if self.table_id is not None:
+ id_section = ' id="{table_id}"'.format(table_id=self.table_id)
+ self.write('<table border="{border}" class="{cls}"{id_section}>'
+ .format(border=self.border, cls=' '.join(_classes),
+ id_section=id_section), indent)
+
+ indent += self.indent_delta
+ indent = self._write_header(indent)
+ indent = self._write_body(indent)
+
+ self.write('</table>', indent)
+ if self.should_show_dimensions:
+ by = chr(215) if compat.PY3 else unichr(215) # ×
+ self.write(u('<p>{rows} rows {by} {cols} columns</p>')
+ .format(rows=len(frame),
+ by=by,
+ cols=len(frame.columns)))
+
+ if self.notebook:
+ self.write('</div>')
+
+ buffer_put_lines(buf, self.elements)
+
+ def _write_header(self, indent):
+ truncate_h = self.fmt.truncate_h
+ row_levels = self.frame.index.nlevels
+ if not self.fmt.header:
+ # write nothing
+ return indent
+
+ def _column_header():
+ if self.fmt.index:
+ row = [''] * (self.frame.index.nlevels - 1)
+ else:
+ row = []
+
+ if isinstance(self.columns, MultiIndex):
+ if self.fmt.has_column_names and self.fmt.index:
+ row.append(single_column_table(self.columns.names))
+ else:
+ row.append('')
+ style = "text-align: {just};".format(just=self.fmt.justify)
+ row.extend([single_column_table(c, self.fmt.justify, style)
+ for c in self.columns])
+ else:
+ if self.fmt.index:
+ row.append(self.columns.name or '')
+ row.extend(self.columns)
+ return row
+
+ self.write('<thead>', indent)
+ row = []
+
+ indent += self.indent_delta
+
+ if isinstance(self.columns, MultiIndex):
+ template = 'colspan="{span:d}" halign="left"'
+
+ if self.fmt.sparsify:
+ # GH3547
+ sentinel = com.sentinel_factory()
+ else:
+ sentinel = None
+ levels = self.columns.format(sparsify=sentinel, adjoin=False,
+ names=False)
+ level_lengths = get_level_lengths(levels, sentinel)
+ inner_lvl = len(level_lengths) - 1
+ for lnum, (records, values) in enumerate(zip(level_lengths,
+ levels)):
+ if truncate_h:
+ # modify the header lines
+ ins_col = self.fmt.tr_col_num
+ if self.fmt.sparsify:
+ recs_new = {}
+ # Increment tags after ... col.
+ for tag, span in list(records.items()):
+ if tag >= ins_col:
+ recs_new[tag + 1] = span
+ elif tag + span > ins_col:
+ recs_new[tag] = span + 1
+ if lnum == inner_lvl:
+ values = (values[:ins_col] + (u('...'),) +
+ values[ins_col:])
+ else:
+ # sparse col headers do not receive a ...
+ values = (values[:ins_col] +
+ (values[ins_col - 1], ) +
+ values[ins_col:])
+ else:
+ recs_new[tag] = span
+ # if ins_col lies between tags, all col headers
+ # get ...
+ if tag + span == ins_col:
+ recs_new[ins_col] = 1
+ values = (values[:ins_col] + (u('...'),) +
+ values[ins_col:])
+ records = recs_new
+ inner_lvl = len(level_lengths) - 1
+ if lnum == inner_lvl:
+ records[ins_col] = 1
+ else:
+ recs_new = {}
+ for tag, span in list(records.items()):
+ if tag >= ins_col:
+ recs_new[tag + 1] = span
+ else:
+ recs_new[tag] = span
+ recs_new[ins_col] = 1
+ records = recs_new
+ values = (values[:ins_col] + [u('...')] +
+ values[ins_col:])
+
+ name = self.columns.names[lnum]
+ row = [''] * (row_levels - 1) + ['' if name is None else
+ pprint_thing(name)]
+
+ if row == [""] and self.fmt.index is False:
+ row = []
+
+ tags = {}
+ j = len(row)
+ for i, v in enumerate(values):
+ if i in records:
+ if records[i] > 1:
+ tags[j] = template.format(span=records[i])
+ else:
+ continue
+ j += 1
+ row.append(v)
+ self.write_tr(row, indent, self.indent_delta, tags=tags,
+ header=True)
+ else:
+ col_row = _column_header()
+ align = self.fmt.justify
+
+ if truncate_h:
+ ins_col = row_levels + self.fmt.tr_col_num
+ col_row.insert(ins_col, '...')
+
+ self.write_tr(col_row, indent, self.indent_delta, header=True,
+ align=align)
+
+ if all((self.fmt.has_index_names,
+ self.fmt.index,
+ self.fmt.show_index_names)):
+ row = ([x if x is not None else ''
+ for x in self.frame.index.names] +
+ [''] * min(len(self.columns), self.max_cols))
+ if truncate_h:
+ ins_col = row_levels + self.fmt.tr_col_num
+ row.insert(ins_col, '')
+ self.write_tr(row, indent, self.indent_delta, header=True)
+
+ indent -= self.indent_delta
+ self.write('</thead>', indent)
+
+ return indent
+
+ def _write_body(self, indent):
+ self.write('<tbody>', indent)
+ indent += self.indent_delta
+
+ fmt_values = {}
+ for i in range(min(len(self.columns), self.max_cols)):
+ fmt_values[i] = self.fmt._format_col(i)
+
+ # write values
+ if self.fmt.index:
+ if isinstance(self.frame.index, MultiIndex):
+ self._write_hierarchical_rows(fmt_values, indent)
+ else:
+ self._write_regular_rows(fmt_values, indent)
+ else:
+ for i in range(min(len(self.frame), self.max_rows)):
+ row = [fmt_values[j][i] for j in range(len(self.columns))]
+ self.write_tr(row, indent, self.indent_delta, tags=None)
+
+ indent -= self.indent_delta
+ self.write('</tbody>', indent)
+ indent -= self.indent_delta
+
+ return indent
+
+ def _write_regular_rows(self, fmt_values, indent):
+ truncate_h = self.fmt.truncate_h
+ truncate_v = self.fmt.truncate_v
+
+ ncols = len(self.fmt.tr_frame.columns)
+ nrows = len(self.fmt.tr_frame)
+ fmt = self.fmt._get_formatter('__index__')
+ if fmt is not None:
+ index_values = self.fmt.tr_frame.index.map(fmt)
+ else:
+ index_values = self.fmt.tr_frame.index.format()
+
+ row = []
+ for i in range(nrows):
+
+ if truncate_v and i == (self.fmt.tr_row_num):
+ str_sep_row = ['...' for ele in row]
+ self.write_tr(str_sep_row, indent, self.indent_delta,
+ tags=None, nindex_levels=1)
+
+ row = []
+ row.append(index_values[i])
+ row.extend(fmt_values[j][i] for j in range(ncols))
+
+ if truncate_h:
+ dot_col_ix = self.fmt.tr_col_num + 1
+ row.insert(dot_col_ix, '...')
+ self.write_tr(row, indent, self.indent_delta, tags=None,
+ nindex_levels=1)
+
+ def _write_hierarchical_rows(self, fmt_values, indent):
+ template = 'rowspan="{span}" valign="top"'
+
+ truncate_h = self.fmt.truncate_h
+ truncate_v = self.fmt.truncate_v
+ frame = self.fmt.tr_frame
+ ncols = len(frame.columns)
+ nrows = len(frame)
+ row_levels = self.frame.index.nlevels
+
+ idx_values = frame.index.format(sparsify=False, adjoin=False,
+ names=False)
+ idx_values = lzip(*idx_values)
+
+ if self.fmt.sparsify:
+ # GH3547
+ sentinel = com.sentinel_factory()
+ levels = frame.index.format(sparsify=sentinel, adjoin=False,
+ names=False)
+
+ level_lengths = get_level_lengths(levels, sentinel)
+ inner_lvl = len(level_lengths) - 1
+ if truncate_v:
+ # Insert ... row and adjust idx_values and
+ # level_lengths to take this into account.
+ ins_row = self.fmt.tr_row_num
+ inserted = False
+ for lnum, records in enumerate(level_lengths):
+ rec_new = {}
+ for tag, span in list(records.items()):
+ if tag >= ins_row:
+ rec_new[tag + 1] = span
+ elif tag + span > ins_row:
+ rec_new[tag] = span + 1
+
+ # GH 14882 - Make sure insertion done once
+ if not inserted:
+ dot_row = list(idx_values[ins_row - 1])
+ dot_row[-1] = u('...')
+ idx_values.insert(ins_row, tuple(dot_row))
+ inserted = True
+ else:
+ dot_row = list(idx_values[ins_row])
+ dot_row[inner_lvl - lnum] = u('...')
+ idx_values[ins_row] = tuple(dot_row)
+ else:
+ rec_new[tag] = span
+ # If ins_row lies between tags, all cols idx cols
+ # receive ...
+ if tag + span == ins_row:
+ rec_new[ins_row] = 1
+ if lnum == 0:
+ idx_values.insert(ins_row, tuple(
+ [u('...')] * len(level_lengths)))
+
+ # GH 14882 - Place ... in correct level
+ elif inserted:
+ dot_row = list(idx_values[ins_row])
+ dot_row[inner_lvl - lnum] = u('...')
+ idx_values[ins_row] = tuple(dot_row)
+ level_lengths[lnum] = rec_new
+
+ level_lengths[inner_lvl][ins_row] = 1
+ for ix_col in range(len(fmt_values)):
+ fmt_values[ix_col].insert(ins_row, '...')
+ nrows += 1
+
+ for i in range(nrows):
+ row = []
+ tags = {}
+
+ sparse_offset = 0
+ j = 0
+ for records, v in zip(level_lengths, idx_values[i]):
+ if i in records:
+ if records[i] > 1:
+ tags[j] = template.format(span=records[i])
+ else:
+ sparse_offset += 1
+ continue
+
+ j += 1
+ row.append(v)
+
+ row.extend(fmt_values[j][i] for j in range(ncols))
+ if truncate_h:
+ row.insert(row_levels - sparse_offset +
+ self.fmt.tr_col_num, '...')
+ self.write_tr(row, indent, self.indent_delta, tags=tags,
+ nindex_levels=len(levels) - sparse_offset)
+ else:
+ for i in range(len(frame)):
+ idx_values = list(zip(*frame.index.format(
+ sparsify=False, adjoin=False, names=False)))
+ row = []
+ row.extend(idx_values[i])
+ row.extend(fmt_values[j][i] for j in range(ncols))
+ if truncate_h:
+ row.insert(row_levels + self.fmt.tr_col_num, '...')
+ self.write_tr(row, indent, self.indent_delta, tags=None,
+ nindex_levels=frame.index.nlevels)
+
+
+def single_column_table(column, align=None, style=None):
+ table = '<table'
+ if align is not None:
+ table += (' align="{align}"'.format(align=align))
+ if style is not None:
+ table += (' style="{style}"'.format(style=style))
+ table += '><tbody>'
+ for i in column:
+ table += ('<tr><td>{i!s}</td></tr>'.format(i=i))
+ table += '</tbody></table>'
+ return table
+
+
+def single_row_table(row): # pragma: no cover
+ table = '<table><tbody><tr>'
+ for i in row:
+ table += ('<td>{i!s}</td>'.format(i=i))
+ table += '</tr></tbody></table>'
+ return table
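`single_column_table` is a plain string builder used for MultiIndex column headers: it nests each item of a list in its own table row. A standalone copy of the helper, runnable outside pandas:

```python
def single_column_table(column, align=None, style=None):
    # Build a one-column HTML table, one <tr><td> per item.
    table = '<table'
    if align is not None:
        table += ' align="{align}"'.format(align=align)
    if style is not None:
        table += ' style="{style}"'.format(style=style)
    table += '><tbody>'
    for i in column:
        table += '<tr><td>{i!s}</td></tr>'.format(i=i)
    table += '</tbody></table>'
    return table

html = single_column_table(['a', 'b'], align='left')
# -> <table align="left"><tbody><tr><td>a</td></tr>
#    <tr><td>b</td></tr></tbody></table>  (as one string)
```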
diff --git a/pandas/io/formats/latex.py b/pandas/io/formats/latex.py
new file mode 100644
index 0000000000000..67b0a4f0e034e
--- /dev/null
+++ b/pandas/io/formats/latex.py
@@ -0,0 +1,244 @@
+# -*- coding: utf-8 -*-
+"""
+Module for formatting output data in Latex.
+"""
+
+from __future__ import print_function
+
+from pandas.core.index import MultiIndex
+from pandas import compat
+from pandas.compat import range, map, zip, u
+from pandas.io.formats.format import TableFormatter
+import numpy as np
+
+
+class LatexFormatter(TableFormatter):
+ """ Used to render a DataFrame to a LaTeX tabular/longtable environment
+ output.
+
+ Parameters
+ ----------
+ formatter : `DataFrameFormatter`
+ column_format : str, default None
+ The columns format as specified in `LaTeX table format
+        <https://en.wikibooks.org/wiki/LaTeX/Tables>`__, e.g. 'rcl' for 3
+        columns.
+ longtable : boolean, default False
+ Use a longtable environment instead of tabular.
+
+ See Also
+ --------
+ HTMLFormatter
+ """
+
+ def __init__(self, formatter, column_format=None, longtable=False,
+ multicolumn=False, multicolumn_format=None, multirow=False):
+ self.fmt = formatter
+ self.frame = self.fmt.frame
+ self.bold_rows = self.fmt.kwds.get('bold_rows', False)
+ self.column_format = column_format
+ self.longtable = longtable
+ self.multicolumn = multicolumn
+ self.multicolumn_format = multicolumn_format
+ self.multirow = multirow
+
+ def write_result(self, buf):
+ """
+ Render a DataFrame to a LaTeX tabular/longtable environment output.
+ """
+
+ # string representation of the columns
+ if len(self.frame.columns) == 0 or len(self.frame.index) == 0:
+ info_line = (u('Empty {name}\nColumns: {col}\nIndex: {idx}')
+ .format(name=type(self.frame).__name__,
+ col=self.frame.columns,
+ idx=self.frame.index))
+ strcols = [[info_line]]
+ else:
+ strcols = self.fmt._to_str_columns()
+
+ def get_col_type(dtype):
+ if issubclass(dtype.type, np.number):
+ return 'r'
+ else:
+ return 'l'
+
+ # reestablish the MultiIndex that has been joined by _to_str_column
+ if self.fmt.index and isinstance(self.frame.index, MultiIndex):
+ clevels = self.frame.columns.nlevels
+ strcols.pop(0)
+ name = any(self.frame.index.names)
+ cname = any(self.frame.columns.names)
+ lastcol = self.frame.index.nlevels - 1
+ previous_lev3 = None
+ for i, lev in enumerate(self.frame.index.levels):
+ lev2 = lev.format()
+ blank = ' ' * len(lev2[0])
+ # display column names in last index-column
+ if cname and i == lastcol:
+ lev3 = [x if x else '{}' for x in self.frame.columns.names]
+ else:
+ lev3 = [blank] * clevels
+ if name:
+ lev3.append(lev.name)
+ current_idx_val = None
+ for level_idx in self.frame.index.labels[i]:
+ if ((previous_lev3 is None or
+ previous_lev3[len(lev3)].isspace()) and
+ lev2[level_idx] == current_idx_val):
+ # same index as above row and left index was the same
+ lev3.append(blank)
+ else:
+ # different value than above or left index different
+ lev3.append(lev2[level_idx])
+ current_idx_val = lev2[level_idx]
+ strcols.insert(i, lev3)
+ previous_lev3 = lev3
+
+ column_format = self.column_format
+ if column_format is None:
+ dtypes = self.frame.dtypes._values
+ column_format = ''.join(map(get_col_type, dtypes))
+ if self.fmt.index:
+ index_format = 'l' * self.frame.index.nlevels
+ column_format = index_format + column_format
+ elif not isinstance(column_format,
+ compat.string_types): # pragma: no cover
+ raise AssertionError('column_format must be str or unicode, '
+ 'not {typ}'.format(typ=type(column_format)))
+
+ if not self.longtable:
+ buf.write('\\begin{{tabular}}{{{fmt}}}\n'
+ .format(fmt=column_format))
+ buf.write('\\toprule\n')
+ else:
+ buf.write('\\begin{{longtable}}{{{fmt}}}\n'
+ .format(fmt=column_format))
+ buf.write('\\toprule\n')
+
+ ilevels = self.frame.index.nlevels
+ clevels = self.frame.columns.nlevels
+ nlevels = clevels
+ if any(self.frame.index.names):
+ nlevels += 1
+ strrows = list(zip(*strcols))
+ self.clinebuf = []
+
+ for i, row in enumerate(strrows):
+ if i == nlevels and self.fmt.header:
+ buf.write('\\midrule\n') # End of header
+ if self.longtable:
+ buf.write('\\endhead\n')
+ buf.write('\\midrule\n')
+ buf.write('\\multicolumn{{{n}}}{{r}}{{{{Continued on next '
+ 'page}}}} \\\\\n'.format(n=len(row)))
+ buf.write('\\midrule\n')
+ buf.write('\\endfoot\n\n')
+ buf.write('\\bottomrule\n')
+ buf.write('\\endlastfoot\n')
+ if self.fmt.kwds.get('escape', True):
+ # escape backslashes first
+ crow = [(x.replace('\\', '\\textbackslash').replace('_', '\\_')
+ .replace('%', '\\%').replace('$', '\\$')
+ .replace('#', '\\#').replace('{', '\\{')
+ .replace('}', '\\}').replace('~', '\\textasciitilde')
+ .replace('^', '\\textasciicircum').replace('&', '\\&')
+ if (x and x != '{}') else '{}') for x in row]
+ else:
+ crow = [x if x else '{}' for x in row]
+ if self.bold_rows and self.fmt.index:
+ # bold row labels
+ crow = ['\\textbf{{{x}}}'.format(x=x)
+ if j < ilevels and x.strip() not in ['', '{}'] else x
+ for j, x in enumerate(crow)]
+ if i < clevels and self.fmt.header and self.multicolumn:
+ # sum up columns to multicolumns
+ crow = self._format_multicolumn(crow, ilevels)
+ if (i >= nlevels and self.fmt.index and self.multirow and
+ ilevels > 1):
+ # sum up rows to multirows
+ crow = self._format_multirow(crow, ilevels, i, strrows)
+ buf.write(' & '.join(crow))
+ buf.write(' \\\\\n')
+ if self.multirow and i < len(strrows) - 1:
+ self._print_cline(buf, i, len(strcols))
+
+ if not self.longtable:
+ buf.write('\\bottomrule\n')
+ buf.write('\\end{tabular}\n')
+ else:
+ buf.write('\\end{longtable}\n')
+
+ def _format_multicolumn(self, row, ilevels):
+ r"""
+ Combine columns belonging to a group to a single multicolumn entry
+ according to self.multicolumn_format
+
+ e.g.:
+ a & & & b & c &
+ will become
+ \multicolumn{3}{l}{a} & b & \multicolumn{2}{l}{c}
+ """
+ row2 = list(row[:ilevels])
+ ncol = 1
+ coltext = ''
+
+ def append_col():
+ # write multicolumn if needed
+ if ncol > 1:
+ row2.append('\\multicolumn{{{ncol:d}}}{{{fmt:s}}}{{{txt:s}}}'
+ .format(ncol=ncol, fmt=self.multicolumn_format,
+ txt=coltext.strip()))
+ # don't modify where not needed
+ else:
+ row2.append(coltext)
+ for c in row[ilevels:]:
+ # if next col has text, write the previous
+ if c.strip():
+ if coltext:
+ append_col()
+ coltext = c
+ ncol = 1
+ # if not, add it to the previous multicolumn
+ else:
+ ncol += 1
+ # write last column name
+ if coltext:
+ append_col()
+ return row2
+
+ def _format_multirow(self, row, ilevels, i, rows):
+ r"""
+ Check following rows, whether row should be a multirow
+
+ e.g.: becomes:
+ a & 0 & \multirow{2}{*}{a} & 0 &
+ & 1 & & 1 &
+ b & 0 & \cline{1-2}
+ b & 0 &
+ """
+ for j in range(ilevels):
+ if row[j].strip():
+ nrow = 1
+ for r in rows[i + 1:]:
+ if not r[j].strip():
+ nrow += 1
+ else:
+ break
+ if nrow > 1:
+ # overwrite non-multirow entry
+ row[j] = '\\multirow{{{nrow:d}}}{{*}}{{{row:s}}}'.format(
+ nrow=nrow, row=row[j].strip())
+ # save when to end the current block with \cline
+ self.clinebuf.append([i + nrow - 1, j + 1])
+ return row
+
+ def _print_cline(self, buf, i, icol):
+ """
+ Print clines after multirow-blocks are finished
+ """
+ for cl in self.clinebuf:
+ if cl[0] == i:
+ buf.write('\\cline{{{cl:d}-{icol:d}}}\n'
+ .format(cl=cl[1], icol=icol))
+ # remove entries that have been written to buffer
+ self.clinebuf = [x for x in self.clinebuf if x[0] != i]
| The branch for the other PR was messed up, so I created a new one.
- [x] based on discussion here: https://github.com/pandas-dev/pandas/pull/20032#pullrequestreview-101917292
- [ ] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
This PR splits the various formatters into different files so that it's easier to refactor a specific formatter. | https://api.github.com/repos/pandas-dev/pandas/pulls/20341 | 2018-03-14T09:08:04Z | 2018-03-14T11:02:11Z | 2018-03-14T11:02:11Z | 2018-03-14T11:02:16Z |
TST: Parametrize dtypes tests - test_common.py and test_concat.py | diff --git a/pandas/tests/dtypes/test_common.py b/pandas/tests/dtypes/test_common.py
index bfec229d32b22..2960a12b133d2 100644
--- a/pandas/tests/dtypes/test_common.py
+++ b/pandas/tests/dtypes/test_common.py
@@ -16,44 +16,46 @@ class TestPandasDtype(object):
# Passing invalid dtype, both as a string or object, must raise TypeError
# Per issue GH15520
- def test_invalid_dtype_error(self):
- msg = 'not understood'
- invalid_list = [pd.Timestamp, 'pd.Timestamp', list]
- for dtype in invalid_list:
- with tm.assert_raises_regex(TypeError, msg):
- com.pandas_dtype(dtype)
-
- valid_list = [object, 'float64', np.object_, np.dtype('object'), 'O',
- np.float64, float, np.dtype('float64')]
- for dtype in valid_list:
- com.pandas_dtype(dtype)
-
- def test_numpy_dtype(self):
- for dtype in ['M8[ns]', 'm8[ns]', 'object', 'float64', 'int64']:
- assert com.pandas_dtype(dtype) == np.dtype(dtype)
+ @pytest.mark.parametrize('box', [pd.Timestamp, 'pd.Timestamp', list])
+ def test_invalid_dtype_error(self, box):
+ with tm.assert_raises_regex(TypeError, 'not understood'):
+ com.pandas_dtype(box)
+
+ @pytest.mark.parametrize('dtype', [
+ object, 'float64', np.object_, np.dtype('object'), 'O',
+ np.float64, float, np.dtype('float64')])
+ def test_pandas_dtype_valid(self, dtype):
+ assert com.pandas_dtype(dtype) == dtype
+
+ @pytest.mark.parametrize('dtype', [
+ 'M8[ns]', 'm8[ns]', 'object', 'float64', 'int64'])
+ def test_numpy_dtype(self, dtype):
+ assert com.pandas_dtype(dtype) == np.dtype(dtype)
def test_numpy_string_dtype(self):
# do not parse freq-like string as period dtype
assert com.pandas_dtype('U') == np.dtype('U')
assert com.pandas_dtype('S') == np.dtype('S')
- def test_datetimetz_dtype(self):
- for dtype in ['datetime64[ns, US/Eastern]',
- 'datetime64[ns, Asia/Tokyo]',
- 'datetime64[ns, UTC]']:
- assert com.pandas_dtype(dtype) is DatetimeTZDtype(dtype)
- assert com.pandas_dtype(dtype) == DatetimeTZDtype(dtype)
- assert com.pandas_dtype(dtype) == dtype
+ @pytest.mark.parametrize('dtype', [
+ 'datetime64[ns, US/Eastern]',
+ 'datetime64[ns, Asia/Tokyo]',
+ 'datetime64[ns, UTC]'])
+ def test_datetimetz_dtype(self, dtype):
+ assert com.pandas_dtype(dtype) is DatetimeTZDtype(dtype)
+ assert com.pandas_dtype(dtype) == DatetimeTZDtype(dtype)
+ assert com.pandas_dtype(dtype) == dtype
def test_categorical_dtype(self):
assert com.pandas_dtype('category') == CategoricalDtype()
- def test_period_dtype(self):
- for dtype in ['period[D]', 'period[3M]', 'period[U]',
- 'Period[D]', 'Period[3M]', 'Period[U]']:
- assert com.pandas_dtype(dtype) is PeriodDtype(dtype)
- assert com.pandas_dtype(dtype) == PeriodDtype(dtype)
- assert com.pandas_dtype(dtype) == dtype
+ @pytest.mark.parametrize('dtype', [
+ 'period[D]', 'period[3M]', 'period[U]',
+ 'Period[D]', 'Period[3M]', 'Period[U]'])
+ def test_period_dtype(self, dtype):
+ assert com.pandas_dtype(dtype) is PeriodDtype(dtype)
+ assert com.pandas_dtype(dtype) == PeriodDtype(dtype)
+ assert com.pandas_dtype(dtype) == dtype
dtypes = dict(datetime_tz=com.pandas_dtype('datetime64[ns, US/Eastern]'),
diff --git a/pandas/tests/dtypes/test_concat.py b/pandas/tests/dtypes/test_concat.py
index ca579e2dc9390..b6c5c119ffb6f 100644
--- a/pandas/tests/dtypes/test_concat.py
+++ b/pandas/tests/dtypes/test_concat.py
@@ -1,77 +1,53 @@
# -*- coding: utf-8 -*-
-import pandas as pd
+import pytest
import pandas.core.dtypes.concat as _concat
-
-
-class TestConcatCompat(object):
-
- def check_concat(self, to_concat, exp):
- for klass in [pd.Index, pd.Series]:
- to_concat_klass = [klass(c) for c in to_concat]
- res = _concat.get_dtype_kinds(to_concat_klass)
- assert res == set(exp)
-
- def test_get_dtype_kinds(self):
- to_concat = [['a'], [1, 2]]
- self.check_concat(to_concat, ['i', 'object'])
-
- to_concat = [[3, 4], [1, 2]]
- self.check_concat(to_concat, ['i'])
-
- to_concat = [[3, 4], [1, 2.1]]
- self.check_concat(to_concat, ['i', 'f'])
-
- def test_get_dtype_kinds_datetimelike(self):
- to_concat = [pd.DatetimeIndex(['2011-01-01']),
- pd.DatetimeIndex(['2011-01-02'])]
- self.check_concat(to_concat, ['datetime'])
-
- to_concat = [pd.TimedeltaIndex(['1 days']),
- pd.TimedeltaIndex(['2 days'])]
- self.check_concat(to_concat, ['timedelta'])
-
- def test_get_dtype_kinds_datetimelike_object(self):
- to_concat = [pd.DatetimeIndex(['2011-01-01']),
- pd.DatetimeIndex(['2011-01-02'], tz='US/Eastern')]
- self.check_concat(to_concat,
- ['datetime', 'datetime64[ns, US/Eastern]'])
-
- to_concat = [pd.DatetimeIndex(['2011-01-01'], tz='Asia/Tokyo'),
- pd.DatetimeIndex(['2011-01-02'], tz='US/Eastern')]
- self.check_concat(to_concat,
- ['datetime64[ns, Asia/Tokyo]',
- 'datetime64[ns, US/Eastern]'])
-
- # timedelta has single type
- to_concat = [pd.TimedeltaIndex(['1 days']),
- pd.TimedeltaIndex(['2 hours'])]
- self.check_concat(to_concat, ['timedelta'])
-
- to_concat = [pd.DatetimeIndex(['2011-01-01'], tz='Asia/Tokyo'),
- pd.TimedeltaIndex(['1 days'])]
- self.check_concat(to_concat,
- ['datetime64[ns, Asia/Tokyo]', 'timedelta'])
-
- def test_get_dtype_kinds_period(self):
- # because we don't have Period dtype (yet),
- # Series results in object dtype
- to_concat = [pd.PeriodIndex(['2011-01'], freq='M'),
- pd.PeriodIndex(['2011-01'], freq='M')]
- res = _concat.get_dtype_kinds(to_concat)
- assert res == set(['period[M]'])
-
- to_concat = [pd.Series([pd.Period('2011-01', freq='M')]),
- pd.Series([pd.Period('2011-02', freq='M')])]
- res = _concat.get_dtype_kinds(to_concat)
- assert res == set(['object'])
-
- to_concat = [pd.PeriodIndex(['2011-01'], freq='M'),
- pd.PeriodIndex(['2011-01'], freq='D')]
- res = _concat.get_dtype_kinds(to_concat)
- assert res == set(['period[M]', 'period[D]'])
-
- to_concat = [pd.Series([pd.Period('2011-01', freq='M')]),
- pd.Series([pd.Period('2011-02', freq='D')])]
- res = _concat.get_dtype_kinds(to_concat)
- assert res == set(['object'])
+from pandas import (
+ Index, DatetimeIndex, PeriodIndex, TimedeltaIndex, Series, Period)
+
+
+@pytest.mark.parametrize('to_concat, expected', [
+ # int/float/str
+ ([['a'], [1, 2]], ['i', 'object']),
+ ([[3, 4], [1, 2]], ['i']),
+ ([[3, 4], [1, 2.1]], ['i', 'f']),
+
+ # datetimelike
+ ([DatetimeIndex(['2011-01-01']), DatetimeIndex(['2011-01-02'])],
+ ['datetime']),
+ ([TimedeltaIndex(['1 days']), TimedeltaIndex(['2 days'])],
+ ['timedelta']),
+
+ # datetimelike object
+ ([DatetimeIndex(['2011-01-01']),
+ DatetimeIndex(['2011-01-02'], tz='US/Eastern')],
+ ['datetime', 'datetime64[ns, US/Eastern]']),
+ ([DatetimeIndex(['2011-01-01'], tz='Asia/Tokyo'),
+ DatetimeIndex(['2011-01-02'], tz='US/Eastern')],
+ ['datetime64[ns, Asia/Tokyo]', 'datetime64[ns, US/Eastern]']),
+ ([TimedeltaIndex(['1 days']), TimedeltaIndex(['2 hours'])],
+ ['timedelta']),
+ ([DatetimeIndex(['2011-01-01'], tz='Asia/Tokyo'),
+ TimedeltaIndex(['1 days'])],
+ ['datetime64[ns, Asia/Tokyo]', 'timedelta'])])
+@pytest.mark.parametrize('klass', [Index, Series])
+def test_get_dtype_kinds(klass, to_concat, expected):
+ to_concat_klass = [klass(c) for c in to_concat]
+ result = _concat.get_dtype_kinds(to_concat_klass)
+ assert result == set(expected)
+
+
+@pytest.mark.parametrize('to_concat, expected', [
+ # because we don't have Period dtype (yet),
+ # Series results in object dtype
+ ([PeriodIndex(['2011-01'], freq='M'),
+ PeriodIndex(['2011-01'], freq='M')], ['period[M]']),
+ ([Series([Period('2011-01', freq='M')]),
+ Series([Period('2011-02', freq='M')])], ['object']),
+ ([PeriodIndex(['2011-01'], freq='M'),
+ PeriodIndex(['2011-01'], freq='D')], ['period[M]', 'period[D]']),
+ ([Series([Period('2011-01', freq='M')]),
+ Series([Period('2011-02', freq='D')])], ['object'])])
+def test_get_dtype_kinds_period(to_concat, expected):
+ result = _concat.get_dtype_kinds(to_concat)
+ assert result == set(expected)
| Parametrized some straightforward dtypes related tests.
Relatively minor changes to `test_common.py`:
- mostly just extracted `for` loops
- split one test into two separate tests (see code comment)
Made larger changes to `test_concat.py`:
- removed the test class in favor of plain functions, since there was only one class in the file
- combined a few different tests into a single test, since they were all using the same test procedure
- removed the `check_concat` validation function since it's only called once now
- further parametrized over a `for` loop that was previously inside the `check_concat` validation function
- removed the `import pandas as pd` in favor of directly importing the necessary classes
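As a minimal sketch of the pattern described above (assuming pytest is available; the names here are illustrative, not from the pandas test suite), extracting a `for` loop into `@pytest.mark.parametrize` turns each iteration into an independently collected and reported test case:

```python
import pytest


# Before: one test looping over cases; the first failure hides the rest.
def test_squares_loop():
    for value, expected in [(2, 4), (3, 9), (4, 16)]:
        assert value ** 2 == expected


# After: each (value, expected) pair is collected as its own test,
# so every failing case is reported individually.
@pytest.mark.parametrize('value, expected', [
    (2, 4),
    (3, 9),
    (4, 16),
])
def test_squares_param(value, expected):
    assert value ** 2 == expected
```

Stacking two `parametrize` decorators, as done with `klass` in the diff above, runs the test over the cross-product of both parameter lists.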
| https://api.github.com/repos/pandas-dev/pandas/pulls/20340 | 2018-03-14T05:55:27Z | 2018-03-15T10:26:21Z | 2018-03-15T10:26:21Z | 2018-03-15T16:55:34Z |
BUG: xtick labels didn't work sometimes | diff --git a/pandas/plotting/_core.py b/pandas/plotting/_core.py
index c5b22effc6486..4e2df5ce11632 100644
--- a/pandas/plotting/_core.py
+++ b/pandas/plotting/_core.py
@@ -390,7 +390,7 @@ def get_label(i):
if self.orientation == 'vertical' or self.orientation is None:
if self._need_to_set_index:
- xticklabels = [get_label(x) for x in ax.get_xticks()]
+ xticklabels = [get_label(x) for x in self._get_xticks()]
ax.set_xticklabels(xticklabels)
self._apply_axis_properties(ax.xaxis, rot=self.rot,
fontsize=self.fontsize)
| Use the locally defined `self._get_xticks` instead of the matplotlib default `ax.get_xticks()`.
This actually fixes #18755; the other issue is unrelated.
| https://api.github.com/repos/pandas-dev/pandas/pulls/20338 | 2018-03-13T23:24:39Z | 2018-03-14T01:56:55Z | null | 2018-03-14T01:56:55Z |
TST: Parameterize some compression tests | diff --git a/pandas/tests/io/parser/compression.py b/pandas/tests/io/parser/compression.py
index 4291d59123e8b..01c6620e50d37 100644
--- a/pandas/tests/io/parser/compression.py
+++ b/pandas/tests/io/parser/compression.py
@@ -12,6 +12,13 @@
import pandas.util.testing as tm
import pandas.util._test_decorators as td
+import gzip
+import bz2
+try:
+ lzma = compat.import_lzma()
+except ImportError:
+ lzma = None
+
class CompressionTests(object):
@@ -64,83 +71,36 @@ def test_zip(self):
pytest.raises(zipfile.BadZipfile, self.read_csv,
f, compression='zip')
- def test_gzip(self):
- import gzip
-
- with open(self.csv1, 'rb') as data_file:
- data = data_file.read()
- expected = self.read_csv(self.csv1)
-
- with tm.ensure_clean() as path:
- tmp = gzip.GzipFile(path, mode='wb')
- tmp.write(data)
- tmp.close()
-
- result = self.read_csv(path, compression='gzip')
- tm.assert_frame_equal(result, expected)
-
- with open(path, 'rb') as f:
- result = self.read_csv(f, compression='gzip')
- tm.assert_frame_equal(result, expected)
-
- with tm.ensure_clean('test.gz') as path:
- tmp = gzip.GzipFile(path, mode='wb')
- tmp.write(data)
- tmp.close()
- result = self.read_csv(path, compression='infer')
- tm.assert_frame_equal(result, expected)
-
- def test_bz2(self):
- import bz2
+ @pytest.mark.parametrize('compress_type, compress_method, ext', [
+ ('gzip', gzip.GzipFile, 'gz'),
+ ('bz2', bz2.BZ2File, 'bz2'),
+ pytest.param('xz', getattr(lzma, 'LZMAFile', None), 'xz',
+ marks=td.skip_if_no_lzma)
+ ])
+ def test_other_compression(self, compress_type, compress_method, ext):
with open(self.csv1, 'rb') as data_file:
data = data_file.read()
expected = self.read_csv(self.csv1)
with tm.ensure_clean() as path:
- tmp = bz2.BZ2File(path, mode='wb')
+ tmp = compress_method(path, mode='wb')
tmp.write(data)
tmp.close()
- result = self.read_csv(path, compression='bz2')
+ result = self.read_csv(path, compression=compress_type)
tm.assert_frame_equal(result, expected)
- pytest.raises(ValueError, self.read_csv,
- path, compression='bz3')
+ if compress_type == 'bz2':
+ pytest.raises(ValueError, self.read_csv,
+ path, compression='bz3')
with open(path, 'rb') as fin:
- result = self.read_csv(fin, compression='bz2')
- tm.assert_frame_equal(result, expected)
-
- with tm.ensure_clean('test.bz2') as path:
- tmp = bz2.BZ2File(path, mode='wb')
- tmp.write(data)
- tmp.close()
- result = self.read_csv(path, compression='infer')
- tm.assert_frame_equal(result, expected)
-
- @td.skip_if_no_lzma
- def test_xz(self):
- lzma = compat.import_lzma()
-
- with open(self.csv1, 'rb') as data_file:
- data = data_file.read()
- expected = self.read_csv(self.csv1)
-
- with tm.ensure_clean() as path:
- tmp = lzma.LZMAFile(path, mode='wb')
- tmp.write(data)
- tmp.close()
-
- result = self.read_csv(path, compression='xz')
- tm.assert_frame_equal(result, expected)
-
- with open(path, 'rb') as f:
- result = self.read_csv(f, compression='xz')
+ result = self.read_csv(fin, compression=compress_type)
tm.assert_frame_equal(result, expected)
- with tm.ensure_clean('test.xz') as path:
- tmp = lzma.LZMAFile(path, mode='wb')
+ with tm.ensure_clean('test.{}'.format(ext)) as path:
+ tmp = compress_method(path, mode='wb')
tmp.write(data)
tmp.close()
result = self.read_csv(path, compression='infer')
| closes #19226
Parameterizing some tests in ``pandas/tests/io/parser/compression.py``. I left the zip test alone because it was quite different from the others and I couldn't see an easy way to incorporate it into the parametrization. | https://api.github.com/repos/pandas-dev/pandas/pulls/20337 | 2018-03-13T21:53:56Z | 2018-03-16T11:35:21Z | 2018-03-16T11:35:20Z | 2018-03-17T23:01:09Z |
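The three compressors unified in the PR above all share the same file-like write/read interface, which is what makes a single parametrized test possible. A stdlib-only sketch of that shared round-trip (independent of pandas; the CSV payload here is made up for illustration):

```python
import bz2
import gzip
import lzma
import os
import tempfile

data = b"a,b\n1,2\n3,4\n"
results = {}
for name, opener in [('gzip', gzip.GzipFile),
                     ('bz2', bz2.BZ2File),
                     ('xz', lzma.LZMAFile)]:
    fd, path = tempfile.mkstemp()
    os.close(fd)
    try:
        # Each compressor class exposes the same file-like API, so one
        # loop (or one pytest.mark.parametrize list) covers all of them.
        with opener(path, mode='wb') as f:
            f.write(data)
        with opener(path, mode='rb') as f:
            results[name] = f.read()
    finally:
        os.remove(path)

# Every format round-trips the payload unchanged.
assert all(round_tripped == data for round_tripped in results.values())
```

The zip format is the odd one out because `zipfile.ZipFile` works with named archive members rather than a single stream, which is why it resists being folded into the same parameter list.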
DOC: update the pandas.DataFrame.cummax docstring | diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 5682ad411fd2f..2adc289f98d94 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -8487,19 +8487,21 @@ def compound(self, axis=None, skipna=None, level=None):
cls.compound = compound
cls.cummin = _make_cum_function(
- cls, 'cummin', name, name2, axis_descr, "cumulative minimum",
+ cls, 'cummin', name, name2, axis_descr, "minimum",
lambda y, axis: np.minimum.accumulate(y, axis), "min",
- np.inf, np.nan)
+ np.inf, np.nan, _cummin_examples)
cls.cumsum = _make_cum_function(
- cls, 'cumsum', name, name2, axis_descr, "cumulative sum",
- lambda y, axis: y.cumsum(axis), "sum", 0., np.nan)
+ cls, 'cumsum', name, name2, axis_descr, "sum",
+ lambda y, axis: y.cumsum(axis), "sum", 0.,
+ np.nan, _cumsum_examples)
cls.cumprod = _make_cum_function(
- cls, 'cumprod', name, name2, axis_descr, "cumulative product",
- lambda y, axis: y.cumprod(axis), "prod", 1., np.nan)
+ cls, 'cumprod', name, name2, axis_descr, "product",
+ lambda y, axis: y.cumprod(axis), "prod", 1.,
+ np.nan, _cumprod_examples)
cls.cummax = _make_cum_function(
- cls, 'cummax', name, name2, axis_descr, "cumulative max",
+ cls, 'cummax', name, name2, axis_descr, "maximum",
lambda y, axis: np.maximum.accumulate(y, axis), "max",
- -np.inf, np.nan)
+ -np.inf, np.nan, _cummax_examples)
cls.sum = _make_min_count_stat_function(
cls, 'sum', name, name2, axis_descr,
@@ -8702,8 +8704,8 @@ def _doc_parms(cls):
Include only boolean columns. If None, will attempt to use everything,
then use only boolean data. Not implemented for Series.
**kwargs : any, default None
- Additional keywords have no affect but might be accepted for
- compatibility with numpy.
+ Additional keywords have no effect but might be accepted for
+ compatibility with NumPy.
Returns
-------
@@ -8761,24 +8763,296 @@ def _doc_parms(cls):
"""
_cnum_doc = """
+Return cumulative %(desc)s over a DataFrame or Series axis.
+
+Returns a DataFrame or Series of the same size containing the cumulative
+%(desc)s.
Parameters
----------
-axis : %(axis_descr)s
+axis : {0 or 'index', 1 or 'columns'}, default 0
+ The index or the name of the axis. 0 is equivalent to None or 'index'.
skipna : boolean, default True
Exclude NA/null values. If an entire row/column is NA, the result
- will be NA
+ will be NA.
+*args, **kwargs :
+ Additional keywords have no effect but might be accepted for
+ compatibility with NumPy.
Returns
-------
-%(outname)s : %(name1)s\n
-
-
+%(outname)s : %(name1)s or %(name2)s\n
+%(examples)s
See also
--------
pandas.core.window.Expanding.%(accum_func_name)s : Similar functionality
but ignores ``NaN`` values.
+%(name2)s.%(accum_func_name)s : Return the %(desc)s over
+ %(name2)s axis.
+%(name2)s.cummax : Return cumulative maximum over %(name2)s axis.
+%(name2)s.cummin : Return cumulative minimum over %(name2)s axis.
+%(name2)s.cumsum : Return cumulative sum over %(name2)s axis.
+%(name2)s.cumprod : Return cumulative product over %(name2)s axis.
+"""
+
+_cummin_examples = """\
+Examples
+--------
+**Series**
+
+>>> s = pd.Series([2, np.nan, 5, -1, 0])
+>>> s
+0 2.0
+1 NaN
+2 5.0
+3 -1.0
+4 0.0
+dtype: float64
+
+By default, NA values are ignored.
+
+>>> s.cummin()
+0 2.0
+1 NaN
+2 2.0
+3 -1.0
+4 -1.0
+dtype: float64
+
+To include NA values in the operation, use ``skipna=False``
+
+>>> s.cummin(skipna=False)
+0 2.0
+1 NaN
+2 NaN
+3 NaN
+4 NaN
+dtype: float64
+
+**DataFrame**
+
+>>> df = pd.DataFrame([[2.0, 1.0],
+... [3.0, np.nan],
+... [1.0, 0.0]],
+... columns=list('AB'))
+>>> df
+ A B
+0 2.0 1.0
+1 3.0 NaN
+2 1.0 0.0
+
+By default, iterates over rows and finds the minimum
+in each column. This is equivalent to ``axis=None`` or ``axis='index'``.
+
+>>> df.cummin()
+ A B
+0 2.0 1.0
+1 2.0 NaN
+2 1.0 0.0
+
+To iterate over columns and find the minimum in each row,
+use ``axis=1``
+
+>>> df.cummin(axis=1)
+ A B
+0 2.0 1.0
+1 3.0 NaN
+2 1.0 0.0
+"""
+
+_cumsum_examples = """\
+Examples
+--------
+**Series**
+
+>>> s = pd.Series([2, np.nan, 5, -1, 0])
+>>> s
+0 2.0
+1 NaN
+2 5.0
+3 -1.0
+4 0.0
+dtype: float64
+
+By default, NA values are ignored.
+
+>>> s.cumsum()
+0 2.0
+1 NaN
+2 7.0
+3 6.0
+4 6.0
+dtype: float64
+
+To include NA values in the operation, use ``skipna=False``
+
+>>> s.cumsum(skipna=False)
+0 2.0
+1 NaN
+2 NaN
+3 NaN
+4 NaN
+dtype: float64
+
+**DataFrame**
+
+>>> df = pd.DataFrame([[2.0, 1.0],
+... [3.0, np.nan],
+... [1.0, 0.0]],
+... columns=list('AB'))
+>>> df
+ A B
+0 2.0 1.0
+1 3.0 NaN
+2 1.0 0.0
+
+By default, iterates over rows and finds the sum
+in each column. This is equivalent to ``axis=None`` or ``axis='index'``.
+
+>>> df.cumsum()
+ A B
+0 2.0 1.0
+1 5.0 NaN
+2 6.0 1.0
+
+To iterate over columns and find the sum in each row,
+use ``axis=1``
+
+>>> df.cumsum(axis=1)
+ A B
+0 2.0 3.0
+1 3.0 NaN
+2 1.0 1.0
+"""
+
+_cumprod_examples = """\
+Examples
+--------
+**Series**
+
+>>> s = pd.Series([2, np.nan, 5, -1, 0])
+>>> s
+0 2.0
+1 NaN
+2 5.0
+3 -1.0
+4 0.0
+dtype: float64
+
+By default, NA values are ignored.
+
+>>> s.cumprod()
+0 2.0
+1 NaN
+2 10.0
+3 -10.0
+4 -0.0
+dtype: float64
+
+To include NA values in the operation, use ``skipna=False``
+
+>>> s.cumprod(skipna=False)
+0 2.0
+1 NaN
+2 NaN
+3 NaN
+4 NaN
+dtype: float64
+**DataFrame**
+
+>>> df = pd.DataFrame([[2.0, 1.0],
+... [3.0, np.nan],
+... [1.0, 0.0]],
+... columns=list('AB'))
+>>> df
+ A B
+0 2.0 1.0
+1 3.0 NaN
+2 1.0 0.0
+
+By default, iterates over rows and finds the product
+in each column. This is equivalent to ``axis=None`` or ``axis='index'``.
+
+>>> df.cumprod()
+ A B
+0 2.0 1.0
+1 6.0 NaN
+2 6.0 0.0
+
+To iterate over columns and find the product in each row,
+use ``axis=1``
+
+>>> df.cumprod(axis=1)
+ A B
+0 2.0 2.0
+1 3.0 NaN
+2 1.0 0.0
+"""
+
+_cummax_examples = """\
+Examples
+--------
+**Series**
+
+>>> s = pd.Series([2, np.nan, 5, -1, 0])
+>>> s
+0 2.0
+1 NaN
+2 5.0
+3 -1.0
+4 0.0
+dtype: float64
+
+By default, NA values are ignored.
+
+>>> s.cummax()
+0 2.0
+1 NaN
+2 5.0
+3 5.0
+4 5.0
+dtype: float64
+
+To include NA values in the operation, use ``skipna=False``
+
+>>> s.cummax(skipna=False)
+0 2.0
+1 NaN
+2 NaN
+3 NaN
+4 NaN
+dtype: float64
+
+**DataFrame**
+
+>>> df = pd.DataFrame([[2.0, 1.0],
+... [3.0, np.nan],
+... [1.0, 0.0]],
+... columns=list('AB'))
+>>> df
+ A B
+0 2.0 1.0
+1 3.0 NaN
+2 1.0 0.0
+
+By default, iterates over rows and finds the maximum
+in each column. This is equivalent to ``axis=None`` or ``axis='index'``.
+
+>>> df.cummax()
+ A B
+0 2.0 1.0
+1 3.0 NaN
+2 3.0 1.0
+
+To iterate over columns and find the maximum in each row,
+use ``axis=1``
+
+>>> df.cummax(axis=1)
+ A B
+0 2.0 2.0
+1 3.0 NaN
+2 1.0 1.0
"""
_any_see_also = """\
@@ -8975,11 +9249,11 @@ def stat_func(self, axis=None, skipna=None, level=None, ddof=1,
def _make_cum_function(cls, name, name1, name2, axis_descr, desc,
- accum_func, accum_func_name, mask_a, mask_b):
+ accum_func, accum_func_name, mask_a, mask_b, examples):
@Substitution(outname=name, desc=desc, name1=name1, name2=name2,
- axis_descr=axis_descr, accum_func_name=accum_func_name)
- @Appender("Return {0} over requested axis.".format(desc) +
- _cnum_doc)
+ axis_descr=axis_descr, accum_func_name=accum_func_name,
+ examples=examples)
+ @Appender(_cnum_doc)
def cum_func(self, axis=None, skipna=True, *args, **kwargs):
skipna = nv.validate_cum_func_with_skipna(skipna, args, kwargs, name)
if axis is None:
| Checklist for the pandas documentation sprint (ignore this if you are doing
an unrelated PR):
- [X] PR title is "DOC: update the <your-function-or-method> docstring"
- [X] The validation script passes: `scripts/validate_docstrings.py <your-function-or-method>`
- [x] The PEP8 style check passes: `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [X] The html version looks good: `python doc/make.py --single <your-function-or-method>`
- [X] It has been proofread on language by another sprint participant
Please include the output of the validation script below between the "```" ticks:
```
################################################################################
##################### Docstring (pandas.DataFrame.cummax) #####################
################################################################################
Return cumulative maximum over a DataFrame or Series axis.
Returns a DataFrame or Series of the same size containing the cumulative
maximum.
Parameters
----------
axis : {0 or 'index', 1 or 'columns'}, default 0
The index or the name of the axis. 0 is equivalent to None or 'index'.
skipna : boolean, default True
Exclude NA/null values. If an entire row/column is NA, the result
will be NA.
*args, **kwargs :
Additional keywords have no effect but might be accepted for
compatibility with NumPy.
Returns
-------
cummax : Series or DataFrame
Examples
--------
**Series**
>>> s = pd.Series([2, np.nan, 5, -1, 0])
>>> s
0 2.0
1 NaN
2 5.0
3 -1.0
4 0.0
dtype: float64
By default, NA values are ignored.
>>> s.cummax()
0 2.0
1 NaN
2 5.0
3 5.0
4 5.0
dtype: float64
To include NA values in the operation, use ``skipna=False``
>>> s.cummax(skipna=False)
0 2.0
1 NaN
2 NaN
3 NaN
4 NaN
dtype: float64
**DataFrame**
>>> df = pd.DataFrame([[2.0, 1.0],
... [3.0, np.nan],
... [1.0, 0.0]],
... columns=list('AB'))
>>> df
A B
0 2.0 1.0
1 3.0 NaN
2 1.0 0.0
By default, iterates over rows and finds the maximum
in each column. This is equivalent to ``axis=None`` or ``axis='index'``.
>>> df.cummax()
A B
0 2.0 1.0
1 3.0 NaN
2 3.0 1.0
To iterate over columns and find the maximum in each row,
use ``axis=1``
>>> df.cummax(axis=1)
A B
0 2.0 2.0
1 3.0 NaN
2 1.0 1.0
See also
--------
pandas.core.window.Expanding.max : Similar functionality
but ignores ``NaN`` values.
DataFrame.max : Return the maximum over
DataFrame axis.
DataFrame.cummax : Return cumulative maximum over DataFrame axis.
DataFrame.cummin : Return cumulative minimum over DataFrame axis.
DataFrame.cumsum : Return cumulative sum over DataFrame axis.
DataFrame.cumprod : Return cumulative product over DataFrame axis.
################################################################################
################################## Validation ##################################
################################################################################
Errors found:
Errors in parameters section
Parameters {'kwargs', 'args'} not documented
Unknown parameters {'*args, **kwargs :'}
Parameter "*args, **kwargs :" has no type
################################################################################
##################### Docstring (pandas.DataFrame.cummin) #####################
################################################################################
Return cumulative minimum over a DataFrame or Series axis.
Returns a DataFrame or Series of the same size containing the cumulative
minimum.
Parameters
----------
axis : {0 or 'index', 1 or 'columns'}, default 0
The index or the name of the axis. 0 is equivalent to None or 'index'.
skipna : boolean, default True
Exclude NA/null values. If an entire row/column is NA, the result
will be NA.
*args, **kwargs :
Additional keywords have no effect but might be accepted for
compatibility with NumPy.
Returns
-------
cummin : Series or DataFrame
Examples
--------
**Series**
>>> s = pd.Series([2, np.nan, 5, -1, 0])
>>> s
0 2.0
1 NaN
2 5.0
3 -1.0
4 0.0
dtype: float64
By default, NA values are ignored.
>>> s.cummin()
0 2.0
1 NaN
2 2.0
3 -1.0
4 -1.0
dtype: float64
To include NA values in the operation, use ``skipna=False``
>>> s.cummin(skipna=False)
0 2.0
1 NaN
2 NaN
3 NaN
4 NaN
dtype: float64
**DataFrame**
>>> df = pd.DataFrame([[2.0, 1.0],
... [3.0, np.nan],
... [1.0, 0.0]],
... columns=list('AB'))
>>> df
A B
0 2.0 1.0
1 3.0 NaN
2 1.0 0.0
By default, iterates over rows and finds the minimum
in each column. This is equivalent to ``axis=None`` or ``axis='index'``.
>>> df.cummin()
A B
0 2.0 1.0
1 2.0 NaN
2 1.0 0.0
To iterate over columns and find the minimum in each row,
use ``axis=1``
>>> df.cummin(axis=1)
A B
0 2.0 1.0
1 3.0 NaN
2 1.0 0.0
See also
--------
pandas.core.window.Expanding.min : Similar functionality
but ignores ``NaN`` values.
DataFrame.min : Return the minimum over
DataFrame axis.
DataFrame.cummax : Return cumulative maximum over DataFrame axis.
DataFrame.cummin : Return cumulative minimum over DataFrame axis.
DataFrame.cumsum : Return cumulative sum over DataFrame axis.
DataFrame.cumprod : Return cumulative product over DataFrame axis.
################################################################################
################################## Validation ##################################
################################################################################
Errors found:
Errors in parameters section
Parameters {'args', 'kwargs'} not documented
Unknown parameters {'*args, **kwargs :'}
Parameter "*args, **kwargs :" has no type
################################################################################
##################### Docstring (pandas.DataFrame.cumsum) #####################
################################################################################
Return cumulative sum over a DataFrame or Series axis.
Returns a DataFrame or Series of the same size containing the cumulative
sum.
Parameters
----------
axis : {0 or 'index', 1 or 'columns'}, default 0
The index or the name of the axis. 0 is equivalent to None or 'index'.
skipna : boolean, default True
Exclude NA/null values. If an entire row/column is NA, the result
will be NA.
*args, **kwargs :
Additional keywords have no effect but might be accepted for
compatibility with NumPy.
Returns
-------
cumsum : Series or DataFrame
Examples
--------
**Series**
>>> s = pd.Series([2, np.nan, 5, -1, 0])
>>> s
0 2.0
1 NaN
2 5.0
3 -1.0
4 0.0
dtype: float64
By default, NA values are ignored.
>>> s.cumsum()
0 2.0
1 NaN
2 7.0
3 6.0
4 6.0
dtype: float64
To include NA values in the operation, use ``skipna=False``
>>> s.cumsum(skipna=False)
0 2.0
1 NaN
2 NaN
3 NaN
4 NaN
dtype: float64
**DataFrame**
>>> df = pd.DataFrame([[2.0, 1.0],
... [3.0, np.nan],
... [1.0, 0.0]],
... columns=list('AB'))
>>> df
A B
0 2.0 1.0
1 3.0 NaN
2 1.0 0.0
By default, iterates over rows and finds the sum
in each column. This is equivalent to ``axis=None`` or ``axis='index'``.
>>> df.cumsum()
A B
0 2.0 1.0
1 5.0 NaN
2 6.0 1.0
To iterate over columns and find the sum in each row,
use ``axis=1``
>>> df.cumsum(axis=1)
A B
0 2.0 3.0
1 3.0 NaN
2 1.0 1.0
See also
--------
pandas.core.window.Expanding.sum : Similar functionality
but ignores ``NaN`` values.
DataFrame.sum : Return the sum over
DataFrame axis.
DataFrame.cummax : Return cumulative maximum over DataFrame axis.
DataFrame.cummin : Return cumulative minimum over DataFrame axis.
DataFrame.cumsum : Return cumulative sum over DataFrame axis.
DataFrame.cumprod : Return cumulative product over DataFrame axis.
################################################################################
################################## Validation ##################################
################################################################################
Errors found:
Errors in parameters section
Parameters {'args', 'kwargs'} not documented
Unknown parameters {'*args, **kwargs :'}
Parameter "*args, **kwargs :" has no type
################################################################################
##################### Docstring (pandas.DataFrame.cumprod) #####################
################################################################################
Return cumulative product over a DataFrame or Series axis.
Returns a DataFrame or Series of the same size containing the cumulative
product.
Parameters
----------
axis : {0 or 'index', 1 or 'columns'}, default 0
The index or the name of the axis. 0 is equivalent to None or 'index'.
skipna : boolean, default True
Exclude NA/null values. If an entire row/column is NA, the result
will be NA.
*args, **kwargs :
Additional keywords have no effect but might be accepted for
compatibility with NumPy.
Returns
-------
cumprod : Series or DataFrame
Examples
--------
**Series**
>>> s = pd.Series([2, np.nan, 5, -1, 0])
>>> s
0 2.0
1 NaN
2 5.0
3 -1.0
4 0.0
dtype: float64
By default, NA values are ignored.
>>> s.cumprod()
0 2.0
1 NaN
2 10.0
3 -10.0
4 -0.0
dtype: float64
To include NA values in the operation, use ``skipna=False``
>>> s.cumprod(skipna=False)
0 2.0
1 NaN
2 NaN
3 NaN
4 NaN
dtype: float64
**DataFrame**
>>> df = pd.DataFrame([[2.0, 1.0],
... [3.0, np.nan],
... [1.0, 0.0]],
... columns=list('AB'))
>>> df
A B
0 2.0 1.0
1 3.0 NaN
2 1.0 0.0
By default, iterates over rows and finds the product
in each column. This is equivalent to ``axis=None`` or ``axis='index'``.
>>> df.cumprod()
A B
0 2.0 1.0
1 6.0 NaN
2 6.0 0.0
To iterate over columns and find the product in each row,
use ``axis=1``
>>> df.cumprod(axis=1)
A B
0 2.0 2.0
1 3.0 NaN
2 1.0 0.0
See also
--------
pandas.core.window.Expanding.prod : Similar functionality
but ignores ``NaN`` values.
DataFrame.prod : Return the product over
DataFrame axis.
DataFrame.cummax : Return cumulative maximum over DataFrame axis.
DataFrame.cummin : Return cumulative minimum over DataFrame axis.
DataFrame.cumsum : Return cumulative sum over DataFrame axis.
DataFrame.cumprod : Return cumulative product over DataFrame axis.
################################################################################
################################## Validation ##################################
################################################################################
Errors found:
Errors in parameters section
Parameters {'args', 'kwargs'} not documented
Unknown parameters {'*args, **kwargs :'}
Parameter "*args, **kwargs :" has no type
```
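The ``skipna`` behaviour shown in the cumulative examples above can be mirrored in plain Python. This is a simplified model, not the actual pandas implementation: with ``skipna=True`` the running value passes over NaNs, with ``skipna=False`` the first NaN poisons every later position.

```python
import math
import operator

NAN = float("nan")

def _isnan(x):
    return isinstance(x, float) and math.isnan(x)

def cumulative(values, op, skipna=True):
    # NaN slots stay NaN in the output, matching the docstrings above.
    out, running = [], None
    for v in values:
        if _isnan(v):
            out.append(NAN)
            if not skipna:
                running = NAN  # poisons every later position
        elif running is None:
            running = v
            out.append(running)
        elif _isnan(running):
            out.append(NAN)
        else:
            running = op(running, v)
            out.append(running)
    return out

cumulative([2.0, NAN, 5.0, -1.0, 0.0], min)           # -> [2.0, nan, 2.0, -1.0, -1.0]
cumulative([2.0, NAN, 5.0, -1.0, 0.0], operator.add)  # -> [2.0, nan, 7.0, 6.0, 6.0]
```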
If the validation script still gives errors, but you think there is a good reason
to deviate in this case (and there are certainly such cases), please state this
explicitly.
The *args/**kwargs error is a bug in the parameter evaluation | https://api.github.com/repos/pandas-dev/pandas/pulls/20336 | 2018-03-13T20:29:35Z | 2018-03-17T18:04:34Z | 2018-03-17T18:04:34Z | 2018-03-17T19:17:10Z |
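The closing note can be illustrated with a sketch: if a naive docstring validator compares names from ``inspect.signature`` against the left-hand side of each ``Parameters`` line, the starred pair is never normalised to ``args``/``kwargs``, producing exactly the three errors reported above. This is an assumption about the parsing strategy, not the actual ``validate_docstrings.py`` code.

```python
import inspect

def cummin(self, axis=None, skipna=True, *args, **kwargs):
    """Stub carrying the signature under discussion."""

actual = set(inspect.signature(cummin).parameters) - {"self"}
# -> {'axis', 'skipna', 'args', 'kwargs'}

doc_lines = [
    "axis : {0 or 'index', 1 or 'columns'}, default 0",
    "skipna : boolean, default True",
    "*args, **kwargs :",
]
# Keep whatever precedes ' : ' as the parameter name.
documented = {line.split(" : ")[0] for line in doc_lines}

missing = actual - documented   # {'args', 'kwargs'} "not documented"
unknown = documented - actual   # {'*args, **kwargs :'} "unknown parameter"
```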
DOC: update the Period.qyear docstring | diff --git a/pandas/_libs/tslibs/period.pyx b/pandas/_libs/tslibs/period.pyx
index d89c06d43ccb9..5072d196709e6 100644
--- a/pandas/_libs/tslibs/period.pyx
+++ b/pandas/_libs/tslibs/period.pyx
@@ -1463,6 +1463,45 @@ cdef class _Period(object):
@property
def qyear(self):
+ """
+ Fiscal year the Period lies in according to its starting-quarter.
+
+ The `year` and the `qyear` of the period will be the same if the fiscal
+ and calendar years are the same. When they are not, the fiscal year
+ can be different from the calendar year of the period.
+
+ Returns
+ -------
+ int
+ The fiscal year of the period.
+
+ See Also
+ --------
+ Period.year : Return the calendar year of the period.
+
+ Examples
+ --------
+ If the natural and fiscal year are the same, `qyear` and `year` will
+ be the same.
+
+ >>> per = pd.Period('2018Q1', freq='Q')
+ >>> per.qyear
+ 2018
+ >>> per.year
+ 2018
+
+ If the fiscal year starts in April (`Q-MAR`), the first quarter of
+ 2018 will start in April 2017. `year` will then be 2018, but `qyear`
+ will be the fiscal year, 2018.
+
+ >>> per = pd.Period('2018Q1', freq='Q-MAR')
+ >>> per.start_time
+ Timestamp('2017-04-01 00:00:00')
+ >>> per.qyear
+ 2018
+ >>> per.year
+ 2017
+ """
base, mult = get_freq_code(self.freq)
return pqyear(self.ordinal, base)
| Checklist for the pandas documentation sprint (ignore this if you are doing
an unrelated PR):
PR title is "DOC: update the <your-function-or-method> docstring"
- [x] The validation script passes: `scripts/validate_docstrings.py <your-function-or-method>`
- [x] The PEP8 style check passes: `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] The html version looks good: `python doc/make.py --single <your-function-or-method>`
- [ ] It has been proofread on language by another sprint participant
Please include the output of the validation script below between the "```" ticks:
```
(pandas_dev) ani@ani-Lenovo-G50-80:~/github/pandas/scripts$ python validate_docstrings.py
- [ ] pandas.Period.qyear
################################################################################
####################### Docstring (pandas.Period.qyear) #######################
################################################################################
Return the year of the given date.
This function simply finds the year of the given date that the period falls in.
Returns
-------
int
See Also
--------
Period.qyear : Return the year of the date.
Examples
--------
>>> p = pd.Period("2014-9-21", freq="A")
>>> p.qyear
2014
################################################################################
################################## Validation ##################################
################################################################################
```
If the validation script still gives errors, but you think there is a good reason
to deviate in this case (and there are certainly such cases), please state this
explicitly. | https://api.github.com/repos/pandas-dev/pandas/pulls/20333 | 2018-03-13T18:20:20Z | 2018-07-08T19:42:56Z | 2018-07-08T19:42:56Z | 2018-07-08T19:47:32Z |
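The fiscal-year rule described in the ``qyear`` docstring above can be sketched as a plain-Python helper. ``fiscal_year`` is a hypothetical function for illustration, not a pandas API; the expected values follow the diff's own examples (a ``Q-MAR`` fiscal year labelled 2018 runs from April 2017 through March 2018).

```python
def fiscal_year(year, month, year_end_month=12):
    # A month after the fiscal year-end belongs to the *next* fiscal year.
    return year if month <= year_end_month else year + 1

fiscal_year(2017, 4, year_end_month=3)   # April 2017  -> FY 2018
fiscal_year(2018, 3, year_end_month=3)   # March 2018  -> FY 2018
fiscal_year(2018, 6)                     # calendar and fiscal year agree
```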
DOC : Update the pandas.Period.week docstring | diff --git a/pandas/_libs/tslibs/period.pyx b/pandas/_libs/tslibs/period.pyx
index eda0a8492ad24..9270b8baece08 100644
--- a/pandas/_libs/tslibs/period.pyx
+++ b/pandas/_libs/tslibs/period.pyx
@@ -1305,6 +1305,32 @@ cdef class _Period(object):
@property
def week(self):
+ """
+ Get the week of the year of the given Period.
+
+ Returns
+ -------
+ int
+
+ See Also
+ --------
+ Period.dayofweek : Get the day component of the Period.
+ Period.weekday : Get the day component of the Period.
+
+ Examples
+ --------
+ >>> p = pd.Period("2018-03-11", "H")
+ >>> p.week
+ 10
+
+ >>> p = pd.Period("2018-02-01", "D")
+ >>> p.week
+ 5
+
+ >>> p = pd.Period("2018-01-06", "D")
+ >>> p.week
+ 1
+ """
return self.weekofyear
@property
| ```
# Period.week
################################################################################
######################## Docstring (pandas.Period.week) ########################
################################################################################
Get the week of the year that the Period falls in.
Returns
-------
int
See Also
--------
Period.dayofweek : Get the day component of the Period.
Period.weekday : Get the day component of the Period.
Examples
--------
>>> p = pd.Period("2018-03-11")
>>> p.week
10
################################################################################
################################## Validation ##################################
################################################################################
``` | https://api.github.com/repos/pandas-dev/pandas/pulls/20331 | 2018-03-13T16:33:06Z | 2018-03-17T10:31:17Z | 2018-03-17T10:31:17Z | 2018-03-17T10:56:41Z |
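The week numbers in the examples above can be cross-checked with the standard library: for these dates ``Period.week`` agrees with the ISO week number that ``date.isocalendar()`` reports.

```python
from datetime import date

date(2018, 3, 11).isocalendar()[1]  # -> 10
date(2018, 2, 1).isocalendar()[1]   # -> 5
date(2018, 1, 6).isocalendar()[1]   # -> 1
```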
CI: Fix linting on master [ci skip] | diff --git a/pandas/_libs/tslibs/period.pyx b/pandas/_libs/tslibs/period.pyx
index ab7108fb60bd7..d7e66bf4c9320 100644
--- a/pandas/_libs/tslibs/period.pyx
+++ b/pandas/_libs/tslibs/period.pyx
@@ -1300,7 +1300,7 @@ cdef class _Period(object):
Returns
-------
- int
+ int
The second of the Period (ranges from 0 to 59).
See Also
| [ci skip]
I broke master again. Sorry. | https://api.github.com/repos/pandas-dev/pandas/pulls/20330 | 2018-03-13T15:29:48Z | 2018-03-13T15:30:04Z | 2018-03-13T15:30:04Z | 2018-03-13T15:30:07Z |
DOC: Formatting for DtypeWarning Docstring [ci skip] | diff --git a/pandas/errors/__init__.py b/pandas/errors/__init__.py
index 16654f0227182..ff34df64c88d2 100644
--- a/pandas/errors/__init__.py
+++ b/pandas/errors/__init__.py
@@ -68,8 +68,7 @@ class DtypeWarning(Warning):
... 'b': ['b'] * 300000})
>>> df.to_csv('test.csv', index=False)
>>> df2 = pd.read_csv('test.csv')
-
- DtypeWarning: Columns (0) have mixed types
+ ... # DtypeWarning: Columns (0) have mixed types
Important to notice that ``df2`` will contain both `str` and `int` for the
same input, '1'.
| xref : https://github.com/pandas-dev/pandas/pull/20208

cc @jorisvandenbossche @hissashirocha | https://api.github.com/repos/pandas-dev/pandas/pulls/20329 | 2018-03-13T15:03:10Z | 2018-03-13T15:08:00Z | 2018-03-13T15:08:00Z | 2018-03-13T15:08:03Z |
CLN: Fixed line length | diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index ff3f377e07f98..79265e35ef6e6 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -4459,9 +4459,9 @@ def pivot(self, index=None, columns=None, values=None):
Return reshaped DataFrame organized by given index / column values.
Reshape data (produce a "pivot" table) based on column values. Uses
- unique values from specified `index` / `columns` to form axes of the resulting
- DataFrame. This function does not support data aggregation, multiple
- values will result in a MultiIndex in the columns. See the
+ unique values from specified `index` / `columns` to form axes of the
+ resulting DataFrame. This function does not support data aggregation,
+ multiple values will result in a MultiIndex in the columns. See the
:ref:`User Guide <reshaping>` for more on reshaping.
Parameters
| This is failing master right now. | https://api.github.com/repos/pandas-dev/pandas/pulls/20327 | 2018-03-13T11:36:29Z | 2018-03-13T11:38:22Z | 2018-03-13T11:38:22Z | 2018-03-13T11:38:31Z |
DOC: update the Period.start_time attribute docstring | diff --git a/pandas/_libs/tslibs/period.pyx b/pandas/_libs/tslibs/period.pyx
index d7e66bf4c9320..2baf8c47ad7e3 100644
--- a/pandas/_libs/tslibs/period.pyx
+++ b/pandas/_libs/tslibs/period.pyx
@@ -1164,6 +1164,32 @@ cdef class _Period(object):
@property
def start_time(self):
+ """
+ Get the Timestamp for the start of the period.
+
+ Returns
+ -------
+ Timestamp
+
+ See also
+ --------
+ Period.end_time : Return the end Timestamp.
+ Period.dayofyear : Return the day of year.
+ Period.daysinmonth : Return the days in that month.
+ Period.dayofweek : Return the day of the week.
+
+ Examples
+ --------
+ >>> period = pd.Period('2012-1-1', freq='D')
+ >>> period
+ Period('2012-01-01', 'D')
+
+ >>> period.start_time
+ Timestamp('2012-01-01 00:00:00')
+
+ >>> period.end_time
+ Timestamp('2012-01-01 23:59:59.999999999')
+ """
return self.to_timestamp(how='S')
@property
| Checklist for the pandas documentation sprint (ignore this if you are doing
an unrelated PR):
- [x] PR title is "DOC: update the <your-function-or-method> docstring"
- [x] The validation script passes: `scripts/validate_docstrings.py <your-function-or-method>`
- [x] The PEP8 style check passes: `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] The html version looks good: `python doc/make.py --single <your-function-or-method>`
- [ ] It has been proofread on language by another sprint participant
Please include the output of the validation script below between the "```" ticks:
```
(pandas_dev) root@kalih:~/pythonpanda/pandas# python scripts/validate_docstrings.py pandas.Period.start_time
################################################################################
##################### Docstring (pandas.Period.start_time) #####################
################################################################################
Get the Timestamp for the start of the period.
Returns
-------
Timestamp
See also
--------
Period.dayofyear : Return the day of year.
Period.daysinmonth : Return the days in that month.
Period.dayofweek : Return the day of the week.
Examples
--------
>>> period = pd.Period('2012-1-1', freq='D')
>>> period
Period('2012-01-01', 'D')
>>> period.start_time
Timestamp('2012-01-01 00:00:00')
>>> period.end_time
Timestamp('2012-01-01 23:59:59.999999999')
################################################################################
################################## Validation ##################################
################################################################################
Errors found:
No extended summary found
```
If the validation script still gives errors, but you think there is a good reason
to deviate in this case (and there are certainly such cases), please state this
explicitly.
| https://api.github.com/repos/pandas-dev/pandas/pulls/20325 | 2018-03-13T09:36:44Z | 2018-03-14T14:38:18Z | 2018-03-14T14:38:18Z | 2018-03-14T14:38:18Z |
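The ``start_time``/``end_time`` pair shown in the examples can be mimicked with the standard library, at microsecond rather than pandas' nanosecond resolution (hence ``23:59:59.999999`` instead of ``23:59:59.999999999``).

```python
from datetime import datetime, timedelta

ts = datetime(2012, 1, 1, 14, 30)
start = ts.replace(hour=0, minute=0, second=0, microsecond=0)
end = start + timedelta(days=1) - timedelta(microseconds=1)
# start -> 2012-01-01 00:00:00
# end   -> 2012-01-01 23:59:59.999999
```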
DOC: update the Period.start_time attribute docstring | diff --git a/pandas/_libs/tslibs/period.pyx b/pandas/_libs/tslibs/period.pyx
index 89f38724cde1a..b964c6126cd10 100644
--- a/pandas/_libs/tslibs/period.pyx
+++ b/pandas/_libs/tslibs/period.pyx
@@ -1164,6 +1164,36 @@ cdef class _Period(object):
@property
def start_time(self):
+ """
+ Return the Timestamp.
+
+ This attribute returns the timestamp of the particular
+ starting date for the given period.
+
+ See also
+ --------
+ Period.dayofyear
+ Return the day of year.
+ Period.daysinmonth
+ Return the days in that month.
+ Period.dayofweek
+ Return the day of the week.
+
+ Returns
+ -------
+ Timestamp
+
+ Examples
+ --------
+ >>> period = pd.Period('2012-1-1', freq='D')
+ >>> period
+ Period('2012-01-01', 'D')
+
+ >>> period.start_time
+ Timestamp('2012-01-01 00:00:00')
+ >>> period.end_time
+ Timestamp('2012-01-01 23:59:59.999999999')
+ """
return self.to_timestamp(how='S')
@property
@@ -1246,6 +1276,38 @@ cdef class _Period(object):
@property
def dayofweek(self):
+ """
+ Return the day of the week.
+
+ This attribute returns the day of the week on which the particular
+ starting date for the given period occurs with Monday=0, Sunday=6.
+
+ Returns
+ -------
+ int
+ Ranges from 0 to 6 (included).
+
+ See also
+ --------
+ Period.dayofyear
+ Return the day of year.
+ Period.daysinmonth
+ Return the days in that month.
+
+ Examples
+ --------
+ >>> period1 = pd.Period('2012-1-1 19:00', freq='H')
+ >>> period1
+ Period('2012-01-01 19:00', 'H')
+ >>> period1.dayofweek
+ 6
+
+ >>> period2 = pd.Period('2013-1-9 11:00', freq='H')
+ >>> period2
+ Period('2013-01-09 11:00', 'H')
+ >>> period2.dayofweek
+ 2
+ """
base, mult = get_freq_code(self.freq)
return pweekday(self.ordinal, base)
| Checklist for the pandas documentation sprint (ignore this if you are doing
an unrelated PR):
- [x] PR title is "DOC: update the <your-function-or-method> docstring"
- [x] The validation script passes: `scripts/validate_docstrings.py <your-function-or-method>`
- [x] The PEP8 style check passes: `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] The html version looks good: `python doc/make.py --single <your-function-or-method>`
- [ ] It has been proofread on language by another sprint participant
Please include the output of the validation script below between the "```" ticks:
```
(pandas_dev) root@kalih:~/pythonpanda/pandas# python scripts/validate_docstrings.py pandas.Period.start_time
################################################################################
##################### Docstring (pandas.Period.start_time) #####################
################################################################################
Return the Timestamp.
This attribute returns the timestamp of the particular
starting date for the given period.
See also
--------
Period.dayofyear
Return the day of year.
Period.daysinmonth
Return the days in that month.
Period.dayofweek
Return the day of the week.
Returns
-------
Timestamp
Examples
--------
>>> period = pd.Period('2012-1-1', freq='D')
>>> period
Period('2012-01-01', 'D')
>>> period.start_time
Timestamp('2012-01-01 00:00:00')
>>> period.end_time
Timestamp('2012-01-01 23:59:59.999999999')
################################################################################
################################## Validation ##################################
################################################################################
Docstring for "pandas.Period.start_time" correct. :)
```
If the validation script still gives errors, but you think there is a good reason
to deviate in this case (and there are certainly such cases), please state this
explicitly.
| https://api.github.com/repos/pandas-dev/pandas/pulls/20323 | 2018-03-13T09:05:07Z | 2018-03-13T09:23:06Z | null | 2018-03-13T09:23:12Z |
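The ``dayofweek`` values in the diff above use the Monday=0 through Sunday=6 convention, which is the same one ``datetime.date.weekday()`` uses, so the examples can be verified with the standard library.

```python
from datetime import date

date(2012, 1, 1).weekday()  # -> 6 (Sunday)
date(2013, 1, 9).weekday()  # -> 2 (Wednesday)
```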
DOC: update the Index.get duplicates docstring | diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index e40e084fe94d6..545324981c531 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -1810,6 +1810,10 @@ def get_duplicates(self):
[2.0, 3.0]
>>> pd.Index(['a', 'b', 'b', 'c', 'c', 'c', 'd']).get_duplicates()
['b', 'c']
+
+ Note that for a DatetimeIndex, it does not return a list but a new
+ DatetimeIndex:
+
>>> dates = pd.to_datetime(['2018-01-01', '2018-01-02', '2018-01-03',
... '2018-01-03', '2018-01-04', '2018-01-04'],
... format='%Y-%m-%d')
@@ -1830,11 +1834,6 @@ def get_duplicates(self):
... format='%Y-%m-%d')
>>> pd.Index(dates).get_duplicates()
DatetimeIndex([], dtype='datetime64[ns]', freq=None)
-
- Notes
- -----
- In case of datetime-like indexes, the function is overridden where the
- result is converted to DatetimeIndex.
"""
from collections import defaultdict
counter = defaultdict(lambda: 0)
| Checklist for the pandas documentation sprint (ignore this if you are doing
an unrelated PR):
- [x] PR title is "DOC: update the <your-function-or-method> docstring"
- [x] The validation script passes: `scripts/validate_docstrings.py <your-function-or-method>`
- [x] The PEP8 style check passes: `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] The html version looks good: `python doc/make.py --single <your-function-or-method>`
- [x] It has been proofread on language by another sprint participant
Please include the output of the validation script below between the "```" ticks:
```
################################################################################
################### Docstring (pandas.Index.get_duplicates) ###################
################################################################################
Extract duplicated index elements.
Returns a sorted list of index elements which appear more than once in
the index.
Returns
-------
array-like
List of duplicated indexes.
See Also
--------
Index.duplicated : Return boolean array denoting duplicates.
Index.drop_duplicates : Return Index with duplicates removed.
Examples
--------
Works on different Index of types.
>>> pd.Index([1, 2, 2, 3, 3, 3, 4]).get_duplicates()
[2, 3]
>>> pd.Index([1., 2., 2., 3., 3., 3., 4.]).get_duplicates()
[2.0, 3.0]
>>> pd.Index(['a', 'b', 'b', 'c', 'c', 'c', 'd']).get_duplicates()
['b', 'c']
>>> dates = pd.to_datetime(['2018-01-01', '2018-01-02', '2018-01-03',
... '2018-01-03', '2018-01-04', '2018-01-04'],
... format='%Y-%m-%d')
>>> pd.Index(dates).get_duplicates()
DatetimeIndex(['2018-01-03', '2018-01-04'],
dtype='datetime64[ns]', freq=None)
Sorts duplicated elements even when indexes are unordered.
>>> pd.Index([1, 2, 3, 2, 3, 4, 3]).get_duplicates()
[2, 3]
Return empty array-like structure when all elements are unique.
>>> pd.Index([1, 2, 3, 4]).get_duplicates()
[]
>>> dates = pd.to_datetime(['2018-01-01', '2018-01-02', '2018-01-03'],
... format='%Y-%m-%d')
>>> pd.Index(dates).get_duplicates()
DatetimeIndex([], dtype='datetime64[ns]', freq=None)
################################################################################
################################## Validation ##################################
################################################################################
Docstring for "pandas.Index.get_duplicates" correct. :)
```
If the validation script still gives errors, but you think there is a good reason
to deviate in this case (and there are certainly such cases), please state this
explicitly.
Checklist for other PRs (remove this part if you are doing a PR for the pandas documentation sprint):
- [ ] closes #xxxx
- [ ] tests added / passed
- [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/20322 | 2018-03-13T08:31:25Z | 2018-03-13T09:20:16Z | 2018-03-13T09:20:16Z | 2018-03-13T09:20:33Z |
DOC: update the Index.get duplicates docstring | diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index fd1d3690e8a89..afe1a2cd5cb94 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -1710,6 +1710,54 @@ def _invalid_indexer(self, form, key):
kind=type(key)))
def get_duplicates(self):
+ """
+ Extract duplicated index elements.
+
+ Returns a sorted list of index elements which appear more than once in
+ the index.
+
+ Returns
+ -------
+ array-like
+ List of duplicated indexes.
+
+ See Also
+ --------
+ Index.duplicated : Return boolean array denoting duplicates.
+ Index.drop_duplicates : Return Index with duplicates removed.
+
+ Examples
+ --------
+
+ Works on Indexes of different types.
+
+ >>> pd.Index([1, 2, 2, 3, 3, 3, 4]).get_duplicates()
+ [2, 3]
+ >>> pd.Index([1., 2., 2., 3., 3., 3., 4.]).get_duplicates()
+ [2.0, 3.0]
+ >>> pd.Index(['a', 'b', 'b', 'c', 'c', 'c', 'd']).get_duplicates()
+ ['b', 'c']
+ >>> dates = pd.to_datetime(['2018-01-01', '2018-01-02', '2018-01-03',
+ ... '2018-01-03', '2018-01-04', '2018-01-04'],
+ ... format='%Y-%m-%d')
+ >>> pd.Index(dates).get_duplicates()
+ DatetimeIndex(['2018-01-03', '2018-01-04'],
+ dtype='datetime64[ns]', freq=None)
+
+ Sorts duplicated elements even when indexes are unordered.
+
+ >>> pd.Index([1, 2, 3, 2, 3, 4, 3]).get_duplicates()
+ [2, 3]
+
+ Return empty array-like structure when all elements are unique.
+
+ >>> pd.Index([1, 2, 3, 4]).get_duplicates()
+ []
+ >>> dates = pd.to_datetime(['2018-01-01', '2018-01-02', '2018-01-03'],
+ ... format='%Y-%m-%d')
+ >>> pd.Index(dates).get_duplicates()
+ DatetimeIndex([], dtype='datetime64[ns]', freq=None)
+ """
from collections import defaultdict
counter = defaultdict(lambda: 0)
for k in self.values:
| Checklist for the pandas documentation sprint (ignore this if you are doing
an unrelated PR):
- [x] PR title is "DOC: update the <your-function-or-method> docstring"
- [x] The validation script passes: `scripts/validate_docstrings.py <your-function-or-method>`
- [x] The PEP8 style check passes: `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] The html version looks good: `python doc/make.py --single <your-function-or-method>`
- [x] It has been proofread on language by another sprint participant
Please include the output of the validation script below between the "```" ticks:
```
################################################################################
################### Docstring (pandas.Index.get_duplicates) ###################
################################################################################
Extract duplicated index elements.
Returns a sorted list of index elements which appear more than once in
the index.
Returns
-------
array-like
List of duplicated indexes.
See Also
--------
Index.duplicated : Return boolean array denoting duplicates.
Index.drop_duplicates : Return Index with duplicates removed.
Examples
--------
Works on different Index of types.
>>> pd.Index([1, 2, 2, 3, 3, 3, 4]).get_duplicates()
[2, 3]
>>> pd.Index([1., 2., 2., 3., 3., 3., 4.]).get_duplicates()
[2.0, 3.0]
>>> pd.Index(['a', 'b', 'b', 'c', 'c', 'c', 'd']).get_duplicates()
['b', 'c']
>>> dates = pd.to_datetime(['2018-01-01', '2018-01-02', '2018-01-03',
... '2018-01-03', '2018-01-04', '2018-01-04'],
... format='%Y-%m-%d')
>>> pd.Index(dates).get_duplicates()
DatetimeIndex(['2018-01-03', '2018-01-04'],
dtype='datetime64[ns]', freq=None)
Sorts duplicated elements even when indexes are unordered.
>>> pd.Index([1, 2, 3, 2, 3, 4, 3]).get_duplicates()
[2, 3]
Return empty array-like structure when all elements are unique.
>>> pd.Index([1, 2, 3, 4]).get_duplicates()
[]
>>> dates = pd.to_datetime(['2018-01-01', '2018-01-02', '2018-01-03'],
... format='%Y-%m-%d')
>>> pd.Index(dates).get_duplicates()
DatetimeIndex([], dtype='datetime64[ns]', freq=None)
################################################################################
################################## Validation ##################################
################################################################################
Docstring for "pandas.Index.get_duplicates" correct. :)
```
If the validation script still gives errors, but you think there is a good reason
to deviate in this case (and there are certainly such cases), please state this
explicitly.
Checklist for other PRs (remove this part if you are doing a PR for the pandas documentation sprint):
- [ ] closes #xxxx
- [ ] tests added / passed
- [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/20321 | 2018-03-13T08:24:10Z | 2018-03-13T08:26:17Z | null | 2018-03-13T08:32:13Z |
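The implementation fragment at the end of the diff above (the ``defaultdict`` counter) can be completed into a self-contained sketch of what ``get_duplicates`` computes: the sorted elements that appear more than once.

```python
from collections import defaultdict

def get_duplicates(values):
    # Count every element, keep those seen more than once, return sorted.
    counter = defaultdict(int)
    for k in values:
        counter[k] += 1
    return sorted(k for k, count in counter.items() if count > 1)

get_duplicates([1, 2, 3, 2, 3, 4, 3])            # -> [2, 3]
get_duplicates(['a', 'b', 'b', 'c', 'c', 'c'])   # -> ['b', 'c']
get_duplicates([1, 2, 3, 4])                     # -> []
```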
Fixed str.split IndexError with Empty String and expand=True | diff --git a/doc/source/whatsnew/v0.23.0.txt b/doc/source/whatsnew/v0.23.0.txt
index f686a042c1a74..45dd61e4e7571 100644
--- a/doc/source/whatsnew/v0.23.0.txt
+++ b/doc/source/whatsnew/v0.23.0.txt
@@ -936,6 +936,7 @@ Indexing
- Bug in :class:`IntervalIndex` where set operations that returned an empty ``IntervalIndex`` had the wrong dtype (:issue:`19101`)
- Bug in :meth:`DataFrame.drop_duplicates` where no ``KeyError`` is raised when passing in columns that don't exist on the ``DataFrame`` (issue:`19726`)
- Bug in ``Index`` subclasses constructors that ignore unexpected keyword arguments (:issue:`19348`)
+- Bug in :meth:`Series.str.split` where splitting an empty string without a ``pat`` and with ``expand=True`` would raise an ``IndexError`` (:issue:`20002`)
MultiIndex
diff --git a/pandas/core/strings.py b/pandas/core/strings.py
index 11081535cf63f..924550b133c3c 100644
--- a/pandas/core/strings.py
+++ b/pandas/core/strings.py
@@ -1615,7 +1615,8 @@ def cons_row(x):
if result:
# propagate nan values to match longest sequence (GH 18450)
max_len = max(len(x) for x in result)
- result = [x * max_len if x[0] is np.nan else x for x in result]
+ result = [x * max_len if len(x) and x[0] is np.nan else x for x
+ in result]
if not isinstance(expand, bool):
raise ValueError("expand must be True or False")
diff --git a/pandas/tests/test_strings.py b/pandas/tests/test_strings.py
index a878d6ed7b052..8ab1ee5574aab 100644
--- a/pandas/tests/test_strings.py
+++ b/pandas/tests/test_strings.py
@@ -2144,6 +2144,13 @@ def test_split_nan_expand(self):
# tm.assert_frame_equal does not differentiate
assert all(np.isnan(x) for x in result.iloc[1])
+ def test_split_no_pat_and_expand(self):
+ # GH 20002
+ df = DataFrame({'foo': ['']})
+ exp = DataFrame([[]])
+ res = df['foo'].str.split(expand=True)
+ tm.assert_frame_equal(res, exp)
+
def test_split_with_name(self):
# GH 12617
|
- [x] closes #20002
- [x] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/20320 | 2018-03-13T07:37:37Z | 2018-03-13T08:22:23Z | null | 2018-05-14T21:11:47Z |
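The one-line fix above guards ``x[0]`` behind ``len(x)``. A pure-Python reduction of the padding step shows why the guard matters; this is a sketch of the idea, not the actual pandas code path, and uses a plain ``float("nan")`` sentinel in place of ``np.nan``.

```python
NA = float("nan")

def pad_rows(result):
    # Rows that start with NaN are padded to the longest row's length.
    # ``len(x) and`` short-circuits before ``x[0]``, so an empty row
    # (as produced by ''.split()) no longer raises IndexError.
    max_len = max(len(x) for x in result)
    return [x * max_len if len(x) and x[0] is NA else x for x in result]

pad_rows([['a', 'b'], [NA], []])  # -> [['a', 'b'], [NA, NA], []]
```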
DOC : Update the Period.second docstring | diff --git a/pandas/_libs/tslibs/period.pyx b/pandas/_libs/tslibs/period.pyx
index eda0a8492ad24..1c543eed41d83 100644
--- a/pandas/_libs/tslibs/period.pyx
+++ b/pandas/_libs/tslibs/period.pyx
@@ -1295,6 +1295,25 @@ cdef class _Period(object):
@property
def second(self):
+ """
+ Get the second component of the Period.
+
+ Returns
+ -------
+ int
+ The second of the Period (ranges from 0 to 59).
+
+ See Also
+ --------
+ Period.hour : Get the hour component of the Period.
+ Period.minute : Get the minute component of the Period.
+
+ Examples
+ --------
+ >>> p = pd.Period("2018-03-11 13:03:12.050000")
+ >>> p.second
+ 12
+ """
base, mult = get_freq_code(self.freq)
return psecond(self.ordinal, base)
| ```
# Period.second
############################################################################
####################### Docstring (pandas.Period.second) #######################
################################################################################
Get second of the minute component of the Period.
Returns
-------
int
The second as an integer, between 0 and 59.
See Also
--------
Period.hour : Get the hour component of the Period.
Period.minute : Get the minute component of the Period.
Examples
--------
>>> p = pd.Period("2018-03-11 13:03:12.050000")
>>> p.second
12
################################################################################
################################## Validation ##################################
################################################################################
``` | https://api.github.com/repos/pandas-dev/pandas/pulls/20319 | 2018-03-13T02:58:20Z | 2018-03-13T11:41:20Z | 2018-03-13T11:41:20Z | 2018-03-17T11:02:10Z |
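The example value can be cross-checked with the standard library, which uses the same component convention: the ``second`` attribute ignores the fractional part of the timestamp.

```python
from datetime import datetime

ts = datetime(2018, 3, 11, 13, 3, 12, 50000)
ts.second  # -> 12
ts.minute  # -> 3
ts.hour    # -> 13
```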
DOC: update the day_name function docstring | diff --git a/pandas/core/indexes/datetimes.py b/pandas/core/indexes/datetimes.py
index b9c4b59536d0c..89f38824065bd 100644
--- a/pandas/core/indexes/datetimes.py
+++ b/pandas/core/indexes/datetimes.py
@@ -2128,19 +2128,34 @@ def month_name(self, locale=None):
def day_name(self, locale=None):
"""
- Return the day names of the DateTimeIndex with specified locale.
+ Return the day names.
+
+ Provides the day names of the DateTimeIndex, in the language determined by the specified locale.
Parameters
----------
locale : string, default None (English locale)
- locale determining the language in which to return the day name
-
+ The locale determines the language in which the day name should be returned.
+
Returns
-------
- month_names : Index
- Index of day names
-
- .. versionadded:: 0.23.0
+ day_names : Index
+ Index of day names.
+
+ See Also
+ --------
+ month_name : Return the month names.
+
+ Examples
+ --------
+ >>> idx = pd.date_range(start='2018-01-01', freq='D', periods=3)
+ >>> idx.day_name()
+ Index(['Monday', 'Tuesday', 'Wednesday'], dtype='object')
"""
values = self.asi8
if self.tz is not None:
| Checklist for the pandas documentation sprint (ignore this if you are doing
an unrelated PR):
- [x] PR title is "DOC: update the <your-function-or-method> docstring"
- [x] The validation script passes: `scripts/validate_docstrings.py <your-function-or-method>`
- [x] The PEP8 style check passes: `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] The html version looks good: `python doc/make.py --single <your-function-or-method>`
- [x] It has been proofread on language by another sprint participant
Please include the output of the validation script below between the "```" ticks:
```
Docstring for "pandas.DatetimeIndex.day_name" correct. :)
```
If the validation script still gives errors, but you think there is a good reason
to deviate in this case (and there are certainly such cases), please state this
explicitly.
Checklist for other PRs (remove this part if you are doing a PR for the pandas documentation sprint):
- [ ] closes #xxxx
- [ ] tests added / passed
- [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/20316 | 2018-03-12T20:52:17Z | 2018-07-08T19:59:45Z | null | 2018-07-08T19:59:46Z |
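Since the Examples section in this PR narrates the implementation rather than showing usage, here is a runnable sketch of what a `day_name` example could look like (2018-01-01 fell on a Monday; assumes a standard pandas install):

```python
import pandas as pd

idx = pd.date_range(start="2018-01-01", freq="D", periods=3)
names = idx.day_name()

# Day names follow the calendar: Jan 1-3 2018 were Mon, Tue, Wed.
assert list(names) == ["Monday", "Tuesday", "Wednesday"]
```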
DOC: update the pandas.Series.add_suffix docstring | diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index ca54b744f02a4..6a68fccb46eac 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -2951,7 +2951,8 @@ def add_prefix(self, prefix):
See Also
--------
- Series.add_suffix: Suffix labels with string `suffix`.
+ Series.add_suffix: Suffix row labels with string `suffix`.
+ DataFrame.add_suffix: Suffix column labels with string `suffix`.
Examples
--------
@@ -2990,15 +2991,57 @@ def add_prefix(self, prefix):
def add_suffix(self, suffix):
"""
- Concatenate suffix string with panel items names.
+ Suffix labels with string `suffix`.
+
+ For Series, the row labels are suffixed.
+ For DataFrame, the column labels are suffixed.
Parameters
----------
- suffix : string
+ suffix : str
+ The string to add after each label.
Returns
-------
- with_suffix : type of caller
+ Series or DataFrame
+ New Series or DataFrame with updated labels.
+
+ See Also
+ --------
+ Series.add_prefix: Prefix row labels with string `prefix`.
+ DataFrame.add_prefix: Prefix column labels with string `prefix`.
+
+ Examples
+ --------
+ >>> s = pd.Series([1, 2, 3, 4])
+ >>> s
+ 0 1
+ 1 2
+ 2 3
+ 3 4
+ dtype: int64
+
+ >>> s.add_suffix('_item')
+ 0_item 1
+ 1_item 2
+ 2_item 3
+ 3_item 4
+ dtype: int64
+
+ >>> df = pd.DataFrame({'A': [1, 2, 3, 4], 'B': [3, 4, 5, 6]})
+ >>> df
+ A B
+ 0 1 3
+ 1 2 4
+ 2 3 5
+ 3 4 6
+
+ >>> df.add_suffix('_col')
+ A_col B_col
+ 0 1 3
+ 1 2 4
+ 2 3 5
+ 3 4 6
"""
new_data = self._data.add_suffix(suffix)
return self._constructor(new_data).__finalize__(self)
| Checklist for the pandas documentation sprint (ignore this if you are doing
an unrelated PR):
- [x] PR title is "DOC: update the <your-function-or-method> docstring"
- [x] The validation script passes: `scripts/validate_docstrings.py <your-function-or-method>`
- [x] The PEP8 style check passes: `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] The html version looks good: `python doc/make.py --single <your-function-or-method>`
- [x] It has been proofread on language by another sprint participant
Please include the output of the validation script below between the "```" ticks:
```
################################################################################
##################### Docstring (pandas.Series.add_suffix) #####################
################################################################################
Add a suffix string to panel items names.
Parameters
----------
suffix : str
The string to add after each item name.
Returns
-------
Series
Original Series with updated item names.
See Also
--------
pandas.Series.add_prefix: Concatenate prefix string with panel items names.
Examples
--------
>>> s = pd.Series([1,2,3,4])
>>> s
0 1
1 2
2 3
3 4
dtype: int64
>>> s.add_suffix('_item')
0_item 1
1_item 2
2_item 3
3_item 4
dtype: int64
################################################################################
################################## Validation ##################################
################################################################################
Errors found:
No extended summary found
```
If the validation script still gives errors, but you think there is a good reason
to deviate in this case (and there are certainly such cases), please state this
explicitly. | https://api.github.com/repos/pandas-dev/pandas/pulls/20315 | 2018-03-12T20:36:37Z | 2018-03-13T17:54:33Z | 2018-03-13T17:54:33Z | 2018-03-13T17:54:40Z |
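The Series/DataFrame behaviour the new `add_suffix` docstring describes (row labels vs. column labels) condenses to a quick self-check; a sketch, assuming a standard pandas install:

```python
import pandas as pd

# Series: row labels are suffixed.
s = pd.Series([1, 2, 3, 4])
assert list(s.add_suffix("_item").index) == ["0_item", "1_item", "2_item", "3_item"]

# DataFrame: column labels are suffixed; the data itself is unchanged.
df = pd.DataFrame({"A": [1, 2], "B": [3, 4]})
out = df.add_suffix("_col")
assert list(out.columns) == ["A_col", "B_col"]
assert out["A_col"].tolist() == [1, 2]
```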
Doc : update the pandas.Period.minute docstring | diff --git a/pandas/_libs/tslibs/period.pyx b/pandas/_libs/tslibs/period.pyx
index 7763fe7b9008d..e82a149f5745b 100644
--- a/pandas/_libs/tslibs/period.pyx
+++ b/pandas/_libs/tslibs/period.pyx
@@ -1246,6 +1246,25 @@ cdef class _Period(object):
@property
def minute(self):
+ """
+ Get minute of the hour component of the Period.
+
+ Returns
+ -------
+ int
+ The minute as an integer, between 0 and 59.
+
+ See Also
+ --------
+ Period.hour : Get the hour component of the Period.
+ Period.second : Get the second component of the Period.
+
+ Examples
+ --------
+ >>> p = pd.Period("2018-03-11 13:03:12.050000")
+ >>> p.minute
+ 3
+ """
base, mult = get_freq_code(self.freq)
return pminute(self.ordinal, base)
| # Period.minute
################################################################################
####################### Docstring (pandas.Period.minute) #######################
################################################################################
Get minute of the hour component of the Period.
Returns
-------
int
The minute as an integer, between 0 and 59.
See Also
--------
Period.hour : Get the hour of the Period.
Period.second : Get the second of the Period.
Examples
--------
>>> p = pd.Period("2018-03-11 13:03:12.050000")
>>> p.minute
3
################################################################################
################################## Validation ##################################
################################################################################
| https://api.github.com/repos/pandas-dev/pandas/pulls/20314 | 2018-03-12T20:31:31Z | 2018-03-12T21:16:30Z | 2018-03-12T21:16:30Z | 2018-03-13T02:18:14Z |
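As a complement to the diff's example, the `minute` component agrees with the minute of the period's start time; a small sketch (assuming a standard pandas install):

```python
import pandas as pd

p = pd.Period("2018-03-11 13:03:12.050000")

assert p.minute == 3
# The component matches the minute of the period's start timestamp.
assert p.start_time.minute == 3
```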
DOC: update docstring of pandas.Series.add_prefix docstring | diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 23654613104ec..64787869ef46c 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -2963,15 +2963,56 @@ def _update_inplace(self, result, verify_is_copy=True):
def add_prefix(self, prefix):
"""
- Concatenate prefix string with panel items names.
+ Prefix labels with string `prefix`.
+
+ For Series, the row labels are prefixed.
+ For DataFrame, the column labels are prefixed.
Parameters
----------
- prefix : string
+ prefix : str
+ The string to add before each label.
Returns
-------
- with_prefix : type of caller
+ Series or DataFrame
+ New Series or DataFrame with updated labels.
+
+ See Also
+ --------
+ Series.add_suffix: Suffix labels with string `suffix`.
+
+ Examples
+ --------
+ >>> s = pd.Series([1, 2, 3, 4])
+ >>> s
+ 0 1
+ 1 2
+ 2 3
+ 3 4
+ dtype: int64
+
+ >>> s.add_prefix('item_')
+ item_0 1
+ item_1 2
+ item_2 3
+ item_3 4
+ dtype: int64
+
+ >>> df = pd.DataFrame({'A': [1, 2, 3, 4], 'B': [3, 4, 5, 6]})
+ >>> df
+ A B
+ 0 1 3
+ 1 2 4
+ 2 3 5
+ 3 4 6
+
+ >>> df.add_prefix('col_')
+ col_A col_B
+ 0 1 3
+ 1 2 4
+ 2 3 5
+ 3 4 6
"""
new_data = self._data.add_prefix(prefix)
return self._constructor(new_data).__finalize__(self)
| Checklist for the pandas documentation sprint (ignore this if you are doing
an unrelated PR):
- [x] PR title is "DOC: update the <your-function-or-method> docstring"
- [x] The validation script passes: `scripts/validate_docstrings.py <your-function-or-method>`
- [x] The PEP8 style check passes: `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] The html version looks good: `python doc/make.py --single <your-function-or-method>`
- [x] It has been proofread on language by another sprint participant
Please include the output of the validation script below between the "```" ticks:
```
################################################################################
##################### Docstring (pandas.Series.add_prefix) #####################
################################################################################
Concatenate prefix string with panel items names.
Parameters
----------
prefix : str
The string to add before each item name.
Returns
-------
Series
Original Series with updated item names.
See Also
--------
pandas.Series.add_suffix: Add a suffix string to panel items names.
Examples
--------
>>> s = pd.Series([1,2,3,4])
>>> s
0 1
1 2
2 3
3 4
dtype: int64
>>> s.add_prefix('item_')
item_0 1
item_1 2
item_2 3
item_3 4
dtype: int64
################################################################################
################################## Validation ##################################
################################################################################
Errors found:
No extended summary found
```
If the validation script still gives errors, but you think there is a good reason
to deviate in this case (and there are certainly such cases), please state this
explicitly. | https://api.github.com/repos/pandas-dev/pandas/pulls/20313 | 2018-03-12T20:17:40Z | 2018-03-13T17:50:47Z | 2018-03-13T17:50:47Z | 2018-03-13T17:51:06Z |
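Mirroring the `add_suffix` PR above, the `add_prefix` examples reduce to a short self-check; a sketch, assuming a standard pandas install:

```python
import pandas as pd

# Series: row labels are prefixed, and the string index still supports lookup.
s = pd.Series([1, 2, 3, 4])
prefixed = s.add_prefix("item_")
assert list(prefixed.index) == ["item_0", "item_1", "item_2", "item_3"]

# DataFrame: column labels are prefixed.
df = pd.DataFrame({"A": [1, 2], "B": [3, 4]})
assert list(df.add_prefix("col_").columns) == ["col_A", "col_B"]
```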
DOC : Update the pandas.Period.hour docstring | diff --git a/pandas/_libs/tslibs/period.pyx b/pandas/_libs/tslibs/period.pyx
index 7763fe7b9008d..d5241e5a706f5 100644
--- a/pandas/_libs/tslibs/period.pyx
+++ b/pandas/_libs/tslibs/period.pyx
@@ -1241,6 +1241,31 @@ cdef class _Period(object):
@property
def hour(self):
+ """
+ Get the hour of the day component of the Period.
+
+ Returns
+ -------
+ int
+ The hour as an integer, between 0 and 23.
+
+ See Also
+ --------
+ Period.second : Get the second component of the Period.
+ Period.minute : Get the minute component of the Period.
+
+ Examples
+ --------
+ >>> p = pd.Period("2018-03-11 13:03:12.050000")
+ >>> p.hour
+ 13
+
+ Period longer than a day
+
+ >>> p = pd.Period("2018-03-11", freq="M")
+ >>> p.hour
+ 0
+ """
base, mult = get_freq_code(self.freq)
return phour(self.ordinal, base)
| ```
# Period.hour
################################################################################
######################## Docstring (pandas.Period.hour) ########################
################################################################################
Get hours of a day that a Period falls on.
Returns
-------
int
See Also
--------
Period.minute : Get the minute of hour
Period.second : Get the second of hour
Examples
--------
>>> p = pd.Period("2018-03-11 13:03:12.050000")
>>> p.hour
13
################################################################################
################################## Validation ##################################
################################################################################
``` | https://api.github.com/repos/pandas-dev/pandas/pulls/20312 | 2018-03-12T19:30:49Z | 2018-03-12T21:04:41Z | 2018-03-12T21:04:41Z | 2018-03-13T02:19:24Z |
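The diff's second example (a period longer than a day reporting hour 0) is worth verifying alongside the basic case; a sketch, assuming a standard pandas install:

```python
import pandas as pd

p = pd.Period("2018-03-11 13:03:12.050000")
assert p.hour == 13

# For a period longer than a day, the hour component is 0.
m = pd.Period("2018-03-11", freq="M")
assert m.hour == 0
```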
COMPAT: Ascii example | diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index f3f5c80c66e2f..a57d48f1b9ce4 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -1534,10 +1534,10 @@ def is_categorical(self):
>>> idx.is_categorical()
False
- >>> s = pd.Series(["Peter", "Víctor", "Elisabeth", "Mar"])
+ >>> s = pd.Series(["Peter", "Victor", "Elisabeth", "Mar"])
>>> s
0 Peter
- 1 Víctor
+ 1 Victor
2 Elisabeth
3 Mar
dtype: object
| Py2 issues:
```
pytest -m single -r xX --junitxml=/tmp/single.xml --strict --only-slow --skip-network pandas
Traceback (most recent call last):
File "/home/travis/miniconda3/envs/pandas/lib/python2.7/site-packages/_pytest/config.py", line 366, in _importconftest
mod = conftestpath.pyimport()
File "/home/travis/miniconda3/envs/pandas/lib/python2.7/site-packages/py/_path/local.py", line 668, in pyimport
__import__(modname)
File "/home/travis/build/pandas-dev/pandas/pandas/__init__.py", line 45, in <module>
from pandas.core.api import *
File "/home/travis/build/pandas-dev/pandas/pandas/core/api.py", line 10, in <module>
from pandas.core.groupby import Grouper
File "/home/travis/build/pandas-dev/pandas/pandas/core/groupby.py", line 45, in <module>
from pandas.core.index import (Index, MultiIndex,
File "/home/travis/build/pandas-dev/pandas/pandas/core/index.py", line 2, in <module>
from pandas.core.indexes.api import *
File "/home/travis/build/pandas-dev/pandas/pandas/core/indexes/api.py", line 1, in <module>
from pandas.core.indexes.base import (Index,
File "/home/travis/build/pandas-dev/pandas/pandas/core/indexes/base.py", line 1537
SyntaxError: Non-ASCII character '\xc3' in file /home/travis/build/pandas-dev/pandas/pandas/core/indexes/base.py on line 1538, but no encoding declared; see http://python.org/dev/peps/pep-0263/ for details
ERROR: could not load /home/travis/build/pandas-dev/pandas/pandas/conftest.py
```
| https://api.github.com/repos/pandas-dev/pandas/pulls/20311 | 2018-03-12T18:50:09Z | 2018-03-12T19:06:32Z | 2018-03-12T19:06:32Z | 2018-03-12T19:06:35Z |
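The traceback above is the PEP 263 rule in action: under Python 2 a source file containing non-ASCII bytes must declare its encoding, which is why the PR swaps "Víctor" for ASCII "Victor" instead. A stdlib sketch showing how such a declaration would be detected (the file contents here are hypothetical):

```python
import io
import tokenize

# A source file whose first line carries a PEP 263 coding declaration,
# followed by a UTF-8 encoded non-ASCII string literal.
src = b"# -*- coding: utf-8 -*-\nname = 'V\xc3\xadctor'\n"

encoding, _ = tokenize.detect_encoding(io.BytesIO(src).readline)
assert encoding == "utf-8"
```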
DOC: update the pandas.DataFrame.pct_change docstring | diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index d5bdeb7fe1a4d..b1326b84cefcd 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -7515,29 +7515,118 @@ def _check_percentile(self, q):
return q
_shared_docs['pct_change'] = """
- Percent change over given number of periods.
+ Percentage change between the current and a prior element.
+
+ Computes the percentage change from the immediately previous row by
+ default. This is useful in comparing the percentage of change in a time
+ series of elements.
Parameters
----------
periods : int, default 1
- Periods to shift for forming percent change
+ Periods to shift for forming percent change.
fill_method : str, default 'pad'
- How to handle NAs before computing percent changes
+ How to handle NAs before computing percent changes.
limit : int, default None
- The number of consecutive NAs to fill before stopping
+ The number of consecutive NAs to fill before stopping.
freq : DateOffset, timedelta, or offset alias string, optional
- Increment to use from time series API (e.g. 'M' or BDay())
+ Increment to use from time series API (e.g. 'M' or BDay()).
+ **kwargs
+ Additional keyword arguments are passed into
+ `DataFrame.shift` or `Series.shift`.
Returns
-------
- chg : %(klass)s
+ chg : Series or DataFrame
+ The same type as the calling object.
- Notes
- -----
+ See Also
+ --------
+ Series.diff : Compute the difference of two elements in a Series.
+ DataFrame.diff : Compute the difference of two elements in a DataFrame.
+ Series.shift : Shift the index by some number of periods.
+ DataFrame.shift : Shift the index by some number of periods.
+
+ Examples
+ --------
+ **Series**
+
+ >>> s = pd.Series([90, 91, 85])
+ >>> s
+ 0 90
+ 1 91
+ 2 85
+ dtype: int64
+
+ >>> s.pct_change()
+ 0 NaN
+ 1 0.011111
+ 2 -0.065934
+ dtype: float64
+
+ >>> s.pct_change(periods=2)
+ 0 NaN
+ 1 NaN
+ 2 -0.055556
+ dtype: float64
- By default, the percentage change is calculated along the stat
- axis: 0, or ``Index``, for ``DataFrame`` and 1, or ``minor`` for
- ``Panel``. You can change this with the ``axis`` keyword argument.
+ See the percentage change in a Series where filling NAs with last
+ valid observation forward to next valid.
+
+ >>> s = pd.Series([90, 91, None, 85])
+ >>> s
+ 0 90.0
+ 1 91.0
+ 2 NaN
+ 3 85.0
+ dtype: float64
+
+ >>> s.pct_change(fill_method='ffill')
+ 0 NaN
+ 1 0.011111
+ 2 0.000000
+ 3 -0.065934
+ dtype: float64
+
+ **DataFrame**
+
+ Percentage change in French franc, Deutsche Mark, and Italian lira from
+ 1980-01-01 to 1980-03-01.
+
+ >>> df = pd.DataFrame({
+ ... 'FR': [4.0405, 4.0963, 4.3149],
+ ... 'GR': [1.7246, 1.7482, 1.8519],
+ ... 'IT': [804.74, 810.01, 860.13]},
+ ... index=['1980-01-01', '1980-02-01', '1980-03-01'])
+ >>> df
+ FR GR IT
+ 1980-01-01 4.0405 1.7246 804.74
+ 1980-02-01 4.0963 1.7482 810.01
+ 1980-03-01 4.3149 1.8519 860.13
+
+ >>> df.pct_change()
+ FR GR IT
+ 1980-01-01 NaN NaN NaN
+ 1980-02-01 0.013810 0.013684 0.006549
+ 1980-03-01 0.053365 0.059318 0.061876
+
+ Percentage of change in GOOG and APPL stock volume. Shows computing
+ the percentage change between columns.
+
+ >>> df = pd.DataFrame({
+ ... '2016': [1769950, 30586265],
+ ... '2015': [1500923, 40912316],
+ ... '2014': [1371819, 41403351]},
+ ... index=['GOOG', 'APPL'])
+ >>> df
+ 2016 2015 2014
+ GOOG 1769950 1500923 1371819
+ APPL 30586265 40912316 41403351
+
+ >>> df.pct_change(axis='columns')
+ 2016 2015 2014
+ GOOG NaN -0.151997 -0.086016
+ APPL NaN 0.337604 0.012002
"""
@Appender(_shared_docs['pct_change'] % _shared_doc_kwargs)
| Checklist for the pandas documentation sprint (ignore this if you are doing
an unrelated PR):
- [x] PR title is "DOC: update the <your-function-or-method> docstring"
- [x] The validation script passes: `scripts/validate_docstrings.py <your-function-or-method>`
- [x] The PEP8 style check passes: `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] The html version looks good: `python doc/make.py --single <your-function-or-method>`
- [ ] It has been proofread on language by another sprint participant
Please include the output of the validation script below between the "```" ticks:
```
################################################################################
################### Docstring (pandas.DataFrame.pct_change) ###################
################################################################################
Percentage change between the current and previous element.
This is useful in comparing the percentage of change in a time series
of elements.
Parameters
----------
periods : int, default 1
Periods to shift for forming percent change.
fill_method : str, default 'pad'
How to handle NAs before computing percent changes.
limit : int, default None
The number of consecutive NAs to fill before stopping.
freq : DateOffset, timedelta, or offset alias string, optional
Increment to use from time series API (e.g. 'M' or BDay()).
**kwargs : mapping, optional
Additional keyword arguments are passed into
`DataFrame.shift` or `Series.shift`.
Returns
-------
chg : Series or DataFrame
The same type as the calling object.
Examples
--------
See the percentage change in a Series.
>>> s = pd.Series([90, 91, 85])
>>> s
0 90
1 91
2 85
dtype: int64
>>> s.pct_change()
0 NaN
1 0.011111
2 -0.065934
dtype: float64
See the percentage change in a Series where filling NAs with last
valid observation forward to next valid.
>>> s = pd.Series([90, 91, None, 85])
>>> s
0 90.0
1 91.0
2 NaN
3 85.0
dtype: float64
>>> s.pct_change(fill_method='ffill')
0 NaN
1 0.011111
2 0.000000
3 -0.065934
dtype: float64
Percentage change in French franc, Deutsche Mark, and Italian lira from
1 January 1980 to 1 March 1980.
>>> df = pd.DataFrame({
... 'FR': [4.0405, 4.0963, 4.3149],
... 'GR': [1.7246, 1.7482, 1.8519],
... 'IT': [804.74, 810.01, 860.13]},
... index=['1980-01-01', '1980-02-01', '1980-03-01'])
>>> df
FR GR IT
1980-01-01 4.0405 1.7246 804.74
1980-02-01 4.0963 1.7482 810.01
1980-03-01 4.3149 1.8519 860.13
>>> df.pct_change()
FR GR IT
1980-01-01 NaN NaN NaN
1980-02-01 0.013810 0.013684 0.006549
1980-03-01 0.053365 0.059318 0.061876
Percentage of change in GOOG and APPL stock volume.
>>> df = pd.DataFrame({
... '2016': [1769950, 30586265],
... '2015': [1500923, 40912316],
... '2014': [1371819, 41403351]},
... index=['GOOG', 'APPL'])
>>> df
2016 2015 2014
GOOG 1769950 1500923 1371819
APPL 30586265 40912316 41403351
>>> df.pct_change(axis='columns')
2016 2015 2014
GOOG NaN -0.151997 -0.086016
APPL NaN 0.337604 0.012002
See Also
--------
DataFrame.diff : see the difference of two columns
Series.diff : see the difference of two columns
################################################################################
################################## Validation ##################################
################################################################################
Errors found:
Errors in parameters section
Parameters {'kwargs'} not documented
Unknown parameters {'**kwargs'}
``` | https://api.github.com/repos/pandas-dev/pandas/pulls/20310 | 2018-03-12T18:04:05Z | 2018-03-12T20:19:14Z | 2018-03-12T20:19:14Z | 2018-03-12T20:53:13Z |
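The `pct_change` numbers shown above are equivalent to dividing by the shifted values, which gives a handy way to sanity-check the docstring's output (a sketch, assuming a standard pandas install):

```python
import pandas as pd

s = pd.Series([90, 91, 85])

# pct_change is element-wise s / s.shift(periods) - 1.
manual = s / s.shift(1) - 1
assert manual.equals(s.pct_change())

# Column-wise change, as in the stock-volume example.
df = pd.DataFrame({"2016": [1769950], "2015": [1500923]}, index=["GOOG"])
chg = df.pct_change(axis="columns")
assert round(chg.loc["GOOG", "2015"], 6) == -0.151997
```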
DOC: update the pandas.Series.str.split docstring | diff --git a/pandas/core/strings.py b/pandas/core/strings.py
index 11081535cf63f..cb55108e9d05a 100644
--- a/pandas/core/strings.py
+++ b/pandas/core/strings.py
@@ -1116,9 +1116,8 @@ def str_split(arr, pat=None, n=None):
Returns
-------
- split : Series/Index or DataFrame/MultiIndex of objects
- Type matches caller unless ``expand=True`` (return type is DataFrame or
- MultiIndex)
+ Series, Index, DataFrame or MultiIndex
+ Type matches caller unless ``expand=True`` (see Notes).
Notes
-----
@@ -1129,6 +1128,16 @@ def str_split(arr, pat=None, n=None):
- If for a certain row the number of found splits < `n`,
append `None` for padding up to `n` if ``expand=True``
+ If using ``expand=True``, Series and Index callers return DataFrame and
+ MultiIndex objects, respectively.
+
+ See Also
+ --------
+ str.split : Standard library version of this method.
+ Series.str.get_dummies : Split each string into dummy variables.
+ Series.str.partition : Split string on a separator, returning
+ the before, separator, and after components.
+
Examples
--------
>>> s = pd.Series(["this is good text", "but this is even better"])
@@ -1145,8 +1154,10 @@ def str_split(arr, pat=None, n=None):
1 [but this is even better]
dtype: object
- When using ``expand=True``, the split elements will
- expand out into separate columns.
+ When using ``expand=True``, the split elements will expand out into
+ separate columns.
+
+ For Series objects, the return type is DataFrame.
>>> s.str.split(expand=True)
0 1 2 3 4
@@ -1157,6 +1168,13 @@ def str_split(arr, pat=None, n=None):
0 this good text
1 but this even better
+ For Index objects, the return type is MultiIndex.
+
+ >>> i = pd.Index(["ba 100 001", "ba 101 002", "ba 102 003"])
+ >>> i.str.split(expand=True)
+ MultiIndex(levels=[['ba'], ['100', '101', '102'], ['001', '002', '003']],
+ labels=[[0, 0, 0], [0, 1, 2], [0, 1, 2]])
+
Parameter `n` can be used to limit the number of splits in the output.
>>> s.str.split("is", n=1)
| Following up from discussion on #20282
- [x] PR title is "DOC: update the pandas.Series.str.split docstring"
- [x] The validation script passes: `scripts/validate_docstrings.py pandas.Series.str.split docstring`
- [x] The PEP8 style check passes: `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] The html version looks good: `python doc/make.py --single pandas.Series.str.split docstring`
```
################################################################################
##################### Docstring (pandas.Series.str.split) #####################
################################################################################
Split strings around given separator/delimiter.
Split each string in the caller's values by given
pattern, propagating NaN values. Equivalent to :meth:`str.split`.
Parameters
----------
pat : str, optional
String or regular expression to split on.
If not specified, split on whitespace.
n : int, default -1 (all)
Limit number of splits in output.
``None``, 0 and -1 will be interpreted as return all splits.
expand : bool, default False
Expand the splitted strings into separate columns.
* If ``True``, return DataFrame/MultiIndex expanding dimensionality.
* If ``False``, return Series/Index, containing lists of strings.
Returns
-------
split : Series/Index or DataFrame/MultiIndex of objects
Type matches caller unless ``expand=True`` (return type is DataFrame or
MultiIndex)
Notes
-----
The handling of the `n` keyword depends on the number of found splits:
- If found splits > `n`, make first `n` splits only
- If found splits <= `n`, make all splits
- If for a certain row the number of found splits < `n`,
append `None` for padding up to `n` if ``expand=True``
Examples
--------
>>> s = pd.Series(["this is good text", "but this is even better"])
By default, split will return an object of the same size
having lists containing the split elements
>>> s.str.split()
0 [this, is, good, text]
1 [but, this, is, even, better]
dtype: object
>>> s.str.split("random")
0 [this is good text]
1 [but this is even better]
dtype: object
When using ``expand=True``, the split elements will expand out into
separate columns.
For Series object, output return type is DataFrame.
>>> s.str.split(expand=True)
0 1 2 3 4
0 this is good text None
1 but this is even better
>>> s.str.split(" is ", expand=True)
0 1
0 this good text
1 but this even better
For Index object, output return type is MultiIndex.
>>> i = pd.Index(["ba 100 001", "ba 101 002", "ba 102 003"])
>>> i.str.split(expand=True)
MultiIndex(levels=[['ba'], ['100', '101', '102'], ['001', '002', '003']],
labels=[[0, 0, 0], [0, 1, 2], [0, 1, 2]])
Parameter `n` can be used to limit the number of splits in the output.
>>> s.str.split("is", n=1)
0 [th, is good text]
1 [but th, is even better]
dtype: object
>>> s.str.split("is", n=1, expand=True)
0 1
0 th is good text
1 but th is even better
If NaN is present, it is propagated throughout the columns
during the split.
>>> s = pd.Series(["this is good text", "but this is even better", np.nan])
>>> s.str.split(n=3, expand=True)
0 1 2 3
0 this is good text
1 but this is even better
2 NaN NaN NaN NaN
################################################################################
################################## Validation ##################################
################################################################################
Errors found:
See Also section not found
```
@jorisvandenbossche @WillAyd | https://api.github.com/repos/pandas-dev/pandas/pulls/20307 | 2018-03-12T16:34:14Z | 2018-03-13T11:58:03Z | 2018-03-13T11:58:03Z | 2018-03-13T11:58:03Z |
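The Notes the PR adds about the `n` and `expand` keywords can be checked directly: shorter rows are padded with missing values up to the widest split. A sketch, assuming a standard pandas install:

```python
import pandas as pd

s = pd.Series(["this is good text", "but this is even better"])

# Default split on whitespace returns lists.
out = s.str.split()
assert out[0] == ["this", "is", "good", "text"]

# expand=True pads shorter rows with missing values up to the widest row.
wide = s.str.split(expand=True)
assert wide.shape == (2, 5)
assert pd.isna(wide.iloc[0, 4])
```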
DOC: update the Index.sort_values docstring | diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index e82b641db98fd..6233fea2c52d0 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -2311,7 +2311,46 @@ def asof_locs(self, where, mask):
def sort_values(self, return_indexer=False, ascending=True):
"""
- Return sorted copy of Index
+ Return a sorted copy of the index.
+
+ Return a sorted copy of the index, and optionally return the indices
+ that sorted the index itself.
+
+ Parameters
+ ----------
+ return_indexer : bool, default False
+ Should the indices that would sort the index be returned.
+ ascending : bool, default True
+ Should the index values be sorted in an ascending order.
+
+ Returns
+ -------
+ sorted_index : pandas.Index
+ Sorted copy of the index.
+ indexer : numpy.ndarray, optional
+ The indices that the index itself was sorted by.
+
+ See Also
+ --------
+ pandas.Series.sort_values : Sort values of a Series.
+ pandas.DataFrame.sort_values : Sort values in a DataFrame.
+
+ Examples
+ --------
+ >>> idx = pd.Index([10, 100, 1, 1000])
+ >>> idx
+ Int64Index([10, 100, 1, 1000], dtype='int64')
+
+ Sort values in ascending order (default behavior).
+
+ >>> idx.sort_values()
+ Int64Index([1, 10, 100, 1000], dtype='int64')
+
+ Sort values in descending order, and also get the indices `idx` was
+ sorted by.
+
+ >>> idx.sort_values(ascending=False, return_indexer=True)
+ (Int64Index([1000, 100, 10, 1], dtype='int64'), array([3, 1, 0, 2]))
"""
_as = self.argsort()
if not ascending:
| Checklist for the pandas documentation sprint (ignore this if you are doing
an unrelated PR):
- [x] PR title is "DOC: update the <your-function-or-method> docstring"
- [x] The validation script passes: `scripts/validate_docstrings.py <your-function-or-method>`
- [x] The PEP8 style check passes: `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] The html version looks good: `python doc/make.py --single <your-function-or-method>`
- [x] It has been proofread on language by another sprint participant
Please include the output of the validation script below between the "```" ticks:
```
################################################################################
##################### Docstring (pandas.Index.sort_values) #####################
################################################################################
Return a sorted copy of the index.
Return a sorted copy of the index, and optionally return the indices
that sorted the index itself.
Parameters
----------
return_indexer : bool, default False
Should the indices that would sort the index be returned.
ascending : bool, default True
Should the index values be sorted in an ascending order.
Returns
-------
sorted_index : pandas.Index
Sorted copy of the index.
_as : numpy.ndarray, optional
The indices that the index itself was sorted by.
See Also
--------
pandas.Series.sort_values : Sort values of a Series.
pandas.DataFrame.sort_values : Sort values in a DataFrame.
Examples
--------
>>> idx = pd.Index([10, 100, 1, 1000])
>>> idx
Int64Index([10, 100, 1, 1000], dtype='int64')
Sort values in ascending order (default behavior).
>>> idx.sort_values()
Int64Index([1, 10, 100, 1000], dtype='int64')
Sort values in descending order, and also get the indices `idx` was
sorted by.
>>> idx.sort_values(ascending=False, return_indexer=True)
(Int64Index([1000, 100, 10, 1], dtype='int64'), array([3, 1, 0, 2]))
################################################################################
################################## Validation ##################################
################################################################################
Docstring for "pandas.Index.sort_values" correct. :)
```
If the validation script still gives errors, but you think there is a good reason
to deviate in this case (and there are certainly such cases), please state this
explicitly.
| https://api.github.com/repos/pandas-dev/pandas/pulls/20299 | 2018-03-12T12:21:22Z | 2018-03-12T20:21:16Z | 2018-03-12T20:21:16Z | 2018-03-12T20:29:39Z |
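The `return_indexer=True` example in this PR can be verified end to end: the returned indexer reproduces the sorted index when used to take from the original. A sketch, assuming a standard pandas install:

```python
import pandas as pd

idx = pd.Index([10, 100, 1, 1000])

sorted_idx, indexer = idx.sort_values(ascending=False, return_indexer=True)
assert list(sorted_idx) == [1000, 100, 10, 1]
assert list(indexer) == [3, 1, 0, 2]

# The indexer positions, applied to the original index, give the sorted order.
assert list(idx[indexer]) == [1000, 100, 10, 1]
```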
DOC: update the Period.days_in_month docstring | diff --git a/pandas/_libs/tslibs/period.pyx b/pandas/_libs/tslibs/period.pyx
index 89f38724cde1a..adf4f0569965f 100644
--- a/pandas/_libs/tslibs/period.pyx
+++ b/pandas/_libs/tslibs/period.pyx
@@ -1270,6 +1270,35 @@ cdef class _Period(object):
@property
def days_in_month(self):
+ """
+ Get the total number of days in the month that this period falls on.
+
+ Returns
+ -------
+ int
+
+ See Also
+ --------
+ Period.daysinmonth : Gets the number of days in the month.
+ DatetimeIndex.daysinmonth : Gets the number of days in the month.
+ calendar.monthrange : Returns a tuple containing weekday
+ (0-6 ~ Mon-Sun) and number of days (28-31).
+
+ Examples
+ --------
+ >>> p = pd.Period('2018-2-17')
+ >>> p.days_in_month
+ 28
+
+ >>> pd.Period('2018-03-01').days_in_month
+ 31
+
+ Handles the leap year case as well:
+
+ >>> p = pd.Period('2016-2-17')
+ >>> p.days_in_month
+ 29
+ """
base, mult = get_freq_code(self.freq)
return pdays_in_month(self.ordinal, base)
| - [x] PR title is "DOC: update the <your-function-or-method> docstring"
- [x] The validation script passes: `scripts/validate_docstrings.py <your-function-or-method>`
- [x] The PEP8 style check passes: `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] The html version looks good: `python doc/make.py --single <your-function-or-method>`
- [ ] It has been proofread on language by another sprint participant
```
################################################################################
################### Docstring (pandas.Period.days_in_month) ###################
################################################################################
Get the total number of days in the month that this period falls on.

Returns
-------
int

See Also
--------
Period.daysinmonth : Returns the number of days in the month.
DatetimeIndex.daysinmonth : Return the number of days in the month.
calendar.monthrange : Return a tuple containing weekday
    (0-6 ~ Mon-Sun) and number of days (28-31).

Examples
--------
>>> p= pd.Period('2018-2-17')
>>> p.days_in_month
28

>>> import datetime
>>> pd.Period(datetime.datetime.today(), freq= 'B').days_in_month
31

Handles the leap year case as well:

>>> p= pd.Period('2016-2-17')
>>> p.days_in_month
29
################################################################################
################################## Validation ##################################
################################################################################
Errors found:
No extended summary found
```
* Short summary summarises pretty much all the characteristics of this attribute.
| https://api.github.com/repos/pandas-dev/pandas/pulls/20297 | 2018-03-12T11:38:30Z | 2018-03-12T16:37:52Z | 2018-03-12T16:37:52Z | 2018-03-12T21:18:06Z |
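The examples in the docstring above can be cross-checked against the standard library's `calendar.monthrange`, which the See Also section points to. A minimal sketch (not the pandas code path, which works from period ordinals):

```python
import calendar

def days_in_month(year, month):
    # monthrange returns (weekday_of_first_day, number_of_days).
    return calendar.monthrange(year, month)[1]

print(days_in_month(2018, 2))  # 28
print(days_in_month(2016, 2))  # 29 -- leap year
print(days_in_month(2018, 3))  # 31
```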
DOC: update the Period.daysinmonth docstring | diff --git a/pandas/_libs/tslibs/period.pyx b/pandas/_libs/tslibs/period.pyx
index 89f38724cde1a..c5829d1846f0a 100644
--- a/pandas/_libs/tslibs/period.pyx
+++ b/pandas/_libs/tslibs/period.pyx
@@ -1275,6 +1275,24 @@ cdef class _Period(object):
@property
def daysinmonth(self):
+ """
+ Get the total number of days of the month that the Period falls in.
+
+ Returns
+ -------
+ int
+
+ See Also
+ --------
+ Period.days_in_month : Return the days of the month
+ Period.dayofyear : Return the day of the year
+
+ Examples
+ --------
+ >>> p = pd.Period("2018-03-11", freq='H')
+ >>> p.daysinmonth
+ 31
+ """
return self.days_in_month
@property
| # Period.daysinmonth
################################################################################
#################### Docstring (pandas.Period.daysinmonth) ####################
################################################################################
Return total number of days of the month in given Period.

This attribute returns the total number of days of given month

Returns
-------
Int
    Number of days with in month

See also
--------
Period.dayofweek
    Return the day of the week
Period.dayofyear
    Return the day of the year

Examples
--------
>>> import pandas as pd
>>> p = pd.Period("2018-03-11", freq='H')
>>> p.daysinmonth
31
################################################################################
################################## Validation ##################################
################################################################################
Docstring for "pandas.Period.daysinmonth" correct. :)
| https://api.github.com/repos/pandas-dev/pandas/pulls/20295 | 2018-03-12T07:10:55Z | 2018-03-12T15:02:07Z | 2018-03-12T15:02:07Z | 2018-03-12T15:13:08Z |
DOC: Update the Period.day docstring | diff --git a/pandas/_libs/tslibs/period.pyx b/pandas/_libs/tslibs/period.pyx
index 89f38724cde1a..af4ec4e8a9a66 100644
--- a/pandas/_libs/tslibs/period.pyx
+++ b/pandas/_libs/tslibs/period.pyx
@@ -1217,6 +1217,25 @@ cdef class _Period(object):
@property
def day(self):
+ """
+ Get day of the month that a Period falls on.
+
+ Returns
+ -------
+ int
+
+ See Also
+ --------
+ Period.dayofweek : Get the day of the week
+
+ Period.dayofyear : Get the day of the year
+
+ Examples
+ --------
+ >>> p = pd.Period("2018-03-11", freq='H')
+ >>> p.day
+ 11
+ """
base, mult = get_freq_code(self.freq)
return pday(self.ordinal, base)
| # Period.day Docstring
################################################################################
######################## Docstring (pandas.Period.day) ########################
################################################################################
Return the number of days of particular month of a given Period.

This attribute returns the total number of days of given month on which
the particular date occurs.

Returns
-------
Int
    Number of days till particular date

See also
--------
Period.dayofweek
    Return the day of the week
Period.dayofyear
    Return the day of the year

Examples
--------
>>> import pandas as pd
>>> p = pd.Period("2018-03-11", freq='H')
>>> p.day
11
################################################################################
################################## Validation ##################################
################################################################################
Docstring for "pandas.Period.day" correct. :)
| https://api.github.com/repos/pandas-dev/pandas/pulls/20294 | 2018-03-12T05:26:55Z | 2018-03-12T10:26:24Z | 2018-03-12T10:26:24Z | 2018-03-12T10:32:37Z |
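The three accessors named in this docstring's See Also section (`day`, `dayofweek`, `dayofyear`) have direct standard-library equivalents, shown below for the same date used in the example. This is a sketch for comparison, not how pandas computes them internally:

```python
from datetime import datetime

ts = datetime.strptime("2018-03-11", "%Y-%m-%d")
print(ts.day)                  # 11 -- day of the month
print(ts.weekday())            # 6  -- day of the week (Mon=0, so Sunday)
print(ts.timetuple().tm_yday)  # 70 -- day of the year
```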
Clean / Consolidate pandas/tests/io/test_html.py | diff --git a/ci/requirements-2.7_COMPAT.pip b/ci/requirements-2.7_COMPAT.pip
index 13cd35a923124..0e154dbc07525 100644
--- a/ci/requirements-2.7_COMPAT.pip
+++ b/ci/requirements-2.7_COMPAT.pip
@@ -1,4 +1,4 @@
html5lib==1.0b2
-beautifulsoup4==4.2.0
+beautifulsoup4==4.2.1
openpyxl
argparse
diff --git a/ci/requirements-optional-conda.txt b/ci/requirements-optional-conda.txt
index 6edb8d17337e4..65357ce2018d2 100644
--- a/ci/requirements-optional-conda.txt
+++ b/ci/requirements-optional-conda.txt
@@ -1,4 +1,4 @@
-beautifulsoup4
+beautifulsoup4>=4.2.1
blosc
bottleneck
fastparquet
diff --git a/ci/requirements-optional-pip.txt b/ci/requirements-optional-pip.txt
index 8d4421ba2b681..43c7d47892095 100644
--- a/ci/requirements-optional-pip.txt
+++ b/ci/requirements-optional-pip.txt
@@ -1,6 +1,6 @@
# This file was autogenerated by scripts/convert_deps.py
# Do not modify directly
-beautifulsoup4
+beautifulsoup4>=4.2.1
blosc
bottleneck
fastparquet
diff --git a/doc/source/install.rst b/doc/source/install.rst
index 07f57dbd65709..7d741c6c2c75a 100644
--- a/doc/source/install.rst
+++ b/doc/source/install.rst
@@ -266,6 +266,12 @@ Optional Dependencies
* One of the following combinations of libraries is needed to use the
top-level :func:`~pandas.read_html` function:
+ .. versionchanged:: 0.23.0
+
+ .. note::
+
+ If using BeautifulSoup4 a minimum version of 4.2.1 is required
+
* `BeautifulSoup4`_ and `html5lib`_ (Any recent version of `html5lib`_ is
okay.)
* `BeautifulSoup4`_ and `lxml`_
@@ -282,9 +288,6 @@ Optional Dependencies
* You are highly encouraged to read :ref:`HTML Table Parsing gotchas <io.html.gotchas>`.
It explains issues surrounding the installation and
usage of the above three libraries.
- * You may need to install an older version of `BeautifulSoup4`_:
- Versions 4.2.1, 4.1.3 and 4.0.2 have been confirmed for 64 and 32-bit
- Ubuntu/Debian
.. note::
diff --git a/doc/source/whatsnew/v0.23.0.txt b/doc/source/whatsnew/v0.23.0.txt
index 791365295c268..c08e22af295f4 100644
--- a/doc/source/whatsnew/v0.23.0.txt
+++ b/doc/source/whatsnew/v0.23.0.txt
@@ -358,13 +358,15 @@ Dependencies have increased minimum versions
We have updated our minimum supported versions of dependencies (:issue:`15184`).
If installed, we now require:
-+-----------------+-----------------+----------+
-| Package | Minimum Version | Required |
-+=================+=================+==========+
-| python-dateutil | 2.5.0 | X |
-+-----------------+-----------------+----------+
-| openpyxl | 2.4.0 | |
-+-----------------+-----------------+----------+
++-----------------+-----------------+----------+---------------+
+| Package | Minimum Version | Required | Issue |
++=================+=================+==========+===============+
+| python-dateutil | 2.5.0 | X | :issue:`15184`|
++-----------------+-----------------+----------+---------------+
+| openpyxl | 2.4.0 | | :issue:`15184`|
++-----------------+-----------------+----------+---------------+
+| beautifulsoup4 | 4.2.1 | | :issue:`20082`|
++-----------------+-----------------+----------+---------------+
.. _whatsnew_0230.api_breaking.dict_insertion_order:
diff --git a/pandas/compat/__init__.py b/pandas/compat/__init__.py
index 78aaf4596c8b7..aefa1ddd6cf0b 100644
--- a/pandas/compat/__init__.py
+++ b/pandas/compat/__init__.py
@@ -131,6 +131,9 @@ def lmap(*args, **kwargs):
def lfilter(*args, **kwargs):
return list(filter(*args, **kwargs))
+ from importlib import reload
+ reload = reload
+
else:
# Python 2
import re
@@ -184,6 +187,7 @@ def get_range_parameters(data):
lmap = builtins.map
lfilter = builtins.filter
+ reload = builtins.reload
if PY2:
def iteritems(obj, **kw):
diff --git a/pandas/io/html.py b/pandas/io/html.py
index 300a5a151f5d2..ba5da1b4e3a76 100644
--- a/pandas/io/html.py
+++ b/pandas/io/html.py
@@ -14,8 +14,7 @@
from pandas.core.dtypes.common import is_list_like
from pandas.errors import EmptyDataError
-from pandas.io.common import (_is_url, urlopen,
- parse_url, _validate_header_arg)
+from pandas.io.common import _is_url, urlopen, _validate_header_arg
from pandas.io.parsers import TextParser
from pandas.compat import (lrange, lmap, u, string_types, iteritems,
raise_with_traceback, binary_type)
@@ -554,8 +553,7 @@ def _parse_td(self, row):
return row.xpath('.//td|.//th')
def _parse_tr(self, table):
- expr = './/tr[normalize-space()]'
- return table.xpath(expr)
+ return table.xpath('.//tr')
def _parse_tables(self, doc, match, kwargs):
pattern = match.pattern
@@ -606,18 +604,20 @@ def _build_doc(self):
"""
from lxml.html import parse, fromstring, HTMLParser
from lxml.etree import XMLSyntaxError
-
- parser = HTMLParser(recover=False, encoding=self.encoding)
+ parser = HTMLParser(recover=True, encoding=self.encoding)
try:
- # try to parse the input in the simplest way
- r = parse(self.io, parser=parser)
-
+ if _is_url(self.io):
+ with urlopen(self.io) as f:
+ r = parse(f, parser=parser)
+ else:
+ # try to parse the input in the simplest way
+ r = parse(self.io, parser=parser)
try:
r = r.getroot()
except AttributeError:
pass
- except (UnicodeDecodeError, IOError):
+ except (UnicodeDecodeError, IOError) as e:
# if the input is a blob of html goop
if not _is_url(self.io):
r = fromstring(self.io, parser=parser)
@@ -627,17 +627,7 @@ def _build_doc(self):
except AttributeError:
pass
else:
- # not a url
- scheme = parse_url(self.io).scheme
- if scheme not in _valid_schemes:
- # lxml can't parse it
- msg = (('{invalid!r} is not a valid url scheme, valid '
- 'schemes are {valid}')
- .format(invalid=scheme, valid=_valid_schemes))
- raise ValueError(msg)
- else:
- # something else happened: maybe a faulty connection
- raise
+ raise e
else:
if not hasattr(r, 'text_content'):
raise XMLSyntaxError("no text parsed from document", 0, 0, 0)
@@ -657,12 +647,21 @@ def _parse_raw_thead(self, table):
thead = table.xpath(expr)
res = []
if thead:
- trs = self._parse_tr(thead[0])
- for tr in trs:
- cols = [_remove_whitespace(x.text_content()) for x in
- self._parse_td(tr)]
+ # Grab any directly descending table headers first
+ ths = thead[0].xpath('./th')
+ if ths:
+ cols = [_remove_whitespace(x.text_content()) for x in ths]
if any(col != '' for col in cols):
res.append(cols)
+ else:
+ trs = self._parse_tr(thead[0])
+
+ for tr in trs:
+ cols = [_remove_whitespace(x.text_content()) for x in
+ self._parse_td(tr)]
+
+ if any(col != '' for col in cols):
+ res.append(cols)
return res
def _parse_raw_tfoot(self, table):
@@ -739,14 +738,10 @@ def _parser_dispatch(flavor):
raise ImportError(
"BeautifulSoup4 (bs4) not found, please install it")
import bs4
- if LooseVersion(bs4.__version__) == LooseVersion('4.2.0'):
- raise ValueError("You're using a version"
- " of BeautifulSoup4 (4.2.0) that has been"
- " known to cause problems on certain"
- " operating systems such as Debian. "
- "Please install a version of"
- " BeautifulSoup4 != 4.2.0, both earlier"
- " and later releases will work.")
+ if LooseVersion(bs4.__version__) <= LooseVersion('4.2.0'):
+ raise ValueError("A minimum version of BeautifulSoup 4.2.1 "
+ "is required")
+
else:
if not _HAS_LXML:
raise ImportError("lxml not found, please install it")
diff --git a/pandas/tests/io/data/banklist.html b/pandas/tests/io/data/banklist.html
index cbcce5a2d49ff..c6f0e47c2a3ef 100644
--- a/pandas/tests/io/data/banklist.html
+++ b/pandas/tests/io/data/banklist.html
@@ -340,6 +340,7 @@ <h1 class="page_title">Failed Bank List</h1>
<td class="closing">April 19, 2013</td>
<td class="updated">April 23, 2013</td>
</tr>
+ <tr>
<td class="institution"><a href="goldcanyon.html">Gold Canyon Bank</a></td>
<td class="city">Gold Canyon</td>
<td class="state">AZ</td>
diff --git a/pandas/tests/io/test_html.py b/pandas/tests/io/test_html.py
index b18104e951504..79b9a3715efd2 100644
--- a/pandas/tests/io/test_html.py
+++ b/pandas/tests/io/test_html.py
@@ -4,17 +4,8 @@
import os
import re
import threading
-import warnings
-
-# imports needed for Python 3.x but will fail under Python 2.x
-try:
- from importlib import import_module, reload
-except ImportError:
- import_module = __import__
-
-
-from distutils.version import LooseVersion
+from functools import partial
import pytest
@@ -23,48 +14,18 @@
from pandas import (DataFrame, MultiIndex, read_csv, Timestamp, Index,
date_range, Series)
-from pandas.compat import (map, zip, StringIO, string_types, BytesIO,
- is_platform_windows, PY3)
-from pandas.io.common import URLError, urlopen, file_path_to_url
+from pandas.compat import (map, zip, StringIO, BytesIO,
+ is_platform_windows, PY3, reload)
+from pandas.io.common import URLError, file_path_to_url
import pandas.io.html
from pandas.io.html import read_html
from pandas._libs.parsers import ParserError
import pandas.util.testing as tm
+import pandas.util._test_decorators as td
from pandas.util.testing import makeCustomDataframe as mkdf, network
-def _have_module(module_name):
- try:
- import_module(module_name)
- return True
- except ImportError:
- return False
-
-
-def _skip_if_no(module_name):
- if not _have_module(module_name):
- pytest.skip("{0!r} not found".format(module_name))
-
-
-def _skip_if_none_of(module_names):
- if isinstance(module_names, string_types):
- _skip_if_no(module_names)
- if module_names == 'bs4':
- import bs4
- if LooseVersion(bs4.__version__) == LooseVersion('4.2.0'):
- pytest.skip("Bad version of bs4: 4.2.0")
- else:
- not_found = [module_name for module_name in module_names if not
- _have_module(module_name)]
- if set(not_found) & set(module_names):
- pytest.skip("{0!r} not found".format(not_found))
- if 'bs4' in module_names:
- import bs4
- if LooseVersion(bs4.__version__) == LooseVersion('4.2.0'):
- pytest.skip("Bad version of bs4: 4.2.0")
-
-
DATA_PATH = tm.get_data_path()
@@ -82,33 +43,45 @@ def assert_framelist_equal(list1, list2, *args, **kwargs):
assert not frame_i.empty, 'frames are both empty'
-def test_bs4_version_fails():
- _skip_if_none_of(('bs4', 'html5lib'))
+@td.skip_if_no('bs4')
+def test_bs4_version_fails(monkeypatch):
import bs4
- if LooseVersion(bs4.__version__) == LooseVersion('4.2.0'):
- tm.assert_raises(AssertionError, read_html, os.path.join(DATA_PATH,
- "spam.html"),
- flavor='bs4')
+ monkeypatch.setattr(bs4, '__version__', '4.2')
+ with tm.assert_raises_regex(ValueError, "minimum version"):
+ read_html(os.path.join(DATA_PATH, "spam.html"), flavor='bs4')
-class ReadHtmlMixin(object):
+def test_invalid_flavor():
+ url = 'google.com'
+ with pytest.raises(ValueError):
+ read_html(url, 'google', flavor='not a* valid**++ flaver')
- def read_html(self, *args, **kwargs):
- kwargs.setdefault('flavor', self.flavor)
- return read_html(*args, **kwargs)
+
+@td.skip_if_no('bs4')
+@td.skip_if_no('lxml')
+def test_same_ordering():
+ filename = os.path.join(DATA_PATH, 'valid_markup.html')
+ dfs_lxml = read_html(filename, index_col=0, flavor=['lxml'])
+ dfs_bs4 = read_html(filename, index_col=0, flavor=['bs4'])
+ assert_framelist_equal(dfs_lxml, dfs_bs4)
-class TestReadHtml(ReadHtmlMixin):
- flavor = 'bs4'
+@pytest.mark.parametrize("flavor", [
+ pytest.param('bs4', marks=pytest.mark.skipif(
+ not td.safe_import('lxml'), reason='No bs4')),
+ pytest.param('lxml', marks=pytest.mark.skipif(
+ not td.safe_import('lxml'), reason='No lxml'))], scope="class")
+class TestReadHtml(object):
spam_data = os.path.join(DATA_PATH, 'spam.html')
spam_data_kwargs = {}
if PY3:
spam_data_kwargs['encoding'] = 'UTF-8'
banklist_data = os.path.join(DATA_PATH, 'banklist.html')
- @classmethod
- def setup_class(cls):
- _skip_if_none_of(('bs4', 'html5lib'))
+ @pytest.fixture(autouse=True, scope="function")
+ def set_defaults(self, flavor, request):
+ self.read_html = partial(read_html, flavor=flavor)
+ yield
def test_to_html_compat(self):
df = mkdf(4, 3, data_gen_f=lambda *args: rand(), c_idx_names=False,
@@ -150,7 +123,6 @@ def test_spam_no_types(self):
df1 = self.read_html(self.spam_data, '.*Water.*')
df2 = self.read_html(self.spam_data, 'Unit')
assert_framelist_equal(df1, df2)
-
assert df1[0].iloc[0, 0] == 'Proximates'
assert df1[0].columns[0] == 'Nutrient'
@@ -667,6 +639,9 @@ def test_computer_sales_page(self):
r"multi_index of columns"):
self.read_html(data, header=[0, 1])
+ data = os.path.join(DATA_PATH, 'computer_sales_page.html')
+ assert self.read_html(data, header=[1, 2])
+
def test_wikipedia_states_table(self):
data = os.path.join(DATA_PATH, 'wikipedia_states.html')
assert os.path.isfile(data), '%r is not a file' % data
@@ -674,39 +649,6 @@ def test_wikipedia_states_table(self):
result = self.read_html(data, 'Arizona', header=1)[0]
assert result['sq mi'].dtype == np.dtype('float64')
- @pytest.mark.parametrize("displayed_only,exp0,exp1", [
- (True, DataFrame(["foo"]), None),
- (False, DataFrame(["foo bar baz qux"]), DataFrame(["foo"]))])
- def test_displayed_only(self, displayed_only, exp0, exp1):
- # GH 20027
- data = StringIO("""<html>
- <body>
- <table>
- <tr>
- <td>
- foo
- <span style="display:none;text-align:center">bar</span>
- <span style="display:none">baz</span>
- <span style="display: none">qux</span>
- </td>
- </tr>
- </table>
- <table style="display: none">
- <tr>
- <td>foo</td>
- </tr>
- </table>
- </body>
- </html>""")
-
- dfs = self.read_html(data, displayed_only=displayed_only)
- tm.assert_frame_equal(dfs[0], exp0)
-
- if exp1 is not None:
- tm.assert_frame_equal(dfs[1], exp1)
- else:
- assert len(dfs) == 1 # Should not parse hidden table
-
def test_decimal_rows(self):
# GH 12907
@@ -815,80 +757,6 @@ def test_multiple_header_rows(self):
html_df = read_html(html, )[0]
tm.assert_frame_equal(expected_df, html_df)
-
-def _lang_enc(filename):
- return os.path.splitext(os.path.basename(filename))[0].split('_')
-
-
-class TestReadHtmlEncoding(object):
- files = glob.glob(os.path.join(DATA_PATH, 'html_encoding', '*.html'))
- flavor = 'bs4'
-
- @classmethod
- def setup_class(cls):
- _skip_if_none_of((cls.flavor, 'html5lib'))
-
- def read_html(self, *args, **kwargs):
- kwargs['flavor'] = self.flavor
- return read_html(*args, **kwargs)
-
- def read_filename(self, f, encoding):
- return self.read_html(f, encoding=encoding, index_col=0)
-
- def read_file_like(self, f, encoding):
- with open(f, 'rb') as fobj:
- return self.read_html(BytesIO(fobj.read()), encoding=encoding,
- index_col=0)
-
- def read_string(self, f, encoding):
- with open(f, 'rb') as fobj:
- return self.read_html(fobj.read(), encoding=encoding, index_col=0)
-
- def test_encode(self):
- assert self.files, 'no files read from the data folder'
- for f in self.files:
- _, encoding = _lang_enc(f)
- try:
- from_string = self.read_string(f, encoding).pop()
- from_file_like = self.read_file_like(f, encoding).pop()
- from_filename = self.read_filename(f, encoding).pop()
- tm.assert_frame_equal(from_string, from_file_like)
- tm.assert_frame_equal(from_string, from_filename)
- except Exception:
- # seems utf-16/32 fail on windows
- if is_platform_windows():
- if '16' in encoding or '32' in encoding:
- continue
- raise
-
-
-class TestReadHtmlEncodingLxml(TestReadHtmlEncoding):
- flavor = 'lxml'
-
- @classmethod
- def setup_class(cls):
- super(TestReadHtmlEncodingLxml, cls).setup_class()
- _skip_if_no(cls.flavor)
-
-
-class TestReadHtmlLxml(ReadHtmlMixin):
- flavor = 'lxml'
-
- @classmethod
- def setup_class(cls):
- _skip_if_no('lxml')
-
- def test_data_fail(self):
- from lxml.etree import XMLSyntaxError
- spam_data = os.path.join(DATA_PATH, 'spam.html')
- banklist_data = os.path.join(DATA_PATH, 'banklist.html')
-
- with pytest.raises(XMLSyntaxError):
- self.read_html(spam_data)
-
- with pytest.raises(XMLSyntaxError):
- self.read_html(banklist_data)
-
def test_works_on_valid_markup(self):
filename = os.path.join(DATA_PATH, 'valid_markup.html')
dfs = self.read_html(filename, index_col=0)
@@ -897,7 +765,6 @@ def test_works_on_valid_markup(self):
@pytest.mark.slow
def test_fallback_success(self):
- _skip_if_none_of(('bs4', 'html5lib'))
banklist_data = os.path.join(DATA_PATH, 'banklist.html')
self.read_html(banklist_data, '.*Water.*', flavor=['lxml', 'html5lib'])
@@ -908,27 +775,6 @@ def test_to_html_timestamp(self):
result = df.to_html()
assert '2000-01-01' in result
- def test_parse_dates_list(self):
- df = DataFrame({'date': date_range('1/1/2001', periods=10)})
- expected = df.to_html()
- res = self.read_html(expected, parse_dates=[1], index_col=0)
- tm.assert_frame_equal(df, res[0])
- res = self.read_html(expected, parse_dates=['date'], index_col=0)
- tm.assert_frame_equal(df, res[0])
-
- def test_parse_dates_combine(self):
- raw_dates = Series(date_range('1/1/2001', periods=10))
- df = DataFrame({'date': raw_dates.map(lambda x: str(x.date())),
- 'time': raw_dates.map(lambda x: str(x.time()))})
- res = self.read_html(df.to_html(), parse_dates={'datetime': [1, 2]},
- index_col=1)
- newdf = DataFrame({'datetime': raw_dates})
- tm.assert_frame_equal(newdf, res[0])
-
- def test_computer_sales_page(self):
- data = os.path.join(DATA_PATH, 'computer_sales_page.html')
- self.read_html(data, header=[0, 1])
-
@pytest.mark.parametrize("displayed_only,exp0,exp1", [
(True, DataFrame(["foo"]), None),
(False, DataFrame(["foo bar baz qux"]), DataFrame(["foo"]))])
@@ -962,134 +808,99 @@ def test_displayed_only(self, displayed_only, exp0, exp1):
else:
assert len(dfs) == 1 # Should not parse hidden table
+ @pytest.mark.parametrize("f", glob.glob(
+ os.path.join(DATA_PATH, 'html_encoding', '*.html')))
+ def test_encode(self, f):
+ _, encoding = os.path.splitext(os.path.basename(f))[0].split('_')
-def test_invalid_flavor():
- url = 'google.com'
- with pytest.raises(ValueError):
- read_html(url, 'google', flavor='not a* valid**++ flaver')
-
-
-def get_elements_from_file(url, element='table'):
- _skip_if_none_of(('bs4', 'html5lib'))
- url = file_path_to_url(url)
- from bs4 import BeautifulSoup
- with urlopen(url) as f:
- soup = BeautifulSoup(f, features='html5lib')
- return soup.find_all(element)
-
-
-@pytest.mark.slow
-def test_bs4_finds_tables():
- filepath = os.path.join(DATA_PATH, "spam.html")
- with warnings.catch_warnings():
- warnings.filterwarnings('ignore')
- assert get_elements_from_file(filepath, 'table')
-
-
-def get_lxml_elements(url, element):
- _skip_if_no('lxml')
- from lxml.html import parse
- doc = parse(url)
- return doc.xpath('.//{0}'.format(element))
-
-
-@pytest.mark.slow
-def test_lxml_finds_tables():
- filepath = os.path.join(DATA_PATH, "spam.html")
- assert get_lxml_elements(filepath, 'table')
-
-
-@pytest.mark.slow
-def test_lxml_finds_tbody():
- filepath = os.path.join(DATA_PATH, "spam.html")
- assert get_lxml_elements(filepath, 'tbody')
-
-
-def test_same_ordering():
- _skip_if_none_of(['bs4', 'lxml', 'html5lib'])
- filename = os.path.join(DATA_PATH, 'valid_markup.html')
- dfs_lxml = read_html(filename, index_col=0, flavor=['lxml'])
- dfs_bs4 = read_html(filename, index_col=0, flavor=['bs4'])
- assert_framelist_equal(dfs_lxml, dfs_bs4)
-
-
-class ErrorThread(threading.Thread):
- def run(self):
try:
- super(ErrorThread, self).run()
- except Exception as e:
- self.err = e
- else:
- self.err = None
+ with open(f, 'rb') as fobj:
+ from_string = self.read_html(fobj.read(), encoding=encoding,
+ index_col=0).pop()
+ with open(f, 'rb') as fobj:
+ from_file_like = self.read_html(BytesIO(fobj.read()),
+ encoding=encoding,
+ index_col=0).pop()
-@pytest.mark.slow
-def test_importcheck_thread_safety():
- # see gh-16928
+ from_filename = self.read_html(f, encoding=encoding,
+ index_col=0).pop()
+ tm.assert_frame_equal(from_string, from_file_like)
+ tm.assert_frame_equal(from_string, from_filename)
+ except Exception:
+ # seems utf-16/32 fail on windows
+ if is_platform_windows():
+ if '16' in encoding or '32' in encoding:
+ pytest.skip()
+ raise
- # force import check by reinitalising global vars in html.py
- pytest.importorskip('lxml')
- reload(pandas.io.html)
+ def test_parse_failure_unseekable(self):
+ # Issue #17975
- filename = os.path.join(DATA_PATH, 'valid_markup.html')
- helper_thread1 = ErrorThread(target=read_html, args=(filename,))
- helper_thread2 = ErrorThread(target=read_html, args=(filename,))
+ if self.read_html.keywords.get('flavor') == 'lxml':
+ pytest.skip("Not applicable for lxml")
- helper_thread1.start()
- helper_thread2.start()
+ class UnseekableStringIO(StringIO):
+ def seekable(self):
+ return False
- while helper_thread1.is_alive() or helper_thread2.is_alive():
- pass
- assert None is helper_thread1.err is helper_thread2.err
+ bad = UnseekableStringIO('''
+ <table><tr><td>spam<foobr />eggs</td></tr></table>''')
+ assert self.read_html(bad)
-def test_parse_failure_unseekable():
- # Issue #17975
- _skip_if_no('lxml')
- _skip_if_no('bs4')
+ with pytest.raises(ValueError,
+ match='passed a non-rewindable file object'):
+ self.read_html(bad)
- class UnseekableStringIO(StringIO):
- def seekable(self):
- return False
+ def test_parse_failure_rewinds(self):
+ # Issue #17975
- good = UnseekableStringIO('''
- <table><tr><td>spam<br />eggs</td></tr></table>''')
- bad = UnseekableStringIO('''
- <table><tr><td>spam<foobr />eggs</td></tr></table>''')
+ class MockFile(object):
+ def __init__(self, data):
+ self.data = data
+ self.at_end = False
- assert read_html(good)
- assert read_html(bad, flavor='bs4')
+ def read(self, size=None):
+ data = '' if self.at_end else self.data
+ self.at_end = True
+ return data
- bad.seek(0)
+ def seek(self, offset):
+ self.at_end = False
- with pytest.raises(ValueError,
- match='passed a non-rewindable file object'):
- read_html(bad)
+ def seekable(self):
+ return True
+ good = MockFile('<table><tr><td>spam<br />eggs</td></tr></table>')
+ bad = MockFile('<table><tr><td>spam<foobr />eggs</td></tr></table>')
-def test_parse_failure_rewinds():
- # Issue #17975
- _skip_if_no('lxml')
- _skip_if_no('bs4')
+ assert self.read_html(good)
+ assert self.read_html(bad)
- class MockFile(object):
- def __init__(self, data):
- self.data = data
- self.at_end = False
+ @pytest.mark.slow
+ def test_importcheck_thread_safety(self):
+ # see gh-16928
- def read(self, size=None):
- data = '' if self.at_end else self.data
- self.at_end = True
- return data
+ class ErrorThread(threading.Thread):
+ def run(self):
+ try:
+ super(ErrorThread, self).run()
+ except Exception as e:
+ self.err = e
+ else:
+ self.err = None
- def seek(self, offset):
- self.at_end = False
+ # force import check by reinitalising global vars in html.py
+ reload(pandas.io.html)
- def seekable(self):
- return True
+ filename = os.path.join(DATA_PATH, 'valid_markup.html')
+ helper_thread1 = ErrorThread(target=self.read_html, args=(filename,))
+ helper_thread2 = ErrorThread(target=self.read_html, args=(filename,))
- good = MockFile('<table><tr><td>spam<br />eggs</td></tr></table>')
- bad = MockFile('<table><tr><td>spam<foobr />eggs</td></tr></table>')
+ helper_thread1.start()
+ helper_thread2.start()
- assert read_html(good)
- assert read_html(bad)
+ while helper_thread1.is_alive() or helper_thread2.is_alive():
+ pass
+ assert None is helper_thread1.err is helper_thread2.err
| This is an **extremely** aggressive change to get all of the test cases unified between the LXML and BS4 parsers. On the plus side both parsers can share the same set of tests and functionality, but on the downside it gives lxml a little more power than it had previously, where it would quickly fall back to bs4 for malformed sites.
Review / criticism appreciated | https://api.github.com/repos/pandas-dev/pandas/pulls/20293 | 2018-03-12T04:12:28Z | 2018-03-14T10:47:48Z | 2018-03-14T10:47:48Z | 2018-03-14T15:50:38Z |
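The refactored tests bind the flavor with `functools.partial` (`self.read_html = partial(read_html, flavor=flavor)`), which is why `test_parse_failure_unseekable` can introspect it through `.keywords`. A minimal sketch of that pattern, with an illustrative stand-in for `pandas.read_html`:

```python
from functools import partial

def read_html(io, flavor=None):
    # Stand-in for pandas.read_html; just echoes which parser was chosen.
    return 'parsed {} with {}'.format(io, flavor)

reader = partial(read_html, flavor='lxml')

# A test can skip itself based on the bound keyword, as in the diff above:
if reader.keywords.get('flavor') == 'lxml':
    print('lxml-only branch')

print(reader('spam.html'))  # parsed spam.html with lxml
```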
BUG: Retain tz-aware dtypes with melt (#15785) | diff --git a/doc/source/whatsnew/v0.23.0.txt b/doc/source/whatsnew/v0.23.0.txt
index f686a042c1a74..791365295c268 100644
--- a/doc/source/whatsnew/v0.23.0.txt
+++ b/doc/source/whatsnew/v0.23.0.txt
@@ -896,6 +896,7 @@ Timezones
- Bug in :func:`Timestamp.tz_localize` where localizing a timestamp near the minimum or maximum valid values could overflow and return a timestamp with an incorrect nanosecond value (:issue:`12677`)
- Bug when iterating over :class:`DatetimeIndex` that was localized with fixed timezone offset that rounded nanosecond precision to microseconds (:issue:`19603`)
- Bug in :func:`DataFrame.diff` that raised an ``IndexError`` with tz-aware values (:issue:`18578`)
+- Bug in :func:`melt` that converted tz-aware dtypes to tz-naive (:issue:`15785`)
Offsets
^^^^^^^
diff --git a/pandas/core/reshape/melt.py b/pandas/core/reshape/melt.py
index 01445eb30a9e5..ce99d2f8c9a63 100644
--- a/pandas/core/reshape/melt.py
+++ b/pandas/core/reshape/melt.py
@@ -13,7 +13,9 @@
import re
from pandas.core.dtypes.missing import notna
+from pandas.core.dtypes.common import is_extension_type
from pandas.core.tools.numeric import to_numeric
+from pandas.core.reshape.concat import concat
@Appender(_shared_docs['melt'] %
@@ -70,7 +72,12 @@ def melt(frame, id_vars=None, value_vars=None, var_name=None,
mdata = {}
for col in id_vars:
- mdata[col] = np.tile(frame.pop(col).values, K)
+ id_data = frame.pop(col)
+ if is_extension_type(id_data):
+ id_data = concat([id_data] * K, ignore_index=True)
+ else:
+ id_data = np.tile(id_data.values, K)
+ mdata[col] = id_data
mcolumns = id_vars + var_name + [value_name]
diff --git a/pandas/tests/reshape/test_melt.py b/pandas/tests/reshape/test_melt.py
index 000b22d4fdd36..81570de7586de 100644
--- a/pandas/tests/reshape/test_melt.py
+++ b/pandas/tests/reshape/test_melt.py
@@ -212,6 +212,27 @@ def test_multiindex(self):
res = self.df1.melt()
assert res.columns.tolist() == ['CAP', 'low', 'value']
+ @pytest.mark.parametrize("col", [
+ pd.Series(pd.date_range('2010', periods=5, tz='US/Pacific')),
+ pd.Series(["a", "b", "c", "a", "d"], dtype="category"),
+ pd.Series([0, 1, 0, 0, 0])])
+ def test_pandas_dtypes(self, col):
+ # GH 15785
+ df = DataFrame({'klass': range(5),
+ 'col': col,
+ 'attr1': [1, 0, 0, 0, 0],
+ 'attr2': col})
+ expected_value = pd.concat([pd.Series([1, 0, 0, 0, 0]), col],
+ ignore_index=True)
+ result = melt(df, id_vars=['klass', 'col'], var_name='attribute',
+ value_name='value')
+ expected = DataFrame({0: list(range(5)) * 2,
+ 1: pd.concat([col] * 2, ignore_index=True),
+ 2: ['attr1'] * 5 + ['attr2'] * 5,
+ 3: expected_value})
+ expected.columns = ['klass', 'col', 'attribute', 'value']
+ tm.assert_frame_equal(result, expected)
+
class TestLreshape(object):
| - [x] closes #15785
- [x] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
The `.values` call was converting tz-aware data to tz-naive data (by casting to a numpy array). Added an additional test for Categorical data as well. | https://api.github.com/repos/pandas-dev/pandas/pulls/20292 | 2018-03-12T02:00:24Z | 2018-03-13T10:31:10Z | 2018-03-13T10:31:10Z | 2018-03-13T15:01:34Z |
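The tz-stripping behaviour this PR fixes can be reproduced directly. A minimal sketch (not the PR's own test suite) showing why the fix switches from `np.tile(...values...)` to `concat` for extension dtypes:

```python
import numpy as np
import pandas as pd

s = pd.Series(pd.date_range("2010-01-01", periods=3, tz="US/Pacific"))

# Going through .values casts to a plain numpy datetime64[ns] array
# (converted to UTC) -- the timezone information is lost.
arr = np.tile(s.values, 2)
print(arr.dtype)  # datetime64[ns]

# Concatenating the Series itself preserves the extension dtype,
# which is what the patched melt() does for id_vars columns.
tiled = pd.concat([s] * 2, ignore_index=True)
print(tiled.dtype)  # datetime64[ns, US/Pacific]
```

The same `concat` path keeps Categorical id_vars categorical instead of degrading them to object arrays.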
DOC: update `pandas/core/ops.py` docstring with examples for pandas.DataFrame.gt and .ge methods | diff --git a/pandas/core/ops.py b/pandas/core/ops.py
index 6c6a54993b669..cb73318927355 100644
--- a/pandas/core/ops.py
+++ b/pandas/core/ops.py
@@ -370,6 +370,66 @@ def _get_op_name(op, special):
e NaN 2.0
"""
+_gt_example_FRAME = """
+>>> df1 = pd.DataFrame({'num1': range(1,6),
+ 'num2': range(2,11,2),
+ 'num3': range(1,20,4)})
+>>> df1
+ num1 num2 num3
+0 1 2 1
+1 2 4 5
+2 3 6 9
+3 4 8 13
+4 5 10 17
+>>> df2 = pd.DataFrame({'num1': range(6,11),
+ 'num2': range(1,10,2),
+ 'num3': range(1,20,4)})
+>>> df2
+ num1 num2 num3
+0 6 1 1
+1 7 3 5
+2 8 5 9
+3 9 7 13
+4 10 9 17
+>>> df1.gt(df2)
+ num1 num2 num3
+0 False True False
+1 False True False
+2 False True False
+3 False True False
+4 False True False
+"""
+
+_ge_example_FRAME = """
+>>> df1 = pd.DataFrame({'num1': range(1,6),
+ 'num2': range(2,11,2),
+ 'num3': range(1,20,4)})
+>>> df1
+ num1 num2 num3
+0 1 2 1
+1 2 4 5
+2 3 6 9
+3 4 8 13
+4 5 10 17
+>>> df2 = pd.DataFrame({'num1': range(6,11),
+ 'num2': range(1,10,2),
+ 'num3': range(1,20,4)})
+>>> df2
+ num1 num2 num3
+0 6 1 1
+1 7 3 5
+2 8 5 9
+3 9 7 13
+4 10 9 17
+>>> df1.ge(df2)
+ num1 num2 num3
+0 False True True
+1 False True True
+2 False True True
+3 False True True
+4 False True True
+"""
+
_op_descriptions = {
# Arithmetic Operators
'add': {'op': '+',
@@ -425,11 +485,11 @@ def _get_op_name(op, special):
'gt': {'op': '>',
'desc': 'Greater than',
'reverse': None,
- 'df_examples': None},
+ 'df_examples': _gt_example_FRAME},
'ge': {'op': '>=',
'desc': 'Greater than or equal to',
'reverse': None,
- 'df_examples': None}}
+ 'df_examples': _ge_example_FRAME}}
_op_names = list(_op_descriptions.keys())
for key in _op_names:
| Checklist for the pandas documentation sprint:
- [X] PR title is "DOC: update the <your-function-or-method> docstring"
- [ ] The validation script passes: `scripts/validate_docstrings.py <your-function-or-method>`
- [X] The PEP8 style check passes: `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [X] The html version looks good: `python doc/make.py --single <your-function-or-method>`
- [X] It has been proofread on language by another sprint participant
Please include the output of the validation script below between the "```" ticks:
```
################################################################################
####################### Docstring (pandas.DataFrame.gt) #######################
################################################################################
Wrapper for flexible comparison methods gt
################################################################################
################################## Validation ##################################
################################################################################
Errors found:
Docstring text (summary) should start in the line immediately after the opening quotes (not in the same line, or leaving a blank line in between)
Closing quotes should be placed in the line after the last text in the docstring (do not close the quotes in the same line as the text, or leave a blank line between the last text and the quotes)
Summary does not end with dot
No extended summary found
Errors in parameters section
Parameters {'axis', 'level', 'other'} not documented
No returns section found
See Also section not found
No examples section found
################################################################################
####################### Docstring (pandas.DataFrame.ge) #######################
################################################################################
Wrapper for flexible comparison methods ge
################################################################################
################################## Validation ##################################
################################################################################
Errors found:
Docstring text (summary) should start in the line immediately after the opening quotes (not in the same line, or leaving a blank line in between)
Closing quotes should be placed in the line after the last text in the docstring (do not close the quotes in the same line as the text, or leave a blank line between the last text and the quotes)
Summary does not end with dot
No extended summary found
Errors in parameters section
Parameters {'level', 'other', 'axis'} not documented
No returns section found
See Also section not found
No examples section found
```
If the validation script still gives errors, but you think there is a good reason
to deviate in this case (and there are certainly such cases), please state this
explicitly.
Comparison methods in `ops.py`, including `.gt()` and `.ge()`, do not have standard docstrings like other methods; they use a generalized wrapper summary line. This pull request simply adds examples in line with the recent merge for the `.add()` method: https://github.com/pandas-dev/pandas/pull/20246/files | https://api.github.com/repos/pandas-dev/pandas/pulls/20291 | 2018-03-12T00:37:27Z | 2018-03-18T23:49:55Z | null | 2018-03-18T23:49:55Z |
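The equivalence between the flexible comparison methods and the raw operators can be checked with a small sketch (toy data, not taken from the PR):

```python
import pandas as pd

df1 = pd.DataFrame({"num1": [1, 2, 3], "num2": [2, 4, 6]})
df2 = pd.DataFrame({"num1": [6, 7, 8], "num2": [1, 3, 5]})

# The flexible method is element-wise identical to the operator form,
# but additionally accepts `axis` and `level` for alignment control.
assert df1.gt(df2).equals(df1 > df2)
assert df1.ge(df2).equals(df1 >= df2)

print(df1.gt(df2))
#     num1  num2
# 0  False  True
# 1  False  True
# 2  False  True
```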
DOC: update the DataFrame.at[] docstring | diff --git a/pandas/core/indexing.py b/pandas/core/indexing.py
index 560e7638b5510..fb3279840c7bf 100755
--- a/pandas/core/indexing.py
+++ b/pandas/core/indexing.py
@@ -1888,11 +1888,50 @@ def __setitem__(self, key, value):
class _AtIndexer(_ScalarAccessIndexer):
- """Fast label-based scalar accessor
+ """
+ Access a single value for a row/column label pair.
+
+ Similar to ``loc``, in that both provide label-based lookups. Use
+ ``at`` if you only need to get or set a single value in a DataFrame
+ or Series.
+
+ See Also
+ --------
+ DataFrame.iat : Access a single value for a row/column pair by integer
+ position
+ DataFrame.loc : Access a group of rows and columns by label(s)
+ Series.at : Access a single value using a label
+
+ Examples
+ --------
+ >>> df = pd.DataFrame([[0, 2, 3], [0, 4, 1], [10, 20, 30]],
+ ... index=[4, 5, 6], columns=['A', 'B', 'C'])
+ >>> df
+ A B C
+ 4 0 2 3
+ 5 0 4 1
+ 6 10 20 30
+
+ Get value at specified row/column pair
+
+ >>> df.at[4, 'B']
+ 2
+
+ Set value at specified row/column pair
+
+ >>> df.at[4, 'B'] = 10
+ >>> df.at[4, 'B']
+ 10
- Similarly to ``loc``, ``at`` provides **label** based scalar lookups.
- You can also set using these indexers.
+ Get value within a Series
+ >>> df.loc[5].at['B']
+ 4
+
+ Raises
+ ------
+ KeyError
+ When label does not exist in DataFrame
"""
_takeable = False
| Checklist for the pandas documentation sprint (ignore this if you are doing
an unrelated PR):
- [x] PR title is "DOC: update the <your-function-or-method> docstring"
- [x] The validation script passes: `scripts/validate_docstrings.py <your-function-or-method>`
- [x] The PEP8 style check passes: `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] The html version looks good: `python doc/make.py --single <your-function-or-method>`
- [x] It has been proofread on language by another sprint participant
Please include the output of the validation script below between the "```" ticks:
```
# paste output of "scripts/validate_docstrings.py <your-function-or-method>" here
# between the "```" (remove this comment, but keep the "```")
################################################################################
####################### Docstring (pandas.DataFrame.at) #######################
################################################################################
Access a single value for a row/column label pair.
Similar to ``loc``, in that both provide label-based lookups. Use
``at`` if you only need to get or set a single value in a DataFrame
or Series.
See Also
--------
DataFrame.iat : Access a single value for a row/column pair by integer position
DataFrame.loc : Access a group of rows and columns by label(s)
Series.at : Access a single value using a label
Examples
--------
>>> df = pd.DataFrame([[0, 2, 3], [0, 4, 1], [10, 20, 30]],
... index=[4, 5, 6], columns=['A', 'B', 'C'])
>>> df
A B C
4 0 2 3
5 0 4 1
6 10 20 30
Get value at specified row/column pair
>>> df.at[4, 'B']
2
Set value at specified row/column pair
>>> df.at[4, 'B'] = 10
>>> df.at[4, 'B']
10
Get value within a series
>>> df.loc[5].at['B']
4
Raises
------
KeyError
When label does not exist in DataFrame
################################################################################
################################## Validation ##################################
################################################################################
Errors found:
No returns section found
```
If the validation script still gives errors, but you think there is a good reason
to deviate in this case (and there are certainly such cases), please state this
explicitly. | https://api.github.com/repos/pandas-dev/pandas/pulls/20290 | 2018-03-12T00:29:18Z | 2018-03-12T14:25:34Z | 2018-03-12T14:25:34Z | 2018-03-12T14:25:47Z |
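The `at` behaviour documented in this PR can be exercised with a short, self-contained sketch (same toy frame as the docstring):

```python
import pandas as pd

df = pd.DataFrame([[0, 2, 3], [0, 4, 1], [10, 20, 30]],
                  index=[4, 5, 6], columns=["A", "B", "C"])

# Fast scalar lookup by row/column label...
assert df.at[4, "B"] == 2

# ...and scalar assignment in place.
df.at[4, "B"] = 10
assert df.at[4, "B"] == 10

# A missing label raises KeyError, as the docstring's Raises section notes.
try:
    df.at[4, "Z"]
except KeyError:
    print("KeyError raised for missing label")
```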
DOC: update the pandas.DataFrame.clip_lower docstring | diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index a75e3960cda16..920b0f5ae7b51 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -6572,7 +6572,10 @@ def clip_upper(self, threshold, axis=None, inplace=False):
def clip_lower(self, threshold, axis=None, inplace=False):
"""
- Return copy of the input with values below a threshold truncated.
+ Trim values below a given threshold.
+
+ Elements below the `threshold` will be changed to match the
+ `threshold` value(s).
Parameters
----------
@@ -6597,17 +6600,22 @@ def clip_lower(self, threshold, axis=None, inplace=False):
See Also
--------
- Series.clip : Return copy of input with values below and above
- thresholds truncated.
- Series.clip_upper : Return copy of input with values above
- threshold truncated.
+ DataFrame.clip : General purpose method to trim `DataFrame` values to
+ given threshold(s)
+ DataFrame.clip_upper : Trim `DataFrame` values above given
+ threshold(s)
+ Series.clip : General purpose method to trim `Series` values to given
+ threshold(s)
+ Series.clip_upper : Trim `Series` values above given threshold(s)
Returns
-------
- clipped : same type as input
+ clipped
+ Original data with values trimmed.
Examples
--------
+
Series single threshold clipping:
>>> s = pd.Series([5, 6, 7, 8, 9])
@@ -6659,17 +6667,18 @@ def clip_lower(self, threshold, axis=None, inplace=False):
`threshold` should be the same length as the axis specified by
`axis`.
- >>> df.clip_lower(np.array([3, 3, 5]), axis='index')
+ >>> df.clip_lower([3, 3, 5], axis='index')
A B
0 3 3
1 3 4
2 5 6
- >>> df.clip_lower(np.array([4, 5]), axis='columns')
+ >>> df.clip_lower([4, 5], axis='columns')
A B
0 4 5
1 4 5
2 5 6
+
"""
return self._clip_with_one_bound(threshold, method=self.ge,
axis=axis, inplace=inplace)
| Checklist for the pandas documentation sprint (ignore this if you are doing
an unrelated PR):
- [X] PR title is "DOC: update the <your-function-or-method> docstring"
- [X] The validation script passes: `scripts/validate_docstrings.py <your-function-or-method>`
- [X] The PEP8 style check passes: `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [X] The html version looks good: `python doc/make.py --single <your-function-or-method>`
- [X] It has been proofread on language by another sprint participant
Please include the output of the validation script below between the "```" ticks:
```
################################################################################
################################## Validation ##################################
################################################################################
Docstring for "pandas.DataFrame.clip_lower" correct. :)
```
If the validation script still gives errors, but you think there is a good reason
to deviate in this case (and there are certainly such cases), please state this
explicitly.
| https://api.github.com/repos/pandas-dev/pandas/pulls/20289 | 2018-03-11T23:03:51Z | 2018-07-08T14:39:58Z | 2018-07-08T14:39:58Z | 2018-07-08T14:40:31Z |
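The examples in the `clip_lower` docstring above can still be replayed; note that `clip_lower` was later deprecated (pandas 0.24) and removed in favour of `clip(lower=...)`, so this sketch uses the surviving spelling:

```python
import pandas as pd

s = pd.Series([5, 6, 7, 8, 9])

# Equivalent to the docstring's single-threshold example: values below
# the threshold are raised to it, everything else is left alone.
print(s.clip(lower=7).tolist())  # [7, 7, 7, 8, 9]

df = pd.DataFrame({"A": [1, 3, 5], "B": [2, 4, 6]})
# Per-row thresholds, matching the axis='index' example in the diff.
print(df.clip(lower=pd.Series([3, 3, 5]), axis="index"))
```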
CI: use dateutil-master in testing | diff --git a/ci/requirements-3.6_NUMPY_DEV.build.sh b/ci/requirements-3.6_NUMPY_DEV.build.sh
index 9145bf1d3481c..fd79142c5cebb 100644
--- a/ci/requirements-3.6_NUMPY_DEV.build.sh
+++ b/ci/requirements-3.6_NUMPY_DEV.build.sh
@@ -12,8 +12,7 @@ PRE_WHEELS="https://7933911d6844c6c53a7d-47bd50c35cd79bd838daf386af554a83.ssl.cf
pip install --pre --upgrade --timeout=60 -f $PRE_WHEELS numpy scipy
# install dateutil from master
-# pip install -U git+git://github.com/dateutil/dateutil.git
-pip install dateutil
+pip install -U git+git://github.com/dateutil/dateutil.git
# cython via pip
pip install cython
diff --git a/pandas/conftest.py b/pandas/conftest.py
index 37f0a2f818a3b..5eca467746157 100644
--- a/pandas/conftest.py
+++ b/pandas/conftest.py
@@ -1,9 +1,7 @@
import pytest
-from distutils.version import LooseVersion
import numpy
import pandas
-import dateutil
import pandas.util._test_decorators as td
@@ -68,14 +66,6 @@ def ip():
return InteractiveShell()
-is_dateutil_le_261 = pytest.mark.skipif(
- LooseVersion(dateutil.__version__) > LooseVersion('2.6.1'),
- reason="dateutil api change version")
-is_dateutil_gt_261 = pytest.mark.skipif(
- LooseVersion(dateutil.__version__) <= LooseVersion('2.6.1'),
- reason="dateutil stable version")
-
-
@pytest.fixture(params=[None, 'gzip', 'bz2', 'zip',
pytest.param('xz', marks=td.skip_if_no_lzma)])
def compression(request):
diff --git a/pandas/tests/indexes/datetimes/test_tools.py b/pandas/tests/indexes/datetimes/test_tools.py
index 0d42b6e9692fe..fb7677bb1449c 100644
--- a/pandas/tests/indexes/datetimes/test_tools.py
+++ b/pandas/tests/indexes/datetimes/test_tools.py
@@ -12,7 +12,6 @@
from distutils.version import LooseVersion
import pandas as pd
-from pandas.conftest import is_dateutil_le_261, is_dateutil_gt_261
from pandas._libs import tslib
from pandas._libs.tslibs import parsing
from pandas.core.tools import datetimes as tools
@@ -1058,7 +1057,6 @@ def test_dayfirst(self, cache):
class TestGuessDatetimeFormat(object):
@td.skip_if_not_us_locale
- @is_dateutil_le_261
def test_guess_datetime_format_for_array(self):
expected_format = '%Y-%m-%d %H:%M:%S.%f'
dt_string = datetime(2011, 12, 30, 0, 0, 0).strftime(expected_format)
@@ -1078,27 +1076,6 @@ def test_guess_datetime_format_for_array(self):
[np.nan, np.nan, np.nan], dtype='O'))
assert format_for_string_of_nans is None
- @td.skip_if_not_us_locale
- @is_dateutil_gt_261
- def test_guess_datetime_format_for_array_gt_261(self):
- expected_format = '%Y-%m-%d %H:%M:%S.%f'
- dt_string = datetime(2011, 12, 30, 0, 0, 0).strftime(expected_format)
-
- test_arrays = [
- np.array([dt_string, dt_string, dt_string], dtype='O'),
- np.array([np.nan, np.nan, dt_string], dtype='O'),
- np.array([dt_string, 'random_string'], dtype='O'),
- ]
-
- for test_array in test_arrays:
- assert tools._guess_datetime_format_for_array(
- test_array) is None
-
- format_for_string_of_nans = tools._guess_datetime_format_for_array(
- np.array(
- [np.nan, np.nan, np.nan], dtype='O'))
- assert format_for_string_of_nans is None
-
class TestToDatetimeInferFormat(object):
diff --git a/pandas/tests/tslibs/test_parsing.py b/pandas/tests/tslibs/test_parsing.py
index 34cce088a8b42..14c9ca1f6cc54 100644
--- a/pandas/tests/tslibs/test_parsing.py
+++ b/pandas/tests/tslibs/test_parsing.py
@@ -8,7 +8,6 @@
from dateutil.parser import parse
import pandas.util._test_decorators as td
-from pandas.conftest import is_dateutil_le_261, is_dateutil_gt_261
from pandas import compat
from pandas.util import testing as tm
from pandas._libs.tslibs import parsing
@@ -96,7 +95,6 @@ def test_parsers_monthfreq(self):
class TestGuessDatetimeFormat(object):
@td.skip_if_not_us_locale
- @is_dateutil_le_261
@pytest.mark.parametrize(
"string, format",
[
@@ -112,19 +110,6 @@ def test_guess_datetime_format_with_parseable_formats(
result = parsing._guess_datetime_format(string)
assert result == format
- @td.skip_if_not_us_locale
- @is_dateutil_gt_261
- @pytest.mark.parametrize(
- "string",
- ['20111230', '2011-12-30', '30-12-2011',
- '2011-12-30 00:00:00', '2011-12-30T00:00:00',
- '2011-12-30 00:00:00.000000'])
- def test_guess_datetime_format_with_parseable_formats_gt_261(
- self, string):
- result = parsing._guess_datetime_format(string)
- assert result is None
-
- @is_dateutil_le_261
@pytest.mark.parametrize(
"dayfirst, expected",
[
@@ -136,17 +121,7 @@ def test_guess_datetime_format_with_dayfirst(self, dayfirst, expected):
ambiguous_string, dayfirst=dayfirst)
assert result == expected
- @is_dateutil_gt_261
- @pytest.mark.parametrize(
- "dayfirst", [True, False])
- def test_guess_datetime_format_with_dayfirst_gt_261(self, dayfirst):
- ambiguous_string = '01/01/2011'
- result = parsing._guess_datetime_format(
- ambiguous_string, dayfirst=dayfirst)
- assert result is None
-
@td.skip_if_has_locale
- @is_dateutil_le_261
@pytest.mark.parametrize(
"string, format",
[
@@ -158,19 +133,6 @@ def test_guess_datetime_format_with_locale_specific_formats(
result = parsing._guess_datetime_format(string)
assert result == format
- @td.skip_if_has_locale
- @is_dateutil_gt_261
- @pytest.mark.parametrize(
- "string",
- [
- '30/Dec/2011',
- '30/December/2011',
- '30/Dec/2011 00:00:00'])
- def test_guess_datetime_format_with_locale_specific_formats_gt_261(
- self, string):
- result = parsing._guess_datetime_format(string)
- assert result is None
-
def test_guess_datetime_format_invalid_inputs(self):
# A datetime string must include a year, month and a day for it
# to be guessable, in addition to being a string that looks like
@@ -189,7 +151,6 @@ def test_guess_datetime_format_invalid_inputs(self):
for invalid_dt in invalid_dts:
assert parsing._guess_datetime_format(invalid_dt) is None
- @is_dateutil_le_261
@pytest.mark.parametrize(
"string, format",
[
@@ -204,21 +165,6 @@ def test_guess_datetime_format_nopadding(self, string, format):
result = parsing._guess_datetime_format(string)
assert result == format
- @is_dateutil_gt_261
- @pytest.mark.parametrize(
- "string",
- [
- '2011-1-1',
- '30-1-2011',
- '1/1/2011',
- '2011-1-1 00:00:00',
- '2011-1-1 0:0:0',
- '2011-1-3T00:00:0'])
- def test_guess_datetime_format_nopadding_gt_261(self, string):
- # GH 11142
- result = parsing._guess_datetime_format(string)
- assert result is None
-
class TestArrayToDatetime(object):
def test_try_parse_dates(self):
| closes #18332
| https://api.github.com/repos/pandas-dev/pandas/pulls/20288 | 2018-03-11T22:20:04Z | 2018-03-12T12:09:44Z | 2018-03-12T12:09:44Z | 2018-03-12T12:09:44Z |
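The `is_dateutil_le_261` / `is_dateutil_gt_261` markers removed in this PR boiled down to a `pytest.mark.skipif` built from a parsed library version. A minimal, dependency-free sketch of that comparison (the original used `distutils.version.LooseVersion`, which has since been deprecated; this toy parser handles plain `X.Y.Z` releases only):

```python
def parse_version(v):
    """Parse a plain 'X.Y.Z' version string into a comparable int tuple."""
    return tuple(int(part) for part in v.split(".") if part.isdigit())

# In the removed code this value came from dateutil.__version__; the
# boolean below is what fed pytest.mark.skipif(..., reason=...).
dateutil_version = "2.6.1"
skip_old_api_tests = parse_version(dateutil_version) > (2, 6, 1)

print(skip_old_api_tests)                    # False for 2.6.1 itself
print(parse_version("2.7.3") > (2, 6, 1))    # True
```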
TST: adding join_types fixture | diff --git a/pandas/conftest.py b/pandas/conftest.py
index 5eca467746157..7a4ef56d7d749 100644
--- a/pandas/conftest.py
+++ b/pandas/conftest.py
@@ -89,3 +89,11 @@ def compression_no_zip(request):
def datetime_tz_utc():
from datetime import timezone
return timezone.utc
+
+
+@pytest.fixture(params=['inner', 'outer', 'left', 'right'])
+def join_type(request):
+ """
+ Fixture for trying all types of join operations
+ """
+ return request.param
diff --git a/pandas/tests/indexes/common.py b/pandas/tests/indexes/common.py
index 8f51dbabd5b71..758f3f0ef9ebc 100644
--- a/pandas/tests/indexes/common.py
+++ b/pandas/tests/indexes/common.py
@@ -978,11 +978,10 @@ def test_empty(self):
assert not index.empty
assert index[:0].empty
- @pytest.mark.parametrize('how', ['outer', 'inner', 'left', 'right'])
- def test_join_self_unique(self, how):
+ def test_join_self_unique(self, join_type):
index = self.create_index()
if index.is_unique:
- joined = index.join(index, how=how)
+ joined = index.join(index, how=join_type)
assert (index == joined).all()
def test_searchsorted_monotonic(self, indices):
diff --git a/pandas/tests/indexes/datetimes/test_datetime.py b/pandas/tests/indexes/datetimes/test_datetime.py
index b685584a29fb9..51788b3e25507 100644
--- a/pandas/tests/indexes/datetimes/test_datetime.py
+++ b/pandas/tests/indexes/datetimes/test_datetime.py
@@ -250,10 +250,9 @@ def test_does_not_convert_mixed_integer(self):
assert cols.dtype == joined.dtype
tm.assert_numpy_array_equal(cols.values, joined.values)
- @pytest.mark.parametrize('how', ['outer', 'inner', 'left', 'right'])
- def test_join_self(self, how):
+ def test_join_self(self, join_type):
index = date_range('1/1/2000', periods=10)
- joined = index.join(index, how=how)
+ joined = index.join(index, how=join_type)
assert index is joined
def assert_index_parameters(self, index):
@@ -274,8 +273,7 @@ def test_ns_index(self):
freq=index.freq)
self.assert_index_parameters(new_index)
- @pytest.mark.parametrize('how', ['left', 'right', 'inner', 'outer'])
- def test_join_with_period_index(self, how):
+ def test_join_with_period_index(self, join_type):
df = tm.makeCustomDataframe(
10, 10, data_gen_f=lambda *args: np.random.randint(2),
c_idx_type='p', r_idx_type='dt')
@@ -284,7 +282,7 @@ def test_join_with_period_index(self, how):
with tm.assert_raises_regex(ValueError,
'can only call with other '
'PeriodIndex-ed objects'):
- df.columns.join(s.index, how=how)
+ df.columns.join(s.index, how=join_type)
def test_factorize(self):
idx1 = DatetimeIndex(['2014-01', '2014-01', '2014-02', '2014-02',
diff --git a/pandas/tests/indexes/datetimes/test_timezones.py b/pandas/tests/indexes/datetimes/test_timezones.py
index 217610b76cf0f..2913812db0dd4 100644
--- a/pandas/tests/indexes/datetimes/test_timezones.py
+++ b/pandas/tests/indexes/datetimes/test_timezones.py
@@ -700,18 +700,17 @@ def test_dti_tz_constructors(self, tzstr):
# -------------------------------------------------------------
# Unsorted
- @pytest.mark.parametrize('how', ['inner', 'outer', 'left', 'right'])
- def test_join_utc_convert(self, how):
+ def test_join_utc_convert(self, join_type):
rng = date_range('1/1/2011', periods=100, freq='H', tz='utc')
left = rng.tz_convert('US/Eastern')
right = rng.tz_convert('Europe/Berlin')
- result = left.join(left[:-5], how=how)
+ result = left.join(left[:-5], how=join_type)
assert isinstance(result, DatetimeIndex)
assert result.tz == left.tz
- result = left.join(right[:-5], how=how)
+ result = left.join(right[:-5], how=join_type)
assert isinstance(result, DatetimeIndex)
assert result.tz.zone == 'UTC'
diff --git a/pandas/tests/indexes/period/test_period.py b/pandas/tests/indexes/period/test_period.py
index 4548d7fa1a468..923d826fe1a5e 100644
--- a/pandas/tests/indexes/period/test_period.py
+++ b/pandas/tests/indexes/period/test_period.py
@@ -532,10 +532,9 @@ def test_map(self):
exp = Index([x.ordinal for x in index])
tm.assert_index_equal(result, exp)
- @pytest.mark.parametrize('how', ['outer', 'inner', 'left', 'right'])
- def test_join_self(self, how):
+ def test_join_self(self, join_type):
index = period_range('1/1/2000', periods=10)
- joined = index.join(index, how=how)
+ joined = index.join(index, how=join_type)
assert index is joined
def test_insert(self):
diff --git a/pandas/tests/indexes/period/test_setops.py b/pandas/tests/indexes/period/test_setops.py
index ec0836dfa174b..6598e0663fb9a 100644
--- a/pandas/tests/indexes/period/test_setops.py
+++ b/pandas/tests/indexes/period/test_setops.py
@@ -14,20 +14,18 @@ def _permute(obj):
class TestPeriodIndex(object):
- @pytest.mark.parametrize('kind', ['inner', 'outer', 'left', 'right'])
- def test_joins(self, kind):
+ def test_joins(self, join_type):
index = period_range('1/1/2000', '1/20/2000', freq='D')
- joined = index.join(index[:-5], how=kind)
+ joined = index.join(index[:-5], how=join_type)
assert isinstance(joined, PeriodIndex)
assert joined.freq == index.freq
- @pytest.mark.parametrize('kind', ['inner', 'outer', 'left', 'right'])
- def test_join_self(self, kind):
+ def test_join_self(self, join_type):
index = period_range('1/1/2000', '1/20/2000', freq='D')
- res = index.join(index, how=kind)
+ res = index.join(index, how=join_type)
assert index is res
def test_join_does_not_recur(self):
diff --git a/pandas/tests/indexes/test_base.py b/pandas/tests/indexes/test_base.py
index eb429f46a3355..e8f05cb928cad 100644
--- a/pandas/tests/indexes/test_base.py
+++ b/pandas/tests/indexes/test_base.py
@@ -1599,16 +1599,15 @@ def test_slice_keep_name(self):
idx = Index(['a', 'b'], name='asdf')
assert idx.name == idx[1:].name
- def test_join_self(self):
- # instance attributes of the form self.<name>Index
- indices = 'unicode', 'str', 'date', 'int', 'float'
- kinds = 'outer', 'inner', 'left', 'right'
- for index_kind in indices:
- res = getattr(self, '{0}Index'.format(index_kind))
-
- for kind in kinds:
- joined = res.join(res, how=kind)
- assert res is joined
+ # instance attributes of the form self.<name>Index
+ @pytest.mark.parametrize('index_kind',
+ ['unicode', 'str', 'date', 'int', 'float'])
+ def test_join_self(self, join_type, index_kind):
+
+ res = getattr(self, '{0}Index'.format(index_kind))
+
+ joined = res.join(res, how=join_type)
+ assert res is joined
def test_str_attribute(self):
# GH9068
diff --git a/pandas/tests/indexes/test_multi.py b/pandas/tests/indexes/test_multi.py
index cd6a5c761d0c2..34abf7052da8c 100644
--- a/pandas/tests/indexes/test_multi.py
+++ b/pandas/tests/indexes/test_multi.py
@@ -450,11 +450,11 @@ def test_inplace_mutation_resets_values(self):
# Make sure label setting works too
labels2 = [[0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0]]
- exp_values = np.empty((6, ), dtype=object)
+ exp_values = np.empty((6,), dtype=object)
exp_values[:] = [(long(1), 'a')] * 6
# Must be 1d array of tuples
- assert exp_values.shape == (6, )
+ assert exp_values.shape == (6,)
new_values = mi2.set_labels(labels2).values
# Not inplace shouldn't change
@@ -583,7 +583,7 @@ def test_constructor_single_level(self):
def test_constructor_no_levels(self):
tm.assert_raises_regex(ValueError, "non-zero number "
- "of levels/labels",
+ "of levels/labels",
MultiIndex, levels=[], labels=[])
both_re = re.compile('Must pass both levels and labels')
with tm.assert_raises_regex(TypeError, both_re):
@@ -595,7 +595,7 @@ def test_constructor_mismatched_label_levels(self):
labels = [np.array([1]), np.array([2]), np.array([3])]
levels = ["a"]
tm.assert_raises_regex(ValueError, "Length of levels and labels "
- "must be the same", MultiIndex,
+ "must be the same", MultiIndex,
levels=levels, labels=labels)
length_error = re.compile('>= length of level')
label_error = re.compile(r'Unequal label lengths: \[4, 2\]')
@@ -844,19 +844,19 @@ def test_from_arrays_different_lengths(self):
idx1 = [1, 2, 3]
idx2 = ['a', 'b']
tm.assert_raises_regex(ValueError, '^all arrays must '
- 'be same length$',
+ 'be same length$',
MultiIndex.from_arrays, [idx1, idx2])
idx1 = []
idx2 = ['a', 'b']
tm.assert_raises_regex(ValueError, '^all arrays must '
- 'be same length$',
+ 'be same length$',
MultiIndex.from_arrays, [idx1, idx2])
idx1 = [1, 2, 3]
idx2 = []
tm.assert_raises_regex(ValueError, '^all arrays must '
- 'be same length$',
+ 'be same length$',
MultiIndex.from_arrays, [idx1, idx2])
def test_from_product(self):
@@ -964,7 +964,7 @@ def test_values_boxed(self):
def test_values_multiindex_datetimeindex(self):
# Test to ensure we hit the boxing / nobox part of MI.values
- ints = np.arange(10**18, 10**18 + 5)
+ ints = np.arange(10 ** 18, 10 ** 18 + 5)
naive = pd.DatetimeIndex(ints)
aware = pd.DatetimeIndex(ints, tz='US/Central')
@@ -1023,7 +1023,7 @@ def test_append(self):
def test_append_mixed_dtypes(self):
# GH 13660
- dti = date_range('2011-01-01', freq='M', periods=3,)
+ dti = date_range('2011-01-01', freq='M', periods=3, )
dti_tz = date_range('2011-01-01', freq='M', periods=3, tz='US/Eastern')
pi = period_range('2011-01', freq='M', periods=3)
@@ -1067,9 +1067,12 @@ def test_get_level_values(self):
tm.assert_index_equal(result, expected)
# GH 10460
- index = MultiIndex(levels=[CategoricalIndex(
- ['A', 'B']), CategoricalIndex([1, 2, 3])], labels=[np.array(
- [0, 0, 0, 1, 1, 1]), np.array([0, 1, 2, 0, 1, 2])])
+ index = MultiIndex(
+ levels=[CategoricalIndex(['A', 'B']),
+ CategoricalIndex([1, 2, 3])],
+ labels=[np.array([0, 0, 0, 1, 1, 1]),
+ np.array([0, 1, 2, 0, 1, 2])])
+
exp = CategoricalIndex(['A', 'A', 'A', 'B', 'B', 'B'])
tm.assert_index_equal(index.get_level_values(0), exp)
exp = CategoricalIndex([1, 2, 3, 1, 2, 3])
@@ -1397,7 +1400,7 @@ def test_slice_locs_not_sorted(self):
[0, 1, 0, 0, 0, 1, 0, 1]), np.array([1, 0, 1, 1, 0, 0, 1, 0])])
tm.assert_raises_regex(KeyError, "[Kk]ey length.*greater than "
- "MultiIndex lexsort depth",
+ "MultiIndex lexsort depth",
index.slice_locs, (1, 0, 1), (2, 1, 0))
# works
@@ -1887,12 +1890,12 @@ def test_difference(self):
expected.names = first.names
assert first.names == result.names
tm.assert_raises_regex(TypeError, "other must be a MultiIndex "
- "or a list of tuples",
+ "or a list of tuples",
first.difference, [1, 2, 3, 4, 5])
def test_from_tuples(self):
tm.assert_raises_regex(TypeError, 'Cannot infer number of levels '
- 'from empty list',
+ 'from empty list',
MultiIndex.from_tuples, [])
expected = MultiIndex(levels=[[1, 3], [2, 4]],
@@ -2039,8 +2042,9 @@ def test_droplevel_with_names(self):
dropped = index.droplevel(0)
assert dropped.name == 'second'
- index = MultiIndex(levels=[Index(lrange(4)), Index(lrange(4)), Index(
- lrange(4))], labels=[np.array([0, 0, 1, 2, 2, 2, 3, 3]), np.array(
+ index = MultiIndex(
+ levels=[Index(lrange(4)), Index(lrange(4)), Index(lrange(4))],
+ labels=[np.array([0, 0, 1, 2, 2, 2, 3, 3]), np.array(
[0, 1, 0, 0, 0, 1, 0, 1]), np.array([1, 0, 1, 1, 0, 0, 1, 0])],
names=['one', 'two', 'three'])
dropped = index.droplevel(0)
@@ -2051,8 +2055,9 @@ def test_droplevel_with_names(self):
assert dropped.equals(expected)
def test_droplevel_multiple(self):
- index = MultiIndex(levels=[Index(lrange(4)), Index(lrange(4)), Index(
- lrange(4))], labels=[np.array([0, 0, 1, 2, 2, 2, 3, 3]), np.array(
+ index = MultiIndex(
+ levels=[Index(lrange(4)), Index(lrange(4)), Index(lrange(4))],
+ labels=[np.array([0, 0, 1, 2, 2, 2, 3, 3]), np.array(
[0, 1, 0, 0, 0, 1, 0, 1]), np.array([1, 0, 1, 1, 0, 0, 1, 0])],
names=['one', 'two', 'three'])
@@ -2101,7 +2106,7 @@ def test_insert(self):
# key wrong length
msg = "Item must have length equal to number of levels"
with tm.assert_raises_regex(ValueError, msg):
- self.index.insert(0, ('foo2', ))
+ self.index.insert(0, ('foo2',))
left = pd.DataFrame([['a', 'b', 0], ['b', 'd', 1]],
columns=['1st', '2nd', '3rd'])
@@ -2134,8 +2139,8 @@ def test_insert(self):
# GH9250
idx = [('test1', i) for i in range(5)] + \
- [('test2', i) for i in range(6)] + \
- [('test', 17), ('test', 18)]
+ [('test2', i) for i in range(6)] + \
+ [('test', 17), ('test', 18)]
left = pd.Series(np.linspace(0, 10, 11),
pd.MultiIndex.from_tuples(idx[:-2]))
@@ -2210,42 +2215,36 @@ def take_invalid_kwargs(self):
tm.assert_raises_regex(ValueError, msg, idx.take,
indices, mode='clip')
- def test_join_level(self):
- def _check_how(other, how):
- join_index, lidx, ridx = other.join(self.index, how=how,
- level='second',
- return_indexers=True)
-
- exp_level = other.join(self.index.levels[1], how=how)
- assert join_index.levels[0].equals(self.index.levels[0])
- assert join_index.levels[1].equals(exp_level)
-
- # pare down levels
- mask = np.array(
- [x[1] in exp_level for x in self.index], dtype=bool)
- exp_values = self.index.values[mask]
- tm.assert_numpy_array_equal(join_index.values, exp_values)
-
- if how in ('outer', 'inner'):
- join_index2, ridx2, lidx2 = \
- self.index.join(other, how=how, level='second',
- return_indexers=True)
-
- assert join_index.equals(join_index2)
- tm.assert_numpy_array_equal(lidx, lidx2)
- tm.assert_numpy_array_equal(ridx, ridx2)
- tm.assert_numpy_array_equal(join_index2.values, exp_values)
-
- def _check_all(other):
- _check_how(other, 'outer')
- _check_how(other, 'inner')
- _check_how(other, 'left')
- _check_how(other, 'right')
-
- _check_all(Index(['three', 'one', 'two']))
- _check_all(Index(['one']))
- _check_all(Index(['one', 'three']))
-
+ @pytest.mark.parametrize('other',
+ [Index(['three', 'one', 'two']),
+ Index(['one']),
+ Index(['one', 'three'])])
+ def test_join_level(self, other, join_type):
+ join_index, lidx, ridx = other.join(self.index, how=join_type,
+ level='second',
+ return_indexers=True)
+
+ exp_level = other.join(self.index.levels[1], how=join_type)
+ assert join_index.levels[0].equals(self.index.levels[0])
+ assert join_index.levels[1].equals(exp_level)
+
+ # pare down levels
+ mask = np.array(
+ [x[1] in exp_level for x in self.index], dtype=bool)
+ exp_values = self.index.values[mask]
+ tm.assert_numpy_array_equal(join_index.values, exp_values)
+
+ if join_type in ('outer', 'inner'):
+ join_index2, ridx2, lidx2 = \
+ self.index.join(other, how=join_type, level='second',
+ return_indexers=True)
+
+ assert join_index.equals(join_index2)
+ tm.assert_numpy_array_equal(lidx, lidx2)
+ tm.assert_numpy_array_equal(ridx, ridx2)
+ tm.assert_numpy_array_equal(join_index2.values, exp_values)
+
+ def test_join_level_corner_case(self):
# some corner cases
idx = Index(['three', 'one', 'two'])
result = idx.join(self.index, level='second')
@@ -2254,12 +2253,10 @@ def _check_all(other):
tm.assert_raises_regex(TypeError, "Join.*MultiIndex.*ambiguous",
self.index.join, self.index, level=1)
- def test_join_self(self):
- kinds = 'outer', 'inner', 'left', 'right'
- for kind in kinds:
- res = self.index
- joined = res.join(res, how=kind)
- assert res is joined
+ def test_join_self(self, join_type):
+ res = self.index
+ joined = res.join(res, how=join_type)
+ assert res is joined
def test_join_multi(self):
# GH 10665
@@ -2335,7 +2332,7 @@ def test_duplicates(self):
assert self.index.append(self.index).has_duplicates
index = MultiIndex(levels=[[0, 1], [0, 1, 2]], labels=[
- [0, 0, 0, 0, 1, 1, 1], [0, 1, 2, 0, 0, 1, 2]])
+ [0, 0, 0, 0, 1, 1, 1], [0, 1, 2, 0, 0, 1, 2]])
assert index.has_duplicates
# GH 9075
@@ -2434,8 +2431,11 @@ def check(nlevels, with_nulls):
def test_duplicate_meta_data(self):
# GH 10115
- index = MultiIndex(levels=[[0, 1], [0, 1, 2]], labels=[
- [0, 0, 0, 0, 1, 1, 1], [0, 1, 2, 0, 0, 1, 2]])
+ index = MultiIndex(
+ levels=[[0, 1], [0, 1, 2]],
+ labels=[[0, 0, 0, 0, 1, 1, 1],
+ [0, 1, 2, 0, 0, 1, 2]])
+
for idx in [index,
index.set_names([None, None]),
index.set_names([None, 'Num']),
diff --git a/pandas/tests/indexes/timedeltas/test_timedelta.py b/pandas/tests/indexes/timedeltas/test_timedelta.py
index 37db9d704aa1f..4692b6d675e6b 100644
--- a/pandas/tests/indexes/timedeltas/test_timedelta.py
+++ b/pandas/tests/indexes/timedeltas/test_timedelta.py
@@ -102,10 +102,9 @@ def test_factorize(self):
tm.assert_numpy_array_equal(arr, exp_arr)
tm.assert_index_equal(idx, idx3)
- @pytest.mark.parametrize('kind', ['outer', 'inner', 'left', 'right'])
- def test_join_self(self, kind):
+ def test_join_self(self, join_type):
index = timedelta_range('1 day', periods=10)
- joined = index.join(index, how=kind)
+ joined = index.join(index, how=join_type)
tm.assert_index_equal(index, joined)
def test_does_not_convert_mixed_integer(self):
diff --git a/pandas/tests/reshape/merge/test_join.py b/pandas/tests/reshape/merge/test_join.py
index a64069fa700b8..1b8f3632d381c 100644
--- a/pandas/tests/reshape/merge/test_join.py
+++ b/pandas/tests/reshape/merge/test_join.py
@@ -297,7 +297,25 @@ def test_join_on_series_buglet(self):
'b': [2, 2]}, index=df.index)
tm.assert_frame_equal(result, expected)
- def test_join_index_mixed(self):
+ def test_join_index_mixed(self, join_type):
+ # no overlapping blocks
+ df1 = DataFrame(index=np.arange(10))
+ df1['bool'] = True
+ df1['string'] = 'foo'
+
+ df2 = DataFrame(index=np.arange(5, 15))
+ df2['int'] = 1
+ df2['float'] = 1.
+
+ joined = df1.join(df2, how=join_type)
+ expected = _join_by_hand(df1, df2, how=join_type)
+ assert_frame_equal(joined, expected)
+
+ joined = df2.join(df1, how=join_type)
+ expected = _join_by_hand(df2, df1, how=join_type)
+ assert_frame_equal(joined, expected)
+
+ def test_join_index_mixed_overlap(self):
df1 = DataFrame({'A': 1., 'B': 2, 'C': 'foo', 'D': True},
index=np.arange(10),
columns=['A', 'B', 'C', 'D'])
@@ -317,25 +335,6 @@ def test_join_index_mixed(self):
expected = _join_by_hand(df1, df2)
assert_frame_equal(joined, expected)
- # no overlapping blocks
- df1 = DataFrame(index=np.arange(10))
- df1['bool'] = True
- df1['string'] = 'foo'
-
- df2 = DataFrame(index=np.arange(5, 15))
- df2['int'] = 1
- df2['float'] = 1.
-
- for kind in ['inner', 'outer', 'left', 'right']:
-
- joined = df1.join(df2, how=kind)
- expected = _join_by_hand(df1, df2, how=kind)
- assert_frame_equal(joined, expected)
-
- joined = df2.join(df1, how=kind)
- expected = _join_by_hand(df2, df1, how=kind)
- assert_frame_equal(joined, expected)
-
def test_join_empty_bug(self):
# generated an exception in 0.4.3
x = DataFrame()
diff --git a/pandas/tests/reshape/merge/test_merge.py b/pandas/tests/reshape/merge/test_merge.py
index 5dca45c8dd8bb..dbf7c7f100b0e 100644
--- a/pandas/tests/reshape/merge/test_merge.py
+++ b/pandas/tests/reshape/merge/test_merge.py
@@ -22,7 +22,6 @@
import pandas.util.testing as tm
from pandas.api.types import CategoricalDtype as CDT
-
N = 50
NGROUPS = 8
@@ -319,7 +318,12 @@ def test_left_merge_empty_dataframe(self):
result = merge(right, left, on='key', how='right')
assert_frame_equal(result, left)
- def test_merge_left_empty_right_empty(self):
+ @pytest.mark.parametrize('kwarg',
+ [dict(left_index=True, right_index=True),
+ dict(left_index=True, right_on='x'),
+ dict(left_on='a', right_index=True),
+ dict(left_on='a', right_on='x')])
+ def test_merge_left_empty_right_empty(self, join_type, kwarg):
# GH 10824
left = pd.DataFrame([], columns=['a', 'b', 'c'])
right = pd.DataFrame([], columns=['x', 'y', 'z'])
@@ -328,19 +332,8 @@ def test_merge_left_empty_right_empty(self):
index=pd.Index([], dtype=object),
dtype=object)
- for kwarg in [dict(left_index=True, right_index=True),
- dict(left_index=True, right_on='x'),
- dict(left_on='a', right_index=True),
- dict(left_on='a', right_on='x')]:
-
- result = pd.merge(left, right, how='inner', **kwarg)
- tm.assert_frame_equal(result, exp_in)
- result = pd.merge(left, right, how='left', **kwarg)
- tm.assert_frame_equal(result, exp_in)
- result = pd.merge(left, right, how='right', **kwarg)
- tm.assert_frame_equal(result, exp_in)
- result = pd.merge(left, right, how='outer', **kwarg)
- tm.assert_frame_equal(result, exp_in)
+ result = pd.merge(left, right, how=join_type, **kwarg)
+ tm.assert_frame_equal(result, exp_in)
def test_merge_left_empty_right_notempty(self):
# GH 10824
@@ -429,14 +422,16 @@ def test_merge_nosort(self):
d = {"var1": np.random.randint(0, 10, size=10),
"var2": np.random.randint(0, 10, size=10),
- "var3": [datetime(2012, 1, 12), datetime(2011, 2, 4),
- datetime(
- 2010, 2, 3), datetime(2012, 1, 12),
- datetime(
- 2011, 2, 4), datetime(2012, 4, 3),
- datetime(
- 2012, 3, 4), datetime(2008, 5, 1),
- datetime(2010, 2, 3), datetime(2012, 2, 3)]}
+ "var3": [datetime(2012, 1, 12),
+ datetime(2011, 2, 4),
+ datetime(2010, 2, 3),
+ datetime(2012, 1, 12),
+ datetime(2011, 2, 4),
+ datetime(2012, 4, 3),
+ datetime(2012, 3, 4),
+ datetime(2008, 5, 1),
+ datetime(2010, 2, 3),
+ datetime(2012, 2, 3)]}
df = DataFrame.from_dict(d)
var3 = df.var3.unique()
var3.sort()
@@ -1299,6 +1294,7 @@ def test_join_multi_levels(self):
def f():
household.join(portfolio, how='inner')
+
pytest.raises(ValueError, f)
portfolio2 = portfolio.copy()
@@ -1306,6 +1302,7 @@ def f():
def f():
portfolio2.join(portfolio, how='inner')
+
pytest.raises(ValueError, f)
def test_join_multi_levels2(self):
@@ -1347,6 +1344,7 @@ def test_join_multi_levels2(self):
def f():
household.join(log_return, how='inner')
+
pytest.raises(NotImplementedError, f)
# this is the equivalency
@@ -1375,6 +1373,7 @@ def f():
def f():
household.join(log_return, how='outer')
+
pytest.raises(NotImplementedError, f)
@pytest.mark.parametrize("klass", [None, np.asarray, Series, Index])
@@ -1413,8 +1412,7 @@ class TestMergeDtypes(object):
[1.0, 2.0],
Series([1, 2], dtype='uint64'),
Series([1, 2], dtype='int32')
- ]
- )
+ ])
def test_different(self, right_vals):
left = DataFrame({'A': ['foo', 'bar'],
@@ -1683,8 +1681,7 @@ def test_other_columns(self, left, right):
'change', [lambda x: x,
lambda x: x.astype(CDT(['foo', 'bar', 'bah'])),
lambda x: x.astype(CDT(ordered=True))])
- @pytest.mark.parametrize('how', ['inner', 'outer', 'left', 'right'])
- def test_dtype_on_merged_different(self, change, how, left, right):
+ def test_dtype_on_merged_different(self, change, join_type, left, right):
# our merging columns, X now has 2 different dtypes
# so we must be object as a result
@@ -1693,7 +1690,7 @@ def test_dtype_on_merged_different(self, change, how, left, right):
assert is_categorical_dtype(left.X.values)
# assert not left.X.values.is_dtype_equal(right.X.values)
- merged = pd.merge(left, right, on='X', how=how)
+ merged = pd.merge(left, right, on='X', how=join_type)
result = merged.dtypes.sort_index()
expected = Series([np.dtype('O'),
@@ -1823,7 +1820,6 @@ class TestMergeOnIndexes(object):
'b': [np.nan, 100, 200, 300]},
index=[0, 1, 2, 3]))])
def test_merge_on_indexes(self, left_df, right_df, how, sort, expected):
-
result = pd.merge(left_df, right_df,
left_index=True,
right_index=True,
diff --git a/pandas/tests/reshape/merge/test_merge_index_as_string.py b/pandas/tests/reshape/merge/test_merge_index_as_string.py
index 09109e2692a24..a27fcf41681e6 100644
--- a/pandas/tests/reshape/merge/test_merge_index_as_string.py
+++ b/pandas/tests/reshape/merge/test_merge_index_as_string.py
@@ -157,9 +157,7 @@ def test_merge_indexes_and_columns_lefton_righton(
@pytest.mark.parametrize('left_index',
['inner', ['inner', 'outer']])
-@pytest.mark.parametrize('how',
- ['inner', 'left', 'right', 'outer'])
-def test_join_indexes_and_columns_on(df1, df2, left_index, how):
+def test_join_indexes_and_columns_on(df1, df2, left_index, join_type):
# Construct left_df
left_df = df1.set_index(left_index)
@@ -169,12 +167,12 @@ def test_join_indexes_and_columns_on(df1, df2, left_index, how):
# Result
expected = (left_df.reset_index()
- .join(right_df, on=['outer', 'inner'], how=how,
+ .join(right_df, on=['outer', 'inner'], how=join_type,
lsuffix='_x', rsuffix='_y')
.set_index(left_index))
# Perform join
- result = left_df.join(right_df, on=['outer', 'inner'], how=how,
+ result = left_df.join(right_df, on=['outer', 'inner'], how=join_type,
lsuffix='_x', rsuffix='_y')
assert_frame_equal(result, expected, check_like=True)
diff --git a/pandas/tests/series/indexing/test_alter_index.py b/pandas/tests/series/indexing/test_alter_index.py
index c1b6d0a452232..999ed5f26daee 100644
--- a/pandas/tests/series/indexing/test_alter_index.py
+++ b/pandas/tests/series/indexing/test_alter_index.py
@@ -18,8 +18,6 @@
from pandas.util.testing import (assert_series_equal)
import pandas.util.testing as tm
-JOIN_TYPES = ['inner', 'outer', 'left', 'right']
-
@pytest.mark.parametrize(
'first_slice,second_slice', [
@@ -28,7 +26,6 @@
[[None, -5], [None, 0]],
[[None, 0], [None, 0]]
])
-@pytest.mark.parametrize('join_type', JOIN_TYPES)
@pytest.mark.parametrize('fill', [None, -1])
def test_align(test_data, first_slice, second_slice, join_type, fill):
a = test_data.ts[slice(*first_slice)]
@@ -67,7 +64,6 @@ def test_align(test_data, first_slice, second_slice, join_type, fill):
[[None, -5], [None, 0]],
[[None, 0], [None, 0]]
])
-@pytest.mark.parametrize('join_type', JOIN_TYPES)
@pytest.mark.parametrize('method', ['pad', 'bfill'])
@pytest.mark.parametrize('limit', [None, 1])
def test_align_fill_method(test_data,
diff --git a/pandas/tests/series/indexing/test_boolean.py b/pandas/tests/series/indexing/test_boolean.py
index f1f4a5a05697d..2aef0df5349cb 100644
--- a/pandas/tests/series/indexing/test_boolean.py
+++ b/pandas/tests/series/indexing/test_boolean.py
@@ -16,8 +16,6 @@
from pandas.util.testing import (assert_series_equal)
import pandas.util.testing as tm
-JOIN_TYPES = ['inner', 'outer', 'left', 'right']
-
def test_getitem_boolean(test_data):
s = test_data.series
diff --git a/pandas/tests/series/indexing/test_datetime.py b/pandas/tests/series/indexing/test_datetime.py
index f484cdea2e09f..bcea47f42056b 100644
--- a/pandas/tests/series/indexing/test_datetime.py
+++ b/pandas/tests/series/indexing/test_datetime.py
@@ -20,7 +20,6 @@
import pandas._libs.index as _index
from pandas._libs import tslib
-JOIN_TYPES = ['inner', 'outer', 'left', 'right']
"""
Also test support for datetime64[ns] in Series / DataFrame
diff --git a/pandas/tests/series/test_period.py b/pandas/tests/series/test_period.py
index 8ff2071e351d0..63726f27914f3 100644
--- a/pandas/tests/series/test_period.py
+++ b/pandas/tests/series/test_period.py
@@ -117,7 +117,7 @@ def test_intercept_astype_object(self):
result = df.values.squeeze()
assert (result[:, 0] == expected.values).all()
- def test_align_series(self):
+ def test_add_series(self):
rng = period_range('1/1/2000', '1/1/2010', freq='A')
ts = Series(np.random.randn(len(rng)), index=rng)
@@ -129,13 +129,16 @@ def test_align_series(self):
result = ts + _permute(ts[::2])
tm.assert_series_equal(result, expected)
- # it works!
- for kind in ['inner', 'outer', 'left', 'right']:
- ts.align(ts[::2], join=kind)
msg = "Input has different freq=D from PeriodIndex\\(freq=A-DEC\\)"
with tm.assert_raises_regex(period.IncompatibleFrequency, msg):
ts + ts.asfreq('D', how="end")
+ def test_align_series(self, join_type):
+ rng = period_range('1/1/2000', '1/1/2010', freq='A')
+ ts = Series(np.random.randn(len(rng)), index=rng)
+
+ ts.align(ts[::2], join=join_type)
+
def test_truncate(self):
# GH 17717
idx1 = pd.PeriodIndex([
| follow up for PR #20059
Extracting join types to a fixture
- [ ] tests added / passed
- [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
| https://api.github.com/repos/pandas-dev/pandas/pulls/20287 | 2018-03-11T22:13:20Z | 2018-03-13T10:32:17Z | 2018-03-13T10:32:17Z | 2018-03-13T10:50:44Z |
DOC: Improve the docstrings of CategoricalIndex.map | diff --git a/pandas/core/arrays/categorical.py b/pandas/core/arrays/categorical.py
index e7d414f9de544..afbf4baf0d002 100644
--- a/pandas/core/arrays/categorical.py
+++ b/pandas/core/arrays/categorical.py
@@ -1080,20 +1080,73 @@ def remove_unused_categories(self, inplace=False):
return cat
def map(self, mapper):
- """Apply mapper function to its categories (not codes).
+ """
+ Map categories using input correspondence (dict, Series, or function).
+
+ Maps the categories to new categories. If the mapping correspondence is
+ one-to-one the result is a :class:`~pandas.Categorical` which has the
+ same order property as the original, otherwise a :class:`~pandas.Index`
+ is returned.
+
+ If a `dict` or :class:`~pandas.Series` is used any unmapped category is
+ mapped to `NaN`. Note that if this happens an :class:`~pandas.Index`
+ will be returned.
Parameters
----------
- mapper : callable
- Function to be applied. When all categories are mapped
- to different categories, the result will be Categorical which has
- the same order property as the original. Otherwise, the result will
- be np.ndarray.
+ mapper : function, dict, or Series
+ Mapping correspondence.
Returns
-------
- applied : Categorical or Index.
+ pandas.Categorical or pandas.Index
+ Mapped categorical.
+
+ See Also
+ --------
+ CategoricalIndex.map : Apply a mapping correspondence on a
+ :class:`~pandas.CategoricalIndex`.
+ Index.map : Apply a mapping correspondence on an
+ :class:`~pandas.Index`.
+ Series.map : Apply a mapping correspondence on a
+ :class:`~pandas.Series`.
+ Series.apply : Apply more complex functions on a
+ :class:`~pandas.Series`.
+
+ Examples
+ --------
+ >>> cat = pd.Categorical(['a', 'b', 'c'])
+ >>> cat
+ [a, b, c]
+ Categories (3, object): [a, b, c]
+ >>> cat.map(lambda x: x.upper())
+ [A, B, C]
+ Categories (3, object): [A, B, C]
+ >>> cat.map({'a': 'first', 'b': 'second', 'c': 'third'})
+ [first, second, third]
+ Categories (3, object): [first, second, third]
+
+ If the mapping is one-to-one the ordering of the categories is
+ preserved:
+
+ >>> cat = pd.Categorical(['a', 'b', 'c'], ordered=True)
+ >>> cat
+ [a, b, c]
+ Categories (3, object): [a < b < c]
+ >>> cat.map({'a': 3, 'b': 2, 'c': 1})
+ [3, 2, 1]
+ Categories (3, int64): [3 < 2 < 1]
+
+ If the mapping is not one-to-one an :class:`~pandas.Index` is returned:
+
+ >>> cat.map({'a': 'first', 'b': 'second', 'c': 'first'})
+ Index(['first', 'second', 'first'], dtype='object')
+
+ If a `dict` is used, all unmapped categories are mapped to `NaN` and
+        the result is an :class:`~pandas.Index`:
+
+ >>> cat.map({'a': 'first', 'b': 'second'})
+ Index(['first', 'second', nan], dtype='object')
"""
new_categories = self.categories.map(mapper)
try:
diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index 446b1b02706e3..95bfc8bfcb5c5 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -3352,14 +3352,16 @@ def groupby(self, values):
return result
def map(self, mapper, na_action=None):
- """Map values of Series using input correspondence
+ """
+ Map values using input correspondence (a dict, Series, or function).
Parameters
----------
mapper : function, dict, or Series
+ Mapping correspondence.
na_action : {None, 'ignore'}
If 'ignore', propagate NA values, without passing them to the
- mapping function
+ mapping correspondence.
Returns
-------
@@ -3367,7 +3369,6 @@ def map(self, mapper, na_action=None):
The output of the mapping function applied to the index.
If the function returns a tuple with more than one element
a MultiIndex will be returned.
-
"""
from .multi import MultiIndex
diff --git a/pandas/core/indexes/category.py b/pandas/core/indexes/category.py
index 7b902b92d44a4..71caa098c7a28 100644
--- a/pandas/core/indexes/category.py
+++ b/pandas/core/indexes/category.py
@@ -660,20 +660,71 @@ def is_dtype_equal(self, other):
take_nd = take
def map(self, mapper):
- """Apply mapper function to its categories (not codes).
+ """
+ Map values using input correspondence (a dict, Series, or function).
+
+ Maps the values (their categories, not the codes) of the index to new
+ categories. If the mapping correspondence is one-to-one the result is a
+ :class:`~pandas.CategoricalIndex` which has the same order property as
+ the original, otherwise an :class:`~pandas.Index` is returned.
+
+ If a `dict` or :class:`~pandas.Series` is used any unmapped category is
+ mapped to `NaN`. Note that if this happens an :class:`~pandas.Index`
+ will be returned.
Parameters
----------
- mapper : callable
- Function to be applied. When all categories are mapped
- to different categories, the result will be a CategoricalIndex
- which has the same order property as the original. Otherwise,
- the result will be a Index.
+ mapper : function, dict, or Series
+ Mapping correspondence.
Returns
-------
- applied : CategoricalIndex or Index
+ pandas.CategoricalIndex or pandas.Index
+ Mapped index.
+
+ See Also
+ --------
+ Index.map : Apply a mapping correspondence on an
+ :class:`~pandas.Index`.
+ Series.map : Apply a mapping correspondence on a
+ :class:`~pandas.Series`.
+ Series.apply : Apply more complex functions on a
+        :class:`~pandas.Series`.
+
+ Examples
+ --------
+ >>> idx = pd.CategoricalIndex(['a', 'b', 'c'])
+ >>> idx
+ CategoricalIndex(['a', 'b', 'c'], categories=['a', 'b', 'c'],
+ ordered=False, dtype='category')
+ >>> idx.map(lambda x: x.upper())
+ CategoricalIndex(['A', 'B', 'C'], categories=['A', 'B', 'C'],
+ ordered=False, dtype='category')
+ >>> idx.map({'a': 'first', 'b': 'second', 'c': 'third'})
+ CategoricalIndex(['first', 'second', 'third'], categories=['first',
+ 'second', 'third'], ordered=False, dtype='category')
+
+ If the mapping is one-to-one the ordering of the categories is
+ preserved:
+
+ >>> idx = pd.CategoricalIndex(['a', 'b', 'c'], ordered=True)
+ >>> idx
+ CategoricalIndex(['a', 'b', 'c'], categories=['a', 'b', 'c'],
+ ordered=True, dtype='category')
+ >>> idx.map({'a': 3, 'b': 2, 'c': 1})
+ CategoricalIndex([3, 2, 1], categories=[3, 2, 1], ordered=True,
+ dtype='category')
+
+ If the mapping is not one-to-one an :class:`~pandas.Index` is returned:
+
+ >>> idx.map({'a': 'first', 'b': 'second', 'c': 'first'})
+ Index(['first', 'second', 'first'], dtype='object')
+
+ If a `dict` is used, all unmapped categories are mapped to `NaN` and
+ the result is an :class:`~pandas.Index`:
+
+ >>> idx.map({'a': 'first', 'b': 'second'})
+ Index(['first', 'second', nan], dtype='object')
"""
return self._shallow_copy_with_infer(self.values.map(mapper))
diff --git a/pandas/core/series.py b/pandas/core/series.py
index e4801242073a2..f0ba369e1731a 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -2831,25 +2831,26 @@ def unstack(self, level=-1, fill_value=None):
def map(self, arg, na_action=None):
"""
- Map values of Series using input correspondence (which can be
- a dict, Series, or function)
+ Map values of Series using input correspondence (a dict, Series, or
+ function).
Parameters
----------
arg : function, dict, or Series
+ Mapping correspondence.
na_action : {None, 'ignore'}
If 'ignore', propagate NA values, without passing them to the
- mapping function
+ mapping correspondence.
Returns
-------
y : Series
- same index as caller
+ Same index as caller.
Examples
--------
- Map inputs to outputs (both of type `Series`)
+ Map inputs to outputs (both of type `Series`):
>>> x = pd.Series([1,2,3], index=['one', 'two', 'three'])
>>> x
@@ -2900,9 +2901,9 @@ def map(self, arg, na_action=None):
See Also
--------
- Series.apply: For applying more complex functions on a Series
- DataFrame.apply: Apply a function row-/column-wise
- DataFrame.applymap: Apply a function elementwise on a whole DataFrame
+ Series.apply : For applying more complex functions on a Series.
+ DataFrame.apply : Apply a function row-/column-wise.
+ DataFrame.applymap : Apply a function elementwise on a whole DataFrame.
Notes
-----
| Hi core devs,
I worked on the CategoricalIndex.map docstring during the sprint in Paris.
Although we concentrated on this function, while doing that we couldn't help also improving (marginally!) the docstrings for Categorical.map, Index.map and Series.map.
We probably should have refrained from doing it, knowing that other cities were working on those, but it was difficult to change one while leaving the style of the others incoherent with the first.
As a result I'm proposing here a pull request that combines all the changes.
The validation script passes for CategoricalIndex.map but not for the other cases.
If that is not acceptable I can change that, but I will probably need some help with the git part.
Checklist for the pandas documentation sprint (ignore this if you are doing
an unrelated PR):
- [x] PR title is "DOC: update the <your-function-or-method> docstring"
- [X] The validation script passes: `scripts/validate_docstrings.py <your-function-or-method>`
- [X] The PEP8 style check passes: `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [X] The html version looks good: `python doc/make.py --single <your-function-or-method>`
- [X] It has been proofread on language by another sprint participant
Please include the output of the validation script below between the "```" ticks:
```
################################################################################
################### Docstring (pandas.CategoricalIndex.map) ###################
################################################################################
Map index values using input correspondence (a dict, Series, or
function).
Maps the values (their categories, not the codes) of the index to new
categories. If the mapping correspondence maps each original category
to a different new category the result is a CategoricalIndex which has
the same order property as the original, otherwise an Index is
returned.
If a dictionary or Series is used any unmapped category is mapped to
NA. Note that if this happens an Index will be returned.
Parameters
----------
mapper : function, dict, or Series
Mapping correspondence.
Returns
-------
CategoricalIndex or Index
Mapped index.
See Also
--------
Index.map : Apply a mapping correspondence on an Index.
Series.map : Apply a mapping correspondence on a Series.
Series.apply : Apply more complex functions on a Series.
Examples
--------
>>> idx = pd.CategoricalIndex(['a', 'b', 'c'])
>>> idx
CategoricalIndex(['a', 'b', 'c'], categories=['a', 'b', 'c'],
ordered=False, dtype='category')
>>> idx.map(lambda x: x.upper())
CategoricalIndex(['A', 'B', 'C'], categories=['A', 'B', 'C'],
ordered=False, dtype='category')
>>> idx.map({'a': 'first', 'b': 'second', 'c': 'third'})
CategoricalIndex(['first', 'second', 'third'], categories=['first',
'second', 'third'], ordered=False, dtype='category')
If the mapping is not bijective an Index is returned:
>>> idx.map({'a': 'first', 'b': 'second', 'c': 'first'})
Index(['first', 'second', 'first'], dtype='object')
If a dictionary is used, all unmapped categories are mapped to NA and
the result is an Index:
>>> idx.map({'a': 'first', 'b': 'second'})
Index(['first', 'second', nan], dtype='object')
################################################################################
################################## Validation ##################################
################################################################################
Docstring for "pandas.CategoricalIndex.map" correct. :)
################################################################################
###################### Docstring (pandas.Categorical.map) ######################
################################################################################
Map categories (not codes) using input correspondence (a dict,
Series, or function).
Maps the categories to new categories. If the mapping
correspondence maps each original category to a different new category
the result is a Categorical which has the same order property as
the original, otherwise an np.ndarray is returned.
If a dictionary or Series is used any unmapped category is mapped to
NA. Note that if this happens an np.ndarray will be returned.
Parameters
----------
mapper : callable
Function to be applied.
Returns
-------
applied : Categorical or Index.
################################################################################
################################## Validation ##################################
################################################################################
Errors found:
See Also section not found
No examples section found
################################################################################
######################### Docstring (pandas.Index.map) #########################
################################################################################
Map values using input correspondence (a dict, Series, or function).
Parameters
----------
mapper : function, dict, or Series
Mapping correspondence.
na_action : {None, 'ignore'}
If 'ignore', propagate NA values, without passing them to the
mapping correspondence.
Returns
-------
applied : Union[Index, MultiIndex], inferred
The output of the mapping function applied to the index.
If the function returns a tuple with more than one element
a MultiIndex will be returned.
################################################################################
################################## Validation ##################################
################################################################################
Errors found:
No extended summary found
See Also section not found
No examples section found
################################################################################
######################## Docstring (pandas.Series.map) ########################
################################################################################
Map values of Series using input correspondence (a dict, Series, or
function).
Parameters
----------
arg : function, dict, or Series
Mapping correspondence.
na_action : {None, 'ignore'}
If 'ignore', propagate NA values, without passing them to the
mapping correspondence.
Returns
-------
y : Series
Same index as caller.
Examples
--------
Map inputs to outputs (both of type `Series`):
>>> x = pd.Series([1,2,3], index=['one', 'two', 'three'])
>>> x
one 1
two 2
three 3
dtype: int64
>>> y = pd.Series(['foo', 'bar', 'baz'], index=[1,2,3])
>>> y
1 foo
2 bar
3 baz
>>> x.map(y)
one foo
two bar
three baz
If `arg` is a dictionary, return a new Series with values converted
according to the dictionary's mapping:
>>> z = {1: 'A', 2: 'B', 3: 'C'}
>>> x.map(z)
one A
two B
three C
Use na_action to control whether NA values are affected by the mapping
function.
>>> s = pd.Series([1, 2, 3, np.nan])
>>> s2 = s.map('this is a string {}'.format, na_action=None)
0 this is a string 1.0
1 this is a string 2.0
2 this is a string 3.0
3 this is a string nan
dtype: object
>>> s3 = s.map('this is a string {}'.format, na_action='ignore')
0 this is a string 1.0
1 this is a string 2.0
2 this is a string 3.0
3 NaN
dtype: object
See Also
--------
Series.apply : For applying more complex functions on a Series.
DataFrame.apply : Apply a function row-/column-wise.
DataFrame.applymap : Apply a function elementwise on a whole DataFrame.
Notes
-----
When `arg` is a dictionary, values in Series that are not in the
dictionary (as keys) are converted to ``NaN``. However, if the
dictionary is a ``dict`` subclass that defines ``__missing__`` (i.e.
provides a method for default values), then this default is used
rather than ``NaN``:
>>> from collections import Counter
>>> counter = Counter()
>>> counter['bar'] += 1
>>> y.map(counter)
1 0
2 1
3 0
dtype: int64
################################################################################
################################## Validation ##################################
################################################################################
Errors found:
No summary found (a short summary in a single line should be present at the beginning of the docstring)
Examples do not pass tests
################################################################################
################################### Doctests ###################################
################################################################################
**********************************************************************
Line 31, in pandas.Series.map
Failed example:
y
Expected:
1 foo
2 bar
3 baz
Got:
1 foo
2 bar
3 baz
dtype: object
**********************************************************************
Line 36, in pandas.Series.map
Failed example:
x.map(y)
Expected:
one foo
two bar
three baz
Got:
one foo
two bar
three baz
dtype: object
**********************************************************************
Line 46, in pandas.Series.map
Failed example:
x.map(z)
Expected:
one A
two B
three C
Got:
one A
two B
three C
dtype: object
**********************************************************************
Line 56, in pandas.Series.map
Failed example:
s2 = s.map('this is a string {}'.format, na_action=None)
Expected:
0 this is a string 1.0
1 this is a string 2.0
2 this is a string 3.0
3 this is a string nan
dtype: object
Got nothing
**********************************************************************
Line 63, in pandas.Series.map
Failed example:
s3 = s.map('this is a string {}'.format, na_action='ignore')
Expected:
0 this is a string 1.0
1 this is a string 2.0
2 this is a string 3.0
3 NaN
dtype: object
Got nothing
```
If the validation script still gives errors, but you think there is a good reason
to deviate in this case (and there are certainly such cases), please state this
explicitly.
| https://api.github.com/repos/pandas-dev/pandas/pulls/20286 | 2018-03-11T17:16:41Z | 2018-03-22T08:39:01Z | 2018-03-22T08:39:01Z | 2018-03-29T11:55:58Z |
DOC/TST: Added NORMALIZE_WHITESPACE flag | diff --git a/setup.cfg b/setup.cfg
index 942b2b0a1a0bf..6d9657737a8bd 100644
--- a/setup.cfg
+++ b/setup.cfg
@@ -32,3 +32,4 @@ markers =
slow: mark a test as slow
network: mark a test as network
high_memory: mark a test as a high-memory only
+doctest_optionflags= NORMALIZE_WHITESPACE IGNORE_EXCEPTION_DETAIL
| I think we want this to be always on. You can disable it with
```
doctest: -NORMALIZE_WHITESPACE
``` | https://api.github.com/repos/pandas-dev/pandas/pulls/20284 | 2018-03-11T15:24:40Z | 2018-03-12T14:38:30Z | 2018-03-12T14:38:30Z | 2018-03-12T14:38:33Z |
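For reference, the effect of the flag being enabled project-wide can be seen with the standard library `doctest` module alone; this is a sketch, not part of the PR:

```python
import doctest

# An example whose expected output differs from the actual output only
# in the amount of internal whitespace.
src = ">>> print('a', 'b')\na      b\n"
test = doctest.DocTestParser().get_doctest(src, {}, 'demo', None, 0)

quiet = lambda s: None  # suppress the printed failure report

strict = doctest.DocTestRunner(verbose=False)
strict.run(test, out=quiet)

lenient = doctest.DocTestRunner(verbose=False,
                                optionflags=doctest.NORMALIZE_WHITESPACE)
lenient.run(test, out=quiet)

print(strict.failures, lenient.failures)  # 1 0
```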
DOC: Updated docstring Period.day | diff --git a/pandas/_libs/tslibs/period.pyx b/pandas/_libs/tslibs/period.pyx
index 89f38724cde1a..3e18bbb3d64e8 100644
--- a/pandas/_libs/tslibs/period.pyx
+++ b/pandas/_libs/tslibs/period.pyx
@@ -1217,6 +1217,33 @@ cdef class _Period(object):
@property
def day(self):
+ """
+ Return the day of the month on which the Period falls.
+
+ This attribute returns the calendar day of the month on which the
+ date of the given Period occurs.
+ Returns
+ -------
+ int
+ Day of the month (1 to 31).
+
+ See also
+ --------
+ Period.dayofweek
+ Return the day of the week
+
+ Period.dayofyear
+ Return the day of the year
+
+ Examples
+ --------
+ >>> import pandas as pd
+ >>> p = pd.Period("2018-03-11", freq='H')
+ >>> p.day
+ 11
+
+ """
+
base, mult = get_freq_code(self.freq)
return pday(self.ordinal, base)
diff --git a/pandas/core/indexes/datetimes.py b/pandas/core/indexes/datetimes.py
index 6b97ee90cd93c..4e06f606160d7 100644
--- a/pandas/core/indexes/datetimes.py
+++ b/pandas/core/indexes/datetimes.py
@@ -1937,8 +1937,13 @@ def tz_convert(self, tz):
mapping={True: 'infer', False: 'raise'})
def tz_localize(self, tz, ambiguous='raise', errors='raise'):
"""
- Localize tz-naive DatetimeIndex to given time zone (using
- pytz/dateutil), or remove timezone from tz-aware DatetimeIndex
+ Localize tz-naive DatetimeIndex to tz-aware DatetimeIndex.
+
+ This method takes a time zone naive DatetimeIndex object and
+ makes it time zone aware. It does not move the time to
+ another time zone.
+ Time zone localization helps to switch between time zone aware
+ and time zone unaware objects.
Parameters
----------
@@ -1946,7 +1951,8 @@ def tz_localize(self, tz, ambiguous='raise', errors='raise'):
Time zone for time. Corresponding timestamps would be converted to
time zone of the TimeSeries.
None will remove timezone holding local time.
- ambiguous : 'infer', bool-ndarray, 'NaT', default 'raise'
+ ambiguous : str {'infer', 'NaT', 'raise'} or bool array, \
+ default 'raise'
- 'infer' will attempt to infer fall dst-transition hours based on
order
- bool-ndarray where True signifies a DST time, False signifies a
@@ -1955,7 +1961,7 @@ def tz_localize(self, tz, ambiguous='raise', errors='raise'):
- 'NaT' will return NaT where there are ambiguous times
- 'raise' will raise an AmbiguousTimeError if there are ambiguous
times
- errors : 'raise', 'coerce', default 'raise'
+ errors : {'raise', 'coerce'}, default 'raise'
- 'raise' will raise a NonExistentTimeError if a timestamp is not
valid in the specified timezone (e.g. due to a transition from
or to DST time)
@@ -1968,14 +1974,47 @@ def tz_localize(self, tz, ambiguous='raise', errors='raise'):
.. deprecated:: 0.15.0
Attempt to infer fall dst-transition hours based on order
+
Returns
-------
- localized : DatetimeIndex
+ DatetimeIndex
+
+ Examples
+ --------
+ In the example below, we create a date range from 1 March 2018
+ to 3 March 2018, localize it to the US/Eastern time zone, and
+ then perform the reverse operation, removing the time zone to
+ make it tz-naive again.
+
+ >>> dti = pd.date_range('2018-03-01', '2018-03-03')
+
+ >>> dti
+ DatetimeIndex(['2018-03-01', '2018-03-02', '2018-03-03'],
+ dtype='datetime64[ns]', freq='D')
+
+ Localize the DatetimeIndex to the US/Eastern time zone.
+
+ >>> tz_aware = dti.tz_localize(tz='US/Eastern')
+
+ >>> tz_aware
+ DatetimeIndex(['2018-03-01 00:00:00-05:00',
+ '2018-03-02 00:00:00-05:00', '2018-03-03 00:00:00-05:00'],
+ dtype='datetime64[ns, US/Eastern]', freq='D')
+
+ Remove the time zone to obtain a tz-naive DatetimeIndex.
+
+ >>> tz_naive = tz_aware.tz_localize(None)
+
+ >>> tz_naive
+ DatetimeIndex(['2018-03-01', '2018-03-02', '2018-03-03'],
+ dtype='datetime64[ns]', freq='D')
+
Raises
------
TypeError
If the DatetimeIndex is tz-aware and tz is not None.
+
"""
if self.tz is not None:
if tz is None:
| "################################################################################
######################## Docstring (pandas.Period.day) ########################
################################################################################
Return the day of the month on which the Period falls.
This attribute returns the calendar day of the month on which the
particular date occurs.
Returns
-------
int
Day of the month (1 to 31).
See also
--------
Period.dayofweek
Return the day of the week
Period.dayofyear
Return the day of the year
Examples
--------
>>> import pandas as pd
>>> p = pd.Period("2018-03-11", freq='H')
>>> p.day
11
################################################################################
################################## Validation ##################################
################################################################################
Docstring for "pandas.Period.day" correct. :)
" | https://api.github.com/repos/pandas-dev/pandas/pulls/20283 | 2018-03-11T15:16:10Z | 2018-03-11T15:27:17Z | null | 2018-03-11T15:27:17Z |
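The localize-versus-convert distinction drawn in the `tz_localize` docstring above can be sketched with the standard library `zoneinfo` module (assuming Python 3.9+ and a system tz database that provides `US/Eastern`; this is an illustration, not pandas code):

```python
from datetime import datetime
from zoneinfo import ZoneInfo

naive = datetime(2018, 3, 1, 0, 0)

# Localizing attaches a zone without moving the clock (like tz_localize).
localized = naive.replace(tzinfo=ZoneInfo("US/Eastern"))

# Converting moves the clock to another zone (like tz_convert).
converted = localized.astimezone(ZoneInfo("UTC"))

print(localized.isoformat())  # 2018-03-01T00:00:00-05:00
print(converted.isoformat())  # 2018-03-01T05:00:00+00:00
```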
DOC: update the pandas.Series.str.split docstring | diff --git a/pandas/core/strings.py b/pandas/core/strings.py
index fac607f4621a8..11081535cf63f 100644
--- a/pandas/core/strings.py
+++ b/pandas/core/strings.py
@@ -1095,24 +1095,88 @@ def str_pad(arr, width, side='left', fillchar=' '):
def str_split(arr, pat=None, n=None):
"""
- Split each string (a la re.split) in the Series/Index by given
- pattern, propagating NA values. Equivalent to :meth:`str.split`.
+ Split strings around given separator/delimiter.
+
+ Split each string in the caller's values by given
+ pattern, propagating NaN values. Equivalent to :meth:`str.split`.
Parameters
----------
- pat : string, default None
- String or regular expression to split on. If None, splits on whitespace
+ pat : str, optional
+ String or regular expression to split on.
+ If not specified, split on whitespace.
n : int, default -1 (all)
- None, 0 and -1 will be interpreted as return all splits
+ Limit number of splits in output.
+ ``None``, 0 and -1 will be interpreted as return all splits.
expand : bool, default False
- * If True, return DataFrame/MultiIndex expanding dimensionality.
- * If False, return Series/Index.
+ Expand the split strings into separate columns.
- return_type : deprecated, use `expand`
+ * If ``True``, return DataFrame/MultiIndex expanding dimensionality.
+ * If ``False``, return Series/Index, containing lists of strings.
Returns
-------
split : Series/Index or DataFrame/MultiIndex of objects
+ Type matches caller unless ``expand=True`` (return type is DataFrame or
+ MultiIndex)
+
+ Notes
+ -----
+ The handling of the `n` keyword depends on the number of found splits:
+
+ - If found splits > `n`, make first `n` splits only
+ - If found splits <= `n`, make all splits
+ - If for a certain row the number of found splits < `n`,
+ append `None` for padding up to `n` if ``expand=True``
+
+ Examples
+ --------
+ >>> s = pd.Series(["this is good text", "but this is even better"])
+
+ By default, split will return an object of the same size
+ having lists containing the split elements
+
+ >>> s.str.split()
+ 0 [this, is, good, text]
+ 1 [but, this, is, even, better]
+ dtype: object
+ >>> s.str.split("random")
+ 0 [this is good text]
+ 1 [but this is even better]
+ dtype: object
+
+ When using ``expand=True``, the split elements will
+ expand out into separate columns.
+
+ >>> s.str.split(expand=True)
+ 0 1 2 3 4
+ 0 this is good text None
+ 1 but this is even better
+ >>> s.str.split(" is ", expand=True)
+ 0 1
+ 0 this good text
+ 1 but this even better
+
+ Parameter `n` can be used to limit the number of splits in the output.
+
+ >>> s.str.split("is", n=1)
+ 0 [th, is good text]
+ 1 [but th, is even better]
+ dtype: object
+ >>> s.str.split("is", n=1, expand=True)
+ 0 1
+ 0 th is good text
+ 1 but th is even better
+
+ If NaN is present, it is propagated throughout the columns
+ during the split.
+
+ >>> s = pd.Series(["this is good text", "but this is even better", np.nan])
+ >>> s.str.split(n=3, expand=True)
+ 0 1 2 3
+ 0 this is good text
+ 1 but this is even better
+ 2 NaN NaN NaN NaN
"""
if pat is None:
if n is None or n == 0:
| Checklist for the pandas documentation sprint (ignore this if you are doing
an unrelated PR):
- [x] PR title is "DOC: update the <your-function-or-method> docstring"
- [x] The validation script passes: `scripts/validate_docstrings.py <your-function-or-method>`
- [x] The PEP8 style check passes: `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] The html version looks good: `python doc/make.py --single <your-function-or-method>`
- [x] It has been proofread on language by another sprint participant
```
################################################################################
##################### Docstring (pandas.Series.str.split) #####################
################################################################################
Split strings around given separator/delimiter.
Split each str in the caller's values by given
pattern, propagating NaN values. Equivalent to :meth:`str.split`.
Parameters
----------
pat : string, default None
String or regular expression to split on.
If `None`, split on whitespace.
n : int, default -1 (all)
Vary dimensionality of output.
* `None`, 0 and -1 will be interpreted as return all splits
expand : bool, default False
Expand the split strings into separate columns.
* If `True`, return DataFrame/MultiIndex expanding dimensionality.
* If `False`, return Series/Index.
Returns
-------
Type matches caller unless `expand=True` (return type is `DataFrame`)
split : Series/Index or DataFrame/MultiIndex of objects
Notes
-----
If `expand` parameter is `True` and:
- If n >= default splits, makes all splits
- If n < default splits, makes first n splits only
- Appends `None` for padding.
Examples
--------
>>> s = pd.Series(["this is good text", "but this is even better"])
By default, split will return an object of the same size
having lists containing the split elements
>>> s.str.split()
0 [this, is, good, text]
1 [but, this, is, even, better]
dtype: object
>>> s.str.split("random")
0 [this is good text]
1 [but this is even better]
dtype: object
When using `expand=True`, the split elements will
expand out into separate columns.
>>> s.str.split(expand=True)
0 1 2 3 4
0 this is good text None
1 but this is even better
>>> s.str.split(" is ", expand=True)
0 1
0 this good text
1 but this even better
Parameter `n` can be used to limit the number of columns in
expansion of output.
>>> s.str.split("is", n=1, expand=True)
0 1
0 th is good text
1 but th is even better
If NaN is present, it is propagated throughout the columns
during the split.
>>> s = pd.Series(["this is good text", "but this is even better", np.nan])
>>> s.str.split(n=3, expand=True)
0 1 2 3
0 this is good text
1 but this is even better
2 NaN NaN NaN NaN
################################################################################
################################## Validation ##################################
################################################################################
Errors found:
Errors in parameters section
Parameter "n" description should finish with "."
See Also section not found
``` | https://api.github.com/repos/pandas-dev/pandas/pulls/20282 | 2018-03-11T15:08:00Z | 2018-03-12T15:30:27Z | 2018-03-12T15:30:27Z | 2018-03-12T15:52:28Z |
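The split semantics in the examples above can be previewed without pandas: for multi-character patterns, `str.split` defers to the standard library `re` module, and (as an assumption worth hedging) the `n` keyword corresponds to `re.split`'s `maxsplit` argument. A stdlib sketch:

```python
import re

rows = ["this is good text", "but this is even better"]

# Splitting on a multi-character pattern, as str.split does via re.split.
full = [re.split(" is ", row) for row in rows]
print(full)     # [['this', 'good text'], ['but this', 'even better']]

# Limiting the number of splits, mirroring n=1 in the docstring example.
limited = [re.split("is", row, maxsplit=1) for row in rows]
print(limited)  # [['th', ' is good text'], ['but th', ' is even better']]
```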
DOC: update the Period.dayofweek attribute docstring | diff --git a/pandas/_libs/tslibs/period.pyx b/pandas/_libs/tslibs/period.pyx
index 89f38724cde1a..e84642b4a8f2c 100644
--- a/pandas/_libs/tslibs/period.pyx
+++ b/pandas/_libs/tslibs/period.pyx
@@ -1246,6 +1246,37 @@ cdef class _Period(object):
@property
def dayofweek(self):
+ """
+ Return the day of the week.
+
+ This attribute returns the day of the week on which the particular
+ date of the given Period occurs, depending on its frequency,
+ with Monday=0 and Sunday=6.
+
+ Returns
+ -------
+ int
+ Range from 0 to 6 (inclusive).
+
+ See also
+ --------
+ Period.dayofyear : Return the day of the year.
+ Period.daysinmonth : Return the number of days in that month.
+
+ Examples
+ --------
+ >>> period1 = pd.Period('2012-1-1 19:00', freq='H')
+ >>> period1
+ Period('2012-01-01 19:00', 'H')
+ >>> period1.dayofweek
+ 6
+
+ >>> period2 = pd.Period('2013-1-9 11:00', freq='H')
+ >>> period2
+ Period('2013-01-09 11:00', 'H')
+ >>> period2.dayofweek
+ 2
+ """
base, mult = get_freq_code(self.freq)
return pweekday(self.ordinal, base)
| Checklist for the pandas documentation sprint (ignore this if you are doing
an unrelated PR):
- [x] PR title is "DOC: update the <your-function-or-method> docstring"
- [x] The validation script passes: `scripts/validate_docstrings.py <your-function-or-method>`
- [x] The PEP8 style check passes: `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] The html version looks good: `python doc/make.py --single <your-function-or-method>`
- [x] It has been proofread on language by another sprint participant
Please include the output of the validation script below between the "```" ticks:
```
(pandas_dev) root@kalih:~/pythonpanda/pandas# python scripts/validate_docstrings.py pandas.Period.dayofweek
################################################################################
##################### Docstring (pandas.Period.dayofweek) #####################
################################################################################
Return the day of the week.
This attribute returns the day of the week on which the particular
starting date for the given period occurs with Monday=0, Sunday=6.
Returns
-------
Int
Range of 0 to 6
See also
--------
Period.dayofyear
Return the day of year.
Period.daysinmonth
Return the days in that month.
Examples
--------
>>> period1 = pd.Period('2012-1-1 19:00', freq='H')
>>> period1
Period('2012-01-01 19:00', 'H')
>>> period1.dayofweek
6
>>> period2 = pd.Period('2013-1-9 11:00', freq='H')
>>> period2
Period('2013-01-09 11:00', 'H')
>>> period2.dayofweek
2
################################################################################
################################## Validation ##################################
################################################################################
Docstring for "pandas.Period.dayofweek" correct. :)
```
If the validation script still gives errors, but you think there is a good reason
to deviate in this case (and there are certainly such cases), please state this
explicitly.
| https://api.github.com/repos/pandas-dev/pandas/pulls/20280 | 2018-03-11T11:17:23Z | 2018-03-13T10:05:03Z | 2018-03-13T10:05:03Z | 2018-03-13T10:11:54Z |
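The Monday=0, Sunday=6 convention documented above is the same one used by the standard library, so the docstring's example values can be cross-checked without pandas:

```python
from datetime import date

# date.weekday() also counts Monday=0 ... Sunday=6.
dow1 = date(2012, 1, 1).weekday()
dow2 = date(2013, 1, 9).weekday()
print(dow1, dow2)  # 6 2  (a Sunday and a Wednesday)
```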
DOC: update the pandas.Series.dt.is_quarter_end docstring | diff --git a/pandas/core/indexes/datetimes.py b/pandas/core/indexes/datetimes.py
index e5e9bba269fd4..4c81eafabf24a 100644
--- a/pandas/core/indexes/datetimes.py
+++ b/pandas/core/indexes/datetimes.py
@@ -1739,7 +1739,44 @@ def freq(self, value):
is_quarter_end = _field_accessor(
'is_quarter_end',
'is_quarter_end',
- "Logical indicating if last day of quarter (defined by frequency)")
+ """
+ Indicator for whether the date is the last day of a quarter.
+
+ Returns
+ -------
+ is_quarter_end : Series or DatetimeIndex
+ The same type as the original data with boolean values. Series will
+ have the same name and index. DatetimeIndex will have the same
+ name.
+
+ See Also
+ --------
+ quarter : Return the quarter of the date.
+ is_quarter_start : Similar method indicating the quarter start.
+
+ Examples
+ --------
+ This method is available on Series with datetime values under
+ the ``.dt`` accessor, and directly on DatetimeIndex.
+
+ >>> df = pd.DataFrame({'dates': pd.date_range("2017-03-30",
+ ... periods=4)})
+ >>> df.assign(quarter=df.dates.dt.quarter,
+ ... is_quarter_end=df.dates.dt.is_quarter_end)
+ dates quarter is_quarter_end
+ 0 2017-03-30 1 False
+ 1 2017-03-31 1 True
+ 2 2017-04-01 2 False
+ 3 2017-04-02 2 False
+
+ >>> idx = pd.date_range('2017-03-30', periods=4)
+ >>> idx
+ DatetimeIndex(['2017-03-30', '2017-03-31', '2017-04-01', '2017-04-02'],
+ dtype='datetime64[ns]', freq='D')
+
+ >>> idx.is_quarter_end
+ array([False, True, False, False])
+ """)
is_year_start = _field_accessor(
'is_year_start',
'is_year_start',
| Checklist for the pandas documentation sprint (ignore this if you are doing
an unrelated PR):
- [X] PR title is "DOC: update the <your-function-or-method> docstring"
- [X] The validation script passes: `scripts/validate_docstrings.py <your-function-or-method>`
- [X] The PEP8 style check passes: `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [X] The html version looks good: `python doc/make.py --single <your-function-or-method>`
- [X] It has been proofread on language by another sprint participant
Please include the output of the validation script below between the "```" ticks:
```
################################################################################
################# Docstring (pandas.Series.dt.is_quarter_end) #################
################################################################################
Return a boolean indicating whether the date is the last day of a
quarter.
Returns
-------
is_quarter_end : Series of boolean.
See Also
--------
quarter : Return the quarter of the date.
is_quarter_start : Return a boolean indicating whether the date is the
first day of the quarter.
Examples
--------
>>> df = pd.DataFrame({'dates': pd.date_range("2017-03-30",
... periods=4)})
>>> df.assign(quarter = df.dates.dt.quarter,
... is_quarter_end = df.dates.dt.is_quarter_end)
dates quarter is_quarter_end
0 2017-03-30 1 False
1 2017-03-31 1 True
2 2017-04-01 2 False
3 2017-04-02 2 False
################################################################################
################################## Validation ##################################
################################################################################
Errors found:
No summary found (a short summary in a single line should be present at the beginning of the docstring)
```
If the validation script still gives errors, but you think there is a good reason
to deviate in this case (and there are certainly such cases), please state this
explicitly.
* No extended summary required.
* Short summary exceeds one line by one word, hope that is ok
| https://api.github.com/repos/pandas-dev/pandas/pulls/20279 | 2018-03-11T11:08:40Z | 2018-03-14T18:41:30Z | 2018-03-14T18:41:30Z | 2018-03-14T18:47:55Z |
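The boolean values in the `is_quarter_end` example can be reproduced with a small stdlib helper; `quarter_end` below is a hypothetical name, a sketch of the condition the accessor tests (last day of a month that closes a quarter):

```python
from datetime import date, timedelta

def quarter_end(d):
    # A date ends a quarter if the next day starts a new month and the
    # month number is a multiple of three (March, June, September, December).
    return (d + timedelta(days=1)).month != d.month and d.month % 3 == 0

dates = [date(2017, 3, 30) + timedelta(days=i) for i in range(4)]
flags = [quarter_end(d) for d in dates]
print(flags)  # [False, True, False, False]
```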
DOC: update the pandas.Series.dt.is_quarter_start docstring | diff --git a/pandas/core/indexes/datetimes.py b/pandas/core/indexes/datetimes.py
index e5e9bba269fd4..6f8fb766f5dfa 100644
--- a/pandas/core/indexes/datetimes.py
+++ b/pandas/core/indexes/datetimes.py
@@ -1735,7 +1735,44 @@ def freq(self, value):
is_quarter_start = _field_accessor(
'is_quarter_start',
'is_quarter_start',
- "Logical indicating if first day of quarter (defined by frequency)")
+ """
+ Indicator for whether the date is the first day of a quarter.
+
+ Returns
+ -------
+ is_quarter_start : Series or DatetimeIndex
+ The same type as the original data with boolean values. Series will
+ have the same name and index. DatetimeIndex will have the same
+ name.
+
+ See Also
+ --------
+ quarter : Return the quarter of the date.
+ is_quarter_end : Similar method indicating the quarter end.
+
+ Examples
+ --------
+ This method is available on Series with datetime values under
+ the ``.dt`` accessor, and directly on DatetimeIndex.
+
+ >>> df = pd.DataFrame({'dates': pd.date_range("2017-03-30",
+ ... periods=4)})
+ >>> df.assign(quarter=df.dates.dt.quarter,
+ ... is_quarter_start=df.dates.dt.is_quarter_start)
+ dates quarter is_quarter_start
+ 0 2017-03-30 1 False
+ 1 2017-03-31 1 False
+ 2 2017-04-01 2 True
+ 3 2017-04-02 2 False
+
+ >>> idx = pd.date_range('2017-03-30', periods=4)
+ >>> idx
+ DatetimeIndex(['2017-03-30', '2017-03-31', '2017-04-01', '2017-04-02'],
+ dtype='datetime64[ns]', freq='D')
+
+ >>> idx.is_quarter_start
+ array([False, False, True, False])
+ """)
is_quarter_end = _field_accessor(
'is_quarter_end',
'is_quarter_end',
| Checklist for the pandas documentation sprint (ignore this if you are doing
an unrelated PR):
- [X] PR title is "DOC: update the <your-function-or-method> docstring"
- [X] The validation script passes: `scripts/validate_docstrings.py <your-function-or-method>`
- [X] The PEP8 style check passes: `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [X] The html version looks good: `python doc/make.py --single <your-function-or-method>`
- [X] It has been proofread on language by another sprint participant
Please include the output of the validation script below between the "```" ticks:
```
################################################################################
################ Docstring (pandas.Series.dt.is_quarter_start) ################
################################################################################
Return a boolean indicating whether the date is the first day of a
quarter.
Returns
-------
is_quarter_start : Series of boolean.
See Also
--------
quarter : Return the quarter of the date.
is_quarter_end : Return a boolean indicating whether the date is the
last day of the quarter.
Examples
--------
>>> df = pd.DataFrame({'dates': pd.date_range("2017-03-30", periods=4)})
>>> df.assign(quarter = df.dates.dt.quarter,
... is_quarter_start = df.dates.dt.is_quarter_start)
dates quarter is_quarter_start
0 2017-03-30 1 False
1 2017-03-31 1 False
2 2017-04-01 2 True
3 2017-04-02 2 False
################################################################################
################################## Validation ##################################
################################################################################
Errors found:
No summary found (a short summary in a single line should be present at the beginning of the docstring)
```
If the validation script still gives errors, but you think there is a good reason
to deviate in this case (and there are certainly such cases), please state this
explicitly.
* Short summary exceeds single line by one word, hope that is ok. | https://api.github.com/repos/pandas-dev/pandas/pulls/20278 | 2018-03-11T10:51:33Z | 2018-03-14T18:47:45Z | 2018-03-14T18:47:45Z | 2018-03-14T18:47:48Z |
DOC: update the Period.dayofyear docstring | diff --git a/pandas/_libs/tslibs/period.pyx b/pandas/_libs/tslibs/period.pyx
index 89f38724cde1a..e14efaf04bd41 100644
--- a/pandas/_libs/tslibs/period.pyx
+++ b/pandas/_libs/tslibs/period.pyx
@@ -1255,6 +1255,36 @@ cdef class _Period(object):
@property
def dayofyear(self):
+ """
+ Return the day of the year.
+
+ This attribute returns the day of the year on which the particular
+ date occurs. The return value ranges from 1 to 365 in regular
+ years and from 1 to 366 in leap years.
+
+ Returns
+ -------
+ int
+ The day of year.
+
+ See Also
+ --------
+ Period.day : Return the day of the month.
+ Period.dayofweek : Return the day of week.
+ PeriodIndex.dayofyear : Return the day of year of all indexes.
+
+ Examples
+ --------
+ >>> period = pd.Period("2015-10-23", freq='H')
+ >>> period.dayofyear
+ 296
+ >>> period = pd.Period("2012-12-31", freq='D')
+ >>> period.dayofyear
+ 366
+ >>> period = pd.Period("2013-01-01", freq='D')
+ >>> period.dayofyear
+ 1
+ """
base, mult = get_freq_code(self.freq)
return pday_of_year(self.ordinal, base)
| Signed-off-by: Tushar Mittal <chiragmittal.mittal@gmail.com>
Checklist for the pandas documentation sprint (ignore this if you are doing
an unrelated PR):
- [x] PR title is "DOC: update the <your-function-or-method> docstring"
- [x] The validation script passes: `scripts/validate_docstrings.py <your-function-or-method>`
- [x] The PEP8 style check passes: `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] The html version looks good: `python doc/make.py --single <your-function-or-method>`
- [x] It has been proofread on language by another sprint participant
Please include the output of the validation script below between the "```" ticks:
```
################################################################################
##################### Docstring (pandas.Period.dayofyear) #####################
################################################################################
Return the day of the year.
This attribute returns the day of the year on which the particular
date occurs. The return value ranges from 1 to 365 in regular
years and from 1 to 366 in leap years.
Returns
-------
int
The day of year.
See Also
--------
Period.dayofweek : Return the day of week.
Period.daysinmonth : Return the days in that month.
PeriodIndex.dayofyear : Return the day of year of all indexes.
Examples
--------
>>> period = pd.Period("2015-10-23", freq='H')
>>> period.dayofyear
296
>>> period = pd.Period("2012-12-31", freq='D')
>>> period.dayofyear
366
>>> period = pd.Period("2013-01-01", freq='D')
>>> period.dayofyear
1
################################################################################
################################## Validation ##################################
################################################################################
Docstring for "pandas.Period.dayofyear" correct. :)
```
If the validation script still gives errors, but you think there is a good reason
to deviate in this case (and there are certainly such cases), please state this
explicitly.
| https://api.github.com/repos/pandas-dev/pandas/pulls/20277 | 2018-03-11T10:46:41Z | 2018-03-13T08:55:51Z | 2018-03-13T08:55:51Z | 2018-03-13T09:07:31Z |
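The example values in the `dayofyear` docstring can be cross-checked with the standard library, which exposes the same ordinal day via `timetuple().tm_yday`:

```python
from datetime import date

doy1 = date(2015, 10, 23).timetuple().tm_yday
doy2 = date(2012, 12, 31).timetuple().tm_yday  # 2012 is a leap year
doy3 = date(2013, 1, 1).timetuple().tm_yday
print(doy1, doy2, doy3)  # 296 366 1
```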
DOC: update the aggregate docstring | diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 79265e35ef6e6..12a7141bdb0ee 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -107,6 +107,10 @@
_shared_doc_kwargs = dict(
axes='index, columns', klass='DataFrame',
axes_single_arg="{0 or 'index', 1 or 'columns'}",
+ axis="""
+ axis : {0 or 'index', 1 or 'columns'}, default 0
+ - 0 or 'index': apply function to each column.
+ - 1 or 'columns': apply function to each row.""",
optional_by="""
by : str or list of str
Name or list of names to sort by.
@@ -4460,9 +4464,9 @@ def pivot(self, index=None, columns=None, values=None):
Reshape data (produce a "pivot" table) based on column values. Uses
unique values from specified `index` / `columns` to form axes of the
- resulting DataFrame. This function does not support data aggregation,
- multiple values will result in a MultiIndex in the columns. See the
- :ref:`User Guide <reshaping>` for more on reshaping.
+ resulting DataFrame. This function does not support data
+ aggregation, multiple values will result in a MultiIndex in the
+ columns. See the :ref:`User Guide <reshaping>` for more on reshaping.
Parameters
----------
@@ -4980,36 +4984,59 @@ def _gotitem(self, key, ndim, subset=None):
return self[key]
_agg_doc = dedent("""
+ Notes
+ -----
+ The aggregation operations are always performed over an axis, either the
+ index (default) or the column axis. This behavior is different from
+ `numpy` aggregation functions (`mean`, `median`, `prod`, `sum`, `std`,
+ `var`), where the default is to compute the aggregation of the flattened
+ array, e.g., ``numpy.mean(arr_2d)`` as opposed to ``numpy.mean(arr_2d,
+ axis=0)``.
+
+ `agg` is an alias for `aggregate`. Use the alias.
+
Examples
--------
+ >>> df = pd.DataFrame([[1, 2, 3],
+ ... [4, 5, 6],
+ ... [7, 8, 9],
+ ... [np.nan, np.nan, np.nan]],
+ ... columns=['A', 'B', 'C'])
- >>> df = pd.DataFrame(np.random.randn(10, 3), columns=['A', 'B', 'C'],
- ... index=pd.date_range('1/1/2000', periods=10))
- >>> df.iloc[3:7] = np.nan
-
- Aggregate these functions across all columns
+ Aggregate these functions over the rows.
>>> df.agg(['sum', 'min'])
- A B C
- sum -0.182253 -0.614014 -2.909534
- min -1.916563 -1.460076 -1.568297
+ A B C
+ sum 12.0 15.0 18.0
+ min 1.0 2.0 3.0
- Different aggregations per column
+ Different aggregations per column.
>>> df.agg({'A' : ['sum', 'min'], 'B' : ['min', 'max']})
- A B
- max NaN 1.514318
- min -1.916563 -1.460076
- sum -0.182253 NaN
+ A B
+ max NaN 8.0
+ min 1.0 2.0
+ sum 12.0 NaN
+
+ Aggregate over the columns.
+
+ >>> df.agg("mean", axis="columns")
+ 0 2.0
+ 1 5.0
+ 2 8.0
+ 3 NaN
+ dtype: float64
See also
--------
- pandas.DataFrame.apply
- pandas.DataFrame.transform
- pandas.DataFrame.groupby.aggregate
- pandas.DataFrame.resample.aggregate
- pandas.DataFrame.rolling.aggregate
-
+ DataFrame.apply : Perform any type of operations.
+ DataFrame.transform : Perform transformation type operations.
+ pandas.core.groupby.GroupBy : Perform operations over groups.
+ pandas.core.resample.Resampler : Perform operations over resampled bins.
+ pandas.core.window.Rolling : Perform operations over rolling window.
+ pandas.core.window.Expanding : Perform operations over expanding window.
+ pandas.core.window.EWM : Perform operation over exponential weighted
+ window.
""")
@Appender(_agg_doc)
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index bfb251b0995ec..494351dd27ca5 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -3937,36 +3937,37 @@ def pipe(self, func, *args, **kwargs):
return com._pipe(self, func, *args, **kwargs)
_shared_docs['aggregate'] = ("""
- Aggregate using callable, string, dict, or list of string/callables
+ Aggregate using one or more operations over the specified axis.
%(versionadded)s
Parameters
----------
- func : callable, string, dictionary, or list of string/callables
+ func : function, string, dictionary, or list of string/functions
Function to use for aggregating the data. If a function, must either
work when passed a %(klass)s or when passed to %(klass)s.apply. For
a DataFrame, can pass a dict, if the keys are DataFrame column names.
- Accepted Combinations are:
+ Accepted combinations are:
- - string function name
- - function
- - list of functions
- - dict of column names -> functions (or list of functions)
+ - string function name.
+ - function.
+ - list of functions.
+ - dict of column names -> functions (or list of functions).
- Notes
- -----
- Numpy functions mean/median/prod/sum/std/var are special cased so the
- default behavior is applying the function along axis=0
- (e.g., np.mean(arr_2d, axis=0)) as opposed to
- mimicking the default Numpy behavior (e.g., np.mean(arr_2d)).
-
- `agg` is an alias for `aggregate`. Use the alias.
+ %(axis)s
+ *args
+ Positional arguments to pass to `func`.
+ **kwargs
+ Keyword arguments to pass to `func`.
Returns
-------
aggregated : %(klass)s
+
+ Notes
+ -----
+ `agg` is an alias for `aggregate`. Use the alias.
""")
_shared_docs['transform'] = ("""
@@ -4014,7 +4015,6 @@ def pipe(self, func, *args, **kwargs):
--------
pandas.%(klass)s.aggregate
pandas.%(klass)s.apply
-
""")
# ----------------------------------------------------------------------
diff --git a/pandas/core/groupby.py b/pandas/core/groupby.py
index a89b8714db6a0..4352a001aa989 100644
--- a/pandas/core/groupby.py
+++ b/pandas/core/groupby.py
@@ -3432,7 +3432,8 @@ def apply(self, func, *args, **kwargs):
@Appender(_agg_doc)
@Appender(_shared_docs['aggregate'] % dict(
klass='Series',
- versionadded=''))
+ versionadded='',
+ axis=''))
def aggregate(self, func_or_funcs, *args, **kwargs):
_level = kwargs.pop('_level', None)
if isinstance(func_or_funcs, compat.string_types):
@@ -4611,7 +4612,8 @@ class DataFrameGroupBy(NDFrameGroupBy):
@Appender(_agg_doc)
@Appender(_shared_docs['aggregate'] % dict(
klass='DataFrame',
- versionadded=''))
+ versionadded='',
+ axis=''))
def aggregate(self, arg, *args, **kwargs):
return super(DataFrameGroupBy, self).aggregate(arg, *args, **kwargs)
diff --git a/pandas/core/resample.py b/pandas/core/resample.py
index 4f9c22ca98f1a..004d572375234 100644
--- a/pandas/core/resample.py
+++ b/pandas/core/resample.py
@@ -334,7 +334,8 @@ def plot(self, *args, **kwargs):
@Appender(_agg_doc)
@Appender(_shared_docs['aggregate'] % dict(
klass='DataFrame',
- versionadded=''))
+ versionadded='',
+ axis=''))
def aggregate(self, arg, *args, **kwargs):
self._set_binner()
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 46d1f4468b4d0..4d6bbedc51922 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -77,6 +77,10 @@
_shared_doc_kwargs = dict(
axes='index', klass='Series', axes_single_arg="{0 or 'index'}",
+ axis="""
+ axis : {0 or 'index'}
+ Parameter needed for compatibility with DataFrame.
+ """,
inplace="""inplace : boolean, default False
If True, performs operation inplace and returns None.""",
unique='np.ndarray', duplicated='Series',
diff --git a/pandas/core/window.py b/pandas/core/window.py
index 59cf9ad2920ca..e70a3cb5e911b 100644
--- a/pandas/core/window.py
+++ b/pandas/core/window.py
@@ -626,7 +626,8 @@ def f(arg, *args, **kwargs):
@Appender(_agg_doc)
@Appender(_shared_docs['aggregate'] % dict(
versionadded='',
- klass='Series/DataFrame'))
+ klass='Series/DataFrame',
+ axis=''))
def aggregate(self, arg, *args, **kwargs):
result, how = self._aggregate(arg, *args, **kwargs)
if result is None:
@@ -1300,7 +1301,8 @@ def _validate_freq(self):
@Appender(_agg_doc)
@Appender(_shared_docs['aggregate'] % dict(
versionadded='',
- klass='Series/DataFrame'))
+ klass='Series/DataFrame',
+ axis=''))
def aggregate(self, arg, *args, **kwargs):
return super(Rolling, self).aggregate(arg, *args, **kwargs)
@@ -1566,7 +1568,8 @@ def _get_window(self, other=None):
@Appender(_agg_doc)
@Appender(_shared_docs['aggregate'] % dict(
versionadded='',
- klass='Series/DataFrame'))
+ klass='Series/DataFrame',
+ axis=''))
def aggregate(self, arg, *args, **kwargs):
return super(Expanding, self).aggregate(arg, *args, **kwargs)
@@ -1869,7 +1872,8 @@ def _constructor(self):
@Appender(_agg_doc)
@Appender(_shared_docs['aggregate'] % dict(
versionadded='',
- klass='Series/DataFrame'))
+ klass='Series/DataFrame',
+ axis=''))
def aggregate(self, arg, *args, **kwargs):
return super(EWM, self).aggregate(arg, *args, **kwargs)
| Checklist for the pandas documentation sprint (ignore this if you are doing
an unrelated PR):
- [X] PR title is "DOC: update the <your-function-or-method> docstring"
- [ ] The validation script passes: `scripts/validate_docstrings.py <your-function-or-method>`
- [X] The PEP8 style check passes: `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [X] The html version looks good: `python doc/make.py --single <your-function-or-method>`
- [ ] It has been proofread on language by another sprint participant
Please include the output of the validation script below between the "```" ticks:
```
################################################################################
#################### Docstring (pandas.DataFrame.aggregate) ####################
################################################################################
Aggregate using one or multiple operations along the specified axis.
.. versionadded:: 0.20.0
Parameters
----------
func : function, string, dictionary, or list of string/functions
Function to use for aggregating the data. If a function, must either
work when passed a DataFrame or when passed to DataFrame.apply. For
a DataFrame, can pass a dict, if the keys are DataFrame column names.
Accepted combinations are:
- string function name.
- function.
- list of functions.
- dict of column names -> functions (or list of functions).
axis : {0 or 'index', 1 or 'columns'}, default 0
- 0 or 'index': apply function to each column.
- 1 or 'columns': apply function to each row.
args
Optional positional arguments to pass to the function.
kwargs
Optional keyword arguments to pass to the function.
Returns
-------
aggregated : DataFrame
Notes
-----
`agg` is an alias for `aggregate`. Use the alias.
Notes
-----
The default behavior of aggregating over the axis 0 is different from
`numpy` functions `mean`/`median`/`prod`/`sum`/`std`/`var`, where the
default is to compute the aggregation of the flattened array (e.g.,
`numpy.mean(arr_2d)` as opposed to `numpy.mean(arr_2d, axis=0)`).
`agg` is an alias for `aggregate`. Use the alias.
Examples
--------
>>> df = df = pd.DataFrame([[1,2,3],
... [4,5,6],
... [7,8,9],
... [np.nan, np.nan, np.nan]],
... columns=['A', 'B', 'C'])
Aggregate these functions across all columns
>>> df.aggregate(['sum', 'min'])
A B C
sum 12.0 15.0 18.0
min 1.0 2.0 3.0
Different aggregations per column
>>> df.aggregate({'A' : ['sum', 'min'], 'B' : ['min', 'max']})
A B
max NaN 8.0
min 1.0 2.0
sum 12.0 NaN
See also
--------
pandas.DataFrame.apply : Perform any type of operations.
pandas.DataFrame.transform : Perform transformation type operations.
pandas.DataFrame.groupby.aggregate : Perform aggregation type operations
over groups.
pandas.DataFrame.resample.aggregate : Perform aggregation type operations
over resampled bins.
pandas.DataFrame.rolling.aggregate : Perform aggregation type operations
over rolling window.
################################################################################
################################## Validation ##################################
################################################################################
Errors found:
Errors in parameters section
Parameter "axis" description should start with capital letter
Parameter "args" has no type
Parameter "kwargs" has no type
```
If the validation script still gives errors, but you think there is a good reason
to deviate in this case (and there are certainly such cases), please state this
explicitly.
- args and kwargs have no type
- axis : {0 or 'index', 1 or 'columns'}, default 0
This PR has been made in collaboration with @rochamatcomp
Question to reviewers: we have added the `axis` parameter to the docstring because it is appropriate for DataFrame. However, this docstring is generic (shared with Series, groupby, window and resample) and the `axis` parameter is wrong for them (for Series the parameter has only one possible value). What would be the right way to fix this?
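The sharing mechanism at issue in the question above can be sketched in plain Python: the shared docstring is a `%`-style template, and each consumer fills in `versionadded` and `axis` (an empty string for the classes where the parameter does not apply). The template text below is abbreviated, not the real pandas docstring.

```python
# Minimal sketch of the _shared_docs pattern used in this PR.
_shared_docs = {}
_shared_docs['aggregate'] = """
Aggregate using one or more operations over the specified axis.
%(versionadded)s
Parameters
----------
func : function, string, dictionary, or list of string/functions
%(axis)s
"""

# A DataFrame-like consumer supplies a full axis description...
df_doc = _shared_docs['aggregate'] % dict(
    versionadded='.. versionadded:: 0.20.0',
    axis="axis : {0 or 'index', 1 or 'columns'}, default 0")

# ...while Series/groupby/window/resample pass axis='' to drop it.
series_doc = _shared_docs['aggregate'] % dict(versionadded='', axis='')

print("axis : {0" in df_doc)     # the parameter entry is present
print("axis :" in series_doc)    # no axis parameter entry remains
```

This is why the diff adds `axis=''` to every non-DataFrame `Appender` call: substituting an empty string is the only way a `%`-template can omit a parameter entirely.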
| https://api.github.com/repos/pandas-dev/pandas/pulls/20276 | 2018-03-11T10:40:36Z | 2018-03-13T14:11:25Z | 2018-03-13T14:11:25Z | 2018-03-13T14:56:41Z |
DOC: update the pandas.Series.dt.is_year_start docstring | diff --git a/pandas/core/indexes/datetimes.py b/pandas/core/indexes/datetimes.py
index e5e9bba269fd4..4e299f4045fbb 100644
--- a/pandas/core/indexes/datetimes.py
+++ b/pandas/core/indexes/datetimes.py
@@ -1743,7 +1743,46 @@ def freq(self, value):
is_year_start = _field_accessor(
'is_year_start',
'is_year_start',
- "Logical indicating if first day of year (defined by frequency)")
+ """
+ Indicate whether the date is the first day of a year.
+
+ Returns
+ -------
+ Series or DatetimeIndex
+ The same type as the original data with boolean values. Series will
+ have the same name and index. DatetimeIndex will have the same
+ name.
+
+ See Also
+ --------
+ is_year_end : Similar method indicating the last day of the year.
+
+ Examples
+ --------
+ This method is available on Series with datetime values under
+ the ``.dt`` accessor, and directly on DatetimeIndex.
+
+ >>> dates = pd.Series(pd.date_range("2017-12-30", periods=3))
+ >>> dates
+ 0 2017-12-30
+ 1 2017-12-31
+ 2 2018-01-01
+ dtype: datetime64[ns]
+
+ >>> dates.dt.is_year_start
+ 0 False
+ 1 False
+ 2 True
+ dtype: bool
+
+ >>> idx = pd.date_range("2017-12-30", periods=3)
+ >>> idx
+ DatetimeIndex(['2017-12-30', '2017-12-31', '2018-01-01'],
+ dtype='datetime64[ns]', freq='D')
+
+ >>> idx.is_year_start
+ array([False, False, True])
+ """)
is_year_end = _field_accessor(
'is_year_end',
'is_year_end',
| Checklist for the pandas documentation sprint (ignore this if you are doing
an unrelated PR):
- [X] PR title is "DOC: update the <your-function-or-method> docstring"
- [X] The validation script passes: `scripts/validate_docstrings.py <your-function-or-method>`
- [X] The PEP8 style check passes: `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [X] The html version looks good: `python doc/make.py --single <your-function-or-method>`
- [X] It has been proofread on language by another sprint participant
Please include the output of the validation script below between the "```" ticks:
```
################################################################################
################## Docstring (pandas.Series.dt.is_year_start) ##################
################################################################################
Return a boolean indicating whether the date is the first day of the
year.
Returns
-------
is_year_start : Series of boolean.
See Also
--------
is_year_end : Return a boolean indicating whether the date is the
last day of the year.
Examples
--------
>>> dates = pd.Series(pd.date_range("2017-12-30", periods=3))
>>> dates
0 2017-12-30
1 2017-12-31
2 2018-01-01
dtype: datetime64[ns]
>>> dates.dt.is_year_start
0 False
1 False
2 True
dtype: bool
################################################################################
################################## Validation ##################################
################################################################################
Errors found:
No summary found (a short summary in a single line should be present at the beginning of the docstring)
```
If the validation script still gives errors, but you think there is a good reason
to deviate in this case (and there are certainly such cases), please state this
explicitly.
* The short summary exceeds one line by one word; we hope that is acceptable.
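The semantics documented in this PR can be stated as a one-liner in plain Python. This is a sketch of the default (daily-frequency) behavior only, not pandas' vectorized `_field_accessor` implementation:

```python
from datetime import date

def is_year_start(d):
    # True only for January 1 -- the documented default case,
    # ignoring custom frequencies.
    return d.month == 1 and d.day == 1

dates = [date(2017, 12, 30), date(2017, 12, 31), date(2018, 1, 1)]
print([is_year_start(d) for d in dates])  # [False, False, True]
```

The output matches the `dates.dt.is_year_start` example in the docstring above.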
| https://api.github.com/repos/pandas-dev/pandas/pulls/20275 | 2018-03-11T10:04:43Z | 2018-03-14T18:51:24Z | 2018-03-14T18:51:24Z | 2018-03-14T18:51:26Z |
DOC: update the pandas.Series.dt.is_year_end docstring | diff --git a/pandas/core/indexes/datetimes.py b/pandas/core/indexes/datetimes.py
index 7496334b0a86f..b82cc3980bae8 100644
--- a/pandas/core/indexes/datetimes.py
+++ b/pandas/core/indexes/datetimes.py
@@ -1790,7 +1790,7 @@ def freq(self, value):
See Also
--------
quarter : Return the quarter of the date.
- is_quarter_end : Similar method for indicating the start of a quarter.
+ is_quarter_end : Similar property for indicating the start of a quarter.
Examples
--------
@@ -1831,7 +1831,7 @@ def freq(self, value):
See Also
--------
quarter : Return the quarter of the date.
- is_quarter_start : Similar method indicating the quarter start.
+ is_quarter_start : Similar property indicating the quarter start.
Examples
--------
@@ -1871,7 +1871,7 @@ def freq(self, value):
See Also
--------
- is_year_end : Similar method indicating the last day of the year.
+ is_year_end : Similar property indicating the last day of the year.
Examples
--------
@@ -1902,7 +1902,46 @@ def freq(self, value):
is_year_end = _field_accessor(
'is_year_end',
'is_year_end',
- "Logical indicating if last day of year (defined by frequency)")
+ """
+ Indicate whether the date is the last day of the year.
+
+ Returns
+ -------
+ Series or DatetimeIndex
+ The same type as the original data with boolean values. Series will
+ have the same name and index. DatetimeIndex will have the same
+ name.
+
+ See Also
+ --------
+ is_year_start : Similar property indicating the start of the year.
+
+ Examples
+ --------
+ This method is available on Series with datetime values under
+ the ``.dt`` accessor, and directly on DatetimeIndex.
+
+ >>> dates = pd.Series(pd.date_range("2017-12-30", periods=3))
+ >>> dates
+ 0 2017-12-30
+ 1 2017-12-31
+ 2 2018-01-01
+ dtype: datetime64[ns]
+
+ >>> dates.dt.is_year_end
+ 0 False
+ 1 True
+ 2 False
+ dtype: bool
+
+ >>> idx = pd.date_range("2017-12-30", periods=3)
+ >>> idx
+ DatetimeIndex(['2017-12-30', '2017-12-31', '2018-01-01'],
+ dtype='datetime64[ns]', freq='D')
+
+ >>> idx.is_year_end
+ array([False, True, False])
+ """)
is_leap_year = _field_accessor(
'is_leap_year',
'is_leap_year',
| Checklist for the pandas documentation sprint (ignore this if you are doing
an unrelated PR):
- [X] PR title is "DOC: update the <your-function-or-method> docstring"
- [X] The validation script passes: `scripts/validate_docstrings.py <your-function-or-method>`
- [X] The PEP8 style check passes: `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [X] The html version looks good: `python doc/make.py --single <your-function-or-method>`
- [X] It has been proofread on language by another sprint participant
Please include the output of the validation script below between the "```" ticks:
```
################################################################################
################### Docstring (pandas.Series.dt.is_year_end) ###################
################################################################################
Return a boolean indicating whether the date is the last day of the year.
Returns
-------
is_year_end : Series of boolean.
See Also
--------
is_year_start : Return a boolean indicating whether the date is the first day of the year.
Examples
--------
>>> dates = pd.Series(pd.date_range("2017-12-30", periods=3))
>>> dates
0 2017-12-30
1 2017-12-31
2 2018-01-01
dtype: datetime64[ns]
>>> dates.dt.is_year_end
0 False
1 True
2 False
dtype: bool
################################################################################
################################## Validation ##################################
################################################################################
Errors found:
No extended summary found
```
If the validation script still gives errors, but you think there is a good reason
to deviate in this case (and there are certainly such cases), please state this
explicitly.
* No extended summary required
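As with the companion `is_year_start` PR, the default-frequency semantics documented here reduce to a one-line predicate. This is an illustrative sketch, not pandas' implementation:

```python
from datetime import date

def is_year_end(d):
    # True only for December 31 -- the documented default case,
    # ignoring custom frequencies.
    return d.month == 12 and d.day == 31

dates = [date(2017, 12, 30), date(2017, 12, 31), date(2018, 1, 1)]
print([is_year_end(d) for d in dates])  # [False, True, False]
```

The output matches the `dates.dt.is_year_end` example in the docstring above.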
| https://api.github.com/repos/pandas-dev/pandas/pulls/20274 | 2018-03-11T09:52:54Z | 2018-03-14T18:56:52Z | 2018-03-14T18:56:52Z | 2018-03-14T18:56:52Z |
DOC: update the pandas.Series.str.slice_replace docstring | diff --git a/pandas/core/strings.py b/pandas/core/strings.py
index 08a1cc29b8367..2eb2d284d8518 100644
--- a/pandas/core/strings.py
+++ b/pandas/core/strings.py
@@ -1322,19 +1322,75 @@ def str_slice(arr, start=None, stop=None, step=None):
def str_slice_replace(arr, start=None, stop=None, repl=None):
"""
- Replace a slice of each string in the Series/Index with another
- string.
+ Replace a positional slice of a string with another value.
Parameters
----------
- start : int or None
- stop : int or None
- repl : str or None
- String for replacement
+ start : int, optional
+ Left index position to use for the slice. If not specified (None),
+ the slice is unbounded on the left, i.e. slice from the start
+ of the string.
+ stop : int, optional
+ Right index position to use for the slice. If not specified (None),
+ the slice is unbounded on the right, i.e. slice until the
+ end of the string.
+ repl : str, optional
+ String for replacement. If not specified (None), the sliced region
+ is replaced with an empty string.
Returns
-------
- replaced : Series/Index of objects
+ replaced : Series or Index
+ Same type as the original object.
+
+ See Also
+ --------
+ Series.str.slice : Just slicing without replacement.
+
+ Examples
+ --------
+ >>> s = pd.Series(['a', 'ab', 'abc', 'abdc', 'abcde'])
+ >>> s
+ 0 a
+ 1 ab
+ 2 abc
+ 3 abdc
+ 4 abcde
+ dtype: object
+
+ Specify just `start`, meaning replace `start` until the end of the
+ string with `repl`.
+
+ >>> s.str.slice_replace(1, repl='X')
+ 0 aX
+ 1 aX
+ 2 aX
+ 3 aX
+ 4 aX
+ dtype: object
+
+ Specify just `stop`, meaning the start of the string to `stop` is replaced
+ with `repl`, and the rest of the string is included.
+
+ >>> s.str.slice_replace(stop=2, repl='X')
+ 0 X
+ 1 X
+ 2 Xc
+ 3 Xdc
+ 4 Xcde
+ dtype: object
+
+ Specify `start` and `stop`, meaning the slice from `start` to `stop` is
+ replaced with `repl`. Everything before or after `start` and `stop` is
+ included as is.
+
+ >>> s.str.slice_replace(start=1, stop=3, repl='X')
+ 0 aX
+ 1 aX
+ 2 aX
+ 3 aXc
+ 4 aXde
+ dtype: object
"""
if repl is None:
repl = ''
| Checklist for the pandas documentation sprint (ignore this if you are doing
an unrelated PR):
- [x] PR title is "DOC: update the <your-function-or-method> docstring"
- [x] The validation script passes: `scripts/validate_docstrings.py <your-function-or-method>`
- [x] The PEP8 style check passes: `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] The html version looks good: `python doc/make.py --single <your-function-or-method>`
- [x] It has been proofread on language by another sprint participant
Please include the output of the validation script below between the "```" ticks:
```
################################################################################
################# Docstring (pandas.Series.str.slice_replace) #################
################################################################################
Replace a sliced string.
Replace a slice of each string in the Series/Index with another
string.
Parameters
----------
start : int or None
Left edge index.
stop : int or None
Right edge index.
repl : str or None
String for replacement.
Returns
-------
replaced : Series/Index of objects
Examples
--------
>>> s = pd.Series(['This is a Test 1', 'This is a Test 2'])
>>> s
0 This is a Test 1
1 This is a Test 2
dtype: object
>>> s = s.str.slice_replace(8, 14, 'an Example')
>>> s
0 This is an Example 1
1 This is an Example 2
dtype: object
################################################################################
################################## Validation ##################################
################################################################################
Errors found:
See Also section not found
``` | https://api.github.com/repos/pandas-dev/pandas/pulls/20273 | 2018-03-11T07:50:32Z | 2018-03-16T11:47:50Z | 2018-03-16T11:47:50Z | 2018-04-07T10:38:09Z |
DOC: Update the pandas.core.window.x.sum docstring | diff --git a/pandas/core/window.py b/pandas/core/window.py
index e70a3cb5e911b..358ef98e1c072 100644
--- a/pandas/core/window.py
+++ b/pandas/core/window.py
@@ -320,7 +320,79 @@ def aggregate(self, arg, *args, **kwargs):
agg = aggregate
_shared_docs['sum'] = dedent("""
- %(name)s sum""")
+ Calculate %(name)s sum of given DataFrame or Series.
+
+ Parameters
+ ----------
+ *args, **kwargs
+ For compatibility with other %(name)s methods. Has no effect
+ on the computed value.
+
+ Returns
+ -------
+ Series or DataFrame
+ Same type as the input, with the same index, containing the
+ %(name)s sum.
+
+ See Also
+ --------
+ Series.sum : Reducing sum for Series.
+ DataFrame.sum : Reducing sum for DataFrame.
+
+ Examples
+ --------
+ >>> s = pd.Series([1, 2, 3, 4, 5])
+ >>> s
+ 0 1
+ 1 2
+ 2 3
+ 3 4
+ 4 5
+ dtype: int64
+
+ >>> s.rolling(3).sum()
+ 0 NaN
+ 1 NaN
+ 2 6.0
+ 3 9.0
+ 4 12.0
+ dtype: float64
+
+ >>> s.expanding(3).sum()
+ 0 NaN
+ 1 NaN
+ 2 6.0
+ 3 10.0
+ 4 15.0
+ dtype: float64
+
+ >>> s.rolling(3, center=True).sum()
+ 0 NaN
+ 1 6.0
+ 2 9.0
+ 3 12.0
+ 4 NaN
+ dtype: float64
+
+ For DataFrame, each %(name)s sum is computed column-wise.
+
+ >>> df = pd.DataFrame({"A": s, "B": s ** 2})
+ >>> df
+ A B
+ 0 1 1
+ 1 2 4
+ 2 3 9
+ 3 4 16
+ 4 5 25
+
+ >>> df.rolling(3).sum()
+ A B
+ 0 NaN NaN
+ 1 NaN NaN
+ 2 6.0 14.0
+ 3 9.0 29.0
+ 4 12.0 50.0
+ """)
_shared_docs['mean'] = dedent("""
%(name)s mean""")
@@ -640,7 +712,6 @@ def aggregate(self, arg, *args, **kwargs):
agg = aggregate
@Substitution(name='window')
- @Appender(_doc_template)
@Appender(_shared_docs['sum'])
def sum(self, *args, **kwargs):
nv.validate_window_func('sum', args, kwargs)
@@ -1326,7 +1397,6 @@ def apply(self, func, args=(), kwargs={}):
return super(Rolling, self).apply(func, args=args, kwargs=kwargs)
@Substitution(name='rolling')
- @Appender(_doc_template)
@Appender(_shared_docs['sum'])
def sum(self, *args, **kwargs):
nv.validate_rolling_func('sum', args, kwargs)
@@ -1588,7 +1658,6 @@ def apply(self, func, args=(), kwargs={}):
return super(Expanding, self).apply(func, args=args, kwargs=kwargs)
@Substitution(name='expanding')
- @Appender(_doc_template)
@Appender(_shared_docs['sum'])
def sum(self, *args, **kwargs):
nv.validate_expanding_func('sum', args, kwargs)
| Checklist for the pandas documentation sprint (ignore this if you are doing
an unrelated PR):
- [x] PR title is "DOC: update the <your-function-or-method> docstring"
- [x] The validation script passes: `scripts/validate_docstrings.py <your-function-or-method>`
- [x] The PEP8 style check passes: `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] The html version looks good: `python doc/make.py --single <your-function-or-method>`
- [x] It has been proofread on language by another sprint participant
Please include the output of the validation script below between the "```" ticks:
```
(pandas-dev) D:\GitHub\pandas>python scripts/validate_docstrings.py pandas.core.window.Rolling.sum
################################################################################
################## Docstring (pandas.core.window.Rolling.sum) ##################
################################################################################
Calculate rolling sum of given DataFrame or Series.
Parameters
----------
*args
Under Review.
**kwargs
Under Review.
Returns
-------
Series or DataFrame
Like-indexed object containing the result of function application
See Also
--------
Series.rolling : Calling object with Series data
DataFrame.rolling : Calling object with DataFrame data
Series.sum : Equivalent method for Series
DataFrame.sum : Equivalent method for DataFrame
Examples
--------
>>> s = pd.Series([1, 2, 3, 4, 5])
>>> s.rolling(3).sum()
0 NaN
1 NaN
2 6.0
3 9.0
4 12.0
dtype: float64
>>> s.expanding(3).sum()
0 NaN
1 NaN
2 6.0
3 10.0
4 15.0
dtype: float64
>>> s.rolling(3, center=True).sum()
0 NaN
1 6.0
2 9.0
3 12.0
4 NaN
dtype: float64
################################################################################
################################## Validation ##################################
################################################################################
Errors found:
No extended summary found
Errors in parameters section
Parameters {'kwargs', 'args'} not documented
Unknown parameters {'**kwargs', '*args'}
Parameter "*args" has no type
Parameter "**kwargs" has no type
(pandas-dev) D:\GitHub\pandas>python scripts/validate_docstrings.py pandas.core.window.Expanding.sum
################################################################################
################# Docstring (pandas.core.window.Expanding.sum) #################
################################################################################
Calculate expanding sum of given DataFrame or Series.
Parameters
----------
*args
Under Review.
**kwargs
Under Review.
Returns
-------
Series or DataFrame
Like-indexed object containing the result of function application
See Also
--------
Series.expanding : Calling object with Series data
DataFrame.expanding : Calling object with DataFrame data
Series.sum : Equivalent method for Series
DataFrame.sum : Equivalent method for DataFrame
Examples
--------
>>> s = pd.Series([1, 2, 3, 4, 5])
>>> s.rolling(3).sum()
0 NaN
1 NaN
2 6.0
3 9.0
4 12.0
dtype: float64
>>> s.expanding(3).sum()
0 NaN
1 NaN
2 6.0
3 10.0
4 15.0
dtype: float64
>>> s.rolling(3, center=True).sum()
0 NaN
1 6.0
2 9.0
3 12.0
4 NaN
dtype: float64
################################################################################
################################## Validation ##################################
################################################################################
Errors found:
No extended summary found
Errors in parameters section
Parameters {'kwargs', 'args'} not documented
Unknown parameters {'**kwargs', '*args'}
Parameter "*args" has no type
Parameter "**kwargs" has no type
(pandas-dev) D:\GitHub\pandas>python scripts/validate_docstrings.py pandas.core.window.Window.sum
################################################################################
################## Docstring (pandas.core.window.Window.sum) ##################
################################################################################
Calculate window sum of given DataFrame or Series.
Parameters
----------
*args
Under Review.
**kwargs
Under Review.
Returns
-------
Series or DataFrame
Like-indexed object containing the result of function application
See Also
--------
Series.window : Calling object with Series data
DataFrame.window : Calling object with DataFrame data
Series.sum : Equivalent method for Series
DataFrame.sum : Equivalent method for DataFrame
Examples
--------
>>> s = pd.Series([1, 2, 3, 4, 5])
>>> s.rolling(3).sum()
0 NaN
1 NaN
2 6.0
3 9.0
4 12.0
dtype: float64
>>> s.expanding(3).sum()
0 NaN
1 NaN
2 6.0
3 10.0
4 15.0
dtype: float64
>>> s.rolling(3, center=True).sum()
0 NaN
1 6.0
2 9.0
3 12.0
4 NaN
dtype: float64
################################################################################
################################## Validation ##################################
################################################################################
Errors found:
No extended summary found
Errors in parameters section
Parameters {'args', 'kwargs'} not documented
Unknown parameters {'*args', '**kwargs'}
Parameter "*args" has no type
Parameter "**kwargs" has no type
```
If the validation script still gives errors, but you think there is a good reason
to deviate in this case (and there are certainly such cases), please state this
explicitly.
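The rolling and expanding sums documented in this PR can be sketched in plain Python. This illustrates the window/min-periods semantics only; positions without a full window yield `None` here where pandas would yield NaN:

```python
def rolling_sum(values, window):
    # Sum over a trailing window of fixed size; incomplete
    # windows at the start produce None.
    return [None if i + 1 < window
            else sum(values[i + 1 - window:i + 1])
            for i in range(len(values))]

def expanding_sum(values, min_periods):
    # Sum over everything seen so far, once at least
    # min_periods observations are available.
    return [None if i + 1 < min_periods
            else sum(values[:i + 1])
            for i in range(len(values))]

s = [1, 2, 3, 4, 5]
print(rolling_sum(s, 3))    # [None, None, 6, 9, 12]
print(expanding_sum(s, 3))  # [None, None, 6, 10, 15]
```

These match the `s.rolling(3).sum()` and `s.expanding(3).sum()` examples added by the diff.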
| https://api.github.com/repos/pandas-dev/pandas/pulls/20272 | 2018-03-11T03:55:19Z | 2018-03-14T19:30:05Z | 2018-03-14T19:30:05Z | 2018-03-14T19:30:09Z |
DOC: update the pandas.DataFrame.replace docstring | diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 397726181d2fb..5534d21b43ff7 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -4867,28 +4867,33 @@ def bfill(self, axis=None, inplace=False, limit=None, downcast=None):
limit=limit, downcast=downcast)
_shared_docs['replace'] = ("""
- Replace values given in 'to_replace' with 'value'.
+ Replace values given in `to_replace` with `value`.
+
+ Values of the %(klass)s are replaced with other values dynamically.
+ This differs from updating with ``.loc`` or ``.iloc``, which require
+ you to specify a location to update with some value.
Parameters
----------
- to_replace : str, regex, list, dict, Series, numeric, or None
+ to_replace : str, regex, list, dict, Series, int, float, or None
+ How to find the values that will be replaced.
* numeric, str or regex:
- - numeric: numeric values equal to ``to_replace`` will be
- replaced with ``value``
- - str: string exactly matching ``to_replace`` will be replaced
- with ``value``
- - regex: regexs matching ``to_replace`` will be replaced with
- ``value``
+ - numeric: numeric values equal to `to_replace` will be
+ replaced with `value`
+ - str: string exactly matching `to_replace` will be replaced
+ with `value`
+ - regex: regexs matching `to_replace` will be replaced with
+ `value`
* list of str, regex, or numeric:
- - First, if ``to_replace`` and ``value`` are both lists, they
+ - First, if `to_replace` and `value` are both lists, they
**must** be the same length.
- Second, if ``regex=True`` then all of the strings in **both**
lists will be interpreted as regexs otherwise they will match
- directly. This doesn't matter much for ``value`` since there
+ directly. This doesn't matter much for `value` since there
are only a few possible substitution regexes you can use.
- str, regex and numeric rules apply as above.
@@ -4896,20 +4901,20 @@ def bfill(self, axis=None, inplace=False, limit=None, downcast=None):
- Dicts can be used to specify different replacement values
for different existing values. For example,
- {'a': 'b', 'y': 'z'} replaces the value 'a' with 'b' and
- 'y' with 'z'. To use a dict in this way the ``value``
- parameter should be ``None``.
+ ``{'a': 'b', 'y': 'z'}`` replaces the value 'a' with 'b' and
+ 'y' with 'z'. To use a dict in this way the `value`
+ parameter should be `None`.
- For a DataFrame a dict can specify that different values
should be replaced in different columns. For example,
- {'a': 1, 'b': 'z'} looks for the value 1 in column 'a' and
- the value 'z' in column 'b' and replaces these values with
- whatever is specified in ``value``. The ``value`` parameter
+ ``{'a': 1, 'b': 'z'}`` looks for the value 1 in column 'a'
+ and the value 'z' in column 'b' and replaces these values
+ with whatever is specified in `value`. The `value` parameter
should not be ``None`` in this case. You can treat this as a
special case of passing two lists except that you are
specifying the column to search in.
- For a DataFrame nested dictionaries, e.g.,
- {'a': {'b': np.nan}}, are read as follows: look in column 'a'
- for the value 'b' and replace it with NaN. The ``value``
+ ``{'a': {'b': np.nan}}``, are read as follows: look in column
+ 'a' for the value 'b' and replace it with NaN. The `value`
parameter should be ``None`` to use a nested dict in this
way. You can nest regular expressions as well. Note that
column names (the top-level dictionary keys in a nested
@@ -4917,14 +4922,14 @@ def bfill(self, axis=None, inplace=False, limit=None, downcast=None):
* None:
- - This means that the ``regex`` argument must be a string,
- compiled regular expression, or list, dict, ndarray or Series
- of such elements. If ``value`` is also ``None`` then this
- **must** be a nested dictionary or ``Series``.
+ - This means that the `regex` argument must be a string,
+ compiled regular expression, or list, dict, ndarray or
+ Series of such elements. If `value` is also ``None`` then
+ this **must** be a nested dictionary or Series.
See the examples section for examples of each of these.
value : scalar, dict, list, str, regex, default None
- Value to replace any values matching ``to_replace`` with.
+ Value to replace any values matching `to_replace` with.
For a DataFrame a dict of values can be used to specify which
value to use for each column (columns not in the dict will not be
filled). Regular expressions, strings and lists or dicts of such
@@ -4934,45 +4939,50 @@ def bfill(self, axis=None, inplace=False, limit=None, downcast=None):
other views on this object (e.g. a column from a DataFrame).
Returns the caller if this is True.
limit : int, default None
- Maximum size gap to forward or backward fill
- regex : bool or same types as ``to_replace``, default False
- Whether to interpret ``to_replace`` and/or ``value`` as regular
- expressions. If this is ``True`` then ``to_replace`` *must* be a
+ Maximum size gap to forward or backward fill.
+ regex : bool or same types as `to_replace`, default False
+ Whether to interpret `to_replace` and/or `value` as regular
+ expressions. If this is ``True`` then `to_replace` *must* be a
string. Alternatively, this could be a regular expression or a
list, dict, or array of regular expressions in which case
- ``to_replace`` must be ``None``.
- method : string, optional, {'pad', 'ffill', 'bfill'}
- The method to use when for replacement, when ``to_replace`` is a
- scalar, list or tuple and ``value`` is None.
+ `to_replace` must be ``None``.
+ method : {'pad', 'ffill', 'bfill', None}
+ The method to use for replacement, when `to_replace` is a
+ scalar, list or tuple and `value` is ``None``.
- .. versionchanged:: 0.23.0
- Added to DataFrame
+ .. versionchanged:: 0.23.0
+ Added to DataFrame.
+ axis : None
+ .. deprecated:: 0.13.0
+ Has no effect and will be removed.
See Also
--------
- %(klass)s.fillna : Fill NA/NaN values
+ %(klass)s.fillna : Fill NA values
%(klass)s.where : Replace values based on boolean condition
+ Series.str.replace : Simple string replacement.
Returns
-------
- filled : %(klass)s
+ %(klass)s
+ Object after replacement.
Raises
------
AssertionError
- * If ``regex`` is not a ``bool`` and ``to_replace`` is not
+ * If `regex` is not a ``bool`` and `to_replace` is not
``None``.
TypeError
- * If ``to_replace`` is a ``dict`` and ``value`` is not a ``list``,
+ * If `to_replace` is a ``dict`` and `value` is not a ``list``,
``dict``, ``ndarray``, or ``Series``
- * If ``to_replace`` is ``None`` and ``regex`` is not compilable
+ * If `to_replace` is ``None`` and `regex` is not compilable
into a regular expression or is a list, dict, ndarray, or
Series.
* When replacing multiple ``bool`` or ``datetime64`` objects and
- the arguments to ``to_replace`` does not match the type of the
+ the arguments to `to_replace` does not match the type of the
value being replaced
ValueError
- * If a ``list`` or an ``ndarray`` is passed to ``to_replace`` and
+ * If a ``list`` or an ``ndarray`` is passed to `to_replace` and
`value` but they are not the same length.
Notes
@@ -4986,10 +4996,15 @@ def bfill(self, axis=None, inplace=False, limit=None, downcast=None):
numbers *are* strings, then you can do this.
* This method has *a lot* of options. You are encouraged to experiment
and play with this method to gain intuition about how it works.
+ * When dict is used as the `to_replace` value, the keys
+ in the dict act as the `to_replace` part and the values
+ in the dict act as the `value` parameter.
Examples
--------
+ **Scalar `to_replace` and `value`**
+
>>> s = pd.Series([0, 1, 2, 3, 4])
>>> s.replace(0, 5)
0 5
@@ -4998,6 +5013,7 @@ def bfill(self, axis=None, inplace=False, limit=None, downcast=None):
3 3
4 4
dtype: int64
+
>>> df = pd.DataFrame({'A': [0, 1, 2, 3, 4],
... 'B': [5, 6, 7, 8, 9],
... 'C': ['a', 'b', 'c', 'd', 'e']})
@@ -5009,6 +5025,8 @@ def bfill(self, axis=None, inplace=False, limit=None, downcast=None):
3 3 8 d
4 4 9 e
+ **List-like `to_replace`**
+
>>> df.replace([0, 1, 2, 3], 4)
A B C
0 4 5 a
@@ -5016,6 +5034,7 @@ def bfill(self, axis=None, inplace=False, limit=None, downcast=None):
2 4 7 c
3 4 8 d
4 4 9 e
+
>>> df.replace([0, 1, 2, 3], [4, 3, 2, 1])
A B C
0 4 5 a
@@ -5023,6 +5042,7 @@ def bfill(self, axis=None, inplace=False, limit=None, downcast=None):
2 2 7 c
3 1 8 d
4 4 9 e
+
>>> s.replace([1, 2], method='bfill')
0 0
1 3
@@ -5031,6 +5051,8 @@ def bfill(self, axis=None, inplace=False, limit=None, downcast=None):
4 4
dtype: int64
+ **dict-like `to_replace`**
+
>>> df.replace({0: 10, 1: 100})
A B C
0 10 5 a
@@ -5038,6 +5060,7 @@ def bfill(self, axis=None, inplace=False, limit=None, downcast=None):
2 2 7 c
3 3 8 d
4 4 9 e
+
>>> df.replace({'A': 0, 'B': 5}, 100)
A B C
0 100 100 a
@@ -5045,6 +5068,7 @@ def bfill(self, axis=None, inplace=False, limit=None, downcast=None):
2 2 7 c
3 3 8 d
4 4 9 e
+
>>> df.replace({'A': {0: 100, 4: 400}})
A B C
0 100 5 a
@@ -5053,6 +5077,8 @@ def bfill(self, axis=None, inplace=False, limit=None, downcast=None):
3 3 8 d
4 400 9 e
+ **Regular expression `to_replace`**
+
>>> df = pd.DataFrame({'A': ['bat', 'foo', 'bait'],
... 'B': ['abc', 'bar', 'xyz']})
>>> df.replace(to_replace=r'^ba.$', value='new', regex=True)
@@ -5060,21 +5086,25 @@ def bfill(self, axis=None, inplace=False, limit=None, downcast=None):
0 new abc
1 foo new
2 bait xyz
+
>>> df.replace({'A': r'^ba.$'}, {'A': 'new'}, regex=True)
A B
0 new abc
1 foo bar
2 bait xyz
+
>>> df.replace(regex=r'^ba.$', value='new')
A B
0 new abc
1 foo new
2 bait xyz
+
>>> df.replace(regex={r'^ba.$':'new', 'foo':'xyz'})
A B
0 new abc
1 xyz new
2 bait xyz
+
>>> df.replace(regex=[r'^ba.$', 'foo'], value='new')
A B
0 new abc
@@ -5082,16 +5112,52 @@ def bfill(self, axis=None, inplace=False, limit=None, downcast=None):
2 bait xyz
Note that when replacing multiple ``bool`` or ``datetime64`` objects,
- the data types in the ``to_replace`` parameter must match the data
+ the data types in the `to_replace` parameter must match the data
type of the value being replaced:
>>> df = pd.DataFrame({'A': [True, False, True],
... 'B': [False, True, False]})
>>> df.replace({'a string': 'new value', True: False}) # raises
+ Traceback (most recent call last):
+ ...
TypeError: Cannot compare types 'ndarray(dtype=bool)' and 'str'
This raises a ``TypeError`` because one of the ``dict`` keys is not of
the correct type for replacement.
+
+ Compare the behavior of ``s.replace({'a': None})`` and
+ ``s.replace('a', None)`` to understand the peculiarities
+ of the `to_replace` parameter:
+
+ >>> s = pd.Series([10, 'a', 'a', 'b', 'a'])
+
+ When one uses a dict as the `to_replace` value, the
+ value(s) in the dict act as the `value` parameter.
+ ``s.replace({'a': None})`` is equivalent to
+ ``s.replace(to_replace={'a': None}, value=None, method=None)``:
+
+ >>> s.replace({'a': None})
+ 0 10
+ 1 None
+ 2 None
+ 3 b
+ 4 None
+ dtype: object
+
+ When ``value=None`` and `to_replace` is a scalar, list or
+ tuple, `replace` uses the method parameter (default 'pad') to do the
+ replacement. This is why the 'a' values are replaced by 10
+ in rows 1 and 2 and by 'b' in row 4 in this case.
+ The command ``s.replace('a', None)`` is actually equivalent to
+ ``s.replace(to_replace='a', value=None, method='pad')``:
+
+ >>> s.replace('a', None)
+ 0 10
+ 1 10
+ 2 10
+ 3 b
+ 4 b
+ dtype: object
""")
@Appender(_shared_docs['replace'] % _shared_doc_kwargs)
| - [x] PR title is "DOC: update the <your-function-or-method> docstring"
- [ ] The validation script passes: `scripts/validate_docstrings.py <your-function-or-method>`
- [x] The PEP8 style check passes: `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] The html version looks good: `python doc/make.py --single <your-function-or-method>`
- [ ] It has been proofread on language by another sprint participant
**Note:** Just did a minor improvement, not a full change!
Still a few verification errors:
- Errors in parameters section
- Parameter "to_replace" description should start with capital letter
- Parameter "axis" description should finish with "."
- Examples do not pass tests
```
################################################################################
##################### Docstring (pandas.DataFrame.replace) #####################
################################################################################
Replace values given in 'to_replace' with 'value'.
Values of the DataFrame or a Series are being replaced with
other values. One or several values can be replaced with one
or several values.
Parameters
----------
to_replace : str, regex, list, dict, Series, numeric, or None
* numeric, str or regex:
- numeric: numeric values equal to ``to_replace`` will be
replaced with ``value``
- str: string exactly matching ``to_replace`` will be replaced
with ``value``
- regex: regexs matching ``to_replace`` will be replaced with
``value``
* list of str, regex, or numeric:
- First, if ``to_replace`` and ``value`` are both lists, they
**must** be the same length.
- Second, if ``regex=True`` then all of the strings in **both**
lists will be interpreted as regexs otherwise they will match
directly. This doesn't matter much for ``value`` since there
are only a few possible substitution regexes you can use.
- str, regex and numeric rules apply as above.
* dict:
- Dicts can be used to specify different replacement values
for different existing values. For example,
{'a': 'b', 'y': 'z'} replaces the value 'a' with 'b' and
'y' with 'z'. To use a dict in this way the ``value``
parameter should be ``None``.
- For a DataFrame a dict can specify that different values
should be replaced in different columns. For example,
{'a': 1, 'b': 'z'} looks for the value 1 in column 'a' and
the value 'z' in column 'b' and replaces these values with
whatever is specified in ``value``. The ``value`` parameter
should not be ``None`` in this case. You can treat this as a
special case of passing two lists except that you are
specifying the column to search in.
- For a DataFrame nested dictionaries, e.g.,
{'a': {'b': np.nan}}, are read as follows: look in column 'a'
for the value 'b' and replace it with NaN. The ``value``
parameter should be ``None`` to use a nested dict in this
way. You can nest regular expressions as well. Note that
column names (the top-level dictionary keys in a nested
dictionary) **cannot** be regular expressions.
* None:
- This means that the ``regex`` argument must be a string,
compiled regular expression, or list, dict, ndarray or Series
of such elements. If ``value`` is also ``None`` then this
**must** be a nested dictionary or ``Series``.
See the examples section for examples of each of these.
value : scalar, dict, list, str, regex, default None
Value to replace any values matching ``to_replace`` with.
For a DataFrame a dict of values can be used to specify which
value to use for each column (columns not in the dict will not be
filled). Regular expressions, strings and lists or dicts of such
objects are also allowed.
inplace : boolean, default False
If True, in place. Note: this will modify any
other views on this object (e.g. a column from a DataFrame).
Returns the caller if this is True.
limit : int, default None
Maximum size gap to forward or backward fill.
regex : bool or same types as ``to_replace``, default False
Whether to interpret ``to_replace`` and/or ``value`` as regular
expressions. If this is ``True`` then ``to_replace`` *must* be a
string. Alternatively, this could be a regular expression or a
list, dict, or array of regular expressions in which case
``to_replace`` must be ``None``.
method : string, optional, {'pad', 'ffill', 'bfill'}, default is 'pad'
The method to use when for replacement, when ``to_replace`` is a
scalar, list or tuple and ``value`` is None.
axis : None
Deprecated.
.. versionchanged:: 0.23.0
Added to DataFrame
See Also
--------
DataFrame.fillna : Fill NA/NaN values
DataFrame.where : Replace values based on boolean condition
Returns
-------
DataFrame
Some values have been substituted for new values.
Raises
------
AssertionError
* If ``regex`` is not a ``bool`` and ``to_replace`` is not
``None``.
TypeError
* If ``to_replace`` is a ``dict`` and ``value`` is not a ``list``,
``dict``, ``ndarray``, or ``Series``
* If ``to_replace`` is ``None`` and ``regex`` is not compilable
into a regular expression or is a list, dict, ndarray, or
Series.
* When replacing multiple ``bool`` or ``datetime64`` objects and
the arguments to ``to_replace`` does not match the type of the
value being replaced
ValueError
* If a ``list`` or an ``ndarray`` is passed to ``to_replace`` and
`value` but they are not the same length.
Notes
-----
* Regex substitution is performed under the hood with ``re.sub``. The
rules for substitution for ``re.sub`` are the same.
* Regular expressions will only substitute on strings, meaning you
cannot provide, for example, a regular expression matching floating
point numbers and expect the columns in your frame that have a
numeric dtype to be matched. However, if those floating point
numbers *are* strings, then you can do this.
* This method has *a lot* of options. You are encouraged to experiment
and play with this method to gain intuition about how it works.
Examples
--------
>>> s = pd.Series([0, 1, 2, 3, 4])
>>> s.replace(0, 5)
0 5
1 1
2 2
3 3
4 4
dtype: int64
>>> df = pd.DataFrame({'A': [0, 1, 2, 3, 4],
... 'B': [5, 6, 7, 8, 9],
... 'C': ['a', 'b', 'c', 'd', 'e']})
>>> df.replace(0, 5)
A B C
0 5 5 a
1 1 6 b
2 2 7 c
3 3 8 d
4 4 9 e
>>> df.replace([0, 1, 2, 3], 4)
A B C
0 4 5 a
1 4 6 b
2 4 7 c
3 4 8 d
4 4 9 e
>>> df.replace([0, 1, 2, 3], [4, 3, 2, 1])
A B C
0 4 5 a
1 3 6 b
2 2 7 c
3 1 8 d
4 4 9 e
>>> s.replace([1, 2], method='bfill')
0 0
1 3
2 3
3 3
4 4
dtype: int64
>>> df.replace({0: 10, 1: 100})
A B C
0 10 5 a
1 100 6 b
2 2 7 c
3 3 8 d
4 4 9 e
>>> df.replace({'A': 0, 'B': 5}, 100)
A B C
0 100 100 a
1 1 6 b
2 2 7 c
3 3 8 d
4 4 9 e
>>> df.replace({'A': {0: 100, 4: 400}})
A B C
0 100 5 a
1 1 6 b
2 2 7 c
3 3 8 d
4 400 9 e
>>> df = pd.DataFrame({'A': ['bat', 'foo', 'bait'],
... 'B': ['abc', 'bar', 'xyz']})
>>> df.replace(to_replace=r'^ba.$', value='new', regex=True)
A B
0 new abc
1 foo new
2 bait xyz
>>> df.replace({'A': r'^ba.$'}, {'A': 'new'}, regex=True)
A B
0 new abc
1 foo bar
2 bait xyz
>>> df.replace(regex=r'^ba.$', value='new')
A B
0 new abc
1 foo new
2 bait xyz
>>> df.replace(regex={r'^ba.$':'new', 'foo':'xyz'})
A B
0 new abc
1 xyz new
2 bait xyz
>>> df.replace(regex=[r'^ba.$', 'foo'], value='new')
A B
0 new abc
1 new new
2 bait xyz
Note that when replacing multiple ``bool`` or ``datetime64`` objects,
the data types in the ``to_replace`` parameter must match the data
type of the value being replaced:
>>> df = pd.DataFrame({'A': [True, False, True],
... 'B': [False, True, False]})
>>> df.replace({'a string': 'new value', True: False}) # raises
TypeError: Cannot compare types 'ndarray(dtype=bool)' and 'str'
This raises a ``TypeError`` because one of the ``dict`` keys is not of
the correct type for replacement.
Compare the behavior of
``s.replace('a', None)`` and ``s.replace({'a': None})`` to understand
the pecularities of the ``to_replace`` parameter.
``s.replace('a', None)`` is actually equivalent to
``s.replace(to_replace='a', value=None, method='pad')``,
because when ``value=None`` and ``to_replace`` is a scalar, list or
tuple, ``replace`` uses the method parameter to do the replacement.
So this is why the 'a' values are being replaced by 30 in rows 3 and 4
and 'b' in row 6 in this case. However, this behaviour does not occur
when you use a dict as the ``to_replace`` value. In this case, it is
like the value(s) in the dict are equal to the value parameter.
>>> s = pd.Series([10, 20, 30, 'a', 'a', 'b', 'a'])
>>> print(s)
0 10
1 20
2 30
3 a
4 a
5 b
6 a
dtype: object
>>> print(s.replace('a', None))
0 10
1 20
2 30
3 30
4 30
5 b
6 b
dtype: object
>>> print(s.replace({'a': None}))
0 10
1 20
2 30
3 None
4 None
5 b
6 None
dtype: object
################################################################################
################################## Validation ##################################
################################################################################
Errors found:
Errors in parameters section
Parameter "to_replace" description should start with capital letter
Parameter "axis" description should finish with "."
Examples do not pass tests
################################################################################
################################### Doctests ###################################
################################################################################
**********************************************************************
Line 229, in pandas.DataFrame.replace
Failed example:
df.replace({'a string': 'new value', True: False}) # raises
Exception raised:
Traceback (most recent call last):
File "C:\Users\thisi\AppData\Local\conda\conda\envs\pandas_dev\lib\doctest.py", line 1330, in __run
compileflags, 1), test.globs)
File "<doctest pandas.DataFrame.replace[17]>", line 1, in <module>
df.replace({'a string': 'new value', True: False}) # raises
File "C:\Users\thisi\Documents\GitHub\pandas\pandas\core\frame.py", line 3136, in replace
method=method, axis=axis)
File "C:\Users\thisi\Documents\GitHub\pandas\pandas\core\generic.py", line 5208, in replace
limit=limit, regex=regex)
File "C:\Users\thisi\Documents\GitHub\pandas\pandas\core\frame.py", line 3136, in replace
method=method, axis=axis)
File "C:\Users\thisi\Documents\GitHub\pandas\pandas\core\generic.py", line 5257, in replace
regex=regex)
File "C:\Users\thisi\Documents\GitHub\pandas\pandas\core\internals.py", line 3696, in replace_list
masks = [comp(s) for i, s in enumerate(src_list)]
File "C:\Users\thisi\Documents\GitHub\pandas\pandas\core\internals.py", line 3696, in <listcomp>
masks = [comp(s) for i, s in enumerate(src_list)]
File "C:\Users\thisi\Documents\GitHub\pandas\pandas\core\internals.py", line 3694, in comp
return _maybe_compare(values, getattr(s, 'asm8', s), operator.eq)
File "C:\Users\thisi\Documents\GitHub\pandas\pandas\core\internals.py", line 5122, in _maybe_compare
b=type_names[1]))
TypeError: Cannot compare types 'ndarray(dtype=bool)' and 'str'
```
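As a quick sanity check of the dict semantics the updated docstring describes, the snippet below (a sketch, assuming a reasonably recent pandas is installed) shows that a dict `to_replace` really substitutes the dict's values rather than falling back to the pad method:

```python
import pandas as pd

s = pd.Series([10, 'a', 'a', 'b', 'a'])

# With a dict, the keys are matched and the dict's values are used as
# the replacements, so 'a' genuinely becomes None here instead of
# triggering the method='pad' fallback that a scalar to_replace with
# value=None would use.
out = s.replace({'a': None})
print(out.tolist())
```

This mirrors the `s.replace({'a': None})` example added in the diff above.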
| https://api.github.com/repos/pandas-dev/pandas/pulls/20271 | 2018-03-11T03:15:21Z | 2018-04-22T14:22:17Z | 2018-04-22T14:22:17Z | 2018-04-22T14:22:30Z |