| title | diff | body | url | created_at | closed_at | merged_at | updated_at |
|---|---|---|---|---|---|---|---|
DOC: fix DataFrame.nlargest and DataFrame.nsmallest doctests | diff --git a/ci/code_checks.sh b/ci/code_checks.sh
index b829cbefe8f7a..04813cc31f603 100755
--- a/ci/code_checks.sh
+++ b/ci/code_checks.sh
@@ -123,7 +123,7 @@ if [[ -z "$CHECK" || "$CHECK" == "doctests" ]]; then
MSG='Doctests frame.py' ; echo $MSG
pytest -q --doctest-modules pandas/core/frame.py \
- -k"-axes -combine -itertuples -join -nlargest -nsmallest -nunique -pivot_table -quantile -query -reindex -reindex_axis -replace -round -set_index -stack -to_stata"
+ -k"-axes -combine -itertuples -join -nunique -pivot_table -quantile -query -reindex -reindex_axis -replace -round -set_index -stack -to_stata"
RET=$(($RET + $?)) ; echo $MSG "DONE"
MSG='Doctests series.py' ; echo $MSG
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index a58d34574d28d..9c85139fffcc4 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -4694,60 +4694,63 @@ def nlargest(self, n, columns, keep='first'):
Examples
--------
- >>> df = pd.DataFrame({'a': [1, 10, 8, 11, 8, 2],
- ... 'b': list('abdcef'),
- ... 'c': [1.0, 2.0, np.nan, 3.0, 4.0, 9.0]})
+ >>> df = pd.DataFrame({'population': [59000000, 65000000, 434000,
+ ... 434000, 434000, 337000, 11300,
+ ... 11300, 11300],
+ ... 'GDP': [1937894, 2583560 , 12011, 4520, 12128,
+ ... 17036, 182, 38, 311],
+ ... 'alpha-2': ["IT", "FR", "MT", "MV", "BN",
+ ... "IS", "NR", "TV", "AI"]},
+ ... index=["Italy", "France", "Malta",
+ ... "Maldives", "Brunei", "Iceland",
+ ... "Nauru", "Tuvalu", "Anguilla"])
>>> df
- a b c
- 0 1 a 1.0
- 1 10 b 2.0
- 2 8 d NaN
- 3 11 c 3.0
- 4 8 e 4.0
- 5 2 f 9.0
+ population GDP alpha-2
+ Italy 59000000 1937894 IT
+ France 65000000 2583560 FR
+ Malta 434000 12011 MT
+ Maldives 434000 4520 MV
+ Brunei 434000 12128 BN
+ Iceland 337000 17036 IS
+ Nauru 11300 182 NR
+ Tuvalu 11300 38 TV
+ Anguilla 11300 311 AI
In the following example, we will use ``nlargest`` to select the three
- rows having the largest values in column "a".
+ rows having the largest values in column "population".
- >>> df.nlargest(3, 'a')
- a b c
- 3 11 c 3.0
- 1 10 b 2.0
- 2 8 d NaN
+ >>> df.nlargest(3, 'population')
+ population GDP alpha-2
+ France 65000000 2583560 FR
+ Italy 59000000 1937894 IT
+ Malta 434000 12011 MT
When using ``keep='last'``, ties are resolved in reverse order:
- >>> df.nlargest(3, 'a', keep='last')
- a b c
- 3 11 c 3.0
- 1 10 b 2.0
- 4 8 e 4.0
+ >>> df.nlargest(3, 'population', keep='last')
+ population GDP alpha-2
+ France 65000000 2583560 FR
+ Italy 59000000 1937894 IT
+ Brunei 434000 12128 BN
When using ``keep='all'``, all duplicate items are maintained:
- >>> df.nlargest(3, 'a', keep='all')
- a b c
- 3 11 c 3.0
- 1 10 b 2.0
- 2 8 d NaN
- 4 8 e 4.0
+ >>> df.nlargest(3, 'population', keep='all')
+ population GDP alpha-2
+ France 65000000 2583560 FR
+ Italy 59000000 1937894 IT
+ Malta 434000 12011 MT
+ Maldives 434000 4520 MV
+ Brunei 434000 12128 BN
- To order by the largest values in column "a" and then "c", we can
- specify multiple columns like in the next example.
-
- >>> df.nlargest(3, ['a', 'c'])
- a b c
- 4 8 e 4.0
- 3 11 c 3.0
- 1 10 b 2.0
-
- Attempting to use ``nlargest`` on non-numeric dtypes will raise a
- ``TypeError``:
-
- >>> df.nlargest(3, 'b')
+ To order by the largest values in column "population" and then "GDP",
+ we can specify multiple columns like in the next example.
- Traceback (most recent call last):
- TypeError: Column 'b' has dtype object, cannot use method 'nlargest'
+ >>> df.nlargest(3, ['population', 'GDP'])
+ population GDP alpha-2
+ France 65000000 2583560 FR
+ Italy 59000000 1937894 IT
+ Brunei 434000 12128 BN
"""
return algorithms.SelectNFrame(self,
n=n,
@@ -4755,15 +4758,23 @@ def nlargest(self, n, columns, keep='first'):
columns=columns).nlargest()
def nsmallest(self, n, columns, keep='first'):
- """Get the rows of a DataFrame sorted by the `n` smallest
- values of `columns`.
+ """
+ Return the first `n` rows ordered by `columns` in ascending order.
+
+ Return the first `n` rows with the smallest values in `columns`, in
+ ascending order. The columns that are not specified are returned as
+ well, but not used for ordering.
+
+ This method is equivalent to
+ ``df.sort_values(columns, ascending=True).head(n)``, but more
+ performant.
Parameters
----------
n : int
- Number of items to retrieve
+ Number of items to retrieve.
columns : list or str
- Column name or names to order by
+ Column name or names to order by.
keep : {'first', 'last', 'all'}, default 'first'
Where there are duplicate values:
@@ -4778,62 +4789,70 @@ def nsmallest(self, n, columns, keep='first'):
-------
DataFrame
+ See Also
+ --------
+ DataFrame.nlargest : Return the first `n` rows ordered by `columns` in
+ descending order.
+ DataFrame.sort_values : Sort DataFrame by the values.
+ DataFrame.head : Return the first `n` rows without re-ordering.
+
Examples
--------
- >>> df = pd.DataFrame({'a': [1, 10, 8, 11, 8, 2],
- ... 'b': list('abdcef'),
- ... 'c': [1.0, 2.0, np.nan, 3.0, 4.0, 9.0]})
+ >>> df = pd.DataFrame({'population': [59000000, 65000000, 434000,
+ ... 434000, 434000, 337000, 11300,
+ ... 11300, 11300],
+ ... 'GDP': [1937894, 2583560 , 12011, 4520, 12128,
+ ... 17036, 182, 38, 311],
+ ... 'alpha-2': ["IT", "FR", "MT", "MV", "BN",
+ ... "IS", "NR", "TV", "AI"]},
+ ... index=["Italy", "France", "Malta",
+ ... "Maldives", "Brunei", "Iceland",
+ ... "Nauru", "Tuvalu", "Anguilla"])
>>> df
- a b c
- 0 1 a 1.0
- 1 10 b 2.0
- 2 8 d NaN
- 3 11 c 3.0
- 4 8 e 4.0
- 5 2 f 9.0
+ population GDP alpha-2
+ Italy 59000000 1937894 IT
+ France 65000000 2583560 FR
+ Malta 434000 12011 MT
+ Maldives 434000 4520 MV
+ Brunei 434000 12128 BN
+ Iceland 337000 17036 IS
+ Nauru 11300 182 NR
+ Tuvalu 11300 38 TV
+ Anguilla 11300 311 AI
In the following example, we will use ``nsmallest`` to select the
three rows having the smallest values in column "a".
- >>> df.nsmallest(3, 'a')
- a b c
- 0 1 a 1.0
- 5 2 f 9.0
- 2 8 d NaN
+ >>> df.nsmallest(3, 'population')
+ population GDP alpha-2
+ Nauru 11300 182 NR
+ Tuvalu 11300 38 TV
+ Anguilla 11300 311 AI
When using ``keep='last'``, ties are resolved in reverse order:
- >>> df.nsmallest(3, 'a', keep='last')
- a b c
- 0 1 a 1.0
- 5 2 f 9.0
- 4 8 e 4.0
+ >>> df.nsmallest(3, 'population', keep='last')
+ population GDP alpha-2
+ Anguilla 11300 311 AI
+ Tuvalu 11300 38 TV
+ Nauru 11300 182 NR
When using ``keep='all'``, all duplicate items are maintained:
- >>> df.nsmallest(3, 'a', keep='all')
- a b c
- 0 1 a 1.0
- 5 2 f 9.0
- 2 8 d NaN
- 4 8 e 4.0
+ >>> df.nsmallest(3, 'population', keep='all')
+ population GDP alpha-2
+ Nauru 11300 182 NR
+ Tuvalu 11300 38 TV
+ Anguilla 11300 311 AI
To order by the largest values in column "a" and then "c", we can
specify multiple columns like in the next example.
- >>> df.nsmallest(3, ['a', 'c'])
- a b c
- 0 1 a 1.0
- 5 2 f 9.0
- 4 8 e 4.0
-
- Attempting to use ``nsmallest`` on non-numeric dtypes will raise a
- ``TypeError``:
-
- >>> df.nsmallest(3, 'b')
-
- Traceback (most recent call last):
- TypeError: Column 'b' has dtype object, cannot use method 'nsmallest'
+ >>> df.nsmallest(3, ['population', 'GDP'])
+ population GDP alpha-2
+ Tuvalu 11300 38 TV
+ Nauru 11300 182 NR
+ Anguilla 11300 311 AI
"""
return algorithms.SelectNFrame(self,
n=n,
| - [x] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
Based on #22459. Fix the docstrings for `DataFrame.nlargest` and `DataFrame.nsmallest` (since they are quite similar). I also updated `ci/code_checks.sh` and extended the existing doctests of `Series.nlargest` and `Series.nsmallest`. | https://api.github.com/repos/pandas-dev/pandas/pulls/23202 | 2018-10-17T14:10:07Z | 2018-11-05T08:15:06Z | 2018-11-05T08:15:06Z | 2018-11-05T21:16:17Z |
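The `keep='all'` behavior documented in the diff above is easy to exercise directly. A minimal sketch, with the country data abbreviated to the five rows needed to show the tie (variable names are mine):

```python
import pandas as pd

df = pd.DataFrame(
    {"population": [59000000, 65000000, 434000, 434000, 434000],
     "GDP": [1937894, 2583560, 12011, 4520, 12128]},
    index=["Italy", "France", "Malta", "Maldives", "Brunei"],
)

# keep='all' retains every row tied with the n-th largest value,
# so the three-way tie at 434000 expands the 3-row result to 5 rows
top = df.nlargest(3, "population", keep="all")
```

With `keep='first'` the same call would return exactly three rows, resolving the tie in order of appearance.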
DOC: Fix pandas.Series.resample docstring | diff --git a/ci/code_checks.sh b/ci/code_checks.sh
index c4b483a794c21..eba96f0c6c2fc 100755
--- a/ci/code_checks.sh
+++ b/ci/code_checks.sh
@@ -151,7 +151,7 @@ if [[ -z "$CHECK" || "$CHECK" == "doctests" ]]; then
MSG='Doctests generic.py' ; echo $MSG
pytest -q --doctest-modules pandas/core/generic.py \
- -k"-_set_axis_name -_xs -describe -droplevel -groupby -interpolate -pct_change -pipe -reindex -reindex_axis -resample -to_json -transpose -values -xs"
+ -k"-_set_axis_name -_xs -describe -droplevel -groupby -interpolate -pct_change -pipe -reindex -reindex_axis -to_json -transpose -values -xs"
RET=$(($RET + $?)) ; echo $MSG "DONE"
MSG='Doctests top-level reshaping functions' ; echo $MSG
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 53cdc46fdd16b..cfdc6b34274bf 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -7500,46 +7500,67 @@ def resample(self, rule, how=None, axis=0, fill_method=None, closed=None,
label=None, convention='start', kind=None, loffset=None,
limit=None, base=0, on=None, level=None):
"""
+ Resample time-series data.
+
Convenience method for frequency conversion and resampling of time
- series. Object must have a datetime-like index (DatetimeIndex,
- PeriodIndex, or TimedeltaIndex), or pass datetime-like values
- to the on or level keyword.
+ series. Object must have a datetime-like index (`DatetimeIndex`,
+ `PeriodIndex`, or `TimedeltaIndex`), or pass datetime-like values
+ to the `on` or `level` keyword.
Parameters
----------
- rule : string
- the offset string or object representing target conversion
- axis : int, optional, default 0
- closed : {'right', 'left'}
+ rule : str
+ The offset string or object representing target conversion.
+ how : str
+ Method for down/re-sampling, default to 'mean' for downsampling.
+
+ .. deprecated:: 0.18.0
+ The new syntax is ``.resample(...).mean()``, or
+ ``.resample(...).apply(<func>)``
+ axis : {0 or 'index', 1 or 'columns'}, default 0
+ Which axis to use for up- or down-sampling. For `Series` this
+ will default to 0, i.e. along the rows. Must be
+ `DatetimeIndex`, `TimedeltaIndex` or `PeriodIndex`.
+ fill_method : str, default None
+ Filling method for upsampling.
+
+ .. deprecated:: 0.18.0
+ The new syntax is ``.resample(...).<func>()``,
+ e.g. ``.resample(...).pad()``
+ closed : {'right', 'left'}, default None
Which side of bin interval is closed. The default is 'left'
for all frequency offsets except for 'M', 'A', 'Q', 'BM',
'BA', 'BQ', and 'W' which all have a default of 'right'.
- label : {'right', 'left'}
+ label : {'right', 'left'}, default None
Which bin edge label to label bucket with. The default is 'left'
for all frequency offsets except for 'M', 'A', 'Q', 'BM',
'BA', 'BQ', and 'W' which all have a default of 'right'.
- convention : {'start', 'end', 's', 'e'}
- For PeriodIndex only, controls whether to use the start or end of
- `rule`
- kind: {'timestamp', 'period'}, optional
+ convention : {'start', 'end', 's', 'e'}, default 'start'
+ For `PeriodIndex` only, controls whether to use the start or
+ end of `rule`.
+ kind : {'timestamp', 'period'}, optional, default None
Pass 'timestamp' to convert the resulting index to a
- ``DateTimeIndex`` or 'period' to convert it to a ``PeriodIndex``.
+ `DateTimeIndex` or 'period' to convert it to a `PeriodIndex`.
By default the input representation is retained.
- loffset : timedelta
- Adjust the resampled time labels
+ loffset : timedelta, default None
+ Adjust the resampled time labels.
+ limit : int, default None
+ Maximum size gap when reindexing with `fill_method`.
+
+ .. deprecated:: 0.18.0
base : int, default 0
For frequencies that evenly subdivide 1 day, the "origin" of the
aggregated intervals. For example, for '5min' frequency, base could
- range from 0 through 4. Defaults to 0
- on : string, optional
+ range from 0 through 4. Defaults to 0.
+ on : str, optional
For a DataFrame, column to use instead of index for resampling.
Column must be datetime-like.
.. versionadded:: 0.19.0
- level : string or int, optional
+ level : str or int, optional
For a MultiIndex, level (name or number) to use for
- resampling. Level must be datetime-like.
+ resampling. `level` must be datetime-like.
.. versionadded:: 0.19.0
@@ -7556,6 +7577,12 @@ def resample(self, rule, how=None, axis=0, fill_method=None, closed=None,
To learn more about the offset strings, please see `this link
<http://pandas.pydata.org/pandas-docs/stable/timeseries.html#offset-aliases>`__.
+ See Also
+ --------
+ groupby : Group by mapping, function, label, or list of labels.
+ Series.resample : Resample a Series.
+ DataFrame.resample: Resample a DataFrame.
+
Examples
--------
@@ -7612,7 +7639,7 @@ def resample(self, rule, how=None, axis=0, fill_method=None, closed=None,
Upsample the series into 30 second bins.
- >>> series.resample('30S').asfreq()[0:5] #select first 5 rows
+ >>> series.resample('30S').asfreq()[0:5] # Select first 5 rows
2000-01-01 00:00:00 0.0
2000-01-01 00:00:30 NaN
2000-01-01 00:01:00 1.0
@@ -7645,8 +7672,8 @@ def resample(self, rule, how=None, axis=0, fill_method=None, closed=None,
Pass a custom function via ``apply``
>>> def custom_resampler(array_like):
- ... return np.sum(array_like)+5
-
+ ... return np.sum(array_like) + 5
+ ...
>>> series.resample('3T').apply(custom_resampler)
2000-01-01 00:00:00 8
2000-01-01 00:03:00 17
@@ -7656,73 +7683,106 @@ def resample(self, rule, how=None, axis=0, fill_method=None, closed=None,
For a Series with a PeriodIndex, the keyword `convention` can be
used to control whether to use the start or end of `rule`.
+ Resample a year by quarter using 'start' `convention`. Values are
+ assigned to the first quarter of the period.
+
>>> s = pd.Series([1, 2], index=pd.period_range('2012-01-01',
- freq='A',
- periods=2))
+ ... freq='A',
+ ... periods=2))
>>> s
2012 1
2013 2
Freq: A-DEC, dtype: int64
-
- Resample by month using 'start' `convention`. Values are assigned to
- the first month of the period.
-
- >>> s.resample('M', convention='start').asfreq().head()
- 2012-01 1.0
- 2012-02 NaN
- 2012-03 NaN
- 2012-04 NaN
- 2012-05 NaN
- Freq: M, dtype: float64
-
- Resample by month using 'end' `convention`. Values are assigned to
- the last month of the period.
-
- >>> s.resample('M', convention='end').asfreq()
- 2012-12 1.0
- 2013-01 NaN
- 2013-02 NaN
- 2013-03 NaN
- 2013-04 NaN
- 2013-05 NaN
- 2013-06 NaN
- 2013-07 NaN
- 2013-08 NaN
- 2013-09 NaN
- 2013-10 NaN
- 2013-11 NaN
- 2013-12 2.0
+ >>> s.resample('Q', convention='start').asfreq()
+ 2012Q1 1.0
+ 2012Q2 NaN
+ 2012Q3 NaN
+ 2012Q4 NaN
+ 2013Q1 2.0
+ 2013Q2 NaN
+ 2013Q3 NaN
+ 2013Q4 NaN
+ Freq: Q-DEC, dtype: float64
+
+ Resample quarters by month using 'end' `convention`. Values are
+ assigned to the last month of the period.
+
+ >>> q = pd.Series([1, 2, 3, 4], index=pd.period_range('2018-01-01',
+ ... freq='Q',
+ ... periods=4))
+ >>> q
+ 2018Q1 1
+ 2018Q2 2
+ 2018Q3 3
+ 2018Q4 4
+ Freq: Q-DEC, dtype: int64
+ >>> q.resample('M', convention='end').asfreq()
+ 2018-03 1.0
+ 2018-04 NaN
+ 2018-05 NaN
+ 2018-06 2.0
+ 2018-07 NaN
+ 2018-08 NaN
+ 2018-09 3.0
+ 2018-10 NaN
+ 2018-11 NaN
+ 2018-12 4.0
Freq: M, dtype: float64
- For DataFrame objects, the keyword ``on`` can be used to specify the
+ For DataFrame objects, the keyword `on` can be used to specify the
column instead of the index for resampling.
- >>> df = pd.DataFrame(data=9*[range(4)], columns=['a', 'b', 'c', 'd'])
- >>> df['time'] = pd.date_range('1/1/2000', periods=9, freq='T')
- >>> df.resample('3T', on='time').sum()
- a b c d
- time
- 2000-01-01 00:00:00 0 3 6 9
- 2000-01-01 00:03:00 0 3 6 9
- 2000-01-01 00:06:00 0 3 6 9
-
- For a DataFrame with MultiIndex, the keyword ``level`` can be used to
- specify on level the resampling needs to take place.
-
- >>> time = pd.date_range('1/1/2000', periods=5, freq='T')
- >>> df2 = pd.DataFrame(data=10*[range(4)],
- columns=['a', 'b', 'c', 'd'],
- index=pd.MultiIndex.from_product([time, [1, 2]])
- )
- >>> df2.resample('3T', level=0).sum()
- a b c d
- 2000-01-01 00:00:00 0 6 12 18
- 2000-01-01 00:03:00 0 4 8 12
-
- See also
- --------
- groupby : Group by mapping, function, label, or list of labels.
+ >>> d = dict({'price': [10, 11, 9, 13, 14, 18, 17, 19],
+ ... 'volume': [50, 60, 40, 100, 50, 100, 40, 50]})
+ >>> df = pd.DataFrame(d)
+ >>> df['week_starting'] = pd.date_range('01/01/2018',
+ ... periods=8,
+ ... freq='W')
+ >>> df
+ price volume week_starting
+ 0 10 50 2018-01-07
+ 1 11 60 2018-01-14
+ 2 9 40 2018-01-21
+ 3 13 100 2018-01-28
+ 4 14 50 2018-02-04
+ 5 18 100 2018-02-11
+ 6 17 40 2018-02-18
+ 7 19 50 2018-02-25
+ >>> df.resample('M', on='week_starting').mean()
+ price volume
+ week_starting
+ 2018-01-31 10.75 62.5
+ 2018-02-28 17.00 60.0
+
+ For a DataFrame with MultiIndex, the keyword `level` can be used to
+ specify on which level the resampling needs to take place.
+
+ >>> days = pd.date_range('1/1/2000', periods=4, freq='D')
+ >>> d2 = dict({'price': [10, 11, 9, 13, 14, 18, 17, 19],
+ ... 'volume': [50, 60, 40, 100, 50, 100, 40, 50]})
+ >>> df2 = pd.DataFrame(d2,
+ ... index=pd.MultiIndex.from_product([days,
+ ... ['morning',
+ ... 'afternoon']]
+ ... ))
+ >>> df2
+ price volume
+ 2000-01-01 morning 10 50
+ afternoon 11 60
+ 2000-01-02 morning 9 40
+ afternoon 13 100
+ 2000-01-03 morning 14 50
+ afternoon 18 100
+ 2000-01-04 morning 17 40
+ afternoon 19 50
+ >>> df2.resample('D', level=0).sum()
+ price volume
+ 2000-01-01 21 110
+ 2000-01-02 22 140
+ 2000-01-03 32 150
+ 2000-01-04 36 90
"""
+
from pandas.core.resample import (resample,
_maybe_process_deprecations)
axis = self._get_axis_number(axis)
| - [x] closes #22894
- [x] tests added / passed:
- [x] `./scripts/validate_docstrings.py pandas.Series.resample`
- [x] `flake8 --doctests pandas/core/generic.py`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry --> Needed?
- Followed pandas docstring [conventions](https://pandas.pydata.org/pandas-docs/stable/contributing_docstring.html) and general reST.
- Added deprecation directives on some parameters. I had some doubts, but after investigation confirmed that the recommended API for these parameters changed after v0.18.0.
Please let me know if I can further improve this.
Thanks! | https://api.github.com/repos/pandas-dev/pandas/pulls/23197 | 2018-10-17T11:46:16Z | 2018-11-10T21:47:24Z | 2018-11-10T21:47:24Z | 2018-11-10T21:47:25Z |
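As a quick check of the downsampling behavior the revised docstring describes, here is a minimal sketch (variable names are mine; the bin edges assume the default `closed='left'` for minute frequencies):

```python
import pandas as pd

idx = pd.date_range("2000-01-01", periods=9, freq="min")
series = pd.Series(range(9), index=idx)

# downsample into 3-minute bins; each bin sums three consecutive values:
# [0, 1, 2] -> 3, [3, 4, 5] -> 12, [6, 7, 8] -> 21
out = series.resample("3min").sum()
```

Upsampling works the other way around: `series.resample("30s").asfreq()` would introduce intermediate timestamps filled with `NaN`, as in the doctest above.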
CLN: Merge_asof null error message | diff --git a/pandas/core/reshape/merge.py b/pandas/core/reshape/merge.py
index 88b1ec7e47bbb..ed9466795f97f 100644
--- a/pandas/core/reshape/merge.py
+++ b/pandas/core/reshape/merge.py
@@ -1390,18 +1390,18 @@ def flip(xs):
self.right_join_keys[-1])
tolerance = self.tolerance
- # we required sortedness and non-missingness in the join keys
+ # we require sortedness and non-null values in the join keys
msg_sorted = "{side} keys must be sorted"
msg_missings = "Merge keys contain null values on {side} side"
if not Index(left_values).is_monotonic:
- if isnull(left_values).sum() > 0:
+ if isnull(left_values).any():
raise ValueError(msg_missings.format(side='left'))
else:
raise ValueError(msg_sorted.format(side='left'))
if not Index(right_values).is_monotonic:
- if isnull(right_values).sum() > 0:
+ if isnull(right_values).any():
raise ValueError(msg_missings.format(side='right'))
else:
raise ValueError(msg_sorted.format(side='right'))
diff --git a/pandas/tests/reshape/merge/test_merge_asof.py b/pandas/tests/reshape/merge/test_merge_asof.py
index ba0cdda61a12c..cf39293f47082 100644
--- a/pandas/tests/reshape/merge/test_merge_asof.py
+++ b/pandas/tests/reshape/merge/test_merge_asof.py
@@ -1008,24 +1008,19 @@ def test_merge_datatype_error(self):
with tm.assert_raises_regex(MergeError, msg):
merge_asof(left, right, on='a')
- def test_merge_on_nans_int(self):
- # 23189
- msg = "Merge keys contain null values on left side"
- left = pd.DataFrame({'a': [1.0, 5.0, 10.0, 12.0, np.nan],
- 'left_val': ['a', 'b', 'c', 'd', 'e']})
- right = pd.DataFrame({'a': [1.0, 5.0, 10.0, 12.0],
- 'right_val': [1, 6, 11, 15]})
-
- with tm.assert_raises_regex(ValueError, msg):
- merge_asof(left, right, on='a')
-
- def test_merge_on_nans_datetime(self):
- # 23189
- msg = "Merge keys contain null values on right side"
- left = pd.DataFrame({"a": pd.date_range('20130101', periods=5)})
- date_vals = pd.date_range('20130102', periods=5)\
- .append(pd.Index([None]))
- right = pd.DataFrame({"a": date_vals})
+ @pytest.mark.parametrize('func', [lambda x: x, lambda x: to_datetime(x)],
+ ids=['numeric', 'datetime'])
+ @pytest.mark.parametrize('side', ['left', 'right'])
+ def test_merge_on_nans(self, func, side):
+ # GH 23189
+ msg = "Merge keys contain null values on {} side".format(side)
+ nulls = func([1.0, 5.0, np.nan])
+ non_nulls = func([1.0, 5.0, 10.])
+ df_null = pd.DataFrame({'a': nulls, 'left_val': ['a', 'b', 'c']})
+ df = pd.DataFrame({'a': non_nulls, 'right_val': [1, 6, 11]})
with tm.assert_raises_regex(ValueError, msg):
- merge_asof(left, right, on='a')
+ if side == 'left':
+ merge_asof(df_null, df, on='a')
+ else:
+ merge_asof(df, df_null, on='a')
| - [x] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
Post #23190 cleanup based on a review I wanted to give:
* `any()` instead of `sum() > 0`
* pytest parametrize
cc @jreback | https://api.github.com/repos/pandas-dev/pandas/pulls/23195 | 2018-10-17T07:07:22Z | 2018-10-17T12:24:34Z | 2018-10-17T12:24:34Z | 2018-10-17T16:08:27Z |
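The `any()`-for-`sum() > 0` swap in the PR above is behavior-preserving; a small sketch of the equivalence (using the public `isna`, of which `isnull` is an alias):

```python
import numpy as np
from pandas import isna  # `isnull` is an alias of `isna`

vals = np.array([1.0, 5.0, np.nan])

# both expressions flag the presence of nulls, but any() reads as a
# boolean test rather than a count compared against zero, and it can
# short-circuit on the first null it finds
assert isna(vals).any()
assert isna(vals).sum() > 0
```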
CLN import from pandas.core.arrays when possible | diff --git a/pandas/core/arrays/datetimes.py b/pandas/core/arrays/datetimes.py
index 4cc33d7afd6c8..0f07a9cf3c0e0 100644
--- a/pandas/core/arrays/datetimes.py
+++ b/pandas/core/arrays/datetimes.py
@@ -508,7 +508,7 @@ def _add_delta(self, delta):
The result's name is set outside of _add_delta by the calling
method (__add__ or __sub__)
"""
- from pandas.core.arrays.timedeltas import TimedeltaArrayMixin
+ from pandas.core.arrays import TimedeltaArrayMixin
if isinstance(delta, (Tick, timedelta, np.timedelta64)):
new_values = self._add_delta_td(delta)
@@ -803,7 +803,7 @@ def to_period(self, freq=None):
pandas.PeriodIndex: Immutable ndarray holding ordinal values
pandas.DatetimeIndex.to_pydatetime: Return DatetimeIndex as object
"""
- from pandas.core.arrays.period import PeriodArrayMixin
+ from pandas.core.arrays import PeriodArrayMixin
if self.tz is not None:
warnings.warn("Converting to PeriodArray/Index representation "
diff --git a/pandas/core/arrays/period.py b/pandas/core/arrays/period.py
index 8624ddd8965e8..16fa9ccb43b4d 100644
--- a/pandas/core/arrays/period.py
+++ b/pandas/core/arrays/period.py
@@ -302,7 +302,7 @@ def to_timestamp(self, freq=None, how='start'):
-------
DatetimeArray/Index
"""
- from pandas.core.arrays.datetimes import DatetimeArrayMixin
+ from pandas.core.arrays import DatetimeArrayMixin
how = libperiod._validate_end_alias(how)
diff --git a/pandas/core/groupby/generic.py b/pandas/core/groupby/generic.py
index 957f3be8cf6ae..63bf67854e5cd 100644
--- a/pandas/core/groupby/generic.py
+++ b/pandas/core/groupby/generic.py
@@ -44,7 +44,7 @@
from pandas.core.dtypes.cast import maybe_downcast_to_dtype
from pandas.core.base import SpecificationError, DataError
from pandas.core.index import Index, MultiIndex, CategoricalIndex
-from pandas.core.arrays.categorical import Categorical
+from pandas.core.arrays import Categorical
from pandas.core.internals import BlockManager, make_block
from pandas.compat.numpy import _np_version_under1p13
diff --git a/pandas/tests/arrays/test_datetimelike.py b/pandas/tests/arrays/test_datetimelike.py
index 6bb4241451b3f..bfce5fb1462d9 100644
--- a/pandas/tests/arrays/test_datetimelike.py
+++ b/pandas/tests/arrays/test_datetimelike.py
@@ -5,9 +5,9 @@
import pandas as pd
import pandas.util.testing as tm
-from pandas.core.arrays.datetimes import DatetimeArrayMixin
-from pandas.core.arrays.timedeltas import TimedeltaArrayMixin
-from pandas.core.arrays.period import PeriodArrayMixin
+from pandas.core.arrays import (DatetimeArrayMixin,
+ TimedeltaArrayMixin,
+ PeriodArrayMixin)
# TODO: more freq variants
| - [ ] closes #xxxx
- [X] tests added / passed
- [X] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
`pandas.core.arrays.__init__.py` imports some classes by default, so other classes can import directly from `pandas.core.arrays` instead of accessing the specific files | https://api.github.com/repos/pandas-dev/pandas/pulls/23193 | 2018-10-17T01:31:43Z | 2018-10-17T11:06:10Z | 2018-10-17T11:06:10Z | 2018-10-17T11:06:10Z |
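The equivalence this cleanup relies on — the package `__init__` re-exporting the concrete classes — can be checked directly. This is internal API, so the sketch below is an illustration, not a stability guarantee:

```python
# the package __init__ re-exports the class defined in the submodule,
# so both import paths resolve to the exact same object
from pandas.core.arrays import Categorical as from_package
from pandas.core.arrays.categorical import Categorical as from_module

assert from_package is from_module
```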
Merge asof with nans | diff --git a/doc/source/whatsnew/v0.24.0.txt b/doc/source/whatsnew/v0.24.0.txt
index 3053625721560..16f0b9ee99909 100644
--- a/doc/source/whatsnew/v0.24.0.txt
+++ b/doc/source/whatsnew/v0.24.0.txt
@@ -972,6 +972,7 @@ Reshaping
- Bug in :func:`merge` when merging ``datetime64[ns, tz]`` data that contained a DST transition (:issue:`18885`)
- Bug in :func:`merge_asof` when merging on float values within defined tolerance (:issue:`22981`)
- Bug in :func:`pandas.concat` when concatenating a multicolumn DataFrame with tz-aware data against a DataFrame with a different number of columns (:issue`22796`)
+- Bug in :func:`merge_asof` where confusing error message raised when attempting to merge with missing values (:issue:`23189`)
.. _whatsnew_0240.bug_fixes.sparse:
diff --git a/pandas/core/reshape/merge.py b/pandas/core/reshape/merge.py
index d0c7b66978661..88b1ec7e47bbb 100644
--- a/pandas/core/reshape/merge.py
+++ b/pandas/core/reshape/merge.py
@@ -36,7 +36,7 @@
ensure_float64,
ensure_object,
_get_dtype)
-from pandas.core.dtypes.missing import na_value_for_dtype
+from pandas.core.dtypes.missing import na_value_for_dtype, isnull
from pandas.core.internals import (items_overlap_with_suffix,
concatenate_block_managers)
from pandas.util._decorators import Appender, Substitution
@@ -1390,12 +1390,21 @@ def flip(xs):
self.right_join_keys[-1])
tolerance = self.tolerance
- # we required sortedness in the join keys
- msg = "{side} keys must be sorted"
+ # we required sortedness and non-missingness in the join keys
+ msg_sorted = "{side} keys must be sorted"
+ msg_missings = "Merge keys contain null values on {side} side"
+
if not Index(left_values).is_monotonic:
- raise ValueError(msg.format(side='left'))
+ if isnull(left_values).sum() > 0:
+ raise ValueError(msg_missings.format(side='left'))
+ else:
+ raise ValueError(msg_sorted.format(side='left'))
+
if not Index(right_values).is_monotonic:
- raise ValueError(msg.format(side='right'))
+ if isnull(right_values).sum() > 0:
+ raise ValueError(msg_missings.format(side='right'))
+ else:
+ raise ValueError(msg_sorted.format(side='right'))
# initial type conversion as needed
if needs_i8_conversion(left_values):
diff --git a/pandas/tests/reshape/merge/test_merge_asof.py b/pandas/tests/reshape/merge/test_merge_asof.py
index c75a6a707cafc..ba0cdda61a12c 100644
--- a/pandas/tests/reshape/merge/test_merge_asof.py
+++ b/pandas/tests/reshape/merge/test_merge_asof.py
@@ -1007,3 +1007,25 @@ def test_merge_datatype_error(self):
with tm.assert_raises_regex(MergeError, msg):
merge_asof(left, right, on='a')
+
+ def test_merge_on_nans_int(self):
+ # 23189
+ msg = "Merge keys contain null values on left side"
+ left = pd.DataFrame({'a': [1.0, 5.0, 10.0, 12.0, np.nan],
+ 'left_val': ['a', 'b', 'c', 'd', 'e']})
+ right = pd.DataFrame({'a': [1.0, 5.0, 10.0, 12.0],
+ 'right_val': [1, 6, 11, 15]})
+
+ with tm.assert_raises_regex(ValueError, msg):
+ merge_asof(left, right, on='a')
+
+ def test_merge_on_nans_datetime(self):
+ # 23189
+ msg = "Merge keys contain null values on right side"
+ left = pd.DataFrame({"a": pd.date_range('20130101', periods=5)})
+ date_vals = pd.date_range('20130102', periods=5)\
+ .append(pd.Index([None]))
+ right = pd.DataFrame({"a": date_vals})
+
+ with tm.assert_raises_regex(ValueError, msg):
+ merge_asof(left, right, on='a')
| Raise a more meaningful error message when attempting a `merge_asof` with missing values in the merge keys.
- [ ] closes #23189
- [ ] tests added / passed
- [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/23190 | 2018-10-16T20:05:17Z | 2018-10-17T01:15:06Z | 2018-10-17T01:15:06Z | 2018-10-17T06:02:08Z |
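A sketch of the behavior the fix targets: with a null in the left keys, the keys are non-monotonic, and the error message now names the nulls instead of the misleading "left keys must be sorted" (variable names are mine):

```python
import numpy as np
import pandas as pd

left = pd.DataFrame({"a": [1.0, 5.0, np.nan], "left_val": ["a", "b", "c"]})
right = pd.DataFrame({"a": [1.0, 5.0, 10.0], "right_val": [1, 6, 11]})

# NaN makes the left join keys fail the monotonicity check; after this
# fix the ValueError points at the null values rather than sortedness
try:
    pd.merge_asof(left, right, on="a")
    raised = False
    message = ""
except ValueError as err:
    raised = True
    message = str(err)
```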
BUG: sets in str.cat | diff --git a/doc/source/whatsnew/v0.24.0.txt b/doc/source/whatsnew/v0.24.0.txt
index d0aa156cf5059..3f3a01cf6e850 100644
--- a/doc/source/whatsnew/v0.24.0.txt
+++ b/doc/source/whatsnew/v0.24.0.txt
@@ -210,6 +210,7 @@ Backwards incompatible API changes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
- A newly constructed empty :class:`DataFrame` with integer as the ``dtype`` will now only be cast to ``float64`` if ``index`` is specified (:issue:`22858`)
+- :meth:`Series.str.cat` will now raise if `others` is a `set` (:issue:`23009`)
.. _whatsnew_0240.api_breaking.deps:
diff --git a/pandas/core/strings.py b/pandas/core/strings.py
index 4086021bc61a6..c824ad1712a5a 100644
--- a/pandas/core/strings.py
+++ b/pandas/core/strings.py
@@ -1996,12 +1996,12 @@ def _get_series_list(self, others, ignore_index=False):
elif isinstance(others, np.ndarray) and others.ndim == 2:
others = DataFrame(others, index=idx)
return ([others[x] for x in others], False)
- elif is_list_like(others):
+ elif is_list_like(others, allow_sets=False):
others = list(others) # ensure iterators do not get read twice etc
# in case of list-like `others`, all elements must be
# either one-dimensional list-likes or scalars
- if all(is_list_like(x) for x in others):
+ if all(is_list_like(x, allow_sets=False) for x in others):
los = []
join_warn = False
depr_warn = False
diff --git a/pandas/tests/test_strings.py b/pandas/tests/test_strings.py
index 75b1bcb8b2938..87bf89229a64e 100644
--- a/pandas/tests/test_strings.py
+++ b/pandas/tests/test_strings.py
@@ -301,7 +301,17 @@ def test_str_cat_mixed_inputs(self, box):
with tm.assert_raises_regex(TypeError, rgx):
s.str.cat([u, [u, d]])
- # forbidden input type, e.g. int
+ # forbidden input type: set
+ # GH 23009
+ with tm.assert_raises_regex(TypeError, rgx):
+ s.str.cat(set(u))
+
+ # forbidden input type: set in list
+ # GH 23009
+ with tm.assert_raises_regex(TypeError, rgx):
+ s.str.cat([u, set(u)])
+
+ # other forbidden input type, e.g. int
with tm.assert_raises_regex(TypeError, rgx):
s.str.cat(1)
| - [x] closes #23009
- [x] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
This is split off from #23065. | https://api.github.com/repos/pandas-dev/pandas/pulls/23187 | 2018-10-16T19:15:24Z | 2018-10-23T03:10:07Z | 2018-10-23T03:10:07Z | 2018-10-23T05:39:59Z |
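The fix hinges on the `allow_sets` flag of `is_list_like` (added alongside this change); a minimal sketch using the public `pandas.api.types` entry point:

```python
from pandas.api.types import is_list_like

# sets are iterable, hence list-like by default, but they are
# unordered, so str.cat now rejects them via allow_sets=False
assert is_list_like({"a", "b"})
assert not is_list_like({"a", "b"}, allow_sets=False)

# ordered list-likes are unaffected by the flag
assert is_list_like(["a", "b"], allow_sets=False)
```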
BUG: SparseArray.unique with all sparse | diff --git a/doc/source/whatsnew/v0.24.0.txt b/doc/source/whatsnew/v0.24.0.txt
index 3053625721560..6fb066d1b8fe6 100644
--- a/doc/source/whatsnew/v0.24.0.txt
+++ b/doc/source/whatsnew/v0.24.0.txt
@@ -985,6 +985,7 @@ Sparse
- Improved performance of :meth:`Series.shift` for non-NA ``fill_value``, as values are no longer converted to a dense array.
- Bug in ``DataFrame.groupby`` not including ``fill_value`` in the groups for non-NA ``fill_value`` when grouping by a sparse column (:issue:`5078`)
- Bug in unary inversion operator (``~``) on a ``SparseSeries`` with boolean values. The performance of this has also been improved (:issue:`22835`)
+- Bug in :meth:`SparseArray.unique` not returning the unique values (:issue:`19595`)
Build Changes
^^^^^^^^^^^^^
diff --git a/pandas/core/arrays/sparse.py b/pandas/core/arrays/sparse.py
index f5e54e4425444..e6ca7b8de83e4 100644
--- a/pandas/core/arrays/sparse.py
+++ b/pandas/core/arrays/sparse.py
@@ -809,7 +809,7 @@ def _first_fill_value_loc(self):
return -1
indices = self.sp_index.to_int_index().indices
- if indices[0] > 0:
+ if not len(indices) or indices[0] > 0:
return 0
diff = indices[1:] - indices[:-1]
diff --git a/pandas/tests/arrays/sparse/test_array.py b/pandas/tests/arrays/sparse/test_array.py
index 0257d996228df..c6f777863265c 100644
--- a/pandas/tests/arrays/sparse/test_array.py
+++ b/pandas/tests/arrays/sparse/test_array.py
@@ -1065,6 +1065,14 @@ def test_unique_na_fill(arr, fill_value):
tm.assert_numpy_array_equal(a, b)
+def test_unique_all_sparse():
+ # https://github.com/pandas-dev/pandas/issues/23168
+ arr = SparseArray([0, 0])
+ result = arr.unique()
+ expected = SparseArray([0])
+ tm.assert_sp_array_equal(result, expected)
+
+
def test_map():
arr = SparseArray([0, 1, 2])
expected = SparseArray([10, 11, 12], fill_value=10)
| xref https://github.com/pandas-dev/pandas/issues/23168
Closes https://github.com/pandas-dev/pandas/issues/19595 | https://api.github.com/repos/pandas-dev/pandas/pulls/23186 | 2018-10-16T16:17:24Z | 2018-10-18T11:25:58Z | 2018-10-18T11:25:57Z | 2018-10-18T11:26:01Z |
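A minimal sketch of the fixed behavior, assuming a pandas version with this patch (>= 0.24): an array whose values all equal the ``fill_value`` stores no points, and ``unique`` now still reports the fill value.

```python
import pandas as pd

# Every element equals the default fill_value (0), so the sparse index
# stores zero points; previously unique() returned an empty array here.
arr = pd.arrays.SparseArray([0, 0])
print(list(arr.unique()))  # [0]
```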
API: Add sparse Acessor | diff --git a/doc/source/api.rst b/doc/source/api.rst
index 1ec2a56dcd094..6e8eb83577c46 100644
--- a/doc/source/api.rst
+++ b/doc/source/api.rst
@@ -851,6 +851,22 @@ Sparse
SparseSeries.to_coo
SparseSeries.from_coo
+.. autosummary::
+ :toctree: generated/
+ :template: autosummary/accessor_attribute.rst
+
+ Series.sparse.npoints
+ Series.sparse.density
+ Series.sparse.fill_value
+ Series.sparse.sp_values
+
+
+.. autosummary::
+ :toctree: generated/
+
+ Series.sparse.from_coo
+ Series.sparse.to_coo
+
.. _api.dataframe:
DataFrame
diff --git a/doc/source/sparse.rst b/doc/source/sparse.rst
index 2bb99dd1822b6..884512981e1c9 100644
--- a/doc/source/sparse.rst
+++ b/doc/source/sparse.rst
@@ -62,6 +62,26 @@ Any sparse object can be converted back to the standard dense form by calling
sts.to_dense()
+.. _sparse.accessor:
+
+Sparse Accessor
+---------------
+
+.. versionadded:: 0.24.0
+
+Pandas provides a ``.sparse`` accessor, similar to ``.str`` for string data, ``.cat``
+for categorical data, and ``.dt`` for datetime-like data. This namespace provides
+attributes and methods that are specific to sparse data.
+
+.. ipython:: python
+
+ s = pd.Series([0, 0, 1, 2], dtype="Sparse[int]")
+ s.sparse.density
+ s.sparse.fill_value
+
+This accessor is available only on data with ``SparseDtype``, and on the :class:`Series`
+class itself for creating a Series with sparse data from a scipy COO matrix.
+
.. _sparse.array:
SparseArray
diff --git a/doc/source/whatsnew/v0.24.0.txt b/doc/source/whatsnew/v0.24.0.txt
index e4b31b21b11ac..64b377ea0843d 100644
--- a/doc/source/whatsnew/v0.24.0.txt
+++ b/doc/source/whatsnew/v0.24.0.txt
@@ -524,7 +524,6 @@ changes were made:
- ``SparseDataFrame.combine`` and ``DataFrame.combine_first`` no longer supports combining a sparse column with a dense column while preserving the sparse subtype. The result will be an object-dtype SparseArray.
- Setting :attr:`SparseArray.fill_value` to a fill value with a different dtype is now allowed.
-
Some new warnings are issued for operations that require or are likely to materialize a large dense array:
- A :class:`errors.PerformanceWarning` is issued when using fillna with a ``method``, as a dense array is constructed to create the filled array. Filling with a ``value`` is the efficient way to fill a sparse array.
@@ -532,6 +531,13 @@ Some new warnings are issued for operations that require or are likely to materi
In addition to these API breaking changes, many :ref:`performance improvements and bug fixes have been made <whatsnew_0240.bug_fixes.sparse>`.
+Finally, a ``Series.sparse`` accessor was added to provide sparse-specific methods like :meth:`Series.sparse.from_coo`.
+
+.. ipython:: python
+
+ s = pd.Series([0, 0, 1, 1, 1], dtype='Sparse[int]')
+ s.sparse.density
+
.. _whatsnew_0240.api_breaking.frame_to_dict_index_orient:
Raise ValueError in ``DataFrame.to_dict(orient='index')``
diff --git a/pandas/core/accessor.py b/pandas/core/accessor.py
index eab529584d1fb..bc91372e3ac7d 100644
--- a/pandas/core/accessor.py
+++ b/pandas/core/accessor.py
@@ -113,15 +113,18 @@ def delegate_names(delegate, accessors, typ, overwrite=False):
Parameters
----------
- delegate : the class to get methods/properties & doc-strings
- acccessors : string list of accessors to add
- typ : 'property' or 'method'
+ delegate : object
+ the class to get methods/properties & doc-strings
+ accessors : Sequence[str]
+ List of accessors to add
+ typ : {'property', 'method'}
overwrite : boolean, default False
overwrite the method/property in the target class if it exists
Returns
-------
- decorator
+ callable
+ A class decorator.
Examples
--------
diff --git a/pandas/core/arrays/sparse.py b/pandas/core/arrays/sparse.py
index 920a9f8286f0d..72527cfa5d12e 100644
--- a/pandas/core/arrays/sparse.py
+++ b/pandas/core/arrays/sparse.py
@@ -17,6 +17,7 @@
from pandas.errors import PerformanceWarning
from pandas.compat.numpy import function as nv
+from pandas.core.accessor import PandasDelegate, delegate_names
from pandas.core.arrays import ExtensionArray, ExtensionOpsMixin
import pandas.core.common as com
from pandas.core.dtypes.base import ExtensionDtype
@@ -178,6 +179,7 @@ def _is_boolean(self):
@property
def kind(self):
+ """The sparse kind. Either 'integer', or 'block'."""
return self.subtype.kind
@property
@@ -648,10 +650,22 @@ def _from_factorized(cls, values, original):
# ------------------------------------------------------------------------
@property
def sp_index(self):
+ """
+ The SparseIndex containing the location of non- ``fill_value`` points.
+ """
return self._sparse_index
@property
def sp_values(self):
+ """
+ An ndarray containing the non- ``fill_value`` values.
+
+ Examples
+ --------
+ >>> s = SparseArray([0, 0, 1, 0, 2], fill_value=0)
+ >>> s.sp_values
+ array([1, 2])
+ """
return self._sparse_values
@property
@@ -704,6 +718,31 @@ def _fill_value_matches(self, fill_value):
def nbytes(self):
return self.sp_values.nbytes + self.sp_index.nbytes
+ @property
+ def density(self):
+ """The percent of non- ``fill_value`` points, as decimal.
+
+ Examples
+ --------
+ >>> s = SparseArray([0, 0, 1, 1, 1], fill_value=0)
+ >>> s.density
+ 0.6
+ """
+ r = float(self.sp_index.npoints) / float(self.sp_index.length)
+ return r
+
+ @property
+ def npoints(self):
+ """The number of non- ``fill_value`` points.
+
+ Examples
+ --------
+ >>> s = SparseArray([0, 0, 1, 1, 1], fill_value=0)
+ >>> s.npoints
+ 3
+ """
+ return self.sp_index.npoints
+
@property
def values(self):
"""
@@ -1744,3 +1783,138 @@ def _make_index(length, indices, kind):
else: # pragma: no cover
raise ValueError('must be block or integer type')
return index
+
+
+# ----------------------------------------------------------------------------
+# Accessor
+
+@delegate_names(SparseArray, ['npoints', 'density', 'fill_value',
+ 'sp_values'],
+ typ='property')
+class SparseAccessor(PandasDelegate):
+ def __init__(self, data=None):
+ self._validate(data)
+ # Store the Series since we need that for to_coo
+ self._parent = data
+
+ @staticmethod
+ def _validate(data):
+ if not isinstance(data.dtype, SparseDtype):
+ msg = "Can only use the '.sparse' accessor with Sparse data."
+ raise AttributeError(msg)
+
+ def _delegate_property_get(self, name, *args, **kwargs):
+ return getattr(self._parent.values, name)
+
+ def _delegate_method(self, name, *args, **kwargs):
+ if name == 'from_coo':
+ return self.from_coo(*args, **kwargs)
+ elif name == 'to_coo':
+ return self.to_coo(*args, **kwargs)
+ else:
+ raise ValueError
+
+ @classmethod
+ def from_coo(cls, A, dense_index=False):
+ """
+ Create a SparseSeries from a scipy.sparse.coo_matrix.
+
+ Parameters
+ ----------
+ A : scipy.sparse.coo_matrix
+ dense_index : bool, default False
+ If False (default), the SparseSeries index consists of only the
+ coords of the non-null entries of the original coo_matrix.
+ If True, the SparseSeries index consists of the full sorted
+ (row, col) coordinates of the coo_matrix.
+
+ Returns
+ -------
+ s : SparseSeries
+
+ Examples
+ --------
+ >>> from scipy import sparse
+ >>> A = sparse.coo_matrix(([3.0, 1.0, 2.0], ([1, 0, 0], [0, 2, 3])),
+ shape=(3, 4))
+ >>> A
+ <3x4 sparse matrix of type '<class 'numpy.float64'>'
+ with 3 stored elements in COOrdinate format>
+ >>> A.todense()
+ matrix([[ 0., 0., 1., 2.],
+ [ 3., 0., 0., 0.],
+ [ 0., 0., 0., 0.]])
+ >>> ss = pd.SparseSeries.from_coo(A)
+ >>> ss
+ 0 2 1
+ 3 2
+ 1 0 3
+ dtype: float64
+ BlockIndex
+ Block locations: array([0], dtype=int32)
+ Block lengths: array([3], dtype=int32)
+ """
+ from pandas.core.sparse.scipy_sparse import _coo_to_sparse_series
+ from pandas import Series
+
+ result = _coo_to_sparse_series(A, dense_index=dense_index)
+ # SparseSeries -> Series[sparse]
+ result = Series(result.values, index=result.index, copy=False)
+
+ return result
+
+ def to_coo(self, row_levels=(0, ), column_levels=(1, ), sort_labels=False):
+ """
+ Create a scipy.sparse.coo_matrix from a SparseSeries with MultiIndex.
+
+ Use row_levels and column_levels to determine the row and column
+ coordinates respectively. row_levels and column_levels are the names
+ (labels) or numbers of the levels. {row_levels, column_levels} must be
+ a partition of the MultiIndex level names (or numbers).
+
+ Parameters
+ ----------
+ row_levels : tuple/list
+ column_levels : tuple/list
+ sort_labels : bool, default False
+ Sort the row and column labels before forming the sparse matrix.
+
+ Returns
+ -------
+ y : scipy.sparse.coo_matrix
+ rows : list (row labels)
+ columns : list (column labels)
+
+ Examples
+ --------
+ >>> s = pd.Series([3.0, np.nan, 1.0, 3.0, np.nan, np.nan])
+ >>> s.index = pd.MultiIndex.from_tuples([(1, 2, 'a', 0),
+ (1, 2, 'a', 1),
+ (1, 1, 'b', 0),
+ (1, 1, 'b', 1),
+ (2, 1, 'b', 0),
+ (2, 1, 'b', 1)],
+ names=['A', 'B', 'C', 'D'])
+ >>> ss = s.to_sparse()
+ >>> A, rows, columns = ss.to_coo(row_levels=['A', 'B'],
+ column_levels=['C', 'D'],
+ sort_labels=True)
+ >>> A
+ <3x4 sparse matrix of type '<class 'numpy.float64'>'
+ with 3 stored elements in COOrdinate format>
+ >>> A.todense()
+ matrix([[ 0., 0., 1., 3.],
+ [ 3., 0., 0., 0.],
+ [ 0., 0., 0., 0.]])
+ >>> rows
+ [(1, 1), (1, 2), (2, 1)]
+ >>> columns
+ [('a', 0), ('a', 1), ('b', 0), ('b', 1)]
+ """
+ from pandas.core.sparse.scipy_sparse import _sparse_series_to_coo
+
+ A, rows, columns = _sparse_series_to_coo(self._parent,
+ row_levels,
+ column_levels,
+ sort_labels=sort_labels)
+ return A, rows, columns
diff --git a/pandas/core/indexes/accessors.py b/pandas/core/indexes/accessors.py
index a1868980faed3..2b4a23b81312e 100644
--- a/pandas/core/indexes/accessors.py
+++ b/pandas/core/indexes/accessors.py
@@ -1,7 +1,6 @@
"""
datetimelike delegation
"""
-
import numpy as np
from pandas.core.dtypes.generic import ABCSeries
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 7ebbe0dfb4bb7..9d61d4b8ee8a7 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -26,6 +26,7 @@
from pandas.core.accessor import CachedAccessor
from pandas.core.arrays import ExtensionArray
from pandas.core.arrays.categorical import Categorical, CategoricalAccessor
+from pandas.core.arrays.sparse import SparseAccessor
from pandas.core.config import get_option
from pandas.core.dtypes.cast import (
construct_1d_arraylike_from_scalar, construct_1d_ndarray_preserving_na,
@@ -141,7 +142,7 @@ class Series(base.IndexOpsMixin, generic.NDFrame):
Copy input data
"""
_metadata = ['name']
- _accessors = {'dt', 'cat', 'str'}
+ _accessors = {'dt', 'cat', 'str', 'sparse'}
_deprecations = generic.NDFrame._deprecations | frozenset(
['asobject', 'sortlevel', 'reshape', 'get_value', 'set_value',
'from_csv', 'valid'])
@@ -4149,6 +4150,7 @@ def to_period(self, freq=None, copy=True):
dt = CachedAccessor("dt", CombinedDatetimelikeProperties)
cat = CachedAccessor("cat", CategoricalAccessor)
plot = CachedAccessor("plot", gfx.SeriesPlotMethods)
+ sparse = CachedAccessor("sparse", SparseAccessor)
# ----------------------------------------------------------------------
# Add plotting methods to Series
diff --git a/pandas/core/sparse/series.py b/pandas/core/sparse/series.py
index 5a747c6e4b1d1..ff32712f9056a 100644
--- a/pandas/core/sparse/series.py
+++ b/pandas/core/sparse/series.py
@@ -27,6 +27,7 @@
from pandas.core.arrays import (
SparseArray,
)
+from pandas.core.arrays.sparse import SparseAccessor
from pandas._libs.sparse import BlockIndex, IntIndex
import pandas._libs.sparse as splib
@@ -183,7 +184,7 @@ def sp_values(self):
@property
def npoints(self):
- return self.sp_index.npoints
+ return self.values.npoints
@classmethod
def from_array(cls, arr, index=None, name=None, copy=False,
@@ -452,8 +453,7 @@ def to_dense(self):
@property
def density(self):
- r = float(self.sp_index.npoints) / float(self.sp_index.length)
- return r
+ return self.values.density
def copy(self, deep=True):
"""
@@ -580,99 +580,16 @@ def combine_first(self, other):
dense_combined = self.to_dense().combine_first(other)
return dense_combined.to_sparse(fill_value=self.fill_value)
+ @Appender(SparseAccessor.to_coo.__doc__)
def to_coo(self, row_levels=(0, ), column_levels=(1, ), sort_labels=False):
- """
- Create a scipy.sparse.coo_matrix from a SparseSeries with MultiIndex.
-
- Use row_levels and column_levels to determine the row and column
- coordinates respectively. row_levels and column_levels are the names
- (labels) or numbers of the levels. {row_levels, column_levels} must be
- a partition of the MultiIndex level names (or numbers).
-
- Parameters
- ----------
- row_levels : tuple/list
- column_levels : tuple/list
- sort_labels : bool, default False
- Sort the row and column labels before forming the sparse matrix.
-
- Returns
- -------
- y : scipy.sparse.coo_matrix
- rows : list (row labels)
- columns : list (column labels)
-
- Examples
- --------
- >>> s = pd.Series([3.0, np.nan, 1.0, 3.0, np.nan, np.nan])
- >>> s.index = pd.MultiIndex.from_tuples([(1, 2, 'a', 0),
- (1, 2, 'a', 1),
- (1, 1, 'b', 0),
- (1, 1, 'b', 1),
- (2, 1, 'b', 0),
- (2, 1, 'b', 1)],
- names=['A', 'B', 'C', 'D'])
- >>> ss = s.to_sparse()
- >>> A, rows, columns = ss.to_coo(row_levels=['A', 'B'],
- column_levels=['C', 'D'],
- sort_labels=True)
- >>> A
- <3x4 sparse matrix of type '<class 'numpy.float64'>'
- with 3 stored elements in COOrdinate format>
- >>> A.todense()
- matrix([[ 0., 0., 1., 3.],
- [ 3., 0., 0., 0.],
- [ 0., 0., 0., 0.]])
- >>> rows
- [(1, 1), (1, 2), (2, 1)]
- >>> columns
- [('a', 0), ('a', 1), ('b', 0), ('b', 1)]
- """
A, rows, columns = _sparse_series_to_coo(self, row_levels,
column_levels,
sort_labels=sort_labels)
return A, rows, columns
@classmethod
+ @Appender(SparseAccessor.from_coo.__doc__)
def from_coo(cls, A, dense_index=False):
- """
- Create a SparseSeries from a scipy.sparse.coo_matrix.
-
- Parameters
- ----------
- A : scipy.sparse.coo_matrix
- dense_index : bool, default False
- If False (default), the SparseSeries index consists of only the
- coords of the non-null entries of the original coo_matrix.
- If True, the SparseSeries index consists of the full sorted
- (row, col) coordinates of the coo_matrix.
-
- Returns
- -------
- s : SparseSeries
-
- Examples
- ---------
- >>> from scipy import sparse
- >>> A = sparse.coo_matrix(([3.0, 1.0, 2.0], ([1, 0, 0], [0, 2, 3])),
- shape=(3, 4))
- >>> A
- <3x4 sparse matrix of type '<class 'numpy.float64'>'
- with 3 stored elements in COOrdinate format>
- >>> A.todense()
- matrix([[ 0., 0., 1., 2.],
- [ 3., 0., 0., 0.],
- [ 0., 0., 0., 0.]])
- >>> ss = pd.SparseSeries.from_coo(A)
- >>> ss
- 0 2 1
- 3 2
- 1 0 3
- dtype: float64
- BlockIndex
- Block locations: array([0], dtype=int32)
- Block lengths: array([3], dtype=int32)
- """
return _coo_to_sparse_series(A, dense_index=dense_index)
diff --git a/pandas/tests/arrays/sparse/test_array.py b/pandas/tests/arrays/sparse/test_array.py
index e211b8626b53c..cc9512c0759fc 100644
--- a/pandas/tests/arrays/sparse/test_array.py
+++ b/pandas/tests/arrays/sparse/test_array.py
@@ -996,6 +996,55 @@ def test_asarray_datetime64(self):
)
np.asarray(s)
+ def test_density(self):
+ arr = SparseArray([0, 1])
+ assert arr.density == 0.5
+
+ def test_npoints(self):
+ arr = SparseArray([0, 1])
+ assert arr.npoints == 1
+
+
+class TestAccessor(object):
+
+ @pytest.mark.parametrize('attr', [
+ 'npoints', 'density', 'fill_value', 'sp_values',
+ ])
+ def test_get_attributes(self, attr):
+ arr = SparseArray([0, 1])
+ ser = pd.Series(arr)
+
+ result = getattr(ser.sparse, attr)
+ expected = getattr(arr, attr)
+ assert result == expected
+
+ def test_from_coo(self):
+ sparse = pytest.importorskip("scipy.sparse")
+
+ row = [0, 3, 1, 0]
+ col = [0, 3, 1, 2]
+ data = [4, 5, 7, 9]
+ sp_array = sparse.coo_matrix(data, (row, col))
+ result = pd.Series.sparse.from_coo(sp_array)
+
+ index = pd.MultiIndex.from_product([[0], [0, 1, 2, 3]])
+ expected = pd.Series(data, index=index, dtype='Sparse[int]')
+ tm.assert_series_equal(result, expected)
+
+ def test_to_coo(self):
+ sparse = pytest.importorskip("scipy.sparse")
+ ser = pd.Series([1, 2, 3],
+ index=pd.MultiIndex.from_product([[0], [1, 2, 3]],
+ names=['a', 'b']),
+ dtype='Sparse[int]')
+ A, _, _ = ser.sparse.to_coo()
+ assert isinstance(A, sparse.coo.coo_matrix)
+
+ def test_non_sparse_raises(self):
+ ser = pd.Series([1, 2, 3])
+ with tm.assert_raises_regex(AttributeError, '.sparse'):
+ ser.sparse.density
+
def test_setting_fill_value_fillna_still_works():
# This is why letting users update fill_value / dtype is bad
| * Adds a Series.sparse accessor
* Adds several methods to SparseArray for use via the accessor
Closes #23148.
This should provide all the methods / attributes that were available on SparseSeries, but not Series.
Right now the docs for `.sparse.from_coo` and `to_coo` seem to be broken. It's a bit strange since they're implemented on the accessor (they don't make sense on the Array) | https://api.github.com/repos/pandas-dev/pandas/pulls/23183 | 2018-10-16T14:35:48Z | 2018-10-26T01:28:55Z | 2018-10-26T01:28:55Z | 2018-10-26T01:31:50Z |
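A quick sketch of the new accessor in use (assuming pandas >= 0.24, where the ``Sparse[int]`` dtype string is available):

```python
import pandas as pd

# A Series backed by a SparseArray, constructed via the sparse dtype string.
s = pd.Series([0, 0, 1, 2], dtype="Sparse[int]")

# The .sparse accessor delegates to the underlying SparseArray's attributes.
print(s.sparse.density)     # 0.5 -- two of four values differ from fill_value
print(s.sparse.npoints)     # 2
print(s.sparse.fill_value)  # 0
```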
Windows CI | diff --git a/ci/azure-windows-36.yaml b/ci/azure-windows-36.yaml
index 6230e9b6a1885..656a6a31d92b4 100644
--- a/ci/azure-windows-36.yaml
+++ b/ci/azure-windows-36.yaml
@@ -5,12 +5,14 @@ channels:
dependencies:
- blosc
- bottleneck
+ - boost-cpp<1.67
- fastparquet
- feather-format
- matplotlib
- numexpr
- numpy=1.14*
- openpyxl=2.5.5
+ - parquet-cpp
- pyarrow
- pytables
- python-dateutil
diff --git a/pandas/_libs/src/headers/cmath b/pandas/_libs/src/headers/cmath
index 2bccf9bb13d77..632e1fc2390d0 100644
--- a/pandas/_libs/src/headers/cmath
+++ b/pandas/_libs/src/headers/cmath
@@ -1,16 +1,36 @@
#ifndef _PANDAS_MATH_H_
#define _PANDAS_MATH_H_
+// MSVC 2017 has a bug where `x == x` can be true for NaNs.
+// MSC_VER from https://stackoverflow.com/a/70630/1889400
+// Place upper bound on this check once a fixed MSVC is released.
+#if defined(_MSC_VER) && (_MSC_VER < 1800)
+#include <cmath>
// In older versions of Visual Studio there wasn't a std::signbit defined
// This defines it using _copysign
-#if defined(_MSC_VER) && (_MSC_VER < 1800)
+namespace std {
+ __inline int isnan(double x) { return _isnan(x); }
+ __inline int signbit(double num) { return _copysign(1.0, num) < 0; }
+ __inline int notnan(double x) { return !isnan(x); }
+}
+#elif defined(_MSC_VER) && (_MSC_VER >= 1900)
+#include <cmath>
+namespace std {
+ __inline int isnan(double x) { return _isnan(x); }
+ __inline int notnan(double x) { return !isnan(x); }
+}
+#elif defined(_MSC_VER)
#include <cmath>
namespace std {
__inline int isnan(double x) { return _isnan(x); }
- __inline int signbit(double num) { return _copysign(1.0, num) < 0; }
+ __inline int notnan(double x) { return x == x; }
}
#else
#include <cmath>
-#endif
+namespace std {
+ __inline int notnan(double x) { return x == x; }
+}
+
+#endif
#endif
diff --git a/pandas/_libs/window.pyx b/pandas/_libs/window.pyx
index d4b61b8611b68..989dc4dd17a37 100644
--- a/pandas/_libs/window.pyx
+++ b/pandas/_libs/window.pyx
@@ -15,6 +15,7 @@ cnp.import_array()
cdef extern from "src/headers/cmath" namespace "std":
bint isnan(double) nogil
+ bint notnan(double) nogil
int signbit(double) nogil
double sqrt(double x) nogil
@@ -381,7 +382,7 @@ def roll_count(ndarray[double_t] input, int64_t win, int64_t minp,
count_x = 0.0
for j in range(s, e):
val = input[j]
- if val == val:
+ if notnan(val):
count_x += 1.0
else:
@@ -389,13 +390,13 @@ def roll_count(ndarray[double_t] input, int64_t win, int64_t minp,
# calculate deletes
for j in range(start[i - 1], s):
val = input[j]
- if val == val:
+ if notnan(val):
count_x -= 1.0
# calculate adds
for j in range(end[i - 1], e):
val = input[j]
- if val == val:
+ if notnan(val):
count_x += 1.0
if count_x >= minp:
@@ -424,7 +425,7 @@ cdef inline void add_sum(double val, int64_t *nobs, double *sum_x) nogil:
""" add a value from the sum calc """
# Not NaN
- if val == val:
+ if notnan(val):
nobs[0] = nobs[0] + 1
sum_x[0] = sum_x[0] + val
@@ -432,7 +433,7 @@ cdef inline void add_sum(double val, int64_t *nobs, double *sum_x) nogil:
cdef inline void remove_sum(double val, int64_t *nobs, double *sum_x) nogil:
""" remove a value from the sum calc """
- if val == val:
+ if notnan(val):
nobs[0] = nobs[0] - 1
sum_x[0] = sum_x[0] - val
@@ -538,7 +539,7 @@ cdef inline void add_mean(double val, Py_ssize_t *nobs, double *sum_x,
""" add a value from the mean calc """
# Not NaN
- if val == val:
+ if notnan(val):
nobs[0] = nobs[0] + 1
sum_x[0] = sum_x[0] + val
if signbit(val):
@@ -549,7 +550,7 @@ cdef inline void remove_mean(double val, Py_ssize_t *nobs, double *sum_x,
Py_ssize_t *neg_ct) nogil:
""" remove a value from the mean calc """
- if val == val:
+ if notnan(val):
nobs[0] = nobs[0] - 1
sum_x[0] = sum_x[0] - val
if signbit(val):
@@ -671,8 +672,7 @@ cdef inline void remove_var(double val, double *nobs, double *mean_x,
""" remove a value from the var calc """
cdef double delta
- # Not NaN
- if val == val:
+ if notnan(val):
nobs[0] = nobs[0] - 1
if nobs[0]:
# a part of Welford's method for the online variance-calculation
@@ -760,7 +760,7 @@ def roll_var(ndarray[double_t] input, int64_t win, int64_t minp,
val = input[i]
prev = input[i - win]
- if val == val:
+ if notnan(val):
if prev == prev:
# Adding one observation and removing another one
@@ -822,7 +822,7 @@ cdef inline void add_skew(double val, int64_t *nobs, double *x, double *xx,
""" add a value from the skew calc """
# Not NaN
- if val == val:
+ if notnan(val):
nobs[0] = nobs[0] + 1
# seriously don't ask me why this is faster
@@ -836,7 +836,7 @@ cdef inline void remove_skew(double val, int64_t *nobs, double *x, double *xx,
""" remove a value from the skew calc """
# Not NaN
- if val == val:
+ if notnan(val):
nobs[0] = nobs[0] - 1
# seriously don't ask me why this is faster
@@ -959,7 +959,7 @@ cdef inline void add_kurt(double val, int64_t *nobs, double *x, double *xx,
""" add a value from the kurotic calc """
# Not NaN
- if val == val:
+ if notnan(val):
nobs[0] = nobs[0] + 1
# seriously don't ask me why this is faster
@@ -974,7 +974,7 @@ cdef inline void remove_kurt(double val, int64_t *nobs, double *x, double *xx,
""" remove a value from the kurotic calc """
# Not NaN
- if val == val:
+ if notnan(val):
nobs[0] = nobs[0] - 1
# seriously don't ask me why this is faster
@@ -1089,7 +1089,7 @@ def roll_median_c(ndarray[float64_t] input, int64_t win, int64_t minp,
# setup
val = input[i]
- if val == val:
+ if notnan(val):
nobs += 1
err = skiplist_insert(sl, val) != 1
if err:
@@ -1100,14 +1100,14 @@ def roll_median_c(ndarray[float64_t] input, int64_t win, int64_t minp,
# calculate deletes
for j in range(start[i - 1], s):
val = input[j]
- if val == val:
+ if notnan(val):
skiplist_remove(sl, val)
nobs -= 1
# calculate adds
for j in range(end[i - 1], e):
val = input[j]
- if val == val:
+ if notnan(val):
nobs += 1
err = skiplist_insert(sl, val) != 1
if err:
@@ -1472,7 +1472,7 @@ def roll_quantile(ndarray[float64_t, cast=True] input, int64_t win,
# setup
val = input[i]
- if val == val:
+ if notnan(val):
nobs += 1
skiplist_insert(skiplist, val)
@@ -1481,14 +1481,14 @@ def roll_quantile(ndarray[float64_t, cast=True] input, int64_t win,
# calculate deletes
for j in range(start[i - 1], s):
val = input[j]
- if val == val:
+ if notnan(val):
skiplist_remove(skiplist, val)
nobs -= 1
# calculate adds
for j in range(end[i - 1], e):
val = input[j]
- if val == val:
+ if notnan(val):
nobs += 1
skiplist_insert(skiplist, val)
| xref https://github.com/pandas-dev/pandas/issues/23180
Trying to ensure we get pyarrow from the main channel, not conda-forge. | https://api.github.com/repos/pandas-dev/pandas/pulls/23182 | 2018-10-16T13:45:42Z | 2018-10-18T02:06:53Z | 2018-10-18T02:06:53Z | 2018-10-18T02:06:58Z |
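The ``cmath`` header change above swaps the raw ``val == val`` NaN checks for a ``notnan`` helper because MSVC could mis-optimize the self-comparison. Under conforming IEEE-754 semantics the idiom does work, which a quick Python check illustrates:

```python
import math

nan = float("nan")

# IEEE 754: NaN compares unequal to everything, including itself, which is
# exactly the "not NaN" trick the rolling-window Cython code relied on.
assert not (nan == nan)
assert math.isnan(nan) and not math.isnan(1.0)
print("nan == nan ->", nan == nan)
```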
try pinning numpy dev | diff --git a/ci/travis-37-numpydev.yaml b/ci/travis-37-numpydev.yaml
index 82c75b7c91b1f..957941b7379aa 100644
--- a/ci/travis-37-numpydev.yaml
+++ b/ci/travis-37-numpydev.yaml
@@ -13,5 +13,5 @@ dependencies:
- "git+git://github.com/dateutil/dateutil.git"
- "-f https://7933911d6844c6c53a7d-47bd50c35cd79bd838daf386af554a83.ssl.cf2.rackcdn.com"
- "--pre"
- - "numpy"
+ - "numpy<=1.16.0.dev0+20181015190246"
- "scipy"
| xref #22960 | https://api.github.com/repos/pandas-dev/pandas/pulls/23178 | 2018-10-16T11:00:22Z | 2018-10-16T11:24:37Z | 2018-10-16T11:24:37Z | 2018-10-16T11:40:52Z |
REF: use fused types for join_helper | diff --git a/pandas/_libs/join.pyx b/pandas/_libs/join.pyx
index ebb7bd40694ec..7c791ab8a1b00 100644
--- a/pandas/_libs/join.pyx
+++ b/pandas/_libs/join.pyx
@@ -239,4 +239,420 @@ def ffill_indexer(ndarray[int64_t] indexer):
return result
-include "join_helper.pxi"
+# ----------------------------------------------------------------------
+# left_join_indexer, inner_join_indexer, outer_join_indexer
+# ----------------------------------------------------------------------
+
+ctypedef fused join_t:
+ float64_t
+ float32_t
+ object
+ int32_t
+ int64_t
+ uint64_t
+
+
+# Joins on ordered, unique indices
+
+# right might contain non-unique values
+
+@cython.wraparound(False)
+@cython.boundscheck(False)
+def left_join_indexer_unique(ndarray[join_t] left, ndarray[join_t] right):
+ cdef:
+ Py_ssize_t i, j, nleft, nright
+ ndarray[int64_t] indexer
+ join_t lval, rval
+
+ i = 0
+ j = 0
+ nleft = len(left)
+ nright = len(right)
+
+ indexer = np.empty(nleft, dtype=np.int64)
+ while True:
+ if i == nleft:
+ break
+
+ if j == nright:
+ indexer[i] = -1
+ i += 1
+ continue
+
+ rval = right[j]
+
+ while i < nleft - 1 and left[i] == rval:
+ indexer[i] = j
+ i += 1
+
+ if left[i] == right[j]:
+ indexer[i] = j
+ i += 1
+ while i < nleft - 1 and left[i] == rval:
+ indexer[i] = j
+ i += 1
+ j += 1
+ elif left[i] > rval:
+ indexer[i] = -1
+ j += 1
+ else:
+ indexer[i] = -1
+ i += 1
+ return indexer
+
+
+left_join_indexer_unique_float64 = left_join_indexer_unique["float64_t"]
+left_join_indexer_unique_float32 = left_join_indexer_unique["float32_t"]
+left_join_indexer_unique_object = left_join_indexer_unique["object"]
+left_join_indexer_unique_int32 = left_join_indexer_unique["int32_t"]
+left_join_indexer_unique_int64 = left_join_indexer_unique["int64_t"]
+left_join_indexer_unique_uint64 = left_join_indexer_unique["uint64_t"]
+
+
+# @cython.wraparound(False)
+# @cython.boundscheck(False)
+def left_join_indexer(ndarray[join_t] left, ndarray[join_t] right):
+ """
+ Two-pass algorithm for monotonic indexes. Handles many-to-one merges
+ """
+ cdef:
+ Py_ssize_t i, j, k, nright, nleft, count
+ join_t lval, rval
+ ndarray[int64_t] lindexer, rindexer
+ ndarray[join_t] result
+
+ nleft = len(left)
+ nright = len(right)
+
+ i = 0
+ j = 0
+ count = 0
+ if nleft > 0:
+ while i < nleft:
+ if j == nright:
+ count += nleft - i
+ break
+
+ lval = left[i]
+ rval = right[j]
+
+ if lval == rval:
+ count += 1
+ if i < nleft - 1:
+ if j < nright - 1 and right[j + 1] == rval:
+ j += 1
+ else:
+ i += 1
+ if left[i] != rval:
+ j += 1
+ elif j < nright - 1:
+ j += 1
+ if lval != right[j]:
+ i += 1
+ else:
+ # end of the road
+ break
+ elif lval < rval:
+ count += 1
+ i += 1
+ else:
+ j += 1
+
+ # do it again now that result size is known
+
+ lindexer = np.empty(count, dtype=np.int64)
+ rindexer = np.empty(count, dtype=np.int64)
+ result = np.empty(count, dtype=left.dtype)
+
+ i = 0
+ j = 0
+ count = 0
+ if nleft > 0:
+ while i < nleft:
+ if j == nright:
+ while i < nleft:
+ lindexer[count] = i
+ rindexer[count] = -1
+ result[count] = left[i]
+ i += 1
+ count += 1
+ break
+
+ lval = left[i]
+ rval = right[j]
+
+ if lval == rval:
+ lindexer[count] = i
+ rindexer[count] = j
+ result[count] = lval
+ count += 1
+ if i < nleft - 1:
+ if j < nright - 1 and right[j + 1] == rval:
+ j += 1
+ else:
+ i += 1
+ if left[i] != rval:
+ j += 1
+ elif j < nright - 1:
+ j += 1
+ if lval != right[j]:
+ i += 1
+ else:
+ # end of the road
+ break
+ elif lval < rval:
+ lindexer[count] = i
+ rindexer[count] = -1
+ result[count] = left[i]
+ count += 1
+ i += 1
+ else:
+ j += 1
+
+ return result, lindexer, rindexer
+
+
+left_join_indexer_float64 = left_join_indexer["float64_t"]
+left_join_indexer_float32 = left_join_indexer["float32_t"]
+left_join_indexer_object = left_join_indexer["object"]
+left_join_indexer_int32 = left_join_indexer["int32_t"]
+left_join_indexer_int64 = left_join_indexer["int64_t"]
+left_join_indexer_uint64 = left_join_indexer["uint64_t"]
+
+
+@cython.wraparound(False)
+@cython.boundscheck(False)
+def inner_join_indexer(ndarray[join_t] left, ndarray[join_t] right):
+ """
+ Two-pass algorithm for monotonic indexes. Handles many-to-one merges
+ """
+ cdef:
+ Py_ssize_t i, j, k, nright, nleft, count
+ join_t lval, rval
+ ndarray[int64_t] lindexer, rindexer
+ ndarray[join_t] result
+
+ nleft = len(left)
+ nright = len(right)
+
+ i = 0
+ j = 0
+ count = 0
+ if nleft > 0 and nright > 0:
+ while True:
+ if i == nleft:
+ break
+ if j == nright:
+ break
+
+ lval = left[i]
+ rval = right[j]
+ if lval == rval:
+ count += 1
+ if i < nleft - 1:
+ if j < nright - 1 and right[j + 1] == rval:
+ j += 1
+ else:
+ i += 1
+ if left[i] != rval:
+ j += 1
+ elif j < nright - 1:
+ j += 1
+ if lval != right[j]:
+ i += 1
+ else:
+ # end of the road
+ break
+ elif lval < rval:
+ i += 1
+ else:
+ j += 1
+
+ # do it again now that result size is known
+
+ lindexer = np.empty(count, dtype=np.int64)
+ rindexer = np.empty(count, dtype=np.int64)
+ result = np.empty(count, dtype=left.dtype)
+
+ i = 0
+ j = 0
+ count = 0
+ if nleft > 0 and nright > 0:
+ while True:
+ if i == nleft:
+ break
+ if j == nright:
+ break
+
+ lval = left[i]
+ rval = right[j]
+ if lval == rval:
+ lindexer[count] = i
+ rindexer[count] = j
+ result[count] = rval
+ count += 1
+ if i < nleft - 1:
+ if j < nright - 1 and right[j + 1] == rval:
+ j += 1
+ else:
+ i += 1
+ if left[i] != rval:
+ j += 1
+ elif j < nright - 1:
+ j += 1
+ if lval != right[j]:
+ i += 1
+ else:
+ # end of the road
+ break
+ elif lval < rval:
+ i += 1
+ else:
+ j += 1
+
+ return result, lindexer, rindexer
+
+
+inner_join_indexer_float64 = inner_join_indexer["float64_t"]
+inner_join_indexer_float32 = inner_join_indexer["float32_t"]
+inner_join_indexer_object = inner_join_indexer["object"]
+inner_join_indexer_int32 = inner_join_indexer["int32_t"]
+inner_join_indexer_int64 = inner_join_indexer["int64_t"]
+inner_join_indexer_uint64 = inner_join_indexer["uint64_t"]
+
+
+@cython.wraparound(False)
+@cython.boundscheck(False)
+def outer_join_indexer(ndarray[join_t] left, ndarray[join_t] right):
+ cdef:
+ Py_ssize_t i, j, nright, nleft, count
+ join_t lval, rval
+ ndarray[int64_t] lindexer, rindexer
+ ndarray[join_t] result
+
+ nleft = len(left)
+ nright = len(right)
+
+ i = 0
+ j = 0
+ count = 0
+ if nleft == 0:
+ count = nright
+ elif nright == 0:
+ count = nleft
+ else:
+ while True:
+ if i == nleft:
+ count += nright - j
+ break
+ if j == nright:
+ count += nleft - i
+ break
+
+ lval = left[i]
+ rval = right[j]
+ if lval == rval:
+ count += 1
+ if i < nleft - 1:
+ if j < nright - 1 and right[j + 1] == rval:
+ j += 1
+ else:
+ i += 1
+ if left[i] != rval:
+ j += 1
+ elif j < nright - 1:
+ j += 1
+ if lval != right[j]:
+ i += 1
+ else:
+ # end of the road
+ break
+ elif lval < rval:
+ count += 1
+ i += 1
+ else:
+ count += 1
+ j += 1
+
+ lindexer = np.empty(count, dtype=np.int64)
+ rindexer = np.empty(count, dtype=np.int64)
+ result = np.empty(count, dtype=left.dtype)
+
+ # do it again, but populate the indexers / result
+
+ i = 0
+ j = 0
+ count = 0
+ if nleft == 0:
+ for j in range(nright):
+ lindexer[j] = -1
+ rindexer[j] = j
+ result[j] = right[j]
+ elif nright == 0:
+ for i in range(nleft):
+ lindexer[i] = i
+ rindexer[i] = -1
+ result[i] = left[i]
+ else:
+ while True:
+ if i == nleft:
+ while j < nright:
+ lindexer[count] = -1
+ rindexer[count] = j
+ result[count] = right[j]
+ count += 1
+ j += 1
+ break
+ if j == nright:
+ while i < nleft:
+ lindexer[count] = i
+ rindexer[count] = -1
+ result[count] = left[i]
+ count += 1
+ i += 1
+ break
+
+ lval = left[i]
+ rval = right[j]
+
+ if lval == rval:
+ lindexer[count] = i
+ rindexer[count] = j
+ result[count] = lval
+ count += 1
+ if i < nleft - 1:
+ if j < nright - 1 and right[j + 1] == rval:
+ j += 1
+ else:
+ i += 1
+ if left[i] != rval:
+ j += 1
+ elif j < nright - 1:
+ j += 1
+ if lval != right[j]:
+ i += 1
+ else:
+ # end of the road
+ break
+ elif lval < rval:
+ lindexer[count] = i
+ rindexer[count] = -1
+ result[count] = lval
+ count += 1
+ i += 1
+ else:
+ lindexer[count] = -1
+ rindexer[count] = j
+ result[count] = rval
+ count += 1
+ j += 1
+
+ return result, lindexer, rindexer
+
+
+outer_join_indexer_float64 = outer_join_indexer["float64_t"]
+outer_join_indexer_float32 = outer_join_indexer["float32_t"]
+outer_join_indexer_object = outer_join_indexer["object"]
+outer_join_indexer_int32 = outer_join_indexer["int32_t"]
+outer_join_indexer_int64 = outer_join_indexer["int64_t"]
+outer_join_indexer_uint64 = outer_join_indexer["uint64_t"]
diff --git a/pandas/_libs/join_helper.pxi.in b/pandas/_libs/join_helper.pxi.in
deleted file mode 100644
index 35dedf90f8ca4..0000000000000
--- a/pandas/_libs/join_helper.pxi.in
+++ /dev/null
@@ -1,424 +0,0 @@
-"""
-Template for each `dtype` helper function for join
-
-WARNING: DO NOT edit .pxi FILE directly, .pxi is generated from .pxi.in
-"""
-
-# ----------------------------------------------------------------------
-# left_join_indexer, inner_join_indexer, outer_join_indexer
-# ----------------------------------------------------------------------
-
-ctypedef fused join_t:
- float64_t
- float32_t
- object
- int32_t
- int64_t
- uint64_t
-
-
-# Joins on ordered, unique indices
-
-# right might contain non-unique values
-
-@cython.wraparound(False)
-@cython.boundscheck(False)
-def left_join_indexer_unique(ndarray[join_t] left, ndarray[join_t] right):
- cdef:
- Py_ssize_t i, j, nleft, nright
- ndarray[int64_t] indexer
- join_t lval, rval
-
- i = 0
- j = 0
- nleft = len(left)
- nright = len(right)
-
- indexer = np.empty(nleft, dtype=np.int64)
- while True:
- if i == nleft:
- break
-
- if j == nright:
- indexer[i] = -1
- i += 1
- continue
-
- rval = right[j]
-
- while i < nleft - 1 and left[i] == rval:
- indexer[i] = j
- i += 1
-
- if left[i] == right[j]:
- indexer[i] = j
- i += 1
- while i < nleft - 1 and left[i] == rval:
- indexer[i] = j
- i += 1
- j += 1
- elif left[i] > rval:
- indexer[i] = -1
- j += 1
- else:
- indexer[i] = -1
- i += 1
- return indexer
-
-
-left_join_indexer_unique_float64 = left_join_indexer_unique["float64_t"]
-left_join_indexer_unique_float32 = left_join_indexer_unique["float32_t"]
-left_join_indexer_unique_object = left_join_indexer_unique["object"]
-left_join_indexer_unique_int32 = left_join_indexer_unique["int32_t"]
-left_join_indexer_unique_int64 = left_join_indexer_unique["int64_t"]
-left_join_indexer_unique_uint64 = left_join_indexer_unique["uint64_t"]
-
-
-{{py:
-
-# name, c_type, dtype
-dtypes = [('float64', 'float64_t', 'np.float64'),
- ('float32', 'float32_t', 'np.float32'),
- ('object', 'object', 'object'),
- ('int32', 'int32_t', 'np.int32'),
- ('int64', 'int64_t', 'np.int64'),
- ('uint64', 'uint64_t', 'np.uint64')]
-
-def get_dispatch(dtypes):
-
- for name, c_type, dtype in dtypes:
- yield name, c_type, dtype
-
-}}
-
-{{for name, c_type, dtype in get_dispatch(dtypes)}}
-
-
-# @cython.wraparound(False)
-# @cython.boundscheck(False)
-def left_join_indexer_{{name}}(ndarray[{{c_type}}] left,
- ndarray[{{c_type}}] right):
- """
- Two-pass algorithm for monotonic indexes. Handles many-to-one merges
- """
- cdef:
- Py_ssize_t i, j, k, nright, nleft, count
- {{c_type}} lval, rval
- ndarray[int64_t] lindexer, rindexer
- ndarray[{{c_type}}] result
-
- nleft = len(left)
- nright = len(right)
-
- i = 0
- j = 0
- count = 0
- if nleft > 0:
- while i < nleft:
- if j == nright:
- count += nleft - i
- break
-
- lval = left[i]
- rval = right[j]
-
- if lval == rval:
- count += 1
- if i < nleft - 1:
- if j < nright - 1 and right[j + 1] == rval:
- j += 1
- else:
- i += 1
- if left[i] != rval:
- j += 1
- elif j < nright - 1:
- j += 1
- if lval != right[j]:
- i += 1
- else:
- # end of the road
- break
- elif lval < rval:
- count += 1
- i += 1
- else:
- j += 1
-
- # do it again now that result size is known
-
- lindexer = np.empty(count, dtype=np.int64)
- rindexer = np.empty(count, dtype=np.int64)
- result = np.empty(count, dtype={{dtype}})
-
- i = 0
- j = 0
- count = 0
- if nleft > 0:
- while i < nleft:
- if j == nright:
- while i < nleft:
- lindexer[count] = i
- rindexer[count] = -1
- result[count] = left[i]
- i += 1
- count += 1
- break
-
- lval = left[i]
- rval = right[j]
-
- if lval == rval:
- lindexer[count] = i
- rindexer[count] = j
- result[count] = lval
- count += 1
- if i < nleft - 1:
- if j < nright - 1 and right[j + 1] == rval:
- j += 1
- else:
- i += 1
- if left[i] != rval:
- j += 1
- elif j < nright - 1:
- j += 1
- if lval != right[j]:
- i += 1
- else:
- # end of the road
- break
- elif lval < rval:
- lindexer[count] = i
- rindexer[count] = -1
- result[count] = left[i]
- count += 1
- i += 1
- else:
- j += 1
-
- return result, lindexer, rindexer
-
-
-@cython.wraparound(False)
-@cython.boundscheck(False)
-def inner_join_indexer_{{name}}(ndarray[{{c_type}}] left,
- ndarray[{{c_type}}] right):
- """
- Two-pass algorithm for monotonic indexes. Handles many-to-one merges
- """
- cdef:
- Py_ssize_t i, j, k, nright, nleft, count
- {{c_type}} lval, rval
- ndarray[int64_t] lindexer, rindexer
- ndarray[{{c_type}}] result
-
- nleft = len(left)
- nright = len(right)
-
- i = 0
- j = 0
- count = 0
- if nleft > 0 and nright > 0:
- while True:
- if i == nleft:
- break
- if j == nright:
- break
-
- lval = left[i]
- rval = right[j]
- if lval == rval:
- count += 1
- if i < nleft - 1:
- if j < nright - 1 and right[j + 1] == rval:
- j += 1
- else:
- i += 1
- if left[i] != rval:
- j += 1
- elif j < nright - 1:
- j += 1
- if lval != right[j]:
- i += 1
- else:
- # end of the road
- break
- elif lval < rval:
- i += 1
- else:
- j += 1
-
- # do it again now that result size is known
-
- lindexer = np.empty(count, dtype=np.int64)
- rindexer = np.empty(count, dtype=np.int64)
- result = np.empty(count, dtype={{dtype}})
-
- i = 0
- j = 0
- count = 0
- if nleft > 0 and nright > 0:
- while True:
- if i == nleft:
- break
- if j == nright:
- break
-
- lval = left[i]
- rval = right[j]
- if lval == rval:
- lindexer[count] = i
- rindexer[count] = j
- result[count] = rval
- count += 1
- if i < nleft - 1:
- if j < nright - 1 and right[j + 1] == rval:
- j += 1
- else:
- i += 1
- if left[i] != rval:
- j += 1
- elif j < nright - 1:
- j += 1
- if lval != right[j]:
- i += 1
- else:
- # end of the road
- break
- elif lval < rval:
- i += 1
- else:
- j += 1
-
- return result, lindexer, rindexer
-
-
-@cython.wraparound(False)
-@cython.boundscheck(False)
-def outer_join_indexer_{{name}}(ndarray[{{c_type}}] left,
- ndarray[{{c_type}}] right):
- cdef:
- Py_ssize_t i, j, nright, nleft, count
- {{c_type}} lval, rval
- ndarray[int64_t] lindexer, rindexer
- ndarray[{{c_type}}] result
-
- nleft = len(left)
- nright = len(right)
-
- i = 0
- j = 0
- count = 0
- if nleft == 0:
- count = nright
- elif nright == 0:
- count = nleft
- else:
- while True:
- if i == nleft:
- count += nright - j
- break
- if j == nright:
- count += nleft - i
- break
-
- lval = left[i]
- rval = right[j]
- if lval == rval:
- count += 1
- if i < nleft - 1:
- if j < nright - 1 and right[j + 1] == rval:
- j += 1
- else:
- i += 1
- if left[i] != rval:
- j += 1
- elif j < nright - 1:
- j += 1
- if lval != right[j]:
- i += 1
- else:
- # end of the road
- break
- elif lval < rval:
- count += 1
- i += 1
- else:
- count += 1
- j += 1
-
- lindexer = np.empty(count, dtype=np.int64)
- rindexer = np.empty(count, dtype=np.int64)
- result = np.empty(count, dtype={{dtype}})
-
- # do it again, but populate the indexers / result
-
- i = 0
- j = 0
- count = 0
- if nleft == 0:
- for j in range(nright):
- lindexer[j] = -1
- rindexer[j] = j
- result[j] = right[j]
- elif nright == 0:
- for i in range(nleft):
- lindexer[i] = i
- rindexer[i] = -1
- result[i] = left[i]
- else:
- while True:
- if i == nleft:
- while j < nright:
- lindexer[count] = -1
- rindexer[count] = j
- result[count] = right[j]
- count += 1
- j += 1
- break
- if j == nright:
- while i < nleft:
- lindexer[count] = i
- rindexer[count] = -1
- result[count] = left[i]
- count += 1
- i += 1
- break
-
- lval = left[i]
- rval = right[j]
-
- if lval == rval:
- lindexer[count] = i
- rindexer[count] = j
- result[count] = lval
- count += 1
- if i < nleft - 1:
- if j < nright - 1 and right[j + 1] == rval:
- j += 1
- else:
- i += 1
- if left[i] != rval:
- j += 1
- elif j < nright - 1:
- j += 1
- if lval != right[j]:
- i += 1
- else:
- # end of the road
- break
- elif lval < rval:
- lindexer[count] = i
- rindexer[count] = -1
- result[count] = lval
- count += 1
- i += 1
- else:
- lindexer[count] = -1
- rindexer[count] = j
- result[count] = rval
- count += 1
- j += 1
-
- return result, lindexer, rindexer
-
-{{endfor}}
diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index 51c84d6e28cb4..55b4201f41b2a 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -222,11 +222,21 @@ class Index(IndexOpsMixin, PandasObject):
# To hand over control to subclasses
_join_precedence = 1
- # Cython methods
- _left_indexer_unique = libjoin.left_join_indexer_unique_object
- _left_indexer = libjoin.left_join_indexer_object
- _inner_indexer = libjoin.inner_join_indexer_object
- _outer_indexer = libjoin.outer_join_indexer_object
+ # Cython methods; see github.com/cython/cython/issues/2647
+ # for why we need to wrap these instead of making them class attributes
+ # Moreover, cython will choose the appropriate-dtyped sub-function
+ # given the dtypes of the passed arguments
+ def _left_indexer_unique(self, left, right):
+ return libjoin.left_join_indexer_unique(left, right)
+
+ def _left_indexer(self, left, right):
+ return libjoin.left_join_indexer(left, right)
+
+ def _inner_indexer(self, left, right):
+ return libjoin.inner_join_indexer(left, right)
+
+ def _outer_indexer(self, left, right):
+ return libjoin.outer_join_indexer(left, right)
_typ = 'index'
_data = None
diff --git a/pandas/core/indexes/numeric.py b/pandas/core/indexes/numeric.py
index 7f64fb744c682..eabbb43d155f6 100644
--- a/pandas/core/indexes/numeric.py
+++ b/pandas/core/indexes/numeric.py
@@ -1,6 +1,5 @@
import numpy as np
-from pandas._libs import (index as libindex,
- join as libjoin)
+from pandas._libs import index as libindex
from pandas.core.dtypes.common import (
is_dtype_equal,
pandas_dtype,
@@ -185,10 +184,6 @@ class Int64Index(IntegerIndex):
__doc__ = _num_index_shared_docs['class_descr'] % _int64_descr_args
_typ = 'int64index'
- _left_indexer_unique = libjoin.left_join_indexer_unique_int64
- _left_indexer = libjoin.left_join_indexer_int64
- _inner_indexer = libjoin.inner_join_indexer_int64
- _outer_indexer = libjoin.outer_join_indexer_int64
_can_hold_na = False
_engine_type = libindex.Int64Engine
_default_dtype = np.int64
@@ -243,10 +238,6 @@ class UInt64Index(IntegerIndex):
__doc__ = _num_index_shared_docs['class_descr'] % _uint64_descr_args
_typ = 'uint64index'
- _left_indexer_unique = libjoin.left_join_indexer_unique_uint64
- _left_indexer = libjoin.left_join_indexer_uint64
- _inner_indexer = libjoin.inner_join_indexer_uint64
- _outer_indexer = libjoin.outer_join_indexer_uint64
_can_hold_na = False
_engine_type = libindex.UInt64Engine
_default_dtype = np.uint64
@@ -321,11 +312,6 @@ class Float64Index(NumericIndex):
_typ = 'float64index'
_engine_type = libindex.Float64Engine
- _left_indexer_unique = libjoin.left_join_indexer_unique_float64
- _left_indexer = libjoin.left_join_indexer_float64
- _inner_indexer = libjoin.inner_join_indexer_float64
- _outer_indexer = libjoin.outer_join_indexer_float64
-
_default_dtype = np.float64
@property
diff --git a/setup.py b/setup.py
index f31aaa7e79a0d..adffddc61cbac 100755
--- a/setup.py
+++ b/setup.py
@@ -76,7 +76,7 @@ def is_platform_windows():
'_libs/algos_take_helper.pxi.in',
'_libs/algos_rank_helper.pxi.in'],
'groupby': ['_libs/groupby_helper.pxi.in'],
- 'join': ['_libs/join_helper.pxi.in', '_libs/join_func_helper.pxi.in'],
+ 'join': ['_libs/join_func_helper.pxi.in'],
'hashtable': ['_libs/hashtable_class_helper.pxi.in',
'_libs/hashtable_func_helper.pxi.in'],
'index': ['_libs/index_class_helper.pxi.in'],
| Related but orthogonal to #23022. | https://api.github.com/repos/pandas-dev/pandas/pulls/23171 | 2018-10-15T21:34:57Z | 2018-10-17T12:35:21Z | 2018-10-17T12:35:21Z | 2018-10-17T15:59:56Z |
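The diff above replaces per-dtype template expansion (`join_helper.pxi.in`) with single fused-type Cython functions that are specialized by subscripting, e.g. `inner_join_indexer["int64_t"]`. As a rough pure-Python sketch of what these indexers compute — not the pandas implementation, and assuming `left` is sorted with unique values while `right` is sorted but may contain repeats — the inner join indexer pairs up matching positions in the two arrays:

```python
import numpy as np

def inner_join_indexer(left, right):
    """Sketch of an inner join indexer for two monotonically increasing
    arrays. Assumes values in `left` are unique; `right` may repeat.
    Returns the matched values plus int64 indexers into each input."""
    lindexer, rindexer, result = [], [], []
    i = j = 0
    nleft, nright = len(left), len(right)
    while i < nleft and j < nright:
        if left[i] == right[j]:
            # emit one output row per duplicate of this value in `right`
            k = j
            while k < nright and right[k] == left[i]:
                lindexer.append(i)
                rindexer.append(k)
                result.append(left[i])
                k += 1
            i += 1
            j = k
        elif left[i] < right[j]:
            i += 1
        else:
            j += 1
    return (np.asarray(result),
            np.asarray(lindexer, dtype=np.int64),
            np.asarray(rindexer, dtype=np.int64))

vals, li, ri = inner_join_indexer(np.array([1, 2, 4]),
                                  np.array([2, 2, 3, 4]))
# vals -> [2, 2, 4], li -> [1, 1, 2], ri -> [0, 1, 3]
```

Unlike this list-based sketch, the Cython routines in the diff make two passes — a first pass that only counts matches, then `np.empty(count, ...)` allocations and a second pass that populates the preallocated indexers — which avoids growing Python lists in the hot loop.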
API: Series.str-accessor infers dtype (and Index.str does not raise on all-NA) | diff --git a/doc/source/user_guide/text.rst b/doc/source/user_guide/text.rst
index f7fdfcf8bf882..87c75e8bcd91f 100644
--- a/doc/source/user_guide/text.rst
+++ b/doc/source/user_guide/text.rst
@@ -70,6 +70,16 @@ and replacing any remaining whitespaces with underscores:
``.str`` methods which operate on elements of type ``list`` are not available on such a
``Series``.
+.. _text.warn_types:
+
+.. warning::
+
+ Before v.0.25.0, the ``.str``-accessor did only the most rudimentary type checks. Starting with
+ v.0.25.0, the type of the Series is inferred and the allowed types (i.e. strings) are enforced more rigorously.
+
+ Generally speaking, the ``.str`` accessor is intended to work only on strings. With very few
+ exceptions, other uses are not supported, and may be disabled at a later point.
+
Splitting and Replacing Strings
-------------------------------
diff --git a/doc/source/whatsnew/v0.25.0.rst b/doc/source/whatsnew/v0.25.0.rst
index 89a9da4a73b35..2dc3a0655a7c8 100644
--- a/doc/source/whatsnew/v0.25.0.rst
+++ b/doc/source/whatsnew/v0.25.0.rst
@@ -195,6 +195,43 @@ returned if all the columns were dummy encoded, and a :class:`DataFrame` otherwi
Providing any ``SparseSeries`` or ``SparseDataFrame`` to :func:`concat` will
cause a ``SparseSeries`` or ``SparseDataFrame`` to be returned, as before.
+The ``.str``-accessor performs stricter type checks
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Due to the lack of more fine-grained dtypes, :attr:`Series.str` so far only checked whether the data was
+of ``object`` dtype. :attr:`Series.str` will now infer the dtype data *within* the Series; in particular,
+``'bytes'``-only data will raise an exception (except for :meth:`Series.str.decode`, :meth:`Series.str.get`,
+:meth:`Series.str.len`, :meth:`Series.str.slice`), see :issue:`23163`, :issue:`23011`, :issue:`23551`.
+
+*Previous Behaviour*:
+
+.. code-block:: python
+
+ In [1]: s = pd.Series(np.array(['a', 'ba', 'cba'], 'S'), dtype=object)
+
+ In [2]: s
+ Out[2]:
+ 0 b'a'
+ 1 b'ba'
+ 2 b'cba'
+ dtype: object
+
+ In [3]: s.str.startswith(b'a')
+ Out[3]:
+ 0 True
+ 1 False
+ 2 False
+ dtype: bool
+
+*New Behaviour*:
+
+.. ipython:: python
+ :okexcept:
+
+ s = pd.Series(np.array(['a', 'ba', 'cba'], 'S'), dtype=object)
+ s
+ s.str.startswith(b'a')
+
.. _whatsnew_0250.api_breaking.incompatible_index_unions
Incompatible Index Type Unions
@@ -267,7 +304,6 @@ This change is backward compatible for direct usage of Pandas, but if you subcla
Pandas objects *and* give your subclasses specific ``__str__``/``__repr__`` methods,
you may have to adjust your ``__str__``/``__repr__`` methods (:issue:`26495`).
-
.. _whatsnew_0250.api_breaking.deps:
Increased minimum versions for dependencies
@@ -472,7 +508,7 @@ Conversion
Strings
^^^^^^^
--
+- Bug in the ``__name__`` attribute of several methods of :class:`Series.str`, which were set incorrectly (:issue:`23551`)
-
-
diff --git a/pandas/core/strings.py b/pandas/core/strings.py
index ee3796241690d..bd756491abd2f 100644
--- a/pandas/core/strings.py
+++ b/pandas/core/strings.py
@@ -1,4 +1,5 @@
import codecs
+from functools import wraps
import re
import textwrap
from typing import Dict
@@ -12,8 +13,8 @@
from pandas.core.dtypes.common import (
ensure_object, is_bool_dtype, is_categorical_dtype, is_integer,
- is_list_like, is_object_dtype, is_re, is_scalar, is_string_like)
-from pandas.core.dtypes.generic import ABCIndexClass, ABCSeries
+ is_list_like, is_re, is_scalar, is_string_like)
+from pandas.core.dtypes.generic import ABCIndexClass, ABCMultiIndex, ABCSeries
from pandas.core.dtypes.missing import isna
from pandas.core.algorithms import take_1d
@@ -1720,12 +1721,78 @@ def str_encode(arr, encoding, errors="strict"):
return _na_map(f, arr)
-def _noarg_wrapper(f, docstring=None, **kargs):
+def forbid_nonstring_types(forbidden, name=None):
+ """
+ Decorator to forbid specific types for a method of StringMethods.
+
+ For calling `.str.{method}` on a Series or Index, it is necessary to first
+ initialize the :class:`StringMethods` object, and then call the method.
+ However, different methods allow different input types, and so this can not
+ be checked during :meth:`StringMethods.__init__`, but must be done on a
+ per-method basis. This decorator exists to facilitate this process, and
+ make it explicit which (inferred) types are disallowed by the method.
+
+ :meth:`StringMethods.__init__` allows the *union* of types its different
+ methods allow (after skipping NaNs; see :meth:`StringMethods._validate`),
+ namely: ['string', 'empty', 'bytes', 'mixed', 'mixed-integer'].
+
+ The default string types ['string', 'empty'] are allowed for all methods.
+ For the additional types ['bytes', 'mixed', 'mixed-integer'], each method
+ then needs to forbid the types it is not intended for.
+
+ Parameters
+ ----------
+ forbidden : list-of-str or None
+ List of forbidden non-string types, may be one or more of
+ `['bytes', 'mixed', 'mixed-integer']`.
+ name : str, default None
+ Name of the method to use in the error message. By default, this is
+ None, in which case the name from the method being wrapped will be
+ copied. However, for working with further wrappers (like _pat_wrapper
+ and _noarg_wrapper), it is necessary to specify the name.
+
+ Returns
+ -------
+ func : wrapper
+ The method to which the decorator is applied, with an added check that
+ enforces the inferred type to not be in the list of forbidden types.
+
+ Raises
+ ------
+ TypeError
+ If the inferred type of the underlying data is in `forbidden`.
+ """
+
+ # deal with None
+ forbidden = [] if forbidden is None else forbidden
+
+ allowed_types = {'string', 'empty', 'bytes',
+ 'mixed', 'mixed-integer'} - set(forbidden)
+
+ def _forbid_nonstring_types(func):
+ func_name = func.__name__ if name is None else name
+
+ @wraps(func)
+ def wrapper(self, *args, **kwargs):
+ if self._inferred_dtype not in allowed_types:
+ msg = ('Cannot use .str.{name} with values of inferred dtype '
+ '{inf_type!r}.'.format(name=func_name,
+ inf_type=self._inferred_dtype))
+ raise TypeError(msg)
+ return func(self, *args, **kwargs)
+ wrapper.__name__ = func_name
+ return wrapper
+ return _forbid_nonstring_types
+
+
+def _noarg_wrapper(f, name=None, docstring=None, forbidden_types=['bytes'],
+ **kargs):
+ @forbid_nonstring_types(forbidden_types, name=name)
def wrapper(self):
result = _na_map(f, self._parent, **kargs)
return self._wrap_result(result)
- wrapper.__name__ = f.__name__
+ wrapper.__name__ = f.__name__ if name is None else name
if docstring is not None:
wrapper.__doc__ = docstring
else:
@@ -1734,22 +1801,26 @@ def wrapper(self):
return wrapper
-def _pat_wrapper(f, flags=False, na=False, **kwargs):
+def _pat_wrapper(f, flags=False, na=False, name=None,
+ forbidden_types=['bytes'], **kwargs):
+ @forbid_nonstring_types(forbidden_types, name=name)
def wrapper1(self, pat):
result = f(self._parent, pat)
return self._wrap_result(result)
+ @forbid_nonstring_types(forbidden_types, name=name)
def wrapper2(self, pat, flags=0, **kwargs):
result = f(self._parent, pat, flags=flags, **kwargs)
return self._wrap_result(result)
+ @forbid_nonstring_types(forbidden_types, name=name)
def wrapper3(self, pat, na=np.nan):
result = f(self._parent, pat, na=na)
return self._wrap_result(result)
wrapper = wrapper3 if na else wrapper2 if flags else wrapper1
- wrapper.__name__ = f.__name__
+ wrapper.__name__ = f.__name__ if name is None else name
if f.__doc__:
wrapper.__doc__ = f.__doc__
@@ -1780,7 +1851,7 @@ class StringMethods(NoNewAttributesMixin):
"""
def __init__(self, data):
- self._validate(data)
+ self._inferred_dtype = self._validate(data)
self._is_categorical = is_categorical_dtype(data)
# .values.categories works for both Series/Index
@@ -1791,38 +1862,44 @@ def __init__(self, data):
@staticmethod
def _validate(data):
- from pandas.core.index import Index
-
- if (isinstance(data, ABCSeries) and
- not ((is_categorical_dtype(data.dtype) and
- is_object_dtype(data.values.categories)) or
- (is_object_dtype(data.dtype)))):
- # it's neither a string series not a categorical series with
- # strings inside the categories.
- # this really should exclude all series with any non-string values
- # (instead of test for object dtype), but that isn't practical for
- # performance reasons until we have a str dtype (GH 9343)
+ """
+ Auxiliary function for StringMethods, infers and checks dtype of data.
+
+ This is a "first line of defence" at the creation of the StringMethods-
+ object (see _make_accessor), and just checks that the dtype is in the
+ *union* of the allowed types over all string methods below; this
+ restriction is then refined on a per-method basis using the decorator
+ @forbid_nonstring_types (more info in the corresponding docstring).
+
+ This really should exclude all series/index with any non-string values,
+ but that isn't practical for performance reasons until we have a str
+ dtype (GH 9343 / 13877)
+
+ Parameters
+ ----------
+ data : The content of the Series
+
+ Returns
+ -------
+ dtype : inferred dtype of data
+ """
+ if isinstance(data, ABCMultiIndex):
+ raise AttributeError('Can only use .str accessor with Index, '
+ 'not MultiIndex')
+
+ # see _libs/lib.pyx for list of inferred types
+ allowed_types = ['string', 'empty', 'bytes', 'mixed', 'mixed-integer']
+
+ values = getattr(data, 'values', data) # Series / Index
+ values = getattr(values, 'categories', values) # categorical / normal
+
+ # missing values obfuscate type inference -> skip
+ inferred_dtype = lib.infer_dtype(values, skipna=True)
+
+ if inferred_dtype not in allowed_types:
raise AttributeError("Can only use .str accessor with string "
- "values, which use np.object_ dtype in "
- "pandas")
- elif isinstance(data, Index):
- # can't use ABCIndex to exclude non-str
-
- # see src/inference.pyx which can contain string values
- allowed_types = ('string', 'unicode', 'mixed', 'mixed-integer')
- if is_categorical_dtype(data.dtype):
- inf_type = data.categories.inferred_type
- else:
- inf_type = data.inferred_type
- if inf_type not in allowed_types:
- message = ("Can only use .str accessor with string values "
- "(i.e. inferred_type is 'string', 'unicode' or "
- "'mixed')")
- raise AttributeError(message)
- if data.nlevels > 1:
- message = ("Can only use .str accessor with Index, not "
- "MultiIndex")
- raise AttributeError(message)
+ "values!")
+ return inferred_dtype
def __getitem__(self, key):
if isinstance(key, slice):
@@ -2025,12 +2102,13 @@ def _get_series_list(self, others, ignore_index=False):
warnings.warn('list-likes other than Series, Index, or '
'np.ndarray WITHIN another list-like are '
'deprecated and will be removed in a future '
- 'version.', FutureWarning, stacklevel=3)
+ 'version.', FutureWarning, stacklevel=4)
return (los, join_warn)
elif all(not is_list_like(x) for x in others):
return ([Series(others, index=idx)], False)
raise TypeError(err_msg)
+ @forbid_nonstring_types(['bytes', 'mixed', 'mixed-integer'])
def cat(self, others=None, sep=None, na_rep=None, join=None):
"""
Concatenate strings in the Series/Index with given separator.
@@ -2211,7 +2289,7 @@ def cat(self, others=None, sep=None, na_rep=None, join=None):
"Index/DataFrame in `others`. To enable alignment "
"and silence this warning, pass `join='left'|"
"'outer'|'inner'|'right'`. The future default will "
- "be `join='left'`.", FutureWarning, stacklevel=2)
+ "be `join='left'`.", FutureWarning, stacklevel=3)
# if join is None, _get_series_list already force-aligned indexes
join = 'left' if join is None else join
@@ -2384,6 +2462,7 @@ def cat(self, others=None, sep=None, na_rep=None, join=None):
@Appender(_shared_docs['str_split'] % {
'side': 'beginning',
'method': 'split'})
+ @forbid_nonstring_types(['bytes'])
def split(self, pat=None, n=-1, expand=False):
result = str_split(self._parent, pat, n=n)
return self._wrap_result(result, expand=expand)
@@ -2391,6 +2470,7 @@ def split(self, pat=None, n=-1, expand=False):
@Appender(_shared_docs['str_split'] % {
'side': 'end',
'method': 'rsplit'})
+ @forbid_nonstring_types(['bytes'])
def rsplit(self, pat=None, n=-1, expand=False):
result = str_rsplit(self._parent, pat, n=n)
return self._wrap_result(result, expand=expand)
@@ -2485,6 +2565,7 @@ def rsplit(self, pat=None, n=-1, expand=False):
'`sep`.'
})
@deprecate_kwarg(old_arg_name='pat', new_arg_name='sep')
+ @forbid_nonstring_types(['bytes'])
def partition(self, sep=' ', expand=True):
f = lambda x: x.partition(sep)
result = _na_map(f, self._parent)
@@ -2498,6 +2579,7 @@ def partition(self, sep=' ', expand=True):
'`sep`.'
})
@deprecate_kwarg(old_arg_name='pat', new_arg_name='sep')
+ @forbid_nonstring_types(['bytes'])
def rpartition(self, sep=' ', expand=True):
f = lambda x: x.rpartition(sep)
result = _na_map(f, self._parent)
@@ -2509,33 +2591,39 @@ def get(self, i):
return self._wrap_result(result)
@copy(str_join)
+ @forbid_nonstring_types(['bytes'])
def join(self, sep):
result = str_join(self._parent, sep)
return self._wrap_result(result)
@copy(str_contains)
+ @forbid_nonstring_types(['bytes'])
def contains(self, pat, case=True, flags=0, na=np.nan, regex=True):
result = str_contains(self._parent, pat, case=case, flags=flags, na=na,
regex=regex)
return self._wrap_result(result, fill_value=na)
@copy(str_match)
+ @forbid_nonstring_types(['bytes'])
def match(self, pat, case=True, flags=0, na=np.nan):
result = str_match(self._parent, pat, case=case, flags=flags, na=na)
return self._wrap_result(result, fill_value=na)
@copy(str_replace)
+ @forbid_nonstring_types(['bytes'])
def replace(self, pat, repl, n=-1, case=None, flags=0, regex=True):
result = str_replace(self._parent, pat, repl, n=n, case=case,
flags=flags, regex=regex)
return self._wrap_result(result)
@copy(str_repeat)
+ @forbid_nonstring_types(['bytes'])
def repeat(self, repeats):
result = str_repeat(self._parent, repeats)
return self._wrap_result(result)
@copy(str_pad)
+ @forbid_nonstring_types(['bytes'])
def pad(self, width, side='left', fillchar=' '):
result = str_pad(self._parent, width, side=side, fillchar=fillchar)
return self._wrap_result(result)
@@ -2559,17 +2647,21 @@ def pad(self, width, side='left', fillchar=' '):
@Appender(_shared_docs['str_pad'] % dict(side='left and right',
method='center'))
+ @forbid_nonstring_types(['bytes'])
def center(self, width, fillchar=' '):
return self.pad(width, side='both', fillchar=fillchar)
@Appender(_shared_docs['str_pad'] % dict(side='right', method='ljust'))
+ @forbid_nonstring_types(['bytes'])
def ljust(self, width, fillchar=' '):
return self.pad(width, side='right', fillchar=fillchar)
@Appender(_shared_docs['str_pad'] % dict(side='left', method='rjust'))
+ @forbid_nonstring_types(['bytes'])
def rjust(self, width, fillchar=' '):
return self.pad(width, side='left', fillchar=fillchar)
+ @forbid_nonstring_types(['bytes'])
def zfill(self, width):
"""
Pad strings in the Series/Index by prepending '0' characters.
@@ -2639,16 +2731,19 @@ def slice(self, start=None, stop=None, step=None):
return self._wrap_result(result)
@copy(str_slice_replace)
+ @forbid_nonstring_types(['bytes'])
def slice_replace(self, start=None, stop=None, repl=None):
result = str_slice_replace(self._parent, start, stop, repl)
return self._wrap_result(result)
@copy(str_decode)
def decode(self, encoding, errors="strict"):
+ # need to allow bytes here
result = str_decode(self._parent, encoding, errors)
return self._wrap_result(result)
@copy(str_encode)
+ @forbid_nonstring_types(['bytes'])
def encode(self, encoding, errors="strict"):
result = str_encode(self._parent, encoding, errors)
return self._wrap_result(result)
@@ -2718,28 +2813,33 @@ def encode(self, encoding, errors="strict"):
@Appender(_shared_docs['str_strip'] % dict(side='left and right sides',
method='strip'))
+ @forbid_nonstring_types(['bytes'])
def strip(self, to_strip=None):
result = str_strip(self._parent, to_strip, side='both')
return self._wrap_result(result)
@Appender(_shared_docs['str_strip'] % dict(side='left side',
method='lstrip'))
+ @forbid_nonstring_types(['bytes'])
def lstrip(self, to_strip=None):
result = str_strip(self._parent, to_strip, side='left')
return self._wrap_result(result)
@Appender(_shared_docs['str_strip'] % dict(side='right side',
method='rstrip'))
+ @forbid_nonstring_types(['bytes'])
def rstrip(self, to_strip=None):
result = str_strip(self._parent, to_strip, side='right')
return self._wrap_result(result)
@copy(str_wrap)
+ @forbid_nonstring_types(['bytes'])
def wrap(self, width, **kwargs):
result = str_wrap(self._parent, width, **kwargs)
return self._wrap_result(result)
@copy(str_get_dummies)
+ @forbid_nonstring_types(['bytes'])
def get_dummies(self, sep='|'):
# we need to cast to Series of strings as only that has all
# methods available for making the dummies...
@@ -2749,20 +2849,23 @@ def get_dummies(self, sep='|'):
name=name, expand=True)
@copy(str_translate)
+ @forbid_nonstring_types(['bytes'])
def translate(self, table):
result = str_translate(self._parent, table)
return self._wrap_result(result)
- count = _pat_wrapper(str_count, flags=True)
- startswith = _pat_wrapper(str_startswith, na=True)
- endswith = _pat_wrapper(str_endswith, na=True)
- findall = _pat_wrapper(str_findall, flags=True)
+ count = _pat_wrapper(str_count, flags=True, name='count')
+ startswith = _pat_wrapper(str_startswith, na=True, name='startswith')
+ endswith = _pat_wrapper(str_endswith, na=True, name='endswith')
+ findall = _pat_wrapper(str_findall, flags=True, name='findall')
@copy(str_extract)
+ @forbid_nonstring_types(['bytes'])
def extract(self, pat, flags=0, expand=True):
return str_extract(self, pat, flags=flags, expand=expand)
@copy(str_extractall)
+ @forbid_nonstring_types(['bytes'])
def extractall(self, pat, flags=0):
return str_extractall(self._orig, pat, flags=flags)
@@ -2792,6 +2895,7 @@ def extractall(self, pat, flags=0):
@Appender(_shared_docs['find'] %
dict(side='lowest', method='find',
also='rfind : Return highest indexes in each strings.'))
+ @forbid_nonstring_types(['bytes'])
def find(self, sub, start=0, end=None):
result = str_find(self._parent, sub, start=start, end=end, side='left')
return self._wrap_result(result)
@@ -2799,11 +2903,13 @@ def find(self, sub, start=0, end=None):
@Appender(_shared_docs['find'] %
dict(side='highest', method='rfind',
also='find : Return lowest indexes in each strings.'))
+ @forbid_nonstring_types(['bytes'])
def rfind(self, sub, start=0, end=None):
result = str_find(self._parent, sub,
start=start, end=end, side='right')
return self._wrap_result(result)
+ @forbid_nonstring_types(['bytes'])
def normalize(self, form):
"""
Return the Unicode normal form for the strings in the Series/Index.
@@ -2851,6 +2957,7 @@ def normalize(self, form):
@Appender(_shared_docs['index'] %
dict(side='lowest', similar='find', method='index',
also='rindex : Return highest indexes in each strings.'))
+ @forbid_nonstring_types(['bytes'])
def index(self, sub, start=0, end=None):
result = str_index(self._parent, sub,
start=start, end=end, side='left')
@@ -2859,6 +2966,7 @@ def index(self, sub, start=0, end=None):
@Appender(_shared_docs['index'] %
dict(side='highest', similar='rfind', method='rindex',
also='index : Return lowest indexes in each strings.'))
+ @forbid_nonstring_types(['bytes'])
def rindex(self, sub, start=0, end=None):
result = str_index(self._parent, sub,
start=start, end=end, side='right')
@@ -2908,7 +3016,8 @@ def rindex(self, sub, start=0, end=None):
5 3.0
dtype: float64
""")
- len = _noarg_wrapper(len, docstring=_shared_docs['len'], dtype=int)
+ len = _noarg_wrapper(len, docstring=_shared_docs['len'],
+ forbidden_types=None, dtype=int)
_shared_docs['casemethods'] = ("""
Convert strings in the Series/Index to %(type)s.
@@ -2989,21 +3098,27 @@ def rindex(self, sub, start=0, end=None):
_doc_args['casefold'] = dict(type='be casefolded', method='casefold',
version='\n .. versionadded:: 0.25.0\n')
lower = _noarg_wrapper(lambda x: x.lower(),
+ name='lower',
docstring=_shared_docs['casemethods'] %
_doc_args['lower'])
upper = _noarg_wrapper(lambda x: x.upper(),
+ name='upper',
docstring=_shared_docs['casemethods'] %
_doc_args['upper'])
title = _noarg_wrapper(lambda x: x.title(),
+ name='title',
docstring=_shared_docs['casemethods'] %
_doc_args['title'])
capitalize = _noarg_wrapper(lambda x: x.capitalize(),
+ name='capitalize',
docstring=_shared_docs['casemethods'] %
_doc_args['capitalize'])
swapcase = _noarg_wrapper(lambda x: x.swapcase(),
+ name='swapcase',
docstring=_shared_docs['casemethods'] %
_doc_args['swapcase'])
casefold = _noarg_wrapper(lambda x: x.casefold(),
+ name='casefold',
docstring=_shared_docs['casemethods'] %
_doc_args['casefold'])
@@ -3157,30 +3272,39 @@ def rindex(self, sub, start=0, end=None):
_doc_args['isnumeric'] = dict(type='numeric', method='isnumeric')
_doc_args['isdecimal'] = dict(type='decimal', method='isdecimal')
isalnum = _noarg_wrapper(lambda x: x.isalnum(),
+ name='isalnum',
docstring=_shared_docs['ismethods'] %
_doc_args['isalnum'])
isalpha = _noarg_wrapper(lambda x: x.isalpha(),
+ name='isalpha',
docstring=_shared_docs['ismethods'] %
_doc_args['isalpha'])
isdigit = _noarg_wrapper(lambda x: x.isdigit(),
+ name='isdigit',
docstring=_shared_docs['ismethods'] %
_doc_args['isdigit'])
isspace = _noarg_wrapper(lambda x: x.isspace(),
+ name='isspace',
docstring=_shared_docs['ismethods'] %
_doc_args['isspace'])
islower = _noarg_wrapper(lambda x: x.islower(),
+ name='islower',
docstring=_shared_docs['ismethods'] %
_doc_args['islower'])
isupper = _noarg_wrapper(lambda x: x.isupper(),
+ name='isupper',
docstring=_shared_docs['ismethods'] %
_doc_args['isupper'])
istitle = _noarg_wrapper(lambda x: x.istitle(),
+ name='istitle',
docstring=_shared_docs['ismethods'] %
_doc_args['istitle'])
isnumeric = _noarg_wrapper(lambda x: x.isnumeric(),
+ name='isnumeric',
docstring=_shared_docs['ismethods'] %
_doc_args['isnumeric'])
isdecimal = _noarg_wrapper(lambda x: x.isdecimal(),
+ name='isdecimal',
docstring=_shared_docs['ismethods'] %
_doc_args['isdecimal'])
diff --git a/pandas/tests/test_strings.py b/pandas/tests/test_strings.py
index 2951ca24fa7ff..1ba0ef3918fb7 100644
--- a/pandas/tests/test_strings.py
+++ b/pandas/tests/test_strings.py
@@ -150,6 +150,9 @@ def any_allowed_skipna_inferred_dtype(request):
... inferred_dtype, values = any_allowed_skipna_inferred_dtype
... # will pass
... assert lib.infer_dtype(values, skipna=True) == inferred_dtype
+ ...
+ ... # constructor for .str-accessor will also pass
+ ... pd.Series(values).str
"""
inferred_dtype, values = request.param
values = np.array(values, dtype=object) # object dtype to avoid casting
@@ -179,20 +182,6 @@ def test_api_per_dtype(self, box, dtype, any_skipna_inferred_dtype):
pytest.xfail(reason='Conversion to numpy array fails because '
'the ._values-attribute is not a numpy array for '
'PeriodArray/IntervalArray; see GH 23553')
- if box == Index and inferred_dtype in ['empty', 'bytes']:
- pytest.xfail(reason='Raising too restrictively; '
- 'solved by GH 23167')
- if (box == Index and dtype == object
- and inferred_dtype in ['boolean', 'date', 'time']):
- pytest.xfail(reason='Inferring incorrectly because of NaNs; '
- 'solved by GH 23167')
- if (box == Series
- and (dtype == object and inferred_dtype not in [
- 'string', 'unicode', 'empty',
- 'bytes', 'mixed', 'mixed-integer'])
- or (dtype == 'category'
- and inferred_dtype in ['decimal', 'boolean', 'time'])):
- pytest.xfail(reason='Not raising correctly; solved by GH 23167')
types_passing_constructor = ['string', 'unicode', 'empty',
'bytes', 'mixed', 'mixed-integer']
@@ -220,27 +209,21 @@ def test_api_per_method(self, box, dtype,
method_name, args, kwargs = any_string_method
# TODO: get rid of these xfails
- if (method_name not in ['encode', 'decode', 'len']
- and inferred_dtype == 'bytes'):
- pytest.xfail(reason='Not raising for "bytes", see GH 23011;'
- 'Also: malformed method names, see GH 23551; '
- 'solved by GH 23167')
- if (method_name == 'cat'
- and inferred_dtype in ['mixed', 'mixed-integer']):
- pytest.xfail(reason='Bad error message; should raise better; '
- 'solved by GH 23167')
- if box == Index and inferred_dtype in ['empty', 'bytes']:
- pytest.xfail(reason='Raising too restrictively; '
- 'solved by GH 23167')
- if (box == Index and dtype == object
- and inferred_dtype in ['boolean', 'date', 'time']):
- pytest.xfail(reason='Inferring incorrectly because of NaNs; '
- 'solved by GH 23167')
+ if (method_name in ['partition', 'rpartition'] and box == Index
+ and inferred_dtype == 'empty'):
+ pytest.xfail(reason='Method cannot deal with empty Index')
+ if (method_name == 'split' and box == Index and values.size == 0
+ and kwargs.get('expand', None) is not None):
+ pytest.xfail(reason='Split fails on empty Series when expand=True')
+ if (method_name == 'get_dummies' and box == Index
+ and inferred_dtype == 'empty' and (dtype == object
+ or values.size == 0)):
+ pytest.xfail(reason='Need to fortify get_dummies corner cases')
t = box(values, dtype=dtype) # explicit dtype to avoid casting
method = getattr(t.str, method_name)
- bytes_allowed = method_name in ['encode', 'decode', 'len']
+ bytes_allowed = method_name in ['decode', 'get', 'len', 'slice']
# as of v0.23.4, all methods except 'cat' are very lenient with the
# allowed data types, just returning NaN for entries that error.
# This could be changed with an 'errors'-kwarg to the `str`-accessor,
@@ -3167,7 +3150,8 @@ def test_str_accessor_no_new_attributes(self):
def test_method_on_bytes(self):
lhs = Series(np.array(list('abc'), 'S1').astype(object))
rhs = Series(np.array(list('def'), 'S1').astype(object))
- with pytest.raises(TypeError, match="can't concat str to bytes"):
+ with pytest.raises(TypeError,
+ match="Cannot use .str.cat with values of.*"):
lhs.str.cat(rhs)
def test_casefold(self):
| closes #23163
closes #23011
closes #23551
~~#23555 / #23556~~
- [x] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
Several times while working around `str.cat` (most recently #22725) @jreback and @WillAyd mentioned that the `.str` accessor should *not* work for (e.g.) bytes data. So far, the constructor of the `StringMethods` object didn't infer the dtype for `Series` -- this PR adds that inference.
~~What needs discussion is the following: I've commented out two tests about an encode-decode roundtrip that cannot work when forbidding bytes data. Should such an encode-decode cycle explicitly be part of the desired functionality for `.str` (currently there are `.str.encode` and `.str.decode` methods), or not? I guess that even if not, this would need a deprecation cycle.~~
Alternatively, it'd be possible to allow construction of the `StringMethods` object but raise on `inferred_dtype='bytes'` for all but the `str.decode` method.
Finally, it would be possible to partially close #13877 by also excluding `'mixed-integer'` from the allowed inferred types, ~~but for floats, the inference code would need to be able to distinguish between actual floats and missing values (currently, both of those return `inferred_type='mixed'`)~~.
| https://api.github.com/repos/pandas-dev/pandas/pulls/23167 | 2018-10-15T19:13:46Z | 2019-06-01T15:03:08Z | 2019-06-01T15:03:07Z | 2019-06-01T15:09:48Z |
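The `forbid_nonstring_types` decorator applied throughout the diff above is defined elsewhere in the PR and not shown in this excerpt. A minimal standalone sketch of how such a decorator could behave (the class, attribute names, and error message here are assumptions, loosely modeled on the `"Cannot use .str.cat with values of"` match in the test at the end of the diff):

```python
import functools


def forbid_nonstring_types(forbidden):
    """Raise TypeError when the accessor's inferred dtype is forbidden
    for the decorated string method (e.g. forbidden=['bytes'])."""
    forbidden = set(forbidden or [])

    def decorator(func):
        @functools.wraps(func)
        def wrapper(self, *args, **kwargs):
            if self._inferred_dtype in forbidden:
                raise TypeError(
                    "Cannot use .str.{} with values of inferred dtype "
                    "{!r}.".format(func.__name__, self._inferred_dtype))
            return func(self, *args, **kwargs)
        return wrapper
    return decorator


class FakeStringMethods:
    """Hypothetical stand-in for pandas' StringMethods accessor."""

    def __init__(self, inferred_dtype):
        self._inferred_dtype = inferred_dtype

    @forbid_nonstring_types(['bytes'])
    def count(self, pat):
        # real implementation would count pattern occurrences per element
        return "counted {!r}".format(pat)
```

With this shape, `FakeStringMethods('string').count('a')` succeeds while `FakeStringMethods('bytes').count('a')` raises, which mirrors why only a whitelist of methods (`decode`, `get`, `len`, `slice` in the test diff) stays bytes-allowed.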
DOC: Validate parameter types in docstrings (e.g. str instead of string) | diff --git a/scripts/tests/test_validate_docstrings.py b/scripts/tests/test_validate_docstrings.py
index 27c63e3ba3a79..fcae4051dc471 100644
--- a/scripts/tests/test_validate_docstrings.py
+++ b/scripts/tests/test_validate_docstrings.py
@@ -208,7 +208,7 @@ def mode(self, axis, numeric_only):
.. versionchanged:: 0.1.2
- numeric_only : boolean
+ numeric_only : bool
Sentence ending in period, followed by multiple directives.
.. versionadded:: 0.1.2
@@ -455,6 +455,50 @@ def blank_lines(self, kind):
"""
pass
+ def integer_parameter(self, kind):
+ """
+ Uses integer instead of int.
+
+ Parameters
+ ----------
+ kind : integer
+ Foo bar baz.
+ """
+ pass
+
+ def string_parameter(self, kind):
+ """
+ Uses string instead of str.
+
+ Parameters
+ ----------
+ kind : string
+ Foo bar baz.
+ """
+ pass
+
+ def boolean_parameter(self, kind):
+ """
+ Uses boolean instead of bool.
+
+ Parameters
+ ----------
+ kind : boolean
+ Foo bar baz.
+ """
+ pass
+
+ def list_incorrect_parameter_type(self, kind):
+ """
+ Uses list of boolean instead of list of bool.
+
+ Parameters
+ ----------
+ kind : list of boolean, integer, float or string
+ Foo bar baz.
+ """
+ pass
+
class BadReturns(object):
@@ -590,6 +634,18 @@ def test_bad_generic_functions(self, func):
('Parameter "kind" description should finish with "."',)),
('BadParameters', 'parameter_capitalization',
('Parameter "kind" description should start with a capital letter',)),
+ ('BadParameters', 'integer_parameter',
+ ('Parameter "kind" type should use "int" instead of "integer"',)),
+ ('BadParameters', 'string_parameter',
+ ('Parameter "kind" type should use "str" instead of "string"',)),
+ ('BadParameters', 'boolean_parameter',
+ ('Parameter "kind" type should use "bool" instead of "boolean"',)),
+ ('BadParameters', 'list_incorrect_parameter_type',
+ ('Parameter "kind" type should use "bool" instead of "boolean"',)),
+ ('BadParameters', 'list_incorrect_parameter_type',
+ ('Parameter "kind" type should use "int" instead of "integer"',)),
+ ('BadParameters', 'list_incorrect_parameter_type',
+ ('Parameter "kind" type should use "str" instead of "string"',)),
pytest.param('BadParameters', 'blank_lines', ('No error yet?',),
marks=pytest.mark.xfail),
# Returns tests
diff --git a/scripts/validate_docstrings.py b/scripts/validate_docstrings.py
index 6588522331433..c571827db70f8 100755
--- a/scripts/validate_docstrings.py
+++ b/scripts/validate_docstrings.py
@@ -464,7 +464,16 @@ def validate_one(func_name):
if doc.parameter_type(param)[-1] == '.':
param_errs.append('Parameter "{}" type should '
'not finish with "."'.format(param))
-
+ common_type_errors = [('integer', 'int'),
+ ('boolean', 'bool'),
+ ('string', 'str')]
+ for incorrect_type, correct_type in common_type_errors:
+ if incorrect_type in doc.parameter_type(param):
+ param_errs.append('Parameter "{}" type should use '
+ '"{}" instead of "{}"'
+ .format(param,
+ correct_type,
+ incorrect_type))
if not doc.parameter_desc(param):
param_errs.append('Parameter "{}" '
'has no description'.format(param))
| - [x] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
Related to #20298. Fixes the bullet "Using string, boolean, integer in the param descriptions (should be str, bool, int)". | https://api.github.com/repos/pandas-dev/pandas/pulls/23165 | 2018-10-15T14:03:44Z | 2018-10-21T04:35:01Z | 2018-10-21T04:35:01Z | 2018-10-22T00:47:22Z |
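The core check added to `validate_docstrings.py` in this PR can be reproduced standalone. This sketch mirrors the loop over `common_type_errors` from the diff; the dict-based input is an assumption for demonstration (the real script reads types via `doc.parameter_type(param)`):

```python
def check_parameter_types(parameter_types):
    """Return error messages for parameter types that use the long
    spellings ('integer', 'boolean', 'string') instead of the
    abbreviated ones ('int', 'bool', 'str')."""
    common_type_errors = [('integer', 'int'),
                          ('boolean', 'bool'),
                          ('string', 'str')]
    errs = []
    for param, param_type in parameter_types.items():
        for incorrect_type, correct_type in common_type_errors:
            if incorrect_type in param_type:
                errs.append('Parameter "{}" type should use "{}" instead '
                            'of "{}"'.format(param, correct_type,
                                             incorrect_type))
    return errs
```

Note the substring test means a compound type like `list of boolean, integer or string` produces one error per offending word, which is exactly what the `list_incorrect_parameter_type` test case above asserts.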
BUG: Don't parse NaN as 'nan' in Data IO | diff --git a/doc/source/whatsnew/v0.24.0.txt b/doc/source/whatsnew/v0.24.0.txt
index 16f0b9ee99909..d3b15703dab1f 100644
--- a/doc/source/whatsnew/v0.24.0.txt
+++ b/doc/source/whatsnew/v0.24.0.txt
@@ -440,7 +440,7 @@ In addition to these API breaking changes, many :ref:`performance improvements a
Raise ValueError in ``DataFrame.to_dict(orient='index')``
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-Bug in :func:`DataFrame.to_dict` raises ``ValueError`` when used with
+Bug in :func:`DataFrame.to_dict` raises ``ValueError`` when used with
``orient='index'`` and a non-unique index instead of losing data (:issue:`22801`)
.. ipython:: python
@@ -448,7 +448,7 @@ Bug in :func:`DataFrame.to_dict` raises ``ValueError`` when used with
df = pd.DataFrame({'a': [1, 2], 'b': [0.5, 0.75]}, index=['A', 'A'])
df
-
+
df.to_dict(orient='index')
.. _whatsnew_0240.api.datetimelike.normalize:
@@ -923,6 +923,41 @@ MultiIndex
I/O
^^^
+.. _whatsnew_0240.bug_fixes.nan_with_str_dtype:
+
+Proper handling of `np.NaN` in a string data-typed column with the Python engine
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+There was bug in :func:`read_excel` and :func:`read_csv` with the Python
+engine, where missing values turned to ``'nan'`` with ``dtype=str`` and
+``na_filter=True``. Now, these missing values are converted to the string
+missing indicator, ``np.nan``. (:issue `20377`)
+
+.. ipython:: python
+ :suppress:
+
+ from pandas.compat import StringIO
+
+Previous Behavior:
+
+.. code-block:: ipython
+
+ In [5]: data = 'a,b,c\n1,,3\n4,5,6'
+ In [6]: df = pd.read_csv(StringIO(data), engine='python', dtype=str, na_filter=True)
+ In [7]: df.loc[0, 'b']
+ Out[7]:
+ 'nan'
+
+Current Behavior:
+
+.. ipython:: python
+
+ data = 'a,b,c\n1,,3\n4,5,6'
+ df = pd.read_csv(StringIO(data), engine='python', dtype=str, na_filter=True)
+ df.loc[0, 'b']
+
+Notice how we now instead output ``np.nan`` itself instead of a stringified form of it.
+
- :func:`read_html()` no longer ignores all-whitespace ``<tr>`` within ``<thead>`` when considering the ``skiprows`` and ``header`` arguments. Previously, users had to decrease their ``header`` and ``skiprows`` values on such tables to work around the issue. (:issue:`21641`)
- :func:`read_excel()` will correctly show the deprecation warning for previously deprecated ``sheetname`` (:issue:`17994`)
- :func:`read_csv()` and func:`read_table()` will throw ``UnicodeError`` and not coredump on badly encoded strings (:issue:`22748`)
diff --git a/pandas/_libs/lib.pyx b/pandas/_libs/lib.pyx
index 0b9793a6ef97a..c5d5a431e8139 100644
--- a/pandas/_libs/lib.pyx
+++ b/pandas/_libs/lib.pyx
@@ -494,24 +494,70 @@ def astype_intsafe(ndarray[object] arr, new_dtype):
return result
-def astype_unicode(arr: ndarray) -> ndarray[object]:
+def astype_unicode(arr: ndarray,
+ skipna: bool=False) -> ndarray[object]:
+ """
+ Convert all elements in an array to unicode.
+
+ Parameters
+ ----------
+ arr : ndarray
+ The array whose elements we are casting.
+ skipna : bool, default False
+ Whether or not to coerce nulls to their stringified form
+ (e.g. NaN becomes 'nan').
+
+ Returns
+ -------
+ casted_arr : ndarray
+ A new array with the input array's elements casted.
+ """
cdef:
+ object arr_i
Py_ssize_t i, n = arr.size
ndarray[object] result = np.empty(n, dtype=object)
for i in range(n):
- result[i] = unicode(arr[i])
+ arr_i = arr[i]
+
+ if not (skipna and checknull(arr_i)):
+ arr_i = unicode(arr_i)
+
+ result[i] = arr_i
return result
-def astype_str(arr: ndarray) -> ndarray[object]:
+def astype_str(arr: ndarray,
+ skipna: bool=False) -> ndarray[object]:
+ """
+ Convert all elements in an array to string.
+
+ Parameters
+ ----------
+ arr : ndarray
+ The array whose elements we are casting.
+ skipna : bool, default False
+ Whether or not to coerce nulls to their stringified form
+ (e.g. NaN becomes 'nan').
+
+ Returns
+ -------
+ casted_arr : ndarray
+ A new array with the input array's elements casted.
+ """
cdef:
+ object arr_i
Py_ssize_t i, n = arr.size
ndarray[object] result = np.empty(n, dtype=object)
for i in range(n):
- result[i] = str(arr[i])
+ arr_i = arr[i]
+
+ if not (skipna and checknull(arr_i)):
+ arr_i = str(arr_i)
+
+ result[i] = arr_i
return result
diff --git a/pandas/core/dtypes/cast.py b/pandas/core/dtypes/cast.py
index a95a45d5f9ae4..56bf394729773 100644
--- a/pandas/core/dtypes/cast.py
+++ b/pandas/core/dtypes/cast.py
@@ -645,9 +645,9 @@ def conv(r, dtype):
return [conv(r, dtype) for r, dtype in zip(result, dtypes)]
-def astype_nansafe(arr, dtype, copy=True):
- """ return a view if copy is False, but
- need to be very careful as the result shape could change!
+def astype_nansafe(arr, dtype, copy=True, skipna=False):
+ """
+ Cast the elements of an array to a given dtype a nan-safe manner.
Parameters
----------
@@ -655,7 +655,9 @@ def astype_nansafe(arr, dtype, copy=True):
dtype : np.dtype
copy : bool, default True
If False, a view will be attempted but may fail, if
- e.g. the itemsizes don't align.
+ e.g. the item sizes don't align.
+ skipna: bool, default False
+ Whether or not we should skip NaN when casting as a string-type.
"""
# dispatch on extension dtype if needed
@@ -668,10 +670,12 @@ def astype_nansafe(arr, dtype, copy=True):
if issubclass(dtype.type, text_type):
# in Py3 that's str, in Py2 that's unicode
- return lib.astype_unicode(arr.ravel()).reshape(arr.shape)
+ return lib.astype_unicode(arr.ravel(),
+ skipna=skipna).reshape(arr.shape)
elif issubclass(dtype.type, string_types):
- return lib.astype_str(arr.ravel()).reshape(arr.shape)
+ return lib.astype_str(arr.ravel(),
+ skipna=skipna).reshape(arr.shape)
elif is_datetime64_dtype(arr):
if is_object_dtype(dtype):
diff --git a/pandas/io/parsers.py b/pandas/io/parsers.py
index 1edc6f6e14442..eeba30ed8a44f 100755
--- a/pandas/io/parsers.py
+++ b/pandas/io/parsers.py
@@ -1685,7 +1685,8 @@ def _cast_types(self, values, cast_type, column):
else:
try:
- values = astype_nansafe(values, cast_type, copy=True)
+ values = astype_nansafe(values, cast_type,
+ copy=True, skipna=True)
except ValueError:
raise ValueError("Unable to convert column %s to "
"type %s" % (column, cast_type))
diff --git a/pandas/tests/io/parser/na_values.py b/pandas/tests/io/parser/na_values.py
index 880ab707cfd07..29aed63e657fb 100644
--- a/pandas/tests/io/parser/na_values.py
+++ b/pandas/tests/io/parser/na_values.py
@@ -5,6 +5,7 @@
parsing for all of the parsers defined in parsers.py
"""
+import pytest
import numpy as np
from numpy import nan
@@ -380,3 +381,18 @@ def test_inf_na_values_with_int_index(self):
expected = DataFrame({"col1": [3, np.nan], "col2": [4, np.nan]},
index=Index([1, 2], name="idx"))
tm.assert_frame_equal(out, expected)
+
+ @pytest.mark.parametrize("na_filter", [True, False])
+ def test_na_values_with_dtype_str_and_na_filter(self, na_filter):
+ # see gh-20377
+ data = "a,b,c\n1,,3\n4,5,6"
+
+ # na_filter=True --> missing value becomes NaN.
+ # na_filter=False --> missing value remains empty string.
+ empty = np.nan if na_filter else ""
+ expected = DataFrame({"a": ["1", "4"],
+ "b": [empty, "5"],
+ "c": ["3", "6"]})
+
+ result = self.read_csv(StringIO(data), na_filter=na_filter, dtype=str)
+ tm.assert_frame_equal(result, expected)
diff --git a/pandas/tests/io/test_excel.py b/pandas/tests/io/test_excel.py
index a639556eb07d6..1bd2fb5887e38 100644
--- a/pandas/tests/io/test_excel.py
+++ b/pandas/tests/io/test_excel.py
@@ -14,7 +14,7 @@
import pandas as pd
import pandas.util.testing as tm
import pandas.util._test_decorators as td
-from pandas import DataFrame, Index, MultiIndex
+from pandas import DataFrame, Index, MultiIndex, Series
from pandas.compat import u, range, map, BytesIO, iteritems, PY36
from pandas.core.config import set_option, get_option
from pandas.io.common import URLError
@@ -371,7 +371,34 @@ def test_reader_dtype(self, ext):
tm.assert_frame_equal(actual, expected)
with pytest.raises(ValueError):
- actual = self.get_exceldf(basename, ext, dtype={'d': 'int64'})
+ self.get_exceldf(basename, ext, dtype={'d': 'int64'})
+
+ @pytest.mark.parametrize("dtype,expected", [
+ (None,
+ DataFrame({
+ "a": [1, 2, 3, 4],
+ "b": [2.5, 3.5, 4.5, 5.5],
+ "c": [1, 2, 3, 4],
+ "d": [1.0, 2.0, np.nan, 4.0]
+ })),
+ ({"a": "float64",
+ "b": "float32",
+ "c": str,
+ "d": str
+ },
+ DataFrame({
+ "a": Series([1, 2, 3, 4], dtype="float64"),
+ "b": Series([2.5, 3.5, 4.5, 5.5], dtype="float32"),
+ "c": ["001", "002", "003", "004"],
+ "d": ["1", "2", np.nan, "4"]
+ })),
+ ])
+ def test_reader_dtype_str(self, ext, dtype, expected):
+ # see gh-20377
+ basename = "testdtype"
+
+ actual = self.get_exceldf(basename, ext, dtype=dtype)
+ tm.assert_frame_equal(actual, expected)
def test_reading_all_sheets(self, ext):
# Test reading all sheetnames by setting sheetname to None,
| Re-implementation of #20429, with a couple of changes:
* The `whatsnew` entry now has a separate section for this change (https://github.com/pandas-dev/pandas/pull/20429#discussion_r180160530)
* The fix is localized as much as possible to `parsers.py` and does not impact other functionality.
Closes #20377. | https://api.github.com/repos/pandas-dev/pandas/pulls/23162 | 2018-10-15T06:03:01Z | 2018-10-18T15:58:29Z | 2018-10-18T15:58:29Z | 2022-12-06T21:51:17Z |
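In pure Python, the `skipna` behavior this PR adds to `astype_str` amounts to the following sketch (the real version is a Cython function over object ndarrays using pandas' `checknull`; the null check here is a simplified assumption covering `None` and float NaN only):

```python
import numpy as np


def astype_str(arr, skipna=False):
    """Cast each element to str; with skipna=True, leave nulls
    untouched so a missing value does not become the string 'nan'."""
    result = np.empty(arr.size, dtype=object)
    for i, val in enumerate(arr.ravel()):
        is_null = val is None or (isinstance(val, float) and np.isnan(val))
        if skipna and is_null:
            result[i] = val  # keep the null marker as-is
        else:
            result[i] = str(val)
    return result.reshape(arr.shape)
```

This is why the parser fix is a one-line change: `_cast_types` simply passes `skipna=True` through to `astype_nansafe`, and missing entries survive the string cast as `np.nan`.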
DOC: Validate in docstrings that numpy and pandas are not imported | diff --git a/scripts/tests/test_validate_docstrings.py b/scripts/tests/test_validate_docstrings.py
index 0e10265a7291d..aa8a1500d9d3d 100644
--- a/scripts/tests/test_validate_docstrings.py
+++ b/scripts/tests/test_validate_docstrings.py
@@ -218,6 +218,18 @@ def mode(self, axis, numeric_only):
"""
pass
+ def good_imports(self):
+ """
+ Ensure import other than numpy and pandas are fine.
+
+ Examples
+ --------
+ This example does not import pandas or import numpy.
+ >>> import time
+ >>> import datetime
+ """
+ pass
+
class BadGenericDocStrings(object):
"""Everything here has a bad docstring
@@ -700,6 +712,11 @@ def test_bad_generic_functions(self, func):
marks=pytest.mark.xfail),
pytest.param('BadReturns', 'no_punctuation', ('foo',),
marks=pytest.mark.xfail),
+ # Examples tests
+ ('BadGenericDocStrings', 'method',
+ ('numpy does not need to be imported in the examples,')),
+ ('BadGenericDocStrings', 'method',
+ ('pandas does not need to be imported in the examples,')),
# See Also tests
('BadSeeAlso', 'prefix_pandas',
('pandas.Series.rename in `See Also` section '
diff --git a/scripts/validate_docstrings.py b/scripts/validate_docstrings.py
index 4b1834adcaf33..4c54762f6df31 100755
--- a/scripts/validate_docstrings.py
+++ b/scripts/validate_docstrings.py
@@ -402,6 +402,11 @@ def examples_errors(self):
error_msgs += f.getvalue()
return error_msgs
+ @property
+ def examples_source_code(self):
+ lines = doctest.DocTestParser().get_examples(self.raw_doc)
+ return [line.source for line in lines]
+
def validate_one(func_name):
"""
@@ -531,6 +536,13 @@ def validate_one(func_name):
examples_errs = doc.examples_errors
if examples_errs:
errs.append('Examples do not pass tests')
+ examples_source_code = ''.join(doc.examples_source_code)
+ if 'import numpy' in examples_source_code:
+ errs.append("numpy does not need to be imported in the examples, "
+ "as it's assumed to be already imported as np")
+ if 'import pandas' in examples_source_code:
+ errs.append("pandas does not need to be imported in the examples, "
+ "as it's assumed to be already imported as pd")
return {'type': doc.type,
'docstring': doc.clean_doc,
| Fix #23134 | https://api.github.com/repos/pandas-dev/pandas/pulls/23161 | 2018-10-15T05:05:53Z | 2018-11-04T14:04:24Z | 2018-11-04T14:04:24Z | 2019-01-02T20:26:26Z |
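The doctest-source extraction this PR relies on is plain standard library and can be demonstrated on its own. This sketch combines the new `examples_source_code` property and the two `import` checks from the diff (the function name is illustrative):

```python
import doctest


def imports_in_examples(raw_doc):
    """Collect offending import statements from a docstring's doctest
    examples, mirroring the check added to validate_docstrings.py."""
    examples = doctest.DocTestParser().get_examples(raw_doc)
    source = ''.join(example.source for example in examples)
    errs = []
    if 'import numpy' in source:
        errs.append('numpy does not need to be imported in the examples')
    if 'import pandas' in source:
        errs.append('pandas does not need to be imported in the examples')
    return errs
```

Because only the `>>>`/`...` example source is scanned, prose such as "This example does not import pandas" in the `good_imports` test case above is correctly ignored.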
DOC: Modify IntervalArray/IntervalIndex docstrings to pass validate_docstrings.py | diff --git a/ci/code_checks.sh b/ci/code_checks.sh
index eced3bf34e7c6..8171d2007b406 100755
--- a/ci/code_checks.sh
+++ b/ci/code_checks.sh
@@ -140,6 +140,13 @@ if [[ -z "$CHECK" || "$CHECK" == "doctests" ]]; then
-k"-crosstab -pivot_table -cut"
RET=$(($RET + $?)) ; echo $MSG "DONE"
+ MSG='Doctests interval classes' ; echo $MSG
+ pytest --doctest-modules -v \
+ pandas/core/indexes/interval.py \
+ pandas/core/arrays/interval.py \
+ -k"-from_arrays -from_breaks -from_intervals -from_tuples -get_loc -set_closed -to_tuples -interval_range"
+ RET=$(($RET + $?)) ; echo $MSG "DONE"
+
fi
exit $RET
diff --git a/pandas/core/arrays/interval.py b/pandas/core/arrays/interval.py
index 134999f05364f..8b37f25981cdd 100644
--- a/pandas/core/arrays/interval.py
+++ b/pandas/core/arrays/interval.py
@@ -33,7 +33,8 @@
)
-_interval_shared_docs['class'] = """%(summary)s
+_interval_shared_docs['class'] = """
+%(summary)s
.. versionadded:: %(versionadded)s
@@ -50,13 +51,15 @@
closed : {'left', 'right', 'both', 'neither'}, default 'right'
Whether the intervals are closed on the left-side, right-side, both or
neither.
-%(name)s\
-copy : boolean, default False
- Copy the meta-data.
dtype : dtype or None, default None
- If None, dtype will be inferred
+ If None, dtype will be inferred.
.. versionadded:: 0.23.0
+copy : bool, default False
+ Copy the input data.
+%(name)s\
+verify_integrity : bool, default True
+ Verify that the %(klass)s is valid.
Attributes
----------
@@ -87,18 +90,35 @@
See Also
--------
Index : The base pandas Index type
-Interval : A bounded slice-like interval; the elements of an IntervalIndex
+Interval : A bounded slice-like interval; the elements of an %(klass)s
interval_range : Function to create a fixed frequency IntervalIndex
-cut, qcut : Convert arrays of continuous data into Categoricals/Series of
- Intervals
+cut : Bin values into discrete Intervals
+qcut : Bin values into equal-sized Intervals based on rank or sample quantiles
"""
+# TODO(jschendel) use a more direct call in Examples when made public (GH22860)
@Appender(_interval_shared_docs['class'] % dict(
klass="IntervalArray",
- summary="Pandas array for interval data that are closed on the same side",
+ summary="Pandas array for interval data that are closed on the same side.",
versionadded="0.24.0",
- name='', extra_methods='', examples='',
+ name='',
+ extra_methods='',
+ examples=textwrap.dedent("""\
+ Examples
+ --------
+ A new ``IntervalArray`` can be constructed directly from an array-like of
+ ``Interval`` objects:
+
+ >>> pd.core.arrays.IntervalArray([pd.Interval(0, 1), pd.Interval(1, 5)])
+ IntervalArray([(0, 1], (1, 5]],
+ closed='right',
+ dtype='interval[int64]')
+
+ It may also be constructed using one of the constructor
+ methods: :meth:`IntervalArray.from_arrays`,
+ :meth:`IntervalArray.from_breaks`, and :meth:`IntervalArray.from_tuples`.
+ """),
))
@add_metaclass(_WritableDoc)
class IntervalArray(IntervalMixin, ExtensionArray):
diff --git a/pandas/core/indexes/interval.py b/pandas/core/indexes/interval.py
index 25d4dd0cbcc81..f500d4a33bb73 100644
--- a/pandas/core/indexes/interval.py
+++ b/pandas/core/indexes/interval.py
@@ -53,7 +53,7 @@
target_klass='IntervalIndex or list of Intervals',
name=textwrap.dedent("""\
name : object, optional
- to be stored in the index.
+ Name to be stored in the index.
"""),
))
@@ -116,15 +116,15 @@ def _new_IntervalIndex(cls, d):
versionadded="0.20.0",
extra_methods="contains\n",
examples=textwrap.dedent("""\
-
Examples
--------
A new ``IntervalIndex`` is typically constructed using
:func:`interval_range`:
>>> pd.interval_range(start=0, end=5)
- IntervalIndex([(0, 1], (1, 2], (2, 3], (3, 4], (4, 5]]
- closed='right', dtype='interval[int64]')
+ IntervalIndex([(0, 1], (1, 2], (2, 3], (3, 4], (4, 5]],
+ closed='right',
+ dtype='interval[int64]')
It may also be constructed using one of the constructor
methods: :meth:`IntervalIndex.from_arrays`,
| - [X] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
Didn't materially change any existing content; mostly small modifications and a couple of additions. | https://api.github.com/repos/pandas-dev/pandas/pulls/23157 | 2018-10-14T22:27:22Z | 2018-10-24T10:16:22Z | 2018-10-24T10:16:21Z | 2018-10-25T02:15:44Z
BUG/PERF: Avoid listifying in dispatch_to_extension_op | diff --git a/doc/source/extending.rst b/doc/source/extending.rst
index ab940384594bc..1e8a8e50dd9e3 100644
--- a/doc/source/extending.rst
+++ b/doc/source/extending.rst
@@ -135,6 +135,12 @@ There are two approaches for providing operator support for your ExtensionArray:
2. Use an operator implementation from pandas that depends on operators that are already defined
on the underlying elements (scalars) of the ExtensionArray.
+.. note::
+
+ Regardless of the approach, you may want to set ``__array_priority__``
+ if you want your implementation to be called when involved in binary operations
+ with NumPy arrays.
+
For the first approach, you define selected operators, e.g., ``__add__``, ``__le__``, etc. that
you want your ``ExtensionArray`` subclass to support.
@@ -173,6 +179,16 @@ or not that succeeds depends on whether the operation returns a result
that's valid for the ``ExtensionArray``. If an ``ExtensionArray`` cannot
be reconstructed, an ndarray containing the scalars returned instead.
+For ease of implementation and consistency with operations between pandas
+and NumPy ndarrays, we recommend *not* handling Series and Indexes in your binary ops.
+Instead, you should detect these cases and return ``NotImplemented``.
+When pandas encounters an operation like ``op(Series, ExtensionArray)``, pandas
+will
+
+1. unbox the array from the ``Series`` (roughly ``Series.values``)
+2. call ``result = op(values, ExtensionArray)``
+3. re-box the result in a ``Series``
+
.. _extending.extension.testing:
Testing Extension Arrays
diff --git a/doc/source/whatsnew/v0.24.0.txt b/doc/source/whatsnew/v0.24.0.txt
index 2c142bdd7185b..6858073e46b07 100644
--- a/doc/source/whatsnew/v0.24.0.txt
+++ b/doc/source/whatsnew/v0.24.0.txt
@@ -875,6 +875,7 @@ Numeric
- Bug in :meth:`DataFrame.apply` where, when supplied with a string argument and additional positional or keyword arguments (e.g. ``df.apply('sum', min_count=1)``), a ``TypeError`` was wrongly raised (:issue:`22376`)
- Bug in :meth:`DataFrame.astype` to extension dtype may raise ``AttributeError`` (:issue:`22578`)
- Bug in :class:`DataFrame` with ``timedelta64[ns]`` dtype arithmetic operations with ``ndarray`` with integer dtype incorrectly treating the narray as ``timedelta64[ns]`` dtype (:issue:`23114`)
+- Bug in :meth:`Series.rpow` with object dtype ``NaN`` for ``1 ** NA`` instead of ``1`` (:issue:`22922`).
Strings
^^^^^^^
diff --git a/pandas/core/arrays/base.py b/pandas/core/arrays/base.py
index b745569d5bd76..f842d1237cb14 100644
--- a/pandas/core/arrays/base.py
+++ b/pandas/core/arrays/base.py
@@ -9,6 +9,7 @@
import operator
+from pandas.core.dtypes.generic import ABCSeries, ABCIndexClass
from pandas.errors import AbstractMethodError
from pandas.compat.numpy import function as nv
from pandas.compat import set_function_name, PY3
@@ -109,6 +110,7 @@ def _from_sequence(cls, scalars, dtype=None, copy=False):
compatible with the ExtensionArray.
copy : boolean, default False
If True, copy the underlying data.
+
Returns
-------
ExtensionArray
@@ -724,7 +726,13 @@ def _reduce(self, name, skipna=True, **kwargs):
class ExtensionOpsMixin(object):
"""
- A base class for linking the operators to their dunder names
+ A base class for linking the operators to their dunder names.
+
+ .. note::
+
+ You may want to set ``__array_priority__`` if you want your
+ implementation to be called when involved in binary operations
+ with NumPy arrays.
"""
@classmethod
@@ -761,12 +769,14 @@ def _add_comparison_ops(cls):
class ExtensionScalarOpsMixin(ExtensionOpsMixin):
- """A mixin for defining the arithmetic and logical operations on
- an ExtensionArray class, where it is assumed that the underlying objects
- have the operators already defined.
+ """
+ A mixin for defining ops on an ExtensionArray.
+
+ It is assumed that the underlying scalar objects have the operators
+ already defined.
- Usage
- ------
+ Notes
+ -----
If you have defined a subclass MyExtensionArray(ExtensionArray), then
use MyExtensionArray(ExtensionArray, ExtensionScalarOpsMixin) to
get the arithmetic operators. After the definition of MyExtensionArray,
@@ -776,6 +786,12 @@ class ExtensionScalarOpsMixin(ExtensionOpsMixin):
MyExtensionArray._add_comparison_ops()
to link the operators to your class.
+
+ .. note::
+
+ You may want to set ``__array_priority__`` if you want your
+ implementation to be called when involved in binary operations
+ with NumPy arrays.
"""
@classmethod
@@ -825,6 +841,11 @@ def convert_values(param):
else: # Assume its an object
ovalues = [param] * len(self)
return ovalues
+
+ if isinstance(other, (ABCSeries, ABCIndexClass)):
+ # rely on pandas to unbox and dispatch to us
+ return NotImplemented
+
lvalues = self
rvalues = convert_values(other)
diff --git a/pandas/core/arrays/integer.py b/pandas/core/arrays/integer.py
index 9917045f2f7d2..52762514d00c2 100644
--- a/pandas/core/arrays/integer.py
+++ b/pandas/core/arrays/integer.py
@@ -3,7 +3,8 @@
import copy
import numpy as np
-from pandas._libs.lib import infer_dtype
+
+from pandas._libs import lib
from pandas.util._decorators import cache_readonly
from pandas.compat import u, range, string_types
from pandas.compat import set_function_name
@@ -171,7 +172,7 @@ def coerce_to_array(values, dtype, mask=None, copy=False):
values = np.array(values, copy=copy)
if is_object_dtype(values):
- inferred_type = infer_dtype(values)
+ inferred_type = lib.infer_dtype(values)
if inferred_type not in ['floating', 'integer',
'mixed-integer', 'mixed-integer-float']:
raise TypeError("{} cannot be converted to an IntegerDtype".format(
@@ -280,6 +281,8 @@ def _coerce_to_ndarray(self):
data[self._mask] = self._na_value
return data
+ __array_priority__ = 1000 # higher than ndarray so ops dispatch to us
+
def __array__(self, dtype=None):
"""
the array interface, return my values
@@ -288,12 +291,6 @@ def __array__(self, dtype=None):
return self._coerce_to_ndarray()
def __iter__(self):
- """Iterate over elements of the array.
-
- """
- # This needs to be implemented so that pandas recognizes extension
- # arrays as list-like. The default implementation makes successive
- # calls to ``__getitem__``, which may be slower than necessary.
for i in range(len(self)):
if self._mask[i]:
yield self.dtype.na_value
@@ -504,13 +501,21 @@ def cmp_method(self, other):
op_name = op.__name__
mask = None
+
+ if isinstance(other, (ABCSeries, ABCIndexClass)):
+ # Rely on pandas to unbox and dispatch to us.
+ return NotImplemented
+
if isinstance(other, IntegerArray):
other, mask = other._data, other._mask
+
elif is_list_like(other):
other = np.asarray(other)
if other.ndim > 0 and len(self) != len(other):
raise ValueError('Lengths must match to compare')
+ other = lib.item_from_zerodim(other)
+
# numpy will show a DeprecationWarning on invalid elementwise
# comparisons, this will raise in the future
with warnings.catch_warnings():
@@ -586,14 +591,21 @@ def integer_arithmetic_method(self, other):
op_name = op.__name__
mask = None
+
if isinstance(other, (ABCSeries, ABCIndexClass)):
- other = getattr(other, 'values', other)
+ # Rely on pandas to unbox and dispatch to us.
+ return NotImplemented
- if isinstance(other, IntegerArray):
- other, mask = other._data, other._mask
- elif getattr(other, 'ndim', 0) > 1:
+ if getattr(other, 'ndim', 0) > 1:
raise NotImplementedError(
"can only perform ops with 1-d structures")
+
+ if isinstance(other, IntegerArray):
+ other, mask = other._data, other._mask
+
+ elif getattr(other, 'ndim', None) == 0:
+ other = other.item()
+
elif is_list_like(other):
other = np.asarray(other)
if not other.ndim:
@@ -612,6 +624,13 @@ def integer_arithmetic_method(self, other):
else:
mask = self._mask | mask
+ # 1 ** np.nan is 1. So we have to unmask those.
+ if op_name == 'pow':
+ mask = np.where(self == 1, False, mask)
+
+ elif op_name == 'rpow':
+ mask = np.where(other == 1, False, mask)
+
with np.errstate(all='ignore'):
result = op(self._data, other)
diff --git a/pandas/core/arrays/sparse.py b/pandas/core/arrays/sparse.py
index e6ca7b8de83e4..b853d8fc0c528 100644
--- a/pandas/core/arrays/sparse.py
+++ b/pandas/core/arrays/sparse.py
@@ -1471,7 +1471,23 @@ def __array_ufunc__(self, ufunc, method, *inputs, **kwargs):
'power': 'pow',
'remainder': 'mod',
'divide': 'div',
+ 'equal': 'eq',
+ 'not_equal': 'ne',
+ 'less': 'lt',
+ 'less_equal': 'le',
+ 'greater': 'gt',
+ 'greater_equal': 'ge',
}
+
+ flipped = {
+ 'lt': '__gt__',
+ 'le': '__ge__',
+ 'gt': '__lt__',
+ 'ge': '__le__',
+ 'eq': '__eq__',
+ 'ne': '__ne__',
+ }
+
op_name = ufunc.__name__
op_name = aliases.get(op_name, op_name)
@@ -1479,7 +1495,8 @@ def __array_ufunc__(self, ufunc, method, *inputs, **kwargs):
if isinstance(inputs[0], type(self)):
return getattr(self, '__{}__'.format(op_name))(inputs[1])
else:
- return getattr(self, '__r{}__'.format(op_name))(inputs[0])
+ name = flipped.get(op_name, '__r{}__'.format(op_name))
+ return getattr(self, name)(inputs[0])
if len(inputs) == 1:
# No alignment necessary.
@@ -1528,7 +1545,8 @@ def sparse_arithmetic_method(self, other):
op_name = op.__name__
if isinstance(other, (ABCSeries, ABCIndexClass)):
- other = getattr(other, 'values', other)
+ # Rely on pandas to dispatch to us.
+ return NotImplemented
if isinstance(other, SparseArray):
return _sparse_array_op(self, other, op, op_name)
@@ -1573,10 +1591,11 @@ def cmp_method(self, other):
op_name = op_name[:-1]
if isinstance(other, (ABCSeries, ABCIndexClass)):
- other = getattr(other, 'values', other)
+ # Rely on pandas to unbox and dispatch to us.
+ return NotImplemented
if not is_scalar(other) and not isinstance(other, type(self)):
- # convert list-like to ndarary
+ # convert list-like to ndarray
other = np.asarray(other)
if isinstance(other, np.ndarray):
diff --git a/pandas/core/ops.py b/pandas/core/ops.py
index 9791354de7ffa..daa87ad173772 100644
--- a/pandas/core/ops.py
+++ b/pandas/core/ops.py
@@ -862,6 +862,13 @@ def masked_arith_op(x, y, op):
# mask is only meaningful for x
result = np.empty(x.size, dtype=x.dtype)
mask = notna(xrav)
+
+ # 1 ** np.nan is 1. So we have to unmask those.
+ if op == pow:
+ mask = np.where(x == 1, False, mask)
+ elif op == rpow:
+ mask = np.where(y == 1, False, mask)
+
if mask.any():
with np.errstate(all='ignore'):
result[mask] = op(xrav[mask], y)
@@ -1202,29 +1209,16 @@ def dispatch_to_extension_op(op, left, right):
# The op calls will raise TypeError if the op is not defined
# on the ExtensionArray
- # TODO(jreback)
- # we need to listify to avoid ndarray, or non-same-type extension array
- # dispatching
-
- if is_extension_array_dtype(left):
-
- new_left = left.values
- if isinstance(right, np.ndarray):
-
- # handle numpy scalars, this is a PITA
- # TODO(jreback)
- new_right = lib.item_from_zerodim(right)
- if is_scalar(new_right):
- new_right = [new_right]
- new_right = list(new_right)
- elif is_extension_array_dtype(right) and type(left) != type(right):
- new_right = list(right)
- else:
- new_right = right
+ # unbox Series and Index to arrays
+ if isinstance(left, (ABCSeries, ABCIndexClass)):
+ new_left = left._values
else:
+ new_left = left
- new_left = list(left.values)
+ if isinstance(right, (ABCSeries, ABCIndexClass)):
+ new_right = right._values
+ else:
new_right = right
res_values = op(new_left, new_right)
diff --git a/pandas/tests/arrays/test_integer.py b/pandas/tests/arrays/test_integer.py
index 23ee8d217bd59..66d2baac8c91c 100644
--- a/pandas/tests/arrays/test_integer.py
+++ b/pandas/tests/arrays/test_integer.py
@@ -128,6 +128,13 @@ def _check_op(self, s, op_name, other, exc=None):
if omask is not None:
mask |= omask
+ # 1 ** na is na, so need to unmask those
+ if op_name == '__pow__':
+ mask = np.where(s == 1, False, mask)
+
+ elif op_name == '__rpow__':
+ mask = np.where(other == 1, False, mask)
+
# float result type or float op
if ((is_float_dtype(other) or is_float(other) or
op_name in ['__rtruediv__', '__truediv__',
@@ -171,7 +178,6 @@ def _check_op_integer(self, result, expected, mask, s, op_name, other):
else:
expected[(s.values == 0) &
((expected == 0) | expected.isna())] = 0
-
try:
expected[(expected == np.inf) | (expected == -np.inf)] = fill_value
original = expected
@@ -248,13 +254,20 @@ def test_arith_coerce_scalar(self, data, all_arithmetic_operators):
@pytest.mark.parametrize("other", [1., 1.0, np.array(1.), np.array([1.])])
def test_arithmetic_conversion(self, all_arithmetic_operators, other):
# if we have a float operand we should have a float result
- # if if that is equal to an integer
+ # if that is equal to an integer
op = self.get_op_from_name(all_arithmetic_operators)
s = pd.Series([1, 2, 3], dtype='Int64')
result = op(s, other)
assert result.dtype is np.dtype('float')
+ @pytest.mark.parametrize("other", [0, 0.5])
+ def test_arith_zero_dim_ndarray(self, other):
+ arr = integer_array([1, None, 2])
+ result = arr + np.array(other)
+ expected = arr + other
+ tm.assert_equal(result, expected)
+
def test_error(self, data, all_arithmetic_operators):
# invalid ops
@@ -285,6 +298,21 @@ def test_error(self, data, all_arithmetic_operators):
with pytest.raises(NotImplementedError):
opa(np.arange(len(s)).reshape(-1, len(s)))
+ def test_pow(self):
+ # https://github.com/pandas-dev/pandas/issues/22022
+ a = integer_array([1, np.nan, np.nan, 1])
+ b = integer_array([1, np.nan, 1, np.nan])
+ result = a ** b
+ expected = pd.core.arrays.integer_array([1, np.nan, np.nan, 1])
+ tm.assert_extension_array_equal(result, expected)
+
+ def test_rpow_one_to_na(self):
+ # https://github.com/pandas-dev/pandas/issues/22022
+ arr = integer_array([np.nan, np.nan])
+ result = np.array([1.0, 2.0]) ** arr
+ expected = np.array([1.0, np.nan])
+ tm.assert_numpy_array_equal(result, expected)
+
class TestComparisonOps(BaseOpsUtil):
@@ -497,6 +525,18 @@ def test_integer_array_constructor():
IntegerArray(values)
+@pytest.mark.parametrize('a, b', [
+ ([1, None], [1, np.nan]),
+ pytest.param([None], [np.nan],
+ marks=pytest.mark.xfail(reason='GH-23224',
+ strict=True)),
+])
+def test_integer_array_constructor_none_is_nan(a, b):
+ result = integer_array(a)
+ expected = integer_array(b)
+ tm.assert_extension_array_equal(result, expected)
+
+
def test_integer_array_constructor_copy():
values = np.array([1, 2, 3, 4], dtype='int64')
mask = np.array([False, False, False, True], dtype='bool')
diff --git a/pandas/tests/extension/base/ops.py b/pandas/tests/extension/base/ops.py
index 9c1cfcdfeacc3..7baa6284e398f 100644
--- a/pandas/tests/extension/base/ops.py
+++ b/pandas/tests/extension/base/ops.py
@@ -107,6 +107,18 @@ def test_error(self, data, all_arithmetic_operators):
with pytest.raises(AttributeError):
getattr(data, op_name)
+ def test_direct_arith_with_series_returns_not_implemented(self, data):
+ # EAs should return NotImplemented for ops with Series.
+ # Pandas takes care of unboxing the series and calling the EA's op.
+ other = pd.Series(data)
+ if hasattr(data, '__add__'):
+ result = data.__add__(other)
+ assert result is NotImplemented
+ else:
+ raise pytest.skip(
+ "{} does not implement add".format(data.__class__.__name__)
+ )
+
class BaseComparisonOpsTests(BaseOpsUtil):
"""Various Series and DataFrame comparison ops methods."""
@@ -140,3 +152,15 @@ def test_compare_array(self, data, all_compare_operators):
s = pd.Series(data)
other = pd.Series([data[0]] * len(data))
self._compare_other(s, data, op_name, other)
+
+ def test_direct_arith_with_series_returns_not_implemented(self, data):
+ # EAs should return NotImplemented for ops with Series.
+ # Pandas takes care of unboxing the series and calling the EA's op.
+ other = pd.Series(data)
+ if hasattr(data, '__eq__'):
+ result = data.__eq__(other)
+ assert result is NotImplemented
+ else:
+ raise pytest.skip(
+ "{} does not implement __eq__".format(data.__class__.__name__)
+ )
diff --git a/pandas/tests/extension/decimal/array.py b/pandas/tests/extension/decimal/array.py
index 53a598559393c..fe07aae61c5e2 100644
--- a/pandas/tests/extension/decimal/array.py
+++ b/pandas/tests/extension/decimal/array.py
@@ -47,6 +47,7 @@ def _is_numeric(self):
class DecimalArray(ExtensionArray, ExtensionScalarOpsMixin):
+ __array_priority__ = 1000
def __init__(self, values, dtype=None, copy=False, context=None):
for val in values:
diff --git a/pandas/tests/extension/json/array.py b/pandas/tests/extension/json/array.py
index 976511941042d..5c63e50c3eaaa 100644
--- a/pandas/tests/extension/json/array.py
+++ b/pandas/tests/extension/json/array.py
@@ -55,6 +55,7 @@ def construct_from_string(cls, string):
class JSONArray(ExtensionArray):
dtype = JSONDtype()
+ __array_priority__ = 1000
def __init__(self, values, dtype=None, copy=False):
for val in values:
diff --git a/pandas/tests/extension/test_integer.py b/pandas/tests/extension/test_integer.py
index 89c36bbe7b325..668939e775148 100644
--- a/pandas/tests/extension/test_integer.py
+++ b/pandas/tests/extension/test_integer.py
@@ -143,7 +143,6 @@ def test_error(self, data, all_arithmetic_operators):
# other specific errors tested in the integer array specific tests
pass
- @pytest.mark.xfail(reason="EA is listified. GH-22922", strict=True)
def test_add_series_with_extension_array(self, data):
super(TestArithmeticOps, self).test_add_series_with_extension_array(
data
diff --git a/pandas/tests/series/test_arithmetic.py b/pandas/tests/series/test_arithmetic.py
index 37ba1c91368b3..43b24930f6303 100644
--- a/pandas/tests/series/test_arithmetic.py
+++ b/pandas/tests/series/test_arithmetic.py
@@ -1,6 +1,7 @@
# -*- coding: utf-8 -*-
import operator
+import numpy as np
import pytest
from pandas import Series
@@ -66,3 +67,18 @@ def test_ser_cmp_result_names(self, names, op):
ser = Series(cidx).rename(names[1])
result = op(ser, cidx)
assert result.name == names[2]
+
+
+def test_pow_ops_object():
+ # 22922
+ # pow is weird with masking & 1, so testing here
+ a = Series([1, np.nan, 1, np.nan], dtype=object)
+ b = Series([1, np.nan, np.nan, 1], dtype=object)
+ result = a ** b
+ expected = Series(a.values ** b.values, dtype=object)
+ tm.assert_series_equal(result, expected)
+
+ result = b ** a
+ expected = Series(b.values ** a.values, dtype=object)
+
+ tm.assert_series_equal(result, expected)
| This simplifies dispatch_to_extension_op. The remaining logic is simply
unboxing Series / Indexes in favor of their underlying arrays. This forced
two additional changes:
1. Move some logic that IntegerArray relied on down to the IntegerArray ops.
Things like handling of 0-dim ndarrays were previously broken on IntegerArray
ops, but worked with Series[IntegerArray].
2. Fix pandas handling of 1 ** NA for object dtype (used to construct expected).
closes https://github.com/pandas-dev/pandas/issues/22922
closes https://github.com/pandas-dev/pandas/issues/22022 | https://api.github.com/repos/pandas-dev/pandas/pulls/23155 | 2018-10-14T18:49:06Z | 2018-10-19T12:08:34Z | 2018-10-19T12:08:34Z | 2018-10-23T08:49:20Z |
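The `1 ** NA` behavior this PR fixes follows the IEEE 754 `pow` rule that 1 raised to any power, including NaN, is 1. A minimal sketch of the unmasking idea from the diff, using plain Python floats (`pow_with_mask` and its argument names are invented for illustration; the real code manipulates numpy boolean masks):

```python
import math

# IEEE 754 / Python semantics: 1 raised to anything, even NaN, is 1.
assert 1.0 ** math.nan == 1.0

# Sketch of the diff's rule: a result slot is normally missing when either
# operand is missing, EXCEPT that pow with base 1 has a defined answer, so
# those slots get unmasked (cf. `mask = np.where(self == 1, False, mask)`).
def pow_with_mask(base, exp, base_missing, exp_missing):
    missing = base_missing or exp_missing
    if not base_missing and base == 1:
        missing = False  # 1 ** anything == 1, even for a missing exponent
    return math.nan if missing else float(base) ** exp

assert pow_with_mask(1, math.nan, False, True) == 1.0       # unmasked
assert math.isnan(pow_with_mask(2, math.nan, False, True))  # still missing
```

The `rpow` branch in the diff is the mirror image: it unmasks slots where the *other* operand (the base) equals 1.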
DOC: fix rendering of example function in cookbook | diff --git a/doc/source/cookbook.rst b/doc/source/cookbook.rst
index 21c8ab4128188..be8457fc14a4f 100644
--- a/doc/source/cookbook.rst
+++ b/doc/source/cookbook.rst
@@ -1228,36 +1228,40 @@ Correlation
The `method` argument within `DataFrame.corr` can accept a callable in addition to the named correlation types. Here we compute the `distance correlation <https://en.wikipedia.org/wiki/Distance_correlation>`__ matrix for a `DataFrame` object.
-.. ipython:: python
-
- def distcorr(x, y):
- n = len(x)
- a = np.zeros(shape=(n, n))
- b = np.zeros(shape=(n, n))
-
- for i in range(n):
- for j in range(i + 1, n):
- a[i, j] = abs(x[i] - x[j])
- b[i, j] = abs(y[i] - y[j])
-
- a += a.T
- b += b.T
-
- a_bar = np.vstack([np.nanmean(a, axis=0)] * n)
- b_bar = np.vstack([np.nanmean(b, axis=0)] * n)
-
- A = a - a_bar - a_bar.T + np.full(shape=(n, n), fill_value=a_bar.mean())
- B = b - b_bar - b_bar.T + np.full(shape=(n, n), fill_value=b_bar.mean())
-
- cov_ab = np.sqrt(np.nansum(A * B)) / n
- std_a = np.sqrt(np.sqrt(np.nansum(A**2)) / n)
- std_b = np.sqrt(np.sqrt(np.nansum(B**2)) / n)
-
- return cov_ab / std_a / std_b
-
- df = pd.DataFrame(np.random.normal(size=(100, 3)))
+.. code-block:: python
- df.corr(method=distcorr)
+ >>> def distcorr(x, y):
+ ... n = len(x)
+ ... a = np.zeros(shape=(n, n))
+ ... b = np.zeros(shape=(n, n))
+ ...
+ ... for i in range(n):
+ ... for j in range(i + 1, n):
+ ... a[i, j] = abs(x[i] - x[j])
+ ... b[i, j] = abs(y[i] - y[j])
+ ...
+ ... a += a.T
+ ... b += b.T
+ ...
+ ... a_bar = np.vstack([np.nanmean(a, axis=0)] * n)
+ ... b_bar = np.vstack([np.nanmean(b, axis=0)] * n)
+ ...
+ ... A = a - a_bar - a_bar.T + np.full(shape=(n, n), fill_value=a_bar.mean())
+ ... B = b - b_bar - b_bar.T + np.full(shape=(n, n), fill_value=b_bar.mean())
+ ...
+ ... cov_ab = np.sqrt(np.nansum(A * B)) / n
+ ... std_a = np.sqrt(np.sqrt(np.nansum(A**2)) / n)
+ ... std_b = np.sqrt(np.sqrt(np.nansum(B**2)) / n)
+ ...
+ ... return cov_ab / std_a / std_b
+ ...
+ >>> df = pd.DataFrame(np.random.normal(size=(100, 3)))
+ ...
+ >>> df.corr(method=distcorr)
+ 0 1 2
+ 0 1.000000 0.171368 0.145302
+ 1 0.171368 1.000000 0.189919
+ 2 0.145302 0.189919 1.000000
Timedeltas
----------
| Fix #23128 | https://api.github.com/repos/pandas-dev/pandas/pulls/23153 | 2018-10-14T18:11:53Z | 2018-10-16T13:25:33Z | 2018-10-16T13:25:33Z | 2019-01-02T20:26:44Z |
DOC: #22896, Fixed docstring of to_stata in pandas/core/frame.py | diff --git a/ci/code_checks.sh b/ci/code_checks.sh
index f0772f72d63d4..86e7003681e98 100755
--- a/ci/code_checks.sh
+++ b/ci/code_checks.sh
@@ -148,7 +148,7 @@ if [[ -z "$CHECK" || "$CHECK" == "doctests" ]]; then
MSG='Doctests frame.py' ; echo $MSG
pytest -q --doctest-modules pandas/core/frame.py \
- -k"-axes -combine -itertuples -join -nunique -pivot_table -quantile -query -reindex -reindex_axis -replace -round -set_index -stack -to_stata"
+ -k"-axes -combine -itertuples -join -nunique -pivot_table -quantile -query -reindex -reindex_axis -replace -round -set_index -stack"
RET=$(($RET + $?)) ; echo $MSG "DONE"
MSG='Doctests series.py' ; echo $MSG
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 8d6e403783fc9..572bb3668caf8 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -1856,13 +1856,16 @@ def to_stata(self, fname, convert_dates=None, write_index=True,
data_label=None, variable_labels=None, version=114,
convert_strl=None):
"""
- Export Stata binary dta files.
+ Export DataFrame object to Stata dta format.
+
+ Writes the DataFrame to a Stata dataset file.
+ "dta" files contain a Stata dataset.
Parameters
----------
- fname : path (string), buffer or path object
- string, path object (pathlib.Path or py._path.local.LocalPath) or
- object implementing a binary write() functions. If using a buffer
+ fname : str, buffer or path object
+ String, path object (pathlib.Path or py._path.local.LocalPath) or
+ object implementing a binary write() function. If using a buffer
then the buffer will not be automatically closed after the file
data has been written.
convert_dates : dict
@@ -1881,7 +1884,7 @@ def to_stata(self, fname, convert_dates=None, write_index=True,
time_stamp : datetime
A datetime to use as file creation date. Default is the current
time.
- data_label : str
+ data_label : str, optional
A label for the data set. Must be 80 characters or smaller.
variable_labels : dict
Dictionary containing columns as keys and variable labels as
@@ -1889,7 +1892,7 @@ def to_stata(self, fname, convert_dates=None, write_index=True,
.. versionadded:: 0.19.0
- version : {114, 117}
+ version : {114, 117}, default 114
Version to use in the output dta file. Version 114 can be used
read by Stata 10 and later. Version 117 can be read by Stata 13
or later. Version 114 limits string variables to 244 characters or
@@ -1921,28 +1924,16 @@ def to_stata(self, fname, convert_dates=None, write_index=True,
See Also
--------
- pandas.read_stata : Import Stata data files.
- pandas.io.stata.StataWriter : Low-level writer for Stata data files.
- pandas.io.stata.StataWriter117 : Low-level writer for version 117
- files.
+ read_stata : Import Stata data files.
+ io.stata.StataWriter : Low-level writer for Stata data files.
+ io.stata.StataWriter117 : Low-level writer for version 117 files.
Examples
--------
- >>> data.to_stata('./data_file.dta')
-
- Or with dates
-
- >>> data.to_stata('./date_data_file.dta', {2 : 'tw'})
-
- Alternatively you can create an instance of the StataWriter class
-
- >>> writer = StataWriter('./data_file.dta', data)
- >>> writer.write_file()
-
- With dates:
-
- >>> writer = StataWriter('./date_data_file.dta', data, {2 : 'tw'})
- >>> writer.write_file()
+ >>> df = pd.DataFrame({'animal': ['falcon', 'parrot', 'falcon',
+ ... 'parrot'],
+ ... 'speed': [350, 18, 361, 15]})
+ >>> df.to_stata('animals.dta') # doctest: +SKIP
"""
kwargs = {}
if version not in (114, 117):
| - [x] closes #22896
- [x] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] passes `./scripts/validate_docstrings.py pandas.DataFrame.to_stata`
| https://api.github.com/repos/pandas-dev/pandas/pulls/23152 | 2018-10-14T17:41:46Z | 2018-11-21T12:40:57Z | 2018-11-21T12:40:57Z | 2018-11-21T12:41:00Z |
DOC: Add number of errors/warnings | diff --git a/scripts/validate_docstrings.py b/scripts/validate_docstrings.py
index 6588522331433..fbc15bf787ac9 100755
--- a/scripts/validate_docstrings.py
+++ b/scripts/validate_docstrings.py
@@ -591,11 +591,11 @@ def header(title, width=80, char='#'):
fd.write('{}\n'.format(doc_info['docstring']))
fd.write(header('Validation'))
if doc_info['errors']:
- fd.write('Errors found:\n')
+ fd.write('{} Errors found:\n'.format(len(doc_info['errors'])))
for err in doc_info['errors']:
fd.write('\t{}\n'.format(err))
if doc_info['warnings']:
- fd.write('Warnings found:\n')
+ fd.write('{} Warnings found:\n'.format(len(doc_info['warnings'])))
for wrn in doc_info['warnings']:
fd.write('\t{}\n'.format(wrn))
| - [X] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
Add number of Errors or Warnings to scripts/validate_docstrings.py when validating docstrings. | https://api.github.com/repos/pandas-dev/pandas/pulls/23150 | 2018-10-14T15:26:23Z | 2018-11-03T14:33:18Z | 2018-11-03T14:33:18Z | 2019-11-07T09:13:22Z |
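The change only prepends a count to the header line before listing each item. A standalone sketch of the resulting output format (the `errors` list is invented sample data standing in for `doc_info['errors']`):

```python
# Hypothetical sample data standing in for doc_info['errors'].
errors = ['GL01 Docstring should start on the line after the opening quotes',
          'SS03 Summary does not end with a period']

# After the change, the header carries the count: "2 Errors found:".
report = '{} Errors found:\n'.format(len(errors))
for err in errors:
    report += '\t{}\n'.format(err)

print(report)
```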
CLN GH23123 Move SparseArray to arrays | diff --git a/pandas/api/extensions/__init__.py b/pandas/api/extensions/__init__.py
index 8a515661920f3..51555c57b2288 100644
--- a/pandas/api/extensions/__init__.py
+++ b/pandas/api/extensions/__init__.py
@@ -3,8 +3,8 @@
register_index_accessor,
register_series_accessor)
from pandas.core.algorithms import take # noqa
-from pandas.core.arrays.base import (ExtensionArray, # noqa
- ExtensionScalarOpsMixin)
+from pandas.core.arrays import (ExtensionArray, # noqa
+ ExtensionScalarOpsMixin)
from pandas.core.dtypes.dtypes import ( # noqa
ExtensionDtype, register_extension_dtype
)
diff --git a/pandas/compat/pickle_compat.py b/pandas/compat/pickle_compat.py
index 713a5b1120beb..59c162251c58f 100644
--- a/pandas/compat/pickle_compat.py
+++ b/pandas/compat/pickle_compat.py
@@ -56,6 +56,8 @@ def load_reduce(self):
# If classes are moved, provide compat here.
_class_locations_map = {
+ ('pandas.core.sparse.array', 'SparseArray'):
+ ('pandas.core.arrays', 'SparseArray'),
# 15477
('pandas.core.base', 'FrozenNDArray'):
@@ -88,7 +90,7 @@ def load_reduce(self):
# 15998 top-level dirs moving
('pandas.sparse.array', 'SparseArray'):
- ('pandas.core.sparse.array', 'SparseArray'),
+ ('pandas.core.arrays.sparse', 'SparseArray'),
('pandas.sparse.series', 'SparseSeries'):
('pandas.core.sparse.series', 'SparseSeries'),
('pandas.sparse.frame', 'SparseDataFrame'):
diff --git a/pandas/core/arrays/__init__.py b/pandas/core/arrays/__init__.py
index 29f258bf1b29e..0537b79541641 100644
--- a/pandas/core/arrays/__init__.py
+++ b/pandas/core/arrays/__init__.py
@@ -8,3 +8,4 @@
from .timedeltas import TimedeltaArrayMixin # noqa
from .integer import ( # noqa
IntegerArray, integer_array)
+from .sparse import SparseArray # noqa
diff --git a/pandas/core/sparse/array.py b/pandas/core/arrays/sparse.py
similarity index 85%
rename from pandas/core/sparse/array.py
rename to pandas/core/arrays/sparse.py
index 15b5118db2230..f5e54e4425444 100644
--- a/pandas/core/sparse/array.py
+++ b/pandas/core/arrays/sparse.py
@@ -4,6 +4,7 @@
from __future__ import division
# pylint: disable=E1101,E1103,W0231
+import re
import operator
import numbers
import numpy as np
@@ -16,8 +17,10 @@
from pandas.errors import PerformanceWarning
from pandas.compat.numpy import function as nv
-from pandas.core.arrays.base import ExtensionArray, ExtensionOpsMixin
+from pandas.core.arrays import ExtensionArray, ExtensionOpsMixin
import pandas.core.common as com
+from pandas.core.dtypes.base import ExtensionDtype
+from pandas.core.dtypes.dtypes import register_extension_dtype
from pandas.core.dtypes.generic import (
ABCSparseSeries, ABCSeries, ABCIndexClass
)
@@ -45,7 +48,252 @@
import pandas.core.algorithms as algos
import pandas.io.formats.printing as printing
-from pandas.core.sparse.dtype import SparseDtype
+
+# ----------------------------------------------------------------------------
+# Dtype
+
+@register_extension_dtype
+class SparseDtype(ExtensionDtype):
+ """
+ Dtype for data stored in :class:`SparseArray`.
+
+ This dtype implements the pandas ExtensionDtype interface.
+
+ .. versionadded:: 0.24.0
+
+ Parameters
+ ----------
+ dtype : str, ExtensionDtype, numpy.dtype, type, default numpy.float64
+ The dtype of the underlying array storing the non-fill value values.
+ fill_value : scalar, optional.
+ The scalar value not stored in the SparseArray. By default, this
+ depends on `dtype`.
+
+ ========== ==========
+ dtype na_value
+ ========== ==========
+ float ``np.nan``
+ int ``0``
+ bool ``False``
+ datetime64 ``pd.NaT``
+ timedelta64 ``pd.NaT``
+ ========== ==========
+
+ The default value may be overridden by specifying a `fill_value`.
+ """
+ # We include `_is_na_fill_value` in the metadata to avoid hash collisions
+ # between SparseDtype(float, 0.0) and SparseDtype(float, nan).
+ # Without is_na_fill_value in the comparison, those would be equal since
+ # hash(nan) is (sometimes?) 0.
+ _metadata = ('_dtype', '_fill_value', '_is_na_fill_value')
+
+ def __init__(self, dtype=np.float64, fill_value=None):
+ # type: (Union[str, np.dtype, 'ExtensionDtype', type], Any) -> None
+ from pandas.core.dtypes.missing import na_value_for_dtype
+ from pandas.core.dtypes.common import (
+ pandas_dtype, is_string_dtype, is_scalar
+ )
+
+ if isinstance(dtype, type(self)):
+ if fill_value is None:
+ fill_value = dtype.fill_value
+ dtype = dtype.subtype
+
+ dtype = pandas_dtype(dtype)
+ if is_string_dtype(dtype):
+ dtype = np.dtype('object')
+
+ if fill_value is None:
+ fill_value = na_value_for_dtype(dtype)
+
+ if not is_scalar(fill_value):
+ raise ValueError("fill_value must be a scalar. Got {} "
+ "instead".format(fill_value))
+ self._dtype = dtype
+ self._fill_value = fill_value
+
+ def __hash__(self):
+ # Python3 doesn't inherit __hash__ when a base class overrides
+ # __eq__, so we explicitly do it here.
+ return super(SparseDtype, self).__hash__()
+
+ def __eq__(self, other):
+ # We have to override __eq__ to handle NA values in _metadata.
+ # The base class does simple == checks, which fail for NA.
+ if isinstance(other, compat.string_types):
+ try:
+ other = self.construct_from_string(other)
+ except TypeError:
+ return False
+
+ if isinstance(other, type(self)):
+ subtype = self.subtype == other.subtype
+ if self._is_na_fill_value:
+ # this case is complicated by two things:
+ # SparseDtype(float, float(nan)) == SparseDtype(float, np.nan)
+ # SparseDtype(float, np.nan) != SparseDtype(float, pd.NaT)
+ # i.e. we want to treat any floating-point NaN as equal, but
+ # not a floating-point NaN and a datetime NaT.
+ fill_value = (
+ other._is_na_fill_value and
+ isinstance(self.fill_value, type(other.fill_value)) or
+ isinstance(other.fill_value, type(self.fill_value))
+ )
+ else:
+ fill_value = self.fill_value == other.fill_value
+
+ return subtype and fill_value
+ return False
+
+ @property
+ def fill_value(self):
+ """
+ The fill value of the array.
+
+ Converting the SparseArray to a dense ndarray will fill the
+ array with this value.
+
+ .. warning::
+
+ It's possible to end up with a SparseArray that has ``fill_value``
+ values in ``sp_values``. This can occur, for example, when setting
+ ``SparseArray.fill_value`` directly.
+ """
+ return self._fill_value
+
+ @property
+ def _is_na_fill_value(self):
+ from pandas.core.dtypes.missing import isna
+ return isna(self.fill_value)
+
+ @property
+ def _is_numeric(self):
+ from pandas.core.dtypes.common import is_object_dtype
+ return not is_object_dtype(self.subtype)
+
+ @property
+ def _is_boolean(self):
+ from pandas.core.dtypes.common import is_bool_dtype
+ return is_bool_dtype(self.subtype)
+
+ @property
+ def kind(self):
+ return self.subtype.kind
+
+ @property
+ def type(self):
+ return self.subtype.type
+
+ @property
+ def subtype(self):
+ return self._dtype
+
+ @property
+ def name(self):
+ return 'Sparse[{}, {}]'.format(self.subtype.name, self.fill_value)
+
+ def __repr__(self):
+ return self.name
+
+ @classmethod
+ def construct_array_type(cls):
+ return SparseArray
+
+ @classmethod
+ def construct_from_string(cls, string):
+ """
+ Construct a SparseDtype from a string form.
+
+ Parameters
+ ----------
+ string : str
+ Can take the following forms.
+
+ string dtype
+ ================ ============================
+ 'int' SparseDtype[np.int64, 0]
+ 'Sparse' SparseDtype[np.float64, nan]
+ 'Sparse[int]' SparseDtype[np.int64, 0]
+ 'Sparse[int, 0]' SparseDtype[np.int64, 0]
+ ================ ============================
+
+ It is not possible to specify non-default fill values
+ with a string. An argument like ``'Sparse[int, 1]'``
+ will raise a ``TypeError`` because the default fill value
+ for integers is 0.
+
+ Returns
+ -------
+ SparseDtype
+ """
+ msg = "Could not construct SparseDtype from '{}'".format(string)
+ if string.startswith("Sparse"):
+ try:
+ sub_type, has_fill_value = cls._parse_subtype(string)
+ result = SparseDtype(sub_type)
+ except Exception:
+ raise TypeError(msg)
+ else:
+ msg = ("Could not construct SparseDtype from '{}'.\n\nIt "
+ "looks like the fill_value in the string is not "
+ "the default for the dtype. Non-default fill_values "
+ "are not supported. Use the 'SparseDtype()' "
+ "constructor instead.")
+ if has_fill_value and str(result) != string:
+ raise TypeError(msg.format(string))
+ return result
+ else:
+ raise TypeError(msg)
+
+ @staticmethod
+ def _parse_subtype(dtype):
+ """
+ Parse a string to get the subtype
+
+ Parameters
+ ----------
+ dtype : str
+ A string like
+
+ * Sparse[subtype]
+ * Sparse[subtype, fill_value]
+
+ Returns
+ -------
+ subtype : str
+
+ Raises
+ ------
+ ValueError
+ When the subtype cannot be extracted.
+ """
+ xpr = re.compile(
+ r"Sparse\[(?P<subtype>[^,]*)(, )?(?P<fill_value>.*?)?\]$"
+ )
+ m = xpr.match(dtype)
+ has_fill_value = False
+ if m:
+ subtype = m.groupdict()['subtype']
+ has_fill_value = m.groupdict()['fill_value'] or has_fill_value
+ elif dtype == "Sparse":
+ subtype = 'float64'
+ else:
+ raise ValueError("Cannot parse {}".format(dtype))
+ return subtype, has_fill_value
+
+ @classmethod
+ def is_dtype(cls, dtype):
+ dtype = getattr(dtype, 'dtype', dtype)
+ if (isinstance(dtype, compat.string_types) and
+ dtype.startswith("Sparse")):
+ sub_type, _ = cls._parse_subtype(dtype)
+ dtype = np.dtype(sub_type)
+ elif isinstance(dtype, cls):
+ return True
+ return isinstance(dtype, np.dtype) or dtype == 'Sparse'
+
+# ----------------------------------------------------------------------------
+# Array
_sparray_doc_kwargs = dict(klass='SparseArray')
diff --git a/pandas/core/dtypes/common.py b/pandas/core/dtypes/common.py
index 7a4e7022f7819..22da546355df6 100644
--- a/pandas/core/dtypes/common.py
+++ b/pandas/core/dtypes/common.py
@@ -1,5 +1,4 @@
""" common type operations """
-
import numpy as np
from pandas.compat import (string_types, text_type, binary_type,
PY3, PY36)
@@ -12,7 +11,6 @@
PeriodDtype, IntervalDtype,
PandasExtensionDtype, ExtensionDtype,
_pandas_registry)
-from pandas.core.sparse.dtype import SparseDtype
from pandas.core.dtypes.generic import (
ABCCategorical, ABCPeriodIndex, ABCDatetimeIndex, ABCSeries,
ABCSparseArray, ABCSparseSeries, ABCCategoricalIndex, ABCIndexClass,
@@ -23,7 +21,6 @@
is_file_like, is_re, is_re_compilable, is_sequence, is_nested_list_like,
is_named_tuple, is_array_like, is_decimal, is_complex, is_interval)
-
_POSSIBLY_CAST_DTYPES = {np.dtype(t).name
for t in ['O', 'int8', 'uint8', 'int16', 'uint16',
'int32', 'uint32', 'int64', 'uint64']}
@@ -181,7 +178,7 @@ def is_sparse(arr):
>>> is_sparse(bsr_matrix([1, 2, 3]))
False
"""
- from pandas.core.sparse.dtype import SparseDtype
+ from pandas.core.arrays.sparse import SparseDtype
dtype = getattr(arr, 'dtype', arr)
return isinstance(dtype, SparseDtype)
@@ -1928,10 +1925,13 @@ def _get_dtype_type(arr_or_dtype):
elif is_interval_dtype(arr_or_dtype):
return Interval
return _get_dtype_type(np.dtype(arr_or_dtype))
- elif isinstance(arr_or_dtype, (ABCSparseSeries, ABCSparseArray,
- SparseDtype)):
- dtype = getattr(arr_or_dtype, 'dtype', arr_or_dtype)
- return dtype.type
+ else:
+ from pandas.core.arrays.sparse import SparseDtype
+ if isinstance(arr_or_dtype, (ABCSparseSeries,
+ ABCSparseArray,
+ SparseDtype)):
+ dtype = getattr(arr_or_dtype, 'dtype', arr_or_dtype)
+ return dtype.type
try:
return arr_or_dtype.dtype.type
except AttributeError:
diff --git a/pandas/core/dtypes/concat.py b/pandas/core/dtypes/concat.py
index ac824708245d2..91fbaf736aae8 100644
--- a/pandas/core/dtypes/concat.py
+++ b/pandas/core/dtypes/concat.py
@@ -556,7 +556,7 @@ def _concat_sparse(to_concat, axis=0, typs=None):
a single array, preserving the combined dtypes
"""
- from pandas.core.sparse.array import SparseArray
+ from pandas.core.arrays import SparseArray
fill_values = [x.fill_value for x in to_concat
if isinstance(x, SparseArray)]
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index e9be7a3e9afb8..064a1b72eb4c8 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -1763,7 +1763,7 @@ def to_sparse(self, fill_value=None, kind='block'):
>>> type(sdf)
<class 'pandas.core.sparse.frame.SparseDataFrame'>
"""
- from pandas.core.sparse.frame import SparseDataFrame
+ from pandas.core.sparse.api import SparseDataFrame
return SparseDataFrame(self._series, index=self.index,
columns=self.columns, default_kind=kind,
default_fill_value=fill_value)
diff --git a/pandas/core/internals/managers.py b/pandas/core/internals/managers.py
index 3667d7c5e39dc..dd0bb1ab8bacb 100644
--- a/pandas/core/internals/managers.py
+++ b/pandas/core/internals/managers.py
@@ -29,7 +29,7 @@
from pandas.core.base import PandasObject
import pandas.core.algorithms as algos
-from pandas.core.sparse.array import _maybe_to_sparse
+from pandas.core.arrays.sparse import _maybe_to_sparse
from pandas.core.index import Index, MultiIndex, ensure_index
from pandas.core.indexing import maybe_convert_indices
diff --git a/pandas/core/ops.py b/pandas/core/ops.py
index 640b2812d3e85..8d1ed6486a456 100644
--- a/pandas/core/ops.py
+++ b/pandas/core/ops.py
@@ -2116,7 +2116,7 @@ def _sparse_series_op(left, right, op, name):
new_index = left.index
new_name = get_op_result_name(left, right)
- from pandas.core.sparse.array import _sparse_array_op
+ from pandas.core.arrays.sparse import _sparse_array_op
lvalues, rvalues = _cast_sparse_series_op(left.values, right.values, name)
result = _sparse_array_op(lvalues, rvalues, op, name)
return left._constructor(result, index=new_index, name=new_name)
@@ -2130,7 +2130,7 @@ def _arith_method_SPARSE_ARRAY(cls, op, special):
op_name = _get_op_name(op, special)
def wrapper(self, other):
- from pandas.core.sparse.array import (
+ from pandas.core.arrays.sparse.array import (
SparseArray, _sparse_array_op, _wrap_result, _get_fill)
if isinstance(other, np.ndarray):
if len(self) != len(other):
diff --git a/pandas/core/reshape/reshape.py b/pandas/core/reshape/reshape.py
index 7bee1ba0e2eb2..03b77f0e787f0 100644
--- a/pandas/core/reshape/reshape.py
+++ b/pandas/core/reshape/reshape.py
@@ -19,7 +19,7 @@
from pandas.core.frame import DataFrame
from pandas.core.sparse.api import SparseDataFrame, SparseSeries
-from pandas.core.sparse.array import SparseArray
+from pandas.core.arrays import SparseArray
from pandas._libs.sparse import IntIndex
from pandas.core.arrays import Categorical
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 4f6bca93d377b..b4566ebd36d13 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -1384,7 +1384,7 @@ def to_sparse(self, kind='block', fill_value=None):
"""
# TODO: deprecate
from pandas.core.sparse.series import SparseSeries
- from pandas.core.sparse.array import SparseArray
+ from pandas.core.arrays import SparseArray
values = SparseArray(self, kind=kind, fill_value=fill_value)
return SparseSeries(
diff --git a/pandas/core/sparse/api.py b/pandas/core/sparse/api.py
index 0fb0396e34669..e3be241bcdd70 100644
--- a/pandas/core/sparse/api.py
+++ b/pandas/core/sparse/api.py
@@ -1,6 +1,5 @@
# pylint: disable=W0611
# flake8: noqa
-from pandas.core.sparse.array import SparseArray
+from pandas.core.arrays.sparse import SparseArray, SparseDtype
from pandas.core.sparse.series import SparseSeries
from pandas.core.sparse.frame import SparseDataFrame
-from pandas.core.sparse.dtype import SparseDtype
diff --git a/pandas/core/sparse/dtype.py b/pandas/core/sparse/dtype.py
deleted file mode 100644
index 7f99bf8b58847..0000000000000
--- a/pandas/core/sparse/dtype.py
+++ /dev/null
@@ -1,249 +0,0 @@
-import re
-
-import numpy as np
-
-from pandas.core.dtypes.base import ExtensionDtype
-from pandas.core.dtypes.dtypes import register_extension_dtype
-from pandas import compat
-
-
-@register_extension_dtype
-class SparseDtype(ExtensionDtype):
- """
- Dtype for data stored in :class:`SparseArray`.
-
- This dtype implements the pandas ExtensionDtype interface.
-
- .. versionadded:: 0.24.0
-
- Parameters
- ----------
- dtype : str, ExtensionDtype, numpy.dtype, type, default numpy.float64
- The dtype of the underlying array storing the non-fill value values.
- fill_value : scalar, optional.
- The scalar value not stored in the SparseArray. By default, this
- depends on `dtype`.
-
- ========== ==========
- dtype na_value
- ========== ==========
- float ``np.nan``
- int ``0``
- bool ``False``
- datetime64 ``pd.NaT``
- timedelta64 ``pd.NaT``
- ========== ==========
-
- The default value may be overridden by specifying a `fill_value`.
- """
- # We include `_is_na_fill_value` in the metadata to avoid hash collisions
- # between SparseDtype(float, 0.0) and SparseDtype(float, nan).
- # Without is_na_fill_value in the comparison, those would be equal since
- # hash(nan) is (sometimes?) 0.
- _metadata = ('_dtype', '_fill_value', '_is_na_fill_value')
-
- def __init__(self, dtype=np.float64, fill_value=None):
- # type: (Union[str, np.dtype, 'ExtensionDtype', type], Any) -> None
- from pandas.core.dtypes.missing import na_value_for_dtype
- from pandas.core.dtypes.common import (
- pandas_dtype, is_string_dtype, is_scalar
- )
-
- if isinstance(dtype, type(self)):
- if fill_value is None:
- fill_value = dtype.fill_value
- dtype = dtype.subtype
-
- dtype = pandas_dtype(dtype)
- if is_string_dtype(dtype):
- dtype = np.dtype('object')
-
- if fill_value is None:
- fill_value = na_value_for_dtype(dtype)
-
- if not is_scalar(fill_value):
- raise ValueError("fill_value must be a scalar. Got {} "
- "instead".format(fill_value))
- self._dtype = dtype
- self._fill_value = fill_value
-
- def __hash__(self):
- # Python3 doesn't inherit __hash__ when a base class overrides
- # __eq__, so we explicitly do it here.
- return super(SparseDtype, self).__hash__()
-
- def __eq__(self, other):
- # We have to override __eq__ to handle NA values in _metadata.
- # The base class does simple == checks, which fail for NA.
- if isinstance(other, compat.string_types):
- try:
- other = self.construct_from_string(other)
- except TypeError:
- return False
-
- if isinstance(other, type(self)):
- subtype = self.subtype == other.subtype
- if self._is_na_fill_value:
- # this case is complicated by two things:
- # SparseDtype(float, float(nan)) == SparseDtype(float, np.nan)
- # SparseDtype(float, np.nan) != SparseDtype(float, pd.NaT)
- # i.e. we want to treat any floating-point NaN as equal, but
- # not a floating-point NaN and a datetime NaT.
- fill_value = (
- other._is_na_fill_value and
- isinstance(self.fill_value, type(other.fill_value)) or
- isinstance(other.fill_value, type(self.fill_value))
- )
- else:
- fill_value = self.fill_value == other.fill_value
-
- return subtype and fill_value
- return False
-
- @property
- def fill_value(self):
- """
- The fill value of the array.
-
- Converting the SparseArray to a dense ndarray will fill the
- array with this value.
-
- .. warning::
-
- It's possible to end up with a SparseArray that has ``fill_value``
- values in ``sp_values``. This can occur, for example, when setting
- ``SparseArray.fill_value`` directly.
- """
- return self._fill_value
-
- @property
- def _is_na_fill_value(self):
- from pandas.core.dtypes.missing import isna
- return isna(self.fill_value)
-
- @property
- def _is_numeric(self):
- from pandas.core.dtypes.common import is_object_dtype
- return not is_object_dtype(self.subtype)
-
- @property
- def _is_boolean(self):
- from pandas.core.dtypes.common import is_bool_dtype
- return is_bool_dtype(self.subtype)
-
- @property
- def kind(self):
- return self.subtype.kind
-
- @property
- def type(self):
- return self.subtype.type
-
- @property
- def subtype(self):
- return self._dtype
-
- @property
- def name(self):
- return 'Sparse[{}, {}]'.format(self.subtype.name, self.fill_value)
-
- def __repr__(self):
- return self.name
-
- @classmethod
- def construct_array_type(cls):
- from .array import SparseArray
- return SparseArray
-
- @classmethod
- def construct_from_string(cls, string):
- """
- Construct a SparseDtype from a string form.
-
- Parameters
- ----------
- string : str
- Can take the following forms.
-
- string dtype
- ================ ============================
- 'int' SparseDtype[np.int64, 0]
- 'Sparse' SparseDtype[np.float64, nan]
- 'Sparse[int]' SparseDtype[np.int64, 0]
- 'Sparse[int, 0]' SparseDtype[np.int64, 0]
- ================ ============================
-
- It is not possible to specify non-default fill values
- with a string. An argument like ``'Sparse[int, 1]'``
- will raise a ``TypeError`` because the default fill value
- for integers is 0.
-
- Returns
- -------
- SparseDtype
- """
- msg = "Could not construct SparseDtype from '{}'".format(string)
- if string.startswith("Sparse"):
- try:
- sub_type, has_fill_value = cls._parse_subtype(string)
- result = SparseDtype(sub_type)
- except Exception:
- raise TypeError(msg)
- else:
- msg = ("Could not construct SparseDtype from '{}'.\n\nIt "
- "looks like the fill_value in the string is not "
- "the default for the dtype. Non-default fill_values "
- "are not supported. Use the 'SparseDtype()' "
- "constructor instead.")
- if has_fill_value and str(result) != string:
- raise TypeError(msg.format(string))
- return result
- else:
- raise TypeError(msg)
-
- @staticmethod
- def _parse_subtype(dtype):
- """
- Parse a string to get the subtype
-
- Parameters
- ----------
- dtype : str
- A string like
-
- * Sparse[subtype]
- * Sparse[subtype, fill_value]
-
- Returns
- -------
- subtype : str
-
- Raises
- ------
- ValueError
- When the subtype cannot be extracted.
- """
- xpr = re.compile(
- r"Sparse\[(?P<subtype>[^,]*)(, )?(?P<fill_value>.*?)?\]$"
- )
- m = xpr.match(dtype)
- has_fill_value = False
- if m:
- subtype = m.groupdict()['subtype']
- has_fill_value = m.groupdict()['fill_value'] or has_fill_value
- elif dtype == "Sparse":
- subtype = 'float64'
- else:
- raise ValueError("Cannot parse {}".format(dtype))
- return subtype, has_fill_value
-
- @classmethod
- def is_dtype(cls, dtype):
- dtype = getattr(dtype, 'dtype', dtype)
- if (isinstance(dtype, compat.string_types) and
- dtype.startswith("Sparse")):
- sub_type, _ = cls._parse_subtype(dtype)
- dtype = np.dtype(sub_type)
- elif isinstance(dtype, cls):
- return True
- return isinstance(dtype, np.dtype) or dtype == 'Sparse'
diff --git a/pandas/core/sparse/frame.py b/pandas/core/sparse/frame.py
index 36b6ea089f459..2ed275e3bbd2d 100644
--- a/pandas/core/sparse/frame.py
+++ b/pandas/core/sparse/frame.py
@@ -22,8 +22,8 @@
from pandas.core.internals import (BlockManager,
create_block_manager_from_arrays)
import pandas.core.generic as generic
-from pandas.core.sparse.series import SparseSeries, SparseArray
-from pandas.core.sparse.dtype import SparseDtype
+from pandas.core.arrays.sparse import SparseArray, SparseDtype
+from pandas.core.sparse.series import SparseSeries
from pandas._libs.sparse import BlockIndex, get_blocks
from pandas.util._decorators import Appender
import pandas.core.ops as ops
diff --git a/pandas/core/sparse/series.py b/pandas/core/sparse/series.py
index eebf26bbb9708..35ddd623878d0 100644
--- a/pandas/core/sparse/series.py
+++ b/pandas/core/sparse/series.py
@@ -24,7 +24,7 @@
import pandas._libs.index as libindex
from pandas.util._decorators import Appender, Substitution
-from pandas.core.sparse.array import (
+from pandas.core.arrays import (
SparseArray,
)
from pandas._libs.sparse import BlockIndex, IntIndex
diff --git a/pandas/core/util/hashing.py b/pandas/core/util/hashing.py
index e41885d525653..6a2cfd4d4a7b3 100644
--- a/pandas/core/util/hashing.py
+++ b/pandas/core/util/hashing.py
@@ -2,7 +2,6 @@
data hash pandas / numpy objects
"""
import itertools
-
import numpy as np
from pandas._libs import hashing, tslibs
from pandas.core.dtypes.generic import (
diff --git a/pandas/io/packers.py b/pandas/io/packers.py
index 638b76c780852..135f9e89eaaef 100644
--- a/pandas/io/packers.py
+++ b/pandas/io/packers.py
@@ -70,7 +70,7 @@
from pandas.core.generic import NDFrame
from pandas.core.internals import BlockManager, make_block, _safe_reshape
from pandas.core.sparse.api import SparseSeries, SparseDataFrame
-from pandas.core.sparse.array import BlockIndex, IntIndex
+from pandas.core.arrays.sparse import BlockIndex, IntIndex
from pandas.io.common import get_filepath_or_buffer, _stringify_path
from pandas.io.msgpack import Unpacker as _Unpacker, Packer as _Packer, ExtType
diff --git a/pandas/io/pytables.py b/pandas/io/pytables.py
index de193db846c50..9cceff30c9e0e 100644
--- a/pandas/io/pytables.py
+++ b/pandas/io/pytables.py
@@ -50,7 +50,7 @@
from pandas.core.internals import (BlockManager, make_block,
_block2d_to_blocknd,
_factor_indexer, _block_shape)
-from pandas.core.sparse.array import BlockIndex, IntIndex
+from pandas.core.arrays.sparse import BlockIndex, IntIndex
from pandas.io.common import _stringify_path
from pandas.io.formats.printing import adjoin, pprint_thing
diff --git a/pandas/tests/arrays/sparse/__init__.py b/pandas/tests/arrays/sparse/__init__.py
new file mode 100644
index 0000000000000..e69de29bb2d1d
diff --git a/pandas/tests/sparse/test_arithmetics.py b/pandas/tests/arrays/sparse/test_arithmetics.py
similarity index 100%
rename from pandas/tests/sparse/test_arithmetics.py
rename to pandas/tests/arrays/sparse/test_arithmetics.py
diff --git a/pandas/tests/sparse/test_array.py b/pandas/tests/arrays/sparse/test_array.py
similarity index 100%
rename from pandas/tests/sparse/test_array.py
rename to pandas/tests/arrays/sparse/test_array.py
diff --git a/pandas/tests/sparse/test_dtype.py b/pandas/tests/arrays/sparse/test_dtype.py
similarity index 100%
rename from pandas/tests/sparse/test_dtype.py
rename to pandas/tests/arrays/sparse/test_dtype.py
diff --git a/pandas/tests/sparse/test_libsparse.py b/pandas/tests/arrays/sparse/test_libsparse.py
similarity index 99%
rename from pandas/tests/sparse/test_libsparse.py
rename to pandas/tests/arrays/sparse/test_libsparse.py
index 3b90d93cee7a4..3d867cdda1d42 100644
--- a/pandas/tests/sparse/test_libsparse.py
+++ b/pandas/tests/arrays/sparse/test_libsparse.py
@@ -6,7 +6,7 @@
import pandas.util.testing as tm
import pandas.util._test_decorators as td
-from pandas.core.sparse.array import IntIndex, BlockIndex, _make_index
+from pandas.core.arrays.sparse import IntIndex, BlockIndex, _make_index
import pandas._libs.sparse as splib
TEST_LENGTH = 20
diff --git a/pandas/tests/extension/test_sparse.py b/pandas/tests/extension/test_sparse.py
index 11bf1cb6e9f05..ca0435141c2e2 100644
--- a/pandas/tests/extension/test_sparse.py
+++ b/pandas/tests/extension/test_sparse.py
@@ -2,8 +2,7 @@
import pandas as pd
import numpy as np
-from pandas.core.sparse.dtype import SparseDtype
-from pandas import SparseArray
+from pandas import SparseArray, SparseDtype
from pandas.errors import PerformanceWarning
from pandas.tests.extension import base
import pandas.util.testing as tm
diff --git a/pandas/tests/series/test_combine_concat.py b/pandas/tests/series/test_combine_concat.py
index f27600d830a93..bf7247caa5d4a 100644
--- a/pandas/tests/series/test_combine_concat.py
+++ b/pandas/tests/series/test_combine_concat.py
@@ -223,13 +223,15 @@ def test_concat_empty_series_dtypes(self):
result = pd.concat([Series(dtype='float64').to_sparse(), Series(
dtype='float64')])
# TODO: release-note: concat sparse dtype
- assert result.dtype == pd.core.sparse.dtype.SparseDtype(np.float64)
+ expected = pd.core.sparse.api.SparseDtype(np.float64)
+ assert result.dtype == expected
assert result.ftype == 'float64:sparse'
result = pd.concat([Series(dtype='float64').to_sparse(), Series(
dtype='object')])
# TODO: release-note: concat sparse dtype
- assert result.dtype == pd.core.sparse.dtype.SparseDtype('object')
+ expected = pd.core.sparse.api.SparseDtype('object')
+ assert result.dtype == expected
assert result.ftype == 'object:sparse'
def test_combine_first_dt64(self):
diff --git a/pandas/tests/series/test_subclass.py b/pandas/tests/series/test_subclass.py
index d539dfa456740..70e44a9d2d40f 100644
--- a/pandas/tests/series/test_subclass.py
+++ b/pandas/tests/series/test_subclass.py
@@ -2,7 +2,7 @@
# pylint: disable-msg=E1101,W0612
import numpy as np
import pandas as pd
-from pandas.core.sparse.dtype import SparseDtype
+from pandas import SparseDtype
import pandas.util.testing as tm
diff --git a/pandas/tests/sparse/series/test_series.py b/pandas/tests/sparse/series/test_series.py
index a1ec8314841e3..7a8b5b5ad407b 100644
--- a/pandas/tests/sparse/series/test_series.py
+++ b/pandas/tests/sparse/series/test_series.py
@@ -18,11 +18,10 @@
from pandas.compat import range, PY36
from pandas.core.reshape.util import cartesian_product
-from pandas.core.sparse.api import SparseDtype
import pandas.core.sparse.frame as spf
from pandas._libs.sparse import BlockIndex, IntIndex
-from pandas.core.sparse.api import SparseSeries
+from pandas import SparseSeries, SparseDtype
from pandas.tests.series.test_api import SharedWithSparse
| - [X] closes #23123
- [X] tests added / passed
- [X] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
Importing `SparseArray` from the `arrays` package was causing circular imports via its `__init__.py`, so I had to empty it and fix every other file that imported from `pandas.core.arrays` | https://api.github.com/repos/pandas-dev/pandas/pulls/23147 | 2018-10-14T11:29:51Z | 2018-10-16T13:37:12Z | 2018-10-16T13:37:12Z | 2018-10-16T13:37:25Z |
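The `_parse_subtype` helper that this PR moves into `pandas.core.arrays.sparse` is driven by a single regex. As a minimal sketch, the regex copied verbatim from the diff can be exercised on its own, outside pandas (the `parse_subtype` wrapper here is a standalone re-implementation for illustration, not the actual pandas code):

```python
import re

# Regex copied from SparseDtype._parse_subtype in the diff above.
xpr = re.compile(
    r"Sparse\[(?P<subtype>[^,]*)(, )?(?P<fill_value>.*?)?\]$"
)

def parse_subtype(dtype):
    """Return (subtype, has_fill_value) for strings like 'Sparse[int, 0]'."""
    m = xpr.match(dtype)
    if m:
        subtype = m.groupdict()['subtype']
        # The fill_value group is empty when no ', <value>' part is present.
        has_fill_value = bool(m.groupdict()['fill_value'])
        return subtype, has_fill_value
    if dtype == "Sparse":
        # Bare 'Sparse' defaults to float64, as in the pandas source.
        return 'float64', False
    raise ValueError("Cannot parse {}".format(dtype))
```

With this, `'Sparse[int, 0]'` yields `('int', True)` while `'Sparse[float64]'` and bare `'Sparse'` report no explicit fill value, which is what lets `construct_from_string` reject non-default fill values.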
DOC: Validate that See Also section items do not contain the pandas. prefix | diff --git a/scripts/tests/test_validate_docstrings.py b/scripts/tests/test_validate_docstrings.py
index 6bf832fb9dc6d..0e10265a7291d 100644
--- a/scripts/tests/test_validate_docstrings.py
+++ b/scripts/tests/test_validate_docstrings.py
@@ -334,33 +334,6 @@ def method(self, foo=None, bar=None):
pass
-class BadSeeAlso(object):
-
- def desc_no_period(self):
- """
- Return the first 5 elements of the Series.
-
- See Also
- --------
- Series.tail : Return the last 5 elements of the Series.
- Series.iloc : Return a slice of the elements in the Series,
- which can also be used to return the first or last n
- """
- pass
-
- def desc_first_letter_lowercase(self):
- """
- Return the first 5 elements of the Series.
-
- See Also
- --------
- Series.tail : return the last 5 elements of the Series.
- Series.iloc : Return a slice of the elements in the Series,
- which can also be used to return the first or last n.
- """
- pass
-
-
class BadSummaries(object):
def wrong_line(self):
@@ -573,6 +546,44 @@ def no_punctuation(self):
return "Hello world!"
+class BadSeeAlso(object):
+
+ def desc_no_period(self):
+ """
+ Return the first 5 elements of the Series.
+
+ See Also
+ --------
+ Series.tail : Return the last 5 elements of the Series.
+ Series.iloc : Return a slice of the elements in the Series,
+ which can also be used to return the first or last n
+ """
+ pass
+
+ def desc_first_letter_lowercase(self):
+ """
+ Return the first 5 elements of the Series.
+
+ See Also
+ --------
+ Series.tail : return the last 5 elements of the Series.
+ Series.iloc : Return a slice of the elements in the Series,
+ which can also be used to return the first or last n.
+ """
+ pass
+
+ def prefix_pandas(self):
+ """
+ Have `pandas` prefix in See Also section.
+
+ See Also
+ --------
+ pandas.Series.rename : Alter Series index labels or name.
+ DataFrame.head : The first `n` rows of the caller object.
+ """
+ pass
+
+
class TestValidator(object):
def _import_path(self, klass=None, func=None):
@@ -688,7 +699,11 @@ def test_bad_generic_functions(self, func):
pytest.param('BadReturns', 'no_description', ('foo',),
marks=pytest.mark.xfail),
pytest.param('BadReturns', 'no_punctuation', ('foo',),
- marks=pytest.mark.xfail)
+ marks=pytest.mark.xfail),
+ # See Also tests
+ ('BadSeeAlso', 'prefix_pandas',
+ ('pandas.Series.rename in `See Also` section '
+ 'does not need `pandas` prefix',))
])
def test_bad_examples(self, capsys, klass, func, msgs):
result = validate_one(self._import_path(klass=klass, func=func)) # noqa:F821
diff --git a/scripts/validate_docstrings.py b/scripts/validate_docstrings.py
index 2fef3332de55c..29d485550be40 100755
--- a/scripts/validate_docstrings.py
+++ b/scripts/validate_docstrings.py
@@ -515,7 +515,10 @@ def validate_one(func_name):
else:
errs.append('Missing description for '
'See Also "{}" reference'.format(rel_name))
-
+ if rel_name.startswith('pandas.'):
+ errs.append('{} in `See Also` section does not '
+ 'need `pandas` prefix, use {} instead.'
+ .format(rel_name, rel_name[len('pandas.'):]))
for line in doc.raw_doc.splitlines():
if re.match("^ *\t", line):
errs.append('Tabs found at the start of line "{}", '
| Fix #23136
Please also see similar PR #23143 | https://api.github.com/repos/pandas-dev/pandas/pulls/23145 | 2018-10-14T08:11:23Z | 2018-10-27T18:20:16Z | 2018-10-27T18:20:16Z | 2019-01-02T20:26:24Z |
DOC: Add docstring validations for "See Also" section | diff --git a/scripts/tests/test_validate_docstrings.py b/scripts/tests/test_validate_docstrings.py
index 27c63e3ba3a79..8ebf4299edca4 100644
--- a/scripts/tests/test_validate_docstrings.py
+++ b/scripts/tests/test_validate_docstrings.py
@@ -334,6 +334,33 @@ def method(self, foo=None, bar=None):
pass
+class BadSeeAlso(object):
+
+ def desc_no_period(self):
+ """
+ Return the first 5 elements of the Series.
+
+ See Also
+ --------
+ Series.tail : Return the last 5 elements of the Series.
+ Series.iloc : Return a slice of the elements in the Series,
+ which can also be used to return the first or last n
+ """
+ pass
+
+ def desc_first_letter_lowercase(self):
+ """
+ Return the first 5 elements of the Series.
+
+ See Also
+ --------
+ Series.tail : return the last 5 elements of the Series.
+ Series.iloc : Return a slice of the elements in the Series,
+ which can also be used to return the first or last n.
+ """
+ pass
+
+
class BadSummaries(object):
def wrong_line(self):
@@ -564,6 +591,11 @@ def test_bad_generic_functions(self, func):
assert errors
@pytest.mark.parametrize("klass,func,msgs", [
+ # See Also tests
+ ('BadSeeAlso', 'desc_no_period',
+ ('Missing period at end of description for See Also "Series.iloc"',)),
+ ('BadSeeAlso', 'desc_first_letter_lowercase',
+ ('should be capitalized for See Also "Series.tail"',)),
# Summary tests
('BadSummaries', 'wrong_line',
('should start in the line immediately after the opening quotes',)),
diff --git a/scripts/validate_docstrings.py b/scripts/validate_docstrings.py
index 6588522331433..a492a95b79bf9 100755
--- a/scripts/validate_docstrings.py
+++ b/scripts/validate_docstrings.py
@@ -496,7 +496,14 @@ def validate_one(func_name):
wrns.append('See Also section not found')
else:
for rel_name, rel_desc in doc.see_also.items():
- if not rel_desc:
+ if rel_desc:
+ if not rel_desc.endswith('.'):
+ errs.append('Missing period at end of description for '
+ 'See Also "{}" reference'.format(rel_name))
+ if not rel_desc[0].isupper():
+ errs.append('Description should be capitalized for '
+ 'See Also "{}" reference'.format(rel_name))
+ else:
errs.append('Missing description for '
'See Also "{}" reference'.format(rel_name))
| - [x] closes #23135
- [x] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
| https://api.github.com/repos/pandas-dev/pandas/pulls/23143 | 2018-10-14T01:55:26Z | 2018-10-26T21:23:22Z | 2018-10-26T21:23:22Z | 2018-10-26T21:23:31Z |
ENH: MultiIndex.from_frame | diff --git a/doc/source/advanced.rst b/doc/source/advanced.rst
index 39082ef7a4c69..0cc2cea774bbd 100644
--- a/doc/source/advanced.rst
+++ b/doc/source/advanced.rst
@@ -62,8 +62,9 @@ The :class:`MultiIndex` object is the hierarchical analogue of the standard
can think of ``MultiIndex`` as an array of tuples where each tuple is unique. A
``MultiIndex`` can be created from a list of arrays (using
:meth:`MultiIndex.from_arrays`), an array of tuples (using
-:meth:`MultiIndex.from_tuples`), or a crossed set of iterables (using
-:meth:`MultiIndex.from_product`). The ``Index`` constructor will attempt to return
+:meth:`MultiIndex.from_tuples`), a crossed set of iterables (using
+:meth:`MultiIndex.from_product`), or a :class:`DataFrame` (using
+:meth:`MultiIndex.from_frame`). The ``Index`` constructor will attempt to return
a ``MultiIndex`` when it is passed a list of tuples. The following examples
demonstrate different ways to initialize MultiIndexes.
@@ -89,6 +90,19 @@ to use the :meth:`MultiIndex.from_product` method:
iterables = [['bar', 'baz', 'foo', 'qux'], ['one', 'two']]
pd.MultiIndex.from_product(iterables, names=['first', 'second'])
+You can also construct a ``MultiIndex`` from a ``DataFrame`` directly, using
+the method :meth:`MultiIndex.from_frame`. This is a complementary method to
+:meth:`MultiIndex.to_frame`.
+
+.. versionadded:: 0.24.0
+
+.. ipython:: python
+
+ df = pd.DataFrame([['bar', 'one'], ['bar', 'two'],
+ ['foo', 'one'], ['foo', 'two']],
+ columns=['first', 'second'])
+ pd.MultiIndex.from_frame(df)
+
As a convenience, you can pass a list of arrays directly into ``Series`` or
``DataFrame`` to construct a ``MultiIndex`` automatically:
diff --git a/doc/source/api.rst b/doc/source/api.rst
index 1a23587d2ebb5..49c89a53e7b17 100644
--- a/doc/source/api.rst
+++ b/doc/source/api.rst
@@ -1703,6 +1703,7 @@ MultiIndex Constructors
MultiIndex.from_arrays
MultiIndex.from_tuples
MultiIndex.from_product
+ MultiIndex.from_frame
MultiIndex Attributes
~~~~~~~~~~~~~~~~~~~~~
diff --git a/doc/source/whatsnew/v0.24.0.rst b/doc/source/whatsnew/v0.24.0.rst
index 0b2b526dfe9e7..ab4b7f3c41fed 100644
--- a/doc/source/whatsnew/v0.24.0.rst
+++ b/doc/source/whatsnew/v0.24.0.rst
@@ -378,6 +378,7 @@ Backwards incompatible API changes
- Passing scalar values to :class:`DatetimeIndex` or :class:`TimedeltaIndex` will now raise ``TypeError`` instead of ``ValueError`` (:issue:`23539`)
- ``max_rows`` and ``max_cols`` parameters removed from :class:`HTMLFormatter` since truncation is handled by :class:`DataFrameFormatter` (:issue:`23818`)
- :meth:`read_csv` will now raise a ``ValueError`` if a column with missing values is declared as having dtype ``bool`` (:issue:`20591`)
+- The column order of the resultant :class:`DataFrame` from :meth:`MultiIndex.to_frame` is now guaranteed to match the :attr:`MultiIndex.names` order. (:issue:`22420`)
.. _whatsnew_0240.api_breaking.deps:
@@ -1433,6 +1434,7 @@ MultiIndex
- Removed compatibility for :class:`MultiIndex` pickles prior to version 0.8.0; compatibility with :class:`MultiIndex` pickles from version 0.13 forward is maintained (:issue:`21654`)
- :meth:`MultiIndex.get_loc_level` (and as a consequence, ``.loc`` on a ``Series`` or ``DataFrame`` with a :class:`MultiIndex` index) will now raise a ``KeyError``, rather than returning an empty ``slice``, if asked a label which is present in the ``levels`` but is unused (:issue:`22221`)
+- :class:`MultiIndex` has gained the :meth:`MultiIndex.from_frame` method, which allows constructing a :class:`MultiIndex` object from a :class:`DataFrame` (:issue:`22420`)
- Fix ``TypeError`` in Python 3 when creating :class:`MultiIndex` in which some levels have mixed types, e.g. when some labels are tuples (:issue:`15457`)
I/O
diff --git a/pandas/core/indexes/multi.py b/pandas/core/indexes/multi.py
index c4ae7ef54bfce..be0856e1a825a 100644
--- a/pandas/core/indexes/multi.py
+++ b/pandas/core/indexes/multi.py
@@ -1,4 +1,5 @@
# pylint: disable=E1101,E1103,W0232
+from collections import OrderedDict
import datetime
from sys import getsizeof
import warnings
@@ -18,6 +19,7 @@
is_integer, is_iterator, is_list_like, is_object_dtype, is_scalar,
pandas_dtype)
from pandas.core.dtypes.dtypes import ExtensionDtype, PandasExtensionDtype
+from pandas.core.dtypes.generic import ABCDataFrame
from pandas.core.dtypes.missing import array_equivalent, isna
import pandas.core.algorithms as algos
@@ -125,25 +127,25 @@ class MultiIndex(Index):
Parameters
----------
levels : sequence of arrays
- The unique labels for each level
+ The unique labels for each level.
codes : sequence of arrays
- Integers for each level designating which label at each location
+ Integers for each level designating which label at each location.
.. versionadded:: 0.24.0
labels : sequence of arrays
- Integers for each level designating which label at each location
+ Integers for each level designating which label at each location.
.. deprecated:: 0.24.0
Use ``codes`` instead
sortorder : optional int
Level of sortedness (must be lexicographically sorted by that
- level)
+ level).
names : optional sequence of objects
- Names for each of the index levels. (name is accepted for compat)
- copy : boolean, default False
- Copy the meta-data
- verify_integrity : boolean, default True
- Check that the levels/codes are consistent and valid
+ Names for each of the index levels. (name is accepted for compat).
+ copy : bool, default False
+ Copy the meta-data.
+ verify_integrity : bool, default True
+ Check that the levels/codes are consistent and valid.
Attributes
----------
@@ -158,6 +160,7 @@ class MultiIndex(Index):
from_arrays
from_tuples
from_product
+ from_frame
set_levels
set_codes
to_frame
@@ -175,13 +178,9 @@ class MultiIndex(Index):
MultiIndex.from_product : Create a MultiIndex from the cartesian product
of iterables.
MultiIndex.from_tuples : Convert list of tuples to a MultiIndex.
+ MultiIndex.from_frame : Make a MultiIndex from a DataFrame.
Index : The base pandas Index type.
- Notes
- -----
- See the `user guide
- <http://pandas.pydata.org/pandas-docs/stable/advanced.html>`_ for more.
-
Examples
---------
A new ``MultiIndex`` is typically constructed using one of the helper
@@ -196,6 +195,11 @@ class MultiIndex(Index):
See further examples for how to construct a MultiIndex in the doc strings
of the mentioned helper methods.
+
+ Notes
+ -----
+ See the `user guide
+ <http://pandas.pydata.org/pandas-docs/stable/advanced.html>`_ for more.
"""
# initialize to zero-length tuples to make everything work
@@ -288,7 +292,7 @@ def _verify_integrity(self, codes=None, levels=None):
@classmethod
def from_arrays(cls, arrays, sortorder=None, names=None):
"""
- Convert arrays to MultiIndex
+ Convert arrays to MultiIndex.
Parameters
----------
@@ -297,7 +301,9 @@ def from_arrays(cls, arrays, sortorder=None, names=None):
len(arrays) is the number of levels.
sortorder : int or None
Level of sortedness (must be lexicographically sorted by that
- level)
+ level).
+ names : list / sequence of str, optional
+ Names for the levels in the index.
Returns
-------
@@ -308,11 +314,15 @@ def from_arrays(cls, arrays, sortorder=None, names=None):
MultiIndex.from_tuples : Convert list of tuples to MultiIndex.
MultiIndex.from_product : Make a MultiIndex from cartesian product
of iterables.
+ MultiIndex.from_frame : Make a MultiIndex from a DataFrame.
Examples
--------
>>> arrays = [[1, 1, 2, 2], ['red', 'blue', 'red', 'blue']]
>>> pd.MultiIndex.from_arrays(arrays, names=('number', 'color'))
+ MultiIndex(levels=[[1, 2], ['blue', 'red']],
+ labels=[[0, 0, 1, 1], [1, 0, 1, 0]],
+ names=['number', 'color'])
"""
if not is_list_like(arrays):
raise TypeError("Input must be a list / sequence of array-likes.")
@@ -337,7 +347,7 @@ def from_arrays(cls, arrays, sortorder=None, names=None):
@classmethod
def from_tuples(cls, tuples, sortorder=None, names=None):
"""
- Convert list of tuples to MultiIndex
+ Convert list of tuples to MultiIndex.
Parameters
----------
@@ -345,7 +355,9 @@ def from_tuples(cls, tuples, sortorder=None, names=None):
Each tuple is the index of one row/column.
sortorder : int or None
Level of sortedness (must be lexicographically sorted by that
- level)
+ level).
+ names : list / sequence of str, optional
+ Names for the levels in the index.
Returns
-------
@@ -353,15 +365,19 @@ def from_tuples(cls, tuples, sortorder=None, names=None):
See Also
--------
- MultiIndex.from_arrays : Convert list of arrays to MultiIndex
+ MultiIndex.from_arrays : Convert list of arrays to MultiIndex.
MultiIndex.from_product : Make a MultiIndex from cartesian product
- of iterables
+ of iterables.
+ MultiIndex.from_frame : Make a MultiIndex from a DataFrame.
Examples
--------
>>> tuples = [(1, u'red'), (1, u'blue'),
- (2, u'red'), (2, u'blue')]
+ ... (2, u'red'), (2, u'blue')]
>>> pd.MultiIndex.from_tuples(tuples, names=('number', 'color'))
+ MultiIndex(levels=[[1, 2], ['blue', 'red']],
+ labels=[[0, 0, 1, 1], [1, 0, 1, 0]],
+ names=['number', 'color'])
"""
if not is_list_like(tuples):
raise TypeError('Input must be a list / sequence of tuple-likes.')
@@ -388,7 +404,7 @@ def from_tuples(cls, tuples, sortorder=None, names=None):
@classmethod
def from_product(cls, iterables, sortorder=None, names=None):
"""
- Make a MultiIndex from the cartesian product of multiple iterables
+ Make a MultiIndex from the cartesian product of multiple iterables.
Parameters
----------
@@ -397,7 +413,7 @@ def from_product(cls, iterables, sortorder=None, names=None):
sortorder : int or None
Level of sortedness (must be lexicographically sorted by that
level).
- names : list / sequence of strings or None
+ names : list / sequence of str, optional
Names for the levels in the index.
Returns
@@ -408,16 +424,17 @@ def from_product(cls, iterables, sortorder=None, names=None):
--------
MultiIndex.from_arrays : Convert list of arrays to MultiIndex.
MultiIndex.from_tuples : Convert list of tuples to MultiIndex.
+ MultiIndex.from_frame : Make a MultiIndex from a DataFrame.
Examples
--------
>>> numbers = [0, 1, 2]
- >>> colors = [u'green', u'purple']
+ >>> colors = ['green', 'purple']
>>> pd.MultiIndex.from_product([numbers, colors],
- names=['number', 'color'])
- MultiIndex(levels=[[0, 1, 2], [u'green', u'purple']],
+ ... names=['number', 'color'])
+ MultiIndex(levels=[[0, 1, 2], ['green', 'purple']],
labels=[[0, 0, 1, 1, 2, 2], [0, 1, 0, 1, 0, 1]],
- names=[u'number', u'color'])
+ names=['number', 'color'])
"""
from pandas.core.arrays.categorical import _factorize_from_iterables
from pandas.core.reshape.util import cartesian_product
@@ -431,6 +448,68 @@ def from_product(cls, iterables, sortorder=None, names=None):
codes = cartesian_product(codes)
return MultiIndex(levels, codes, sortorder=sortorder, names=names)
+ @classmethod
+ def from_frame(cls, df, sortorder=None, names=None):
+ """
+ Make a MultiIndex from a DataFrame.
+
+ .. versionadded:: 0.24.0
+
+ Parameters
+ ----------
+ df : DataFrame
+ DataFrame to be converted to MultiIndex.
+ sortorder : int, optional
+ Level of sortedness (must be lexicographically sorted by that
+ level).
+ names : list-like, optional
+ If no names are provided, use the column names, or tuple of column
+ names if the columns is a MultiIndex. If a sequence, overwrite
+ names with the given sequence.
+
+ Returns
+ -------
+ MultiIndex
+ The MultiIndex representation of the given DataFrame.
+
+ See Also
+ --------
+ MultiIndex.from_arrays : Convert list of arrays to MultiIndex.
+ MultiIndex.from_tuples : Convert list of tuples to MultiIndex.
+ MultiIndex.from_product : Make a MultiIndex from cartesian product
+ of iterables.
+
+ Examples
+ --------
+ >>> df = pd.DataFrame([['HI', 'Temp'], ['HI', 'Precip'],
+ ... ['NJ', 'Temp'], ['NJ', 'Precip']],
+ ... columns=['a', 'b'])
+ >>> df
+ a b
+ 0 HI Temp
+ 1 HI Precip
+ 2 NJ Temp
+ 3 NJ Precip
+
+ >>> pd.MultiIndex.from_frame(df)
+ MultiIndex(levels=[['HI', 'NJ'], ['Precip', 'Temp']],
+ labels=[[0, 0, 1, 1], [1, 0, 1, 0]],
+ names=['a', 'b'])
+
+ Using explicit names, instead of the column names
+
+ >>> pd.MultiIndex.from_frame(df, names=['state', 'observation'])
+ MultiIndex(levels=[['HI', 'NJ'], ['Precip', 'Temp']],
+ labels=[[0, 0, 1, 1], [1, 0, 1, 0]],
+ names=['state', 'observation'])
+ """
+ if not isinstance(df, ABCDataFrame):
+ raise TypeError("Input must be a DataFrame")
+
+ column_names, columns = lzip(*df.iteritems())
+ names = column_names if names is None else names
+ return cls.from_arrays(columns, sortorder=sortorder, names=names)
+
# --------------------------------------------------------------------
@property
@@ -1386,11 +1465,16 @@ def to_frame(self, index=True, name=None):
else:
idx_names = self.names
- result = DataFrame({(name or level):
- self._get_level_values(level)
- for name, level in
- zip(idx_names, range(len(self.levels)))},
- copy=False)
+ # Guarantee resulting column order
+ result = DataFrame(
+ OrderedDict([
+ ((level if name is None else name),
+ self._get_level_values(level))
+ for name, level in zip(idx_names, range(len(self.levels)))
+ ]),
+ copy=False
+ )
+
if index:
result.index = self
return result
diff --git a/pandas/tests/indexes/multi/test_constructor.py b/pandas/tests/indexes/multi/test_constructor.py
index d80395e513497..e6678baf8a996 100644
--- a/pandas/tests/indexes/multi/test_constructor.py
+++ b/pandas/tests/indexes/multi/test_constructor.py
@@ -1,5 +1,6 @@
# -*- coding: utf-8 -*-
+from collections import OrderedDict
import re
import numpy as np
@@ -108,6 +109,9 @@ def test_copy_in_constructor():
assert mi.levels[0][0] == val
+# ----------------------------------------------------------------------------
+# from_arrays
+# ----------------------------------------------------------------------------
def test_from_arrays(idx):
arrays = [np.asarray(lev).take(level_codes)
for lev, level_codes in zip(idx.levels, idx.codes)]
@@ -278,6 +282,9 @@ def test_from_arrays_different_lengths(idx1, idx2):
MultiIndex.from_arrays([idx1, idx2])
+# ----------------------------------------------------------------------------
+# from_tuples
+# ----------------------------------------------------------------------------
def test_from_tuples():
msg = 'Cannot infer number of levels from empty list'
with pytest.raises(TypeError, match=msg):
@@ -321,6 +328,28 @@ def test_from_tuples_index_values(idx):
assert (result.values == idx.values).all()
+def test_tuples_with_name_string():
+ # GH 15110 and GH 14848
+
+ li = [(0, 0, 1), (0, 1, 0), (1, 0, 0)]
+ with pytest.raises(ValueError):
+ pd.Index(li, name='abc')
+ with pytest.raises(ValueError):
+ pd.Index(li, name='a')
+
+
+def test_from_tuples_with_tuple_label():
+ # GH 15457
+ expected = pd.DataFrame([[2, 1, 2], [4, (1, 2), 3]],
+ columns=['a', 'b', 'c']).set_index(['a', 'b'])
+ idx = pd.MultiIndex.from_tuples([(2, 1), (4, (1, 2))], names=('a', 'b'))
+ result = pd.DataFrame([2, 3], columns=['c'], index=idx)
+ tm.assert_frame_equal(expected, result)
+
+
+# ----------------------------------------------------------------------------
+# from_product
+# ----------------------------------------------------------------------------
def test_from_product_empty_zero_levels():
# 0 levels
msg = "Must pass non-zero number of levels/codes"
@@ -470,20 +499,79 @@ def test_create_index_existing_name(idx):
tm.assert_index_equal(result, expected)
-def test_tuples_with_name_string():
- # GH 15110 and GH 14848
-
- li = [(0, 0, 1), (0, 1, 0), (1, 0, 0)]
- with pytest.raises(ValueError):
- pd.Index(li, name='abc')
- with pytest.raises(ValueError):
- pd.Index(li, name='a')
-
-
-def test_from_tuples_with_tuple_label():
- # GH 15457
- expected = pd.DataFrame([[2, 1, 2], [4, (1, 2), 3]],
- columns=['a', 'b', 'c']).set_index(['a', 'b'])
- idx = pd.MultiIndex.from_tuples([(2, 1), (4, (1, 2))], names=('a', 'b'))
- result = pd.DataFrame([2, 3], columns=['c'], index=idx)
- tm.assert_frame_equal(expected, result)
+# ----------------------------------------------------------------------------
+# from_frame
+# ----------------------------------------------------------------------------
+def test_from_frame():
+ # GH 22420
+ df = pd.DataFrame([['a', 'a'], ['a', 'b'], ['b', 'a'], ['b', 'b']],
+ columns=['L1', 'L2'])
+ expected = pd.MultiIndex.from_tuples([('a', 'a'), ('a', 'b'),
+ ('b', 'a'), ('b', 'b')],
+ names=['L1', 'L2'])
+ result = pd.MultiIndex.from_frame(df)
+ tm.assert_index_equal(expected, result)
+
+
+@pytest.mark.parametrize('non_frame', [
+ pd.Series([1, 2, 3, 4]),
+ [1, 2, 3, 4],
+ [[1, 2], [3, 4], [5, 6]],
+ pd.Index([1, 2, 3, 4]),
+ np.array([[1, 2], [3, 4], [5, 6]]),
+ 27
+])
+def test_from_frame_error(non_frame):
+ # GH 22420
+ with pytest.raises(TypeError, match='Input must be a DataFrame'):
+ pd.MultiIndex.from_frame(non_frame)
+
+
+def test_from_frame_dtype_fidelity():
+ # GH 22420
+ df = pd.DataFrame(OrderedDict([
+ ('dates', pd.date_range('19910905', periods=6, tz='US/Eastern')),
+ ('a', [1, 1, 1, 2, 2, 2]),
+ ('b', pd.Categorical(['a', 'a', 'b', 'b', 'c', 'c'], ordered=True)),
+ ('c', ['x', 'x', 'y', 'z', 'x', 'y'])
+ ]))
+ original_dtypes = df.dtypes.to_dict()
+
+ expected_mi = pd.MultiIndex.from_arrays([
+ pd.date_range('19910905', periods=6, tz='US/Eastern'),
+ [1, 1, 1, 2, 2, 2],
+ pd.Categorical(['a', 'a', 'b', 'b', 'c', 'c'], ordered=True),
+ ['x', 'x', 'y', 'z', 'x', 'y']
+ ], names=['dates', 'a', 'b', 'c'])
+ mi = pd.MultiIndex.from_frame(df)
+ mi_dtypes = {name: mi.levels[i].dtype for i, name in enumerate(mi.names)}
+
+ tm.assert_index_equal(expected_mi, mi)
+ assert original_dtypes == mi_dtypes
+
+
+@pytest.mark.parametrize('names_in,names_out', [
+ (None, [('L1', 'x'), ('L2', 'y')]),
+ (['x', 'y'], ['x', 'y']),
+])
+def test_from_frame_valid_names(names_in, names_out):
+ # GH 22420
+ df = pd.DataFrame([['a', 'a'], ['a', 'b'], ['b', 'a'], ['b', 'b']],
+ columns=pd.MultiIndex.from_tuples([('L1', 'x'),
+ ('L2', 'y')]))
+ mi = pd.MultiIndex.from_frame(df, names=names_in)
+ assert mi.names == names_out
+
+
+@pytest.mark.parametrize('names_in,names_out', [
+ ('bad_input', ValueError("Names should be list-like for a MultiIndex")),
+ (['a', 'b', 'c'], ValueError("Length of names must match number of "
+ "levels in MultiIndex."))
+])
+def test_from_frame_invalid_names(names_in, names_out):
+ # GH 22420
+ df = pd.DataFrame([['a', 'a'], ['a', 'b'], ['b', 'a'], ['b', 'b']],
+ columns=pd.MultiIndex.from_tuples([('L1', 'x'),
+ ('L2', 'y')]))
+ with pytest.raises(type(names_out), match=names_out.args[0]):
+ pd.MultiIndex.from_frame(df, names=names_in)
diff --git a/pandas/tests/indexes/multi/test_conversion.py b/pandas/tests/indexes/multi/test_conversion.py
index b72fadfeeab72..0c483873a335e 100644
--- a/pandas/tests/indexes/multi/test_conversion.py
+++ b/pandas/tests/indexes/multi/test_conversion.py
@@ -1,5 +1,7 @@
# -*- coding: utf-8 -*-
+from collections import OrderedDict
+
import pytest
import numpy as np
@@ -83,6 +85,39 @@ def test_to_frame():
tm.assert_frame_equal(result, expected)
+def test_to_frame_dtype_fidelity():
+ # GH 22420
+ mi = pd.MultiIndex.from_arrays([
+ pd.date_range('19910905', periods=6, tz='US/Eastern'),
+ [1, 1, 1, 2, 2, 2],
+ pd.Categorical(['a', 'a', 'b', 'b', 'c', 'c'], ordered=True),
+ ['x', 'x', 'y', 'z', 'x', 'y']
+ ], names=['dates', 'a', 'b', 'c'])
+ original_dtypes = {name: mi.levels[i].dtype
+ for i, name in enumerate(mi.names)}
+
+ expected_df = pd.DataFrame(OrderedDict([
+ ('dates', pd.date_range('19910905', periods=6, tz='US/Eastern')),
+ ('a', [1, 1, 1, 2, 2, 2]),
+ ('b', pd.Categorical(['a', 'a', 'b', 'b', 'c', 'c'], ordered=True)),
+ ('c', ['x', 'x', 'y', 'z', 'x', 'y'])
+ ]))
+ df = mi.to_frame(index=False)
+ df_dtypes = df.dtypes.to_dict()
+
+ tm.assert_frame_equal(df, expected_df)
+ assert original_dtypes == df_dtypes
+
+
+def test_to_frame_resulting_column_order():
+ # GH 22420
+ expected = ['z', 0, 'a']
+ mi = pd.MultiIndex.from_arrays([['a', 'b', 'c'], ['x', 'y', 'z'],
+ ['q', 'w', 'e']], names=expected)
+ result = mi.to_frame().columns.tolist()
+ assert result == expected
+
+
def test_to_hierarchical():
index = MultiIndex.from_tuples([(1, 'one'), (1, 'two'), (2, 'one'), (
2, 'two')])
| - [x] closes #22420
- [x] tests added / passed
- [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
This pull request adds the `from_frame` method for creating a MultiIndex from a DataFrame. Along with this feature, the helper method `squeeze` has been added for squeezing a single-level MultiIndex down to a standard Index (analogous to `df.squeeze`).
Additionally, the `to_frame` method was updated to guarantee that the order of the labels is preserved when converting a MultiIndex to a DataFrame. Previously this could not be guaranteed in Python 2.7, because a plain dictionary was used to build the frame. With this change, `from_frame` and `to_frame` are perfectly complementary.
| https://api.github.com/repos/pandas-dev/pandas/pulls/23141 | 2018-10-13T22:20:38Z | 2018-12-09T14:10:15Z | 2018-12-09T14:10:15Z | 2018-12-09T14:10:18Z |
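As context for reviewers, a minimal sketch of the round-trip this PR enables (assuming pandas >= 0.24, where `from_frame` and the order-preserving `to_frame` are available):

```python
import pandas as pd

# Build a two-column frame and lift it into a MultiIndex; the level
# names default to the column labels.
df = pd.DataFrame([['HI', 'Temp'], ['HI', 'Precip'],
                   ['NJ', 'Temp'], ['NJ', 'Precip']],
                  columns=['a', 'b'])
mi = pd.MultiIndex.from_frame(df)
print(list(mi.names))

# to_frame(index=False) now guarantees the original column order,
# so the two methods round-trip cleanly.
round_trip = mi.to_frame(index=False)
print(round_trip.equals(df))
```

The `equals` check is exactly the complementarity property described above: converting to a MultiIndex and back reproduces the original frame, column order included.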
Use align_method in comp_method_FRAME | diff --git a/doc/source/whatsnew/v0.24.0.txt b/doc/source/whatsnew/v0.24.0.txt
index 2c142bdd7185b..c93162177462b 100644
--- a/doc/source/whatsnew/v0.24.0.txt
+++ b/doc/source/whatsnew/v0.24.0.txt
@@ -567,6 +567,88 @@ Previous Behavior:
0
0 NaT
+.. _whatsnew_0240.api.dataframe_cmp_broadcasting:
+
+DataFrame Comparison Operations Broadcasting Changes
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+Previously, the broadcasting behavior of :class:`DataFrame` comparison
+operations (``==``, ``!=``, ...) was inconsistent with the behavior of
+arithmetic operations (``+``, ``-``, ...). The behavior of the comparison
+operations has been changed to match the arithmetic operations in these cases.
+(:issue:`22880`)
+
+The affected cases are:
+
+- operating against a 2-dimensional ``np.ndarray`` with either 1 row or 1 column will now broadcast the same way a ``np.ndarray`` would (:issue:`23000`).
+- a list or tuple with length matching the number of rows in the :class:`DataFrame` will now raise ``ValueError`` instead of operating column-by-column (:issue:`22880`).
+- a list or tuple with length matching the number of columns in the :class:`DataFrame` will now operate row-by-row instead of raising ``ValueError`` (:issue:`22880`).
+
+Previous Behavior:
+
+.. code-block:: ipython
+
+ In [3]: arr = np.arange(6).reshape(3, 2)
+ In [4]: df = pd.DataFrame(arr)
+
+ In [5]: df == arr[[0], :]
+ ...: # comparison previously broadcast where arithmetic would raise
+ Out[5]:
+ 0 1
+ 0 True True
+ 1 False False
+ 2 False False
+ In [6]: df + arr[[0], :]
+ ...
+ ValueError: Unable to coerce to DataFrame, shape must be (3, 2): given (1, 2)
+
+ In [7]: df == (1, 2)
+ ...: # length matches number of columns;
+ ...: # comparison previously raised where arithmetic would broadcast
+ ...
+ ValueError: Invalid broadcasting comparison [(1, 2)] with block values
+ In [8]: df + (1, 2)
+ Out[8]:
+ 0 1
+ 0 1 3
+ 1 3 5
+ 2 5 7
+
+ In [9]: df == (1, 2, 3)
+ ...: # length matches number of rows
+ ...: # comparison previously broadcast where arithmetic would raise
+ Out[9]:
+ 0 1
+ 0 False True
+ 1 True False
+ 2 False False
+ In [10]: df + (1, 2, 3)
+ ...
+ ValueError: Unable to coerce to Series, length must be 2: given 3
+
+*Current Behavior*:
+
+.. ipython:: python
+ :okexcept:
+
+ arr = np.arange(6).reshape(3, 2)
+ df = pd.DataFrame(arr)
+
+.. ipython:: python
+ # Comparison operations and arithmetic operations both broadcast.
+ df == arr[[0], :]
+ df + arr[[0], :]
+
+.. ipython:: python
+ # Comparison operations and arithmetic operations both broadcast.
+ df == (1, 2)
+ df + (1, 2)
+
+.. ipython:: python
+ :okexcept:
+ # Comparison operations and arithmetic operations both raise ValueError.
+ df == (1, 2, 3)
+ df + (1, 2, 3)
+
.. _whatsnew_0240.api.dataframe_arithmetic_broadcasting:
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 8f3873b4299a5..726188cffa4ee 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -4950,13 +4950,8 @@ def _combine_match_columns(self, other, func, level=None, try_cast=True):
return ops.dispatch_to_series(left, right, func, axis="columns")
def _combine_const(self, other, func, errors='raise', try_cast=True):
- if lib.is_scalar(other) or np.ndim(other) == 0:
- return ops.dispatch_to_series(self, other, func)
-
- new_data = self._data.eval(func=func, other=other,
- errors=errors,
- try_cast=try_cast)
- return self._constructor(new_data)
+ assert lib.is_scalar(other) or np.ndim(other) == 0
+ return ops.dispatch_to_series(self, other, func)
def combine(self, other, func, fill_value=None, overwrite=True):
"""
diff --git a/pandas/core/internals/blocks.py b/pandas/core/internals/blocks.py
index 214fcb097f736..3bf6c585fc5df 100644
--- a/pandas/core/internals/blocks.py
+++ b/pandas/core/internals/blocks.py
@@ -1318,145 +1318,6 @@ def shift(self, periods, axis=0, mgr=None):
return [self.make_block(new_values)]
- def eval(self, func, other, errors='raise', try_cast=False, mgr=None):
- """
- evaluate the block; return result block from the result
-
- Parameters
- ----------
- func : how to combine self, other
- other : a ndarray/object
- errors : str, {'raise', 'ignore'}, default 'raise'
- - ``raise`` : allow exceptions to be raised
- - ``ignore`` : suppress exceptions. On error return original object
-
- try_cast : try casting the results to the input type
-
- Returns
- -------
- a new block, the result of the func
- """
- orig_other = other
- values = self.values
-
- other = getattr(other, 'values', other)
-
- # make sure that we can broadcast
- is_transposed = False
- if hasattr(other, 'ndim') and hasattr(values, 'ndim'):
- if values.ndim != other.ndim:
- is_transposed = True
- else:
- if values.shape == other.shape[::-1]:
- is_transposed = True
- elif values.shape[0] == other.shape[-1]:
- is_transposed = True
- else:
- # this is a broadcast error heree
- raise ValueError(
- "cannot broadcast shape [{t_shape}] with "
- "block values [{oth_shape}]".format(
- t_shape=values.T.shape, oth_shape=other.shape))
-
- transf = (lambda x: x.T) if is_transposed else (lambda x: x)
-
- # coerce/transpose the args if needed
- try:
- values, values_mask, other, other_mask = self._try_coerce_args(
- transf(values), other)
- except TypeError:
- block = self.coerce_to_target_dtype(orig_other)
- return block.eval(func, orig_other,
- errors=errors,
- try_cast=try_cast, mgr=mgr)
-
- # get the result, may need to transpose the other
- def get_result(other):
-
- # avoid numpy warning of comparisons again None
- if other is None:
- result = not func.__name__ == 'eq'
-
- # avoid numpy warning of elementwise comparisons to object
- elif is_numeric_v_string_like(values, other):
- result = False
-
- # avoid numpy warning of elementwise comparisons
- elif func.__name__ == 'eq':
- if is_list_like(other) and not isinstance(other, np.ndarray):
- other = np.asarray(other)
-
- # if we can broadcast, then ok
- if values.shape[-1] != other.shape[-1]:
- return False
- result = func(values, other)
- else:
- result = func(values, other)
-
- # mask if needed
- if isinstance(values_mask, np.ndarray) and values_mask.any():
- result = result.astype('float64', copy=False)
- result[values_mask] = np.nan
- if other_mask is True:
- result = result.astype('float64', copy=False)
- result[:] = np.nan
- elif isinstance(other_mask, np.ndarray) and other_mask.any():
- result = result.astype('float64', copy=False)
- result[other_mask.ravel()] = np.nan
-
- return result
-
- # error handler if we have an issue operating with the function
- def handle_error():
-
- if errors == 'raise':
- # The 'detail' variable is defined in outer scope.
- raise TypeError(
- 'Could not operate {other!r} with block values '
- '{detail!s}'.format(other=other, detail=detail)) # noqa
- else:
- # return the values
- result = np.empty(values.shape, dtype='O')
- result.fill(np.nan)
- return result
-
- # get the result
- try:
- with np.errstate(all='ignore'):
- result = get_result(other)
-
- # if we have an invalid shape/broadcast error
- # GH4576, so raise instead of allowing to pass through
- except ValueError as detail:
- raise
- except Exception as detail:
- result = handle_error()
-
- # technically a broadcast error in numpy can 'work' by returning a
- # boolean False
- if not isinstance(result, np.ndarray):
- if not isinstance(result, np.ndarray):
-
- # differentiate between an invalid ndarray-ndarray comparison
- # and an invalid type comparison
- if isinstance(values, np.ndarray) and is_list_like(other):
- raise ValueError(
- 'Invalid broadcasting comparison [{other!r}] with '
- 'block values'.format(other=other))
-
- raise TypeError('Could not compare [{other!r}] '
- 'with block values'.format(other=other))
-
- # transpose if needed
- result = transf(result)
-
- # try to cast if requested
- if try_cast:
- result = self._try_cast_result(result)
-
- result = _block_shape(result, ndim=self.ndim)
- return [self.make_block(result)]
-
def where(self, other, cond, align=True, errors='raise',
try_cast=False, axis=0, transpose=False, mgr=None):
"""
diff --git a/pandas/core/internals/managers.py b/pandas/core/internals/managers.py
index 34727fe575d10..f15f9ce3e8cb6 100644
--- a/pandas/core/internals/managers.py
+++ b/pandas/core/internals/managers.py
@@ -373,9 +373,6 @@ def apply(self, f, axes=None, filter=None, do_integrity_check=False,
align_keys = ['new', 'mask']
else:
align_keys = ['mask']
- elif f == 'eval':
- align_copy = False
- align_keys = ['other']
elif f == 'fillna':
# fillna internally does putmask, maybe it's better to do this
# at mgr, not block level?
@@ -511,9 +508,6 @@ def isna(self, func, **kwargs):
def where(self, **kwargs):
return self.apply('where', **kwargs)
- def eval(self, **kwargs):
- return self.apply('eval', **kwargs)
-
def quantile(self, **kwargs):
return self.reduction('quantile', **kwargs)
diff --git a/pandas/core/ops.py b/pandas/core/ops.py
index 9791354de7ffa..7645787d671c2 100644
--- a/pandas/core/ops.py
+++ b/pandas/core/ops.py
@@ -1929,6 +1929,9 @@ def _comp_method_FRAME(cls, func, special):
@Appender('Wrapper for comparison method {name}'.format(name=op_name))
def f(self, other):
+
+ other = _align_method_FRAME(self, other, axis=None)
+
if isinstance(other, ABCDataFrame):
# Another DataFrame
if not self._indexed_same(other):
diff --git a/pandas/tests/frame/test_arithmetic.py b/pandas/tests/frame/test_arithmetic.py
index d0eb7cd35b268..8156c5ea671c2 100644
--- a/pandas/tests/frame/test_arithmetic.py
+++ b/pandas/tests/frame/test_arithmetic.py
@@ -48,15 +48,20 @@ def test_mixed_comparison(self):
assert result.all().all()
def test_df_boolean_comparison_error(self):
- # GH 4576
- # boolean comparisons with a tuple/list give unexpected results
+ # GH#4576, GH#22880
+ # comparing DataFrame against list/tuple with len(obj) matching
+ # len(df.columns) is supported as of GH#22800
df = pd.DataFrame(np.arange(6).reshape((3, 2)))
- # not shape compatible
- with pytest.raises(ValueError):
- df == (2, 2)
- with pytest.raises(ValueError):
- df == [2, 2]
+ expected = pd.DataFrame([[False, False],
+ [True, False],
+ [False, False]])
+
+ result = df == (2, 2)
+ tm.assert_frame_equal(result, expected)
+
+ result = df == [2, 2]
+ tm.assert_frame_equal(result, expected)
def test_df_float_none_comparison(self):
df = pd.DataFrame(np.random.randn(8, 3), index=range(8),
diff --git a/pandas/tests/frame/test_operators.py b/pandas/tests/frame/test_operators.py
index dff6e1c34ea50..b2781952ea86d 100644
--- a/pandas/tests/frame/test_operators.py
+++ b/pandas/tests/frame/test_operators.py
@@ -752,8 +752,9 @@ def test_comp(func):
result = func(df1, df2)
tm.assert_numpy_array_equal(result.values,
func(df1.values, df2.values))
+
with tm.assert_raises_regex(ValueError,
- 'Wrong number of dimensions'):
+ 'dim must be <= 2'):
func(df1, ndim_5)
result2 = func(self.simple, row)
@@ -804,22 +805,28 @@ def test_boolean_comparison(self):
result = df.values > b
assert_numpy_array_equal(result, expected.values)
- result = df > lst
- assert_frame_equal(result, expected)
+ msg1d = 'Unable to coerce to Series, length must be 2: given 3'
+ msg2d = 'Unable to coerce to DataFrame, shape must be'
+ msg2db = 'operands could not be broadcast together with shapes'
+ with tm.assert_raises_regex(ValueError, msg1d):
+ # wrong shape
+ df > lst
- result = df > tup
- assert_frame_equal(result, expected)
+ with tm.assert_raises_regex(ValueError, msg1d):
+ # wrong shape
+ result = df > tup
+ # broadcasts like ndarray (GH#23000)
result = df > b_r
assert_frame_equal(result, expected)
result = df.values > b_r
assert_numpy_array_equal(result, expected.values)
- with pytest.raises(ValueError):
+ with tm.assert_raises_regex(ValueError, msg2d):
df > b_c
- with pytest.raises(ValueError):
+ with tm.assert_raises_regex(ValueError, msg2db):
df.values > b_c
# ==
@@ -827,19 +834,20 @@ def test_boolean_comparison(self):
result = df == b
assert_frame_equal(result, expected)
- result = df == lst
- assert_frame_equal(result, expected)
+ with tm.assert_raises_regex(ValueError, msg1d):
+ result = df == lst
- result = df == tup
- assert_frame_equal(result, expected)
+ with tm.assert_raises_regex(ValueError, msg1d):
+ result = df == tup
+ # broadcasts like ndarray (GH#23000)
result = df == b_r
assert_frame_equal(result, expected)
result = df.values == b_r
assert_numpy_array_equal(result, expected.values)
- with pytest.raises(ValueError):
+ with tm.assert_raises_regex(ValueError, msg2d):
df == b_c
assert df.values.shape != b_c.shape
@@ -850,11 +858,11 @@ def test_boolean_comparison(self):
expected.index = df.index
expected.columns = df.columns
- result = df == lst
- assert_frame_equal(result, expected)
+ with tm.assert_raises_regex(ValueError, msg1d):
+ result = df == lst
- result = df == tup
- assert_frame_equal(result, expected)
+ with tm.assert_raises_regex(ValueError, msg1d):
+ result = df == tup
def test_combine_generic(self):
df1 = self.frame
| Redux to #22880 @jorisvandenbossche
Closes #20090
Summary:
The broadcasting behavior of DataFrame comparison operations is inconsistent with that of arithmetic operations; this PR makes the comparison operations consistent with arithmetic. | https://api.github.com/repos/pandas-dev/pandas/pulls/23132 | 2018-10-13T16:25:16Z | 2018-10-23T02:55:37Z | 2018-10-23T02:55:37Z | 2018-10-23T09:17:57Z |
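A short sketch of the new, consistent behavior summarized above (assuming a pandas build that includes this change):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(np.arange(6).reshape(3, 2))

# A sequence whose length matches the number of columns now broadcasts
# row-by-row for comparisons, just as it does for arithmetic.
result = df == (2, 3)
print(result)

# A sequence matching the number of rows (but not columns) now raises
# ValueError for comparisons, matching the arithmetic behavior.
try:
    df == (1, 2, 3)
except ValueError as err:
    print(type(err).__name__)
```

Only the middle row of `df` equals `(2, 3)`, so `result` is True in row 1 and False elsewhere, and the length-3 tuple fails with the same `ValueError` that `df + (1, 2, 3)` raises.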
CLN: Flake8 E741 | diff --git a/.pep8speaks.yml b/.pep8speaks.yml
index c3a85d595eb59..ff6989cc4dc20 100644
--- a/.pep8speaks.yml
+++ b/.pep8speaks.yml
@@ -14,7 +14,6 @@ pycodestyle:
- E402, # module level import not at top of file
- E722, # do not use bare except
- E731, # do not assign a lambda expression, use a def
- - E741, # ambiguous variable name 'l'
- C406, # Unnecessary list literal - rewrite as a dict literal.
- C408, # Unnecessary dict call - rewrite as a literal.
- C409 # Unnecessary list passed to tuple() - rewrite as a tuple literal.
diff --git a/pandas/tests/frame/test_constructors.py b/pandas/tests/frame/test_constructors.py
index e2be410d51b88..3a45e0b61184c 100644
--- a/pandas/tests/frame/test_constructors.py
+++ b/pandas/tests/frame/test_constructors.py
@@ -896,8 +896,7 @@ def empty_gen():
def test_constructor_list_of_lists(self):
# GH #484
- l = [[1, 'a'], [2, 'b']]
- df = DataFrame(data=l, columns=["num", "str"])
+ df = DataFrame(data=[[1, 'a'], [2, 'b']], columns=["num", "str"])
assert is_integer_dtype(df['num'])
assert df['str'].dtype == np.object_
@@ -923,9 +922,9 @@ def __getitem__(self, n):
def __len__(self, n):
return self._lst.__len__()
- l = [DummyContainer([1, 'a']), DummyContainer([2, 'b'])]
+ lst_containers = [DummyContainer([1, 'a']), DummyContainer([2, 'b'])]
columns = ["num", "str"]
- result = DataFrame(l, columns=columns)
+ result = DataFrame(lst_containers, columns=columns)
expected = DataFrame([[1, 'a'], [2, 'b']], columns=columns)
tm.assert_frame_equal(result, expected, check_dtype=False)
@@ -1744,14 +1743,14 @@ def test_constructor_categorical(self):
def test_constructor_categorical_series(self):
- l = [1, 2, 3, 1]
- exp = Series(l).astype('category')
- res = Series(l, dtype='category')
+ items = [1, 2, 3, 1]
+ exp = Series(items).astype('category')
+ res = Series(items, dtype='category')
tm.assert_series_equal(res, exp)
- l = ["a", "b", "c", "a"]
- exp = Series(l).astype('category')
- res = Series(l, dtype='category')
+ items = ["a", "b", "c", "a"]
+ exp = Series(items).astype('category')
+ res = Series(items, dtype='category')
tm.assert_series_equal(res, exp)
# insert into frame with different index
diff --git a/pandas/tests/frame/test_indexing.py b/pandas/tests/frame/test_indexing.py
index 2b93af357481a..89b8382dccad2 100644
--- a/pandas/tests/frame/test_indexing.py
+++ b/pandas/tests/frame/test_indexing.py
@@ -2076,9 +2076,9 @@ def test_nested_exception(self):
# a named argument
df = DataFrame({"a": [1, 2, 3], "b": [4, 5, 6],
"c": [7, 8, 9]}).set_index(["a", "b"])
- l = list(df.index)
- l[0] = ["a", "b"]
- df.index = l
+ index = list(df.index)
+ index[0] = ["a", "b"]
+ df.index = index
try:
repr(df)
diff --git a/pandas/tests/frame/test_operators.py b/pandas/tests/frame/test_operators.py
index 433b0f09e13bc..dff6e1c34ea50 100644
--- a/pandas/tests/frame/test_operators.py
+++ b/pandas/tests/frame/test_operators.py
@@ -793,8 +793,8 @@ def test_boolean_comparison(self):
b = np.array([2, 2])
b_r = np.atleast_2d([2, 2])
b_c = b_r.T
- l = (2, 2, 2)
- tup = tuple(l)
+ lst = [2, 2, 2]
+ tup = tuple(lst)
# gt
expected = DataFrame([[False, False], [False, True], [True, True]])
@@ -804,7 +804,7 @@ def test_boolean_comparison(self):
result = df.values > b
assert_numpy_array_equal(result, expected.values)
- result = df > l
+ result = df > lst
assert_frame_equal(result, expected)
result = df > tup
@@ -827,7 +827,7 @@ def test_boolean_comparison(self):
result = df == b
assert_frame_equal(result, expected)
- result = df == l
+ result = df == lst
assert_frame_equal(result, expected)
result = df == tup
@@ -850,7 +850,7 @@ def test_boolean_comparison(self):
expected.index = df.index
expected.columns = df.columns
- result = df == l
+ result = df == lst
assert_frame_equal(result, expected)
result = df == tup
diff --git a/pandas/tests/indexing/test_indexing.py b/pandas/tests/indexing/test_indexing.py
index 0f524ca0aaac5..def0da4fcd6bd 100644
--- a/pandas/tests/indexing/test_indexing.py
+++ b/pandas/tests/indexing/test_indexing.py
@@ -768,34 +768,34 @@ def test_rhs_alignment(self):
# assigned to. covers both uniform data-type & multi-type cases
def run_tests(df, rhs, right):
# label, index, slice
- r, i, s = list('bcd'), [1, 2, 3], slice(1, 4)
- c, j, l = ['joe', 'jolie'], [1, 2], slice(1, 3)
+ lbl_one, idx_one, slice_one = list('bcd'), [1, 2, 3], slice(1, 4)
+ lbl_two, idx_two, slice_two = ['joe', 'jolie'], [1, 2], slice(1, 3)
left = df.copy()
- left.loc[r, c] = rhs
+ left.loc[lbl_one, lbl_two] = rhs
tm.assert_frame_equal(left, right)
left = df.copy()
- left.iloc[i, j] = rhs
+ left.iloc[idx_one, idx_two] = rhs
tm.assert_frame_equal(left, right)
left = df.copy()
with catch_warnings(record=True):
# XXX: finer-filter here.
simplefilter("ignore")
- left.ix[s, l] = rhs
+ left.ix[slice_one, slice_two] = rhs
tm.assert_frame_equal(left, right)
left = df.copy()
with catch_warnings(record=True):
simplefilter("ignore")
- left.ix[i, j] = rhs
+ left.ix[idx_one, idx_two] = rhs
tm.assert_frame_equal(left, right)
left = df.copy()
with catch_warnings(record=True):
simplefilter("ignore")
- left.ix[r, c] = rhs
+ left.ix[lbl_one, lbl_two] = rhs
tm.assert_frame_equal(left, right)
xs = np.arange(20).reshape(5, 4)
diff --git a/pandas/tests/indexing/test_loc.py b/pandas/tests/indexing/test_loc.py
index 9fa705f923c88..6b5ba373eb10b 100644
--- a/pandas/tests/indexing/test_loc.py
+++ b/pandas/tests/indexing/test_loc.py
@@ -668,10 +668,10 @@ def gen_test(l, l2):
index=[0] * l2, columns=columns)])
def gen_expected(df, mask):
- l = len(mask)
+ len_mask = len(mask)
return pd.concat([df.take([0]),
- DataFrame(np.ones((l, len(columns))),
- index=[0] * l,
+ DataFrame(np.ones((len_mask, len(columns))),
+ index=[0] * len_mask,
columns=columns),
df.take(mask[1:])])
diff --git a/pandas/tests/io/test_packers.py b/pandas/tests/io/test_packers.py
index ee45f8828d85e..8b7151620ee0c 100644
--- a/pandas/tests/io/test_packers.py
+++ b/pandas/tests/io/test_packers.py
@@ -512,27 +512,27 @@ def test_multi(self):
for k in self.frame.keys():
assert_frame_equal(self.frame[k], i_rec[k])
- l = tuple([self.frame['float'], self.frame['float'].A,
- self.frame['float'].B, None])
- l_rec = self.encode_decode(l)
- check_arbitrary(l, l_rec)
+ packed_items = tuple([self.frame['float'], self.frame['float'].A,
+ self.frame['float'].B, None])
+ l_rec = self.encode_decode(packed_items)
+ check_arbitrary(packed_items, l_rec)
# this is an oddity in that packed lists will be returned as tuples
- l = [self.frame['float'], self.frame['float']
- .A, self.frame['float'].B, None]
- l_rec = self.encode_decode(l)
+ packed_items = [self.frame['float'], self.frame['float'].A,
+ self.frame['float'].B, None]
+ l_rec = self.encode_decode(packed_items)
assert isinstance(l_rec, tuple)
- check_arbitrary(l, l_rec)
+ check_arbitrary(packed_items, l_rec)
def test_iterator(self):
- l = [self.frame['float'], self.frame['float']
- .A, self.frame['float'].B, None]
+ packed_items = [self.frame['float'], self.frame['float'].A,
+ self.frame['float'].B, None]
with ensure_clean(self.path) as path:
- to_msgpack(path, *l)
+ to_msgpack(path, *packed_items)
for i, packed in enumerate(read_msgpack(path, iterator=True)):
- check_arbitrary(packed, l[i])
+ check_arbitrary(packed, packed_items[i])
def tests_datetimeindex_freq_issue(self):
diff --git a/pandas/tests/io/test_pytables.py b/pandas/tests/io/test_pytables.py
index ea5f1684c0695..4e9da92edcf5e 100644
--- a/pandas/tests/io/test_pytables.py
+++ b/pandas/tests/io/test_pytables.py
@@ -2182,14 +2182,14 @@ def test_unimplemented_dtypes_table_columns(self):
with ensure_clean_store(self.path) as store:
- l = [('date', datetime.date(2001, 1, 2))]
+ dtypes = [('date', datetime.date(2001, 1, 2))]
# py3 ok for unicode
if not compat.PY3:
- l.append(('unicode', u('\\u03c3')))
+ dtypes.append(('unicode', u('\\u03c3')))
# currently not supported dtypes ####
- for n, f in l:
+ for n, f in dtypes:
df = tm.makeDataFrame()
df[n] = f
pytest.raises(
diff --git a/pandas/tests/plotting/common.py b/pandas/tests/plotting/common.py
index 5c88926828fa6..b142ce339879c 100644
--- a/pandas/tests/plotting/common.py
+++ b/pandas/tests/plotting/common.py
@@ -256,8 +256,8 @@ def _check_text_labels(self, texts, expected):
else:
labels = [t.get_text() for t in texts]
assert len(labels) == len(expected)
- for l, e in zip(labels, expected):
- assert l == e
+ for label, e in zip(labels, expected):
+ assert label == e
def _check_ticks_props(self, axes, xlabelsize=None, xrot=None,
ylabelsize=None, yrot=None):
diff --git a/pandas/tests/plotting/test_datetimelike.py b/pandas/tests/plotting/test_datetimelike.py
index de6f6b931987c..c66e03fe7b2a2 100644
--- a/pandas/tests/plotting/test_datetimelike.py
+++ b/pandas/tests/plotting/test_datetimelike.py
@@ -542,8 +542,8 @@ def test_gaps(self):
ts.plot(ax=ax)
lines = ax.get_lines()
assert len(lines) == 1
- l = lines[0]
- data = l.get_xydata()
+ line = lines[0]
+ data = line.get_xydata()
assert isinstance(data, np.ma.core.MaskedArray)
mask = data.mask
assert mask[5:25, 1].all()
@@ -557,8 +557,8 @@ def test_gaps(self):
ax = ts.plot(ax=ax)
lines = ax.get_lines()
assert len(lines) == 1
- l = lines[0]
- data = l.get_xydata()
+ line = lines[0]
+ data = line.get_xydata()
assert isinstance(data, np.ma.core.MaskedArray)
mask = data.mask
assert mask[2:5, 1].all()
@@ -572,8 +572,8 @@ def test_gaps(self):
ser.plot(ax=ax)
lines = ax.get_lines()
assert len(lines) == 1
- l = lines[0]
- data = l.get_xydata()
+ line = lines[0]
+ data = line.get_xydata()
assert isinstance(data, np.ma.core.MaskedArray)
mask = data.mask
assert mask[2:5, 1].all()
@@ -592,8 +592,8 @@ def test_gap_upsample(self):
lines = ax.get_lines()
assert len(lines) == 1
assert len(ax.right_ax.get_lines()) == 1
- l = lines[0]
- data = l.get_xydata()
+ line = lines[0]
+ data = line.get_xydata()
assert isinstance(data, np.ma.core.MaskedArray)
mask = data.mask
@@ -608,8 +608,8 @@ def test_secondary_y(self):
assert hasattr(ax, 'left_ax')
assert not hasattr(ax, 'right_ax')
axes = fig.get_axes()
- l = ax.get_lines()[0]
- xp = Series(l.get_ydata(), l.get_xdata())
+ line = ax.get_lines()[0]
+ xp = Series(line.get_ydata(), line.get_xdata())
assert_series_equal(ser, xp)
assert ax.get_yaxis().get_ticks_position() == 'right'
assert not axes[0].get_yaxis().get_visible()
@@ -639,8 +639,8 @@ def test_secondary_y_ts(self):
assert hasattr(ax, 'left_ax')
assert not hasattr(ax, 'right_ax')
axes = fig.get_axes()
- l = ax.get_lines()[0]
- xp = Series(l.get_ydata(), l.get_xdata()).to_timestamp()
+ line = ax.get_lines()[0]
+ xp = Series(line.get_ydata(), line.get_xdata()).to_timestamp()
assert_series_equal(ser, xp)
assert ax.get_yaxis().get_ticks_position() == 'right'
assert not axes[0].get_yaxis().get_visible()
@@ -950,25 +950,25 @@ def test_from_resampling_area_line_mixed(self):
dtype=np.float64)
expected_y = np.zeros(len(expected_x), dtype=np.float64)
for i in range(3):
- l = ax.lines[i]
- assert PeriodIndex(l.get_xdata()).freq == idxh.freq
- tm.assert_numpy_array_equal(l.get_xdata(orig=False),
+ line = ax.lines[i]
+ assert PeriodIndex(line.get_xdata()).freq == idxh.freq
+ tm.assert_numpy_array_equal(line.get_xdata(orig=False),
expected_x)
# check stacked values are correct
expected_y += low[i].values
- tm.assert_numpy_array_equal(l.get_ydata(orig=False),
+ tm.assert_numpy_array_equal(line.get_ydata(orig=False),
expected_y)
# check high dataframe result
expected_x = idxh.to_period().asi8.astype(np.float64)
expected_y = np.zeros(len(expected_x), dtype=np.float64)
for i in range(3):
- l = ax.lines[3 + i]
- assert PeriodIndex(data=l.get_xdata()).freq == idxh.freq
- tm.assert_numpy_array_equal(l.get_xdata(orig=False),
+ line = ax.lines[3 + i]
+ assert PeriodIndex(data=line.get_xdata()).freq == idxh.freq
+ tm.assert_numpy_array_equal(line.get_xdata(orig=False),
expected_x)
expected_y += high[i].values
- tm.assert_numpy_array_equal(l.get_ydata(orig=False),
+ tm.assert_numpy_array_equal(line.get_ydata(orig=False),
expected_y)
# high to low
@@ -981,12 +981,12 @@ def test_from_resampling_area_line_mixed(self):
expected_x = idxh.to_period().asi8.astype(np.float64)
expected_y = np.zeros(len(expected_x), dtype=np.float64)
for i in range(3):
- l = ax.lines[i]
- assert PeriodIndex(data=l.get_xdata()).freq == idxh.freq
- tm.assert_numpy_array_equal(l.get_xdata(orig=False),
+ line = ax.lines[i]
+ assert PeriodIndex(data=line.get_xdata()).freq == idxh.freq
+ tm.assert_numpy_array_equal(line.get_xdata(orig=False),
expected_x)
expected_y += high[i].values
- tm.assert_numpy_array_equal(l.get_ydata(orig=False),
+ tm.assert_numpy_array_equal(line.get_ydata(orig=False),
expected_y)
# check low dataframe result
@@ -995,12 +995,12 @@ def test_from_resampling_area_line_mixed(self):
dtype=np.float64)
expected_y = np.zeros(len(expected_x), dtype=np.float64)
for i in range(3):
- l = ax.lines[3 + i]
- assert PeriodIndex(data=l.get_xdata()).freq == idxh.freq
- tm.assert_numpy_array_equal(l.get_xdata(orig=False),
+ lines = ax.lines[3 + i]
+ assert PeriodIndex(data=lines.get_xdata()).freq == idxh.freq
+ tm.assert_numpy_array_equal(lines.get_xdata(orig=False),
expected_x)
expected_y += low[i].values
- tm.assert_numpy_array_equal(l.get_ydata(orig=False),
+ tm.assert_numpy_array_equal(lines.get_ydata(orig=False),
expected_y)
@pytest.mark.slow
diff --git a/pandas/tests/plotting/test_frame.py b/pandas/tests/plotting/test_frame.py
index cd297c356d60e..a4f5d8e2f4ff2 100644
--- a/pandas/tests/plotting/test_frame.py
+++ b/pandas/tests/plotting/test_frame.py
@@ -299,16 +299,16 @@ def test_unsorted_index(self):
df = DataFrame({'y': np.arange(100)}, index=np.arange(99, -1, -1),
dtype=np.int64)
ax = df.plot()
- l = ax.get_lines()[0]
- rs = l.get_xydata()
+ lines = ax.get_lines()[0]
+ rs = lines.get_xydata()
rs = Series(rs[:, 1], rs[:, 0], dtype=np.int64, name='y')
tm.assert_series_equal(rs, df.y, check_index_type=False)
tm.close()
df.index = pd.Index(np.arange(99, -1, -1), dtype=np.float64)
ax = df.plot()
- l = ax.get_lines()[0]
- rs = l.get_xydata()
+ lines = ax.get_lines()[0]
+ rs = lines.get_xydata()
rs = Series(rs[:, 1], rs[:, 0], dtype=np.int64, name='y')
tm.assert_series_equal(rs, df.y)
diff --git a/pandas/tests/series/test_analytics.py b/pandas/tests/series/test_analytics.py
index 58a160d17cbe8..517bb9511552c 100644
--- a/pandas/tests/series/test_analytics.py
+++ b/pandas/tests/series/test_analytics.py
@@ -539,9 +539,9 @@ def _check_stat_op(self, name, alternate, check_objects=False,
f(s)
# 2888
- l = [0]
- l.extend(lrange(2 ** 40, 2 ** 40 + 1000))
- s = Series(l, dtype='int64')
+ items = [0]
+ items.extend(lrange(2 ** 40, 2 ** 40 + 1000))
+ s = Series(items, dtype='int64')
assert_almost_equal(float(f(s)), float(alternate(s.values)))
# check date range
@@ -974,12 +974,12 @@ def test_clip_types_and_nulls(self):
for s in sers:
thresh = s[2]
- l = s.clip_lower(thresh)
- u = s.clip_upper(thresh)
- assert l[notna(l)].min() == thresh
- assert u[notna(u)].max() == thresh
- assert list(isna(s)) == list(isna(l))
- assert list(isna(s)) == list(isna(u))
+ lower = s.clip_lower(thresh)
+ upper = s.clip_upper(thresh)
+ assert lower[notna(lower)].min() == thresh
+ assert upper[notna(upper)].max() == thresh
+ assert list(isna(s)) == list(isna(lower))
+ assert list(isna(s)) == list(isna(upper))
def test_clip_with_na_args(self):
"""Should process np.nan argument as None """
diff --git a/pandas/tests/series/test_dtypes.py b/pandas/tests/series/test_dtypes.py
index 63ead2dc7d245..55a1afcb504e7 100644
--- a/pandas/tests/series/test_dtypes.py
+++ b/pandas/tests/series/test_dtypes.py
@@ -243,15 +243,15 @@ def test_astype_categories_deprecation(self):
tm.assert_series_equal(result, expected)
def test_astype_from_categorical(self):
- l = ["a", "b", "c", "a"]
- s = Series(l)
- exp = Series(Categorical(l))
+ items = ["a", "b", "c", "a"]
+ s = Series(items)
+ exp = Series(Categorical(items))
res = s.astype('category')
tm.assert_series_equal(res, exp)
- l = [1, 2, 3, 1]
- s = Series(l)
- exp = Series(Categorical(l))
+ items = [1, 2, 3, 1]
+ s = Series(items)
+ exp = Series(Categorical(items))
res = s.astype('category')
tm.assert_series_equal(res, exp)
@@ -270,13 +270,13 @@ def test_astype_from_categorical(self):
tm.assert_frame_equal(exp_df, df)
# with keywords
- l = ["a", "b", "c", "a"]
- s = Series(l)
- exp = Series(Categorical(l, ordered=True))
+ lst = ["a", "b", "c", "a"]
+ s = Series(lst)
+ exp = Series(Categorical(lst, ordered=True))
res = s.astype(CategoricalDtype(None, ordered=True))
tm.assert_series_equal(res, exp)
- exp = Series(Categorical(l, categories=list('abcdef'), ordered=True))
+ exp = Series(Categorical(lst, categories=list('abcdef'), ordered=True))
res = s.astype(CategoricalDtype(list('abcdef'), ordered=True))
tm.assert_series_equal(res, exp)
diff --git a/pandas/tests/test_algos.py b/pandas/tests/test_algos.py
index b2ddbf715b480..d2b7979aed98d 100644
--- a/pandas/tests/test_algos.py
+++ b/pandas/tests/test_algos.py
@@ -372,8 +372,8 @@ def test_uint64_overflow(self):
tm.assert_numpy_array_equal(algos.unique(s), exp)
def test_nan_in_object_array(self):
- l = ['a', np.nan, 'c', 'c']
- result = pd.unique(l)
+ duplicated_items = ['a', np.nan, 'c', 'c']
+ result = pd.unique(duplicated_items)
expected = np.array(['a', np.nan, 'c'], dtype=object)
tm.assert_numpy_array_equal(result, expected)
diff --git a/pandas/tests/test_multilevel.py b/pandas/tests/test_multilevel.py
index 1718c6beaef55..0dbbe60283cac 100644
--- a/pandas/tests/test_multilevel.py
+++ b/pandas/tests/test_multilevel.py
@@ -2062,14 +2062,14 @@ def test_assign_index_sequences(self):
df = DataFrame({"a": [1, 2, 3],
"b": [4, 5, 6],
"c": [7, 8, 9]}).set_index(["a", "b"])
- l = list(df.index)
- l[0] = ("faz", "boo")
- df.index = l
+ index = list(df.index)
+ index[0] = ("faz", "boo")
+ df.index = index
repr(df)
# this travels an improper code path
- l[0] = ["faz", "boo"]
- df.index = l
+ index[0] = ["faz", "boo"]
+ df.index = index
repr(df)
def test_tuples_have_na(self):
diff --git a/setup.cfg b/setup.cfg
index 29392d7f15345..818d68176cb88 100644
--- a/setup.cfg
+++ b/setup.cfg
@@ -18,7 +18,6 @@ ignore =
E402, # module level import not at top of file
E722, # do not use bare except
E731, # do not assign a lambda expression, use a def
- E741, # ambiguous variable name 'l'
C406, # Unnecessary list literal - rewrite as a dict literal.
C408, # Unnecessary dict call - rewrite as a literal.
C409 # Unnecessary list passed to tuple() - rewrite as a tuple literal.
| - [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- xref: E741 was mentioned on this issue: #22122
- Follow-up to this PR: #22913
`flake8 --select E741` now runs clean
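For context, a minimal sketch of what E741 flags and how this PR fixes it (mirroring the `test_analytics.py` hunk above): single-letter `l` is easily mistaken for `1` or `I`, and the remedy throughout is a descriptive rename with identical behavior:

```python
# Before: flake8 reports E741 "ambiguous variable name 'l'"
# l = [0]
# l.extend(range(2 ** 40, 2 ** 40 + 3))

# After: same logic, unambiguous name, E741-clean
items = [0]
items.extend(range(2 ** 40, 2 ** 40 + 3))
print(len(items))  # → 4
```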
| https://api.github.com/repos/pandas-dev/pandas/pulls/23131 | 2018-10-13T16:01:57Z | 2018-10-14T12:34:42Z | 2018-10-14T12:34:42Z | 2018-10-15T12:32:21Z |
DOC: Fix creation of [source] links in the doc creation | diff --git a/doc/source/conf.py b/doc/source/conf.py
index 29f947e1144ea..f0cf3a977416f 100644
--- a/doc/source/conf.py
+++ b/doc/source/conf.py
@@ -569,7 +569,11 @@ def linkcode_resolve(domain, info):
return None
try:
- fn = inspect.getsourcefile(obj)
+ # inspect.unwrap() was added in Python version 3.4
+ if sys.version_info >= (3, 5):
+ fn = inspect.getsourcefile(inspect.unwrap(obj))
+ else:
+ fn = inspect.getsourcefile(obj)
except:
fn = None
if not fn:
| See this issue: https://github.com/pandas-dev/pandas/issues/23046
A simple fix is to add `inspect.unwrap(obj)` before `getsourcefile`. This follows the `__wrapped__` attributes of the functions "all the way to the bottom" and then returns the object found.
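As a hedged sketch of the behavior relied on here (the decorator and function names below are made up for illustration): `functools.wraps` sets `wrapper.__wrapped__`, and `inspect.unwrap` follows that chain back to the original object, so `getsourcefile` can then locate the real definition rather than the decorator's module.

```python
import functools
import inspect

def deco(func):
    # A typical decorator; functools.wraps sets wrapper.__wrapped__ = func.
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs)
    return wrapper

@deco
def example():
    """A plain function hidden behind a decorator."""

# In pandas, the wrapper and the wrapped function can live in different
# files, so getsourcefile(example) would point at the decorator's module.
# unwrap recovers the original function object first.
original = inspect.unwrap(example)
print(original is example.__wrapped__)  # → True
```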
One problem with this is that the `unwrap` function was only introduced in Python 3.4. This means that compiling the documentation will now fail on Python versions older than that. I do not know if this is acceptable. | https://api.github.com/repos/pandas-dev/pandas/pulls/23129 | 2018-10-13T15:17:10Z | 2018-11-04T20:32:20Z | 2018-11-04T20:32:20Z | 2018-11-04T20:32:27Z |
Bump Hypothesis timeout from 200ms to 5s. | diff --git a/pandas/conftest.py b/pandas/conftest.py
index e84657a79b51a..b2870f8fd9ece 100644
--- a/pandas/conftest.py
+++ b/pandas/conftest.py
@@ -13,6 +13,11 @@
hypothesis.settings.register_profile(
"ci",
+ # Hypothesis timing checks are tuned for scalars by default, so we bump
+ # them from 200ms to 5 secs per test case as the global default. If this
+ # is too short for a specific test, (a) try to make it faster, and (b)
+ # if it really is slow add `@settings(timeout=...)` with a working value.
+ timeout=5000,
suppress_health_check=(hypothesis.HealthCheck.too_slow,)
)
hypothesis.settings.load_profile("ci")
| Closes #23121 - this is a minor config change to account for the fact that Pandas tests are slower than equivalent tests for scalars simply due to data size. CC @jorisvandenbossche @TomAugspurger | https://api.github.com/repos/pandas-dev/pandas/pulls/23127 | 2018-10-13T13:26:29Z | 2018-10-13T14:09:23Z | 2018-10-13T14:09:23Z | 2018-10-13T14:35:12Z |
Revert "Use align_method in comp_method_FRAME" | diff --git a/doc/source/whatsnew/v0.24.0.txt b/doc/source/whatsnew/v0.24.0.txt
index 95eecba5b5ef6..5d7f45b92b75d 100644
--- a/doc/source/whatsnew/v0.24.0.txt
+++ b/doc/source/whatsnew/v0.24.0.txt
@@ -510,88 +510,6 @@ Previous Behavior:
0
0 NaT
-.. _whatsnew_0240.api.dataframe_cmp_broadcasting:
-
-DataFrame Comparison Operations Broadcasting Changes
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-Previously, the broadcasting behavior of :class:`DataFrame` comparison
-operations (``==``, ``!=``, ...) was inconsistent with the behavior of
-arithmetic operations (``+``, ``-``, ...). The behavior of the comparison
-operations has been changed to match the arithmetic operations in these cases.
-(:issue:`22880`)
-
-The affected cases are:
-
-- operating against a 2-dimensional ``np.ndarray`` with either 1 row or 1 column will now broadcast the same way a ``np.ndarray`` would (:issue:`23000`).
-- a list or tuple with length matching the number of rows in the :class:`DataFrame` will now raise ``ValueError`` instead of operating column-by-column (:issue:`22880`.
-- a list or tuple with length matching the number of columns in the :class:`DataFrame` will now operate row-by-row instead of raising ``ValueError`` (:issue:`22880`).
-
-Previous Behavior:
-
-.. code-block:: ipython
-
- In [3]: arr = np.arange(6).reshape(3, 2)
- In [4]: df = pd.DataFrame(arr)
-
- In [5]: df == arr[[0], :]
- ...: # comparison previously broadcast where arithmetic would raise
- Out[5]:
- 0 1
- 0 True True
- 1 False False
- 2 False False
- In [6]: df + arr[[0], :]
- ...
- ValueError: Unable to coerce to DataFrame, shape must be (3, 2): given (1, 2)
-
- In [7]: df == (1, 2)
- ...: # length matches number of columns;
- ...: # comparison previously raised where arithmetic would broadcast
- ...
- ValueError: Invalid broadcasting comparison [(1, 2)] with block values
- In [8]: df + (1, 2)
- Out[8]:
- 0 1
- 0 1 3
- 1 3 5
- 2 5 7
-
- In [9]: df == (1, 2, 3)
- ...: # length matches number of rows
- ...: # comparison previously broadcast where arithmetic would raise
- Out[9]:
- 0 1
- 0 False True
- 1 True False
- 2 False False
- In [10]: df + (1, 2, 3)
- ...
- ValueError: Unable to coerce to Series, length must be 2: given 3
-
-*Current Behavior*:
-
-.. ipython:: python
- :okexcept:
-
- arr = np.arange(6).reshape(3, 2)
- df = pd.DataFrame(arr)
-
-.. ipython:: python
- # Comparison operations and arithmetic operations both broadcast.
- df == arr[[0], :]
- df + arr[[0], :]
-
-.. ipython:: python
- # Comparison operations and arithmetic operations both broadcast.
- df == (1, 2)
- df + (1, 2)
-
-.. ipython:: python
- :okexcept:
- # Comparison operations and arithmetic opeartions both raise ValueError.
- df == (1, 2, 3)
- df + (1, 2, 3)
-
.. _whatsnew_0240.api.dataframe_arithmetic_broadcasting:
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index d612e96ec0db2..e9be7a3e9afb8 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -4948,8 +4948,13 @@ def _combine_match_columns(self, other, func, level=None, try_cast=True):
return ops.dispatch_to_series(left, right, func, axis="columns")
def _combine_const(self, other, func, errors='raise', try_cast=True):
- assert lib.is_scalar(other) or np.ndim(other) == 0
- return ops.dispatch_to_series(self, other, func)
+ if lib.is_scalar(other) or np.ndim(other) == 0:
+ return ops.dispatch_to_series(self, other, func)
+
+ new_data = self._data.eval(func=func, other=other,
+ errors=errors,
+ try_cast=try_cast)
+ return self._constructor(new_data)
def combine(self, other, func, fill_value=None, overwrite=True):
"""
diff --git a/pandas/core/internals/blocks.py b/pandas/core/internals/blocks.py
index 290de0539db83..93930fd844b95 100644
--- a/pandas/core/internals/blocks.py
+++ b/pandas/core/internals/blocks.py
@@ -1313,6 +1313,145 @@ def shift(self, periods, axis=0, mgr=None):
return [self.make_block(new_values)]
+ def eval(self, func, other, errors='raise', try_cast=False, mgr=None):
+ """
+ evaluate the block; return result block from the result
+
+ Parameters
+ ----------
+ func : how to combine self, other
+ other : a ndarray/object
+ errors : str, {'raise', 'ignore'}, default 'raise'
+ - ``raise`` : allow exceptions to be raised
+ - ``ignore`` : suppress exceptions. On error return original object
+
+ try_cast : try casting the results to the input type
+
+ Returns
+ -------
+ a new block, the result of the func
+ """
+ orig_other = other
+ values = self.values
+
+ other = getattr(other, 'values', other)
+
+ # make sure that we can broadcast
+ is_transposed = False
+ if hasattr(other, 'ndim') and hasattr(values, 'ndim'):
+ if values.ndim != other.ndim:
+ is_transposed = True
+ else:
+ if values.shape == other.shape[::-1]:
+ is_transposed = True
+ elif values.shape[0] == other.shape[-1]:
+ is_transposed = True
+ else:
+ # this is a broadcast error heree
+ raise ValueError(
+ "cannot broadcast shape [{t_shape}] with "
+ "block values [{oth_shape}]".format(
+ t_shape=values.T.shape, oth_shape=other.shape))
+
+ transf = (lambda x: x.T) if is_transposed else (lambda x: x)
+
+ # coerce/transpose the args if needed
+ try:
+ values, values_mask, other, other_mask = self._try_coerce_args(
+ transf(values), other)
+ except TypeError:
+ block = self.coerce_to_target_dtype(orig_other)
+ return block.eval(func, orig_other,
+ errors=errors,
+ try_cast=try_cast, mgr=mgr)
+
+ # get the result, may need to transpose the other
+ def get_result(other):
+
+ # avoid numpy warning of comparisons again None
+ if other is None:
+ result = not func.__name__ == 'eq'
+
+ # avoid numpy warning of elementwise comparisons to object
+ elif is_numeric_v_string_like(values, other):
+ result = False
+
+ # avoid numpy warning of elementwise comparisons
+ elif func.__name__ == 'eq':
+ if is_list_like(other) and not isinstance(other, np.ndarray):
+ other = np.asarray(other)
+
+ # if we can broadcast, then ok
+ if values.shape[-1] != other.shape[-1]:
+ return False
+ result = func(values, other)
+ else:
+ result = func(values, other)
+
+ # mask if needed
+ if isinstance(values_mask, np.ndarray) and values_mask.any():
+ result = result.astype('float64', copy=False)
+ result[values_mask] = np.nan
+ if other_mask is True:
+ result = result.astype('float64', copy=False)
+ result[:] = np.nan
+ elif isinstance(other_mask, np.ndarray) and other_mask.any():
+ result = result.astype('float64', copy=False)
+ result[other_mask.ravel()] = np.nan
+
+ return result
+
+ # error handler if we have an issue operating with the function
+ def handle_error():
+
+ if errors == 'raise':
+ # The 'detail' variable is defined in outer scope.
+ raise TypeError(
+ 'Could not operate {other!r} with block values '
+ '{detail!s}'.format(other=other, detail=detail)) # noqa
+ else:
+ # return the values
+ result = np.empty(values.shape, dtype='O')
+ result.fill(np.nan)
+ return result
+
+ # get the result
+ try:
+ with np.errstate(all='ignore'):
+ result = get_result(other)
+
+ # if we have an invalid shape/broadcast error
+ # GH4576, so raise instead of allowing to pass through
+ except ValueError as detail:
+ raise
+ except Exception as detail:
+ result = handle_error()
+
+ # technically a broadcast error in numpy can 'work' by returning a
+ # boolean False
+ if not isinstance(result, np.ndarray):
+ if not isinstance(result, np.ndarray):
+
+ # differentiate between an invalid ndarray-ndarray comparison
+ # and an invalid type comparison
+ if isinstance(values, np.ndarray) and is_list_like(other):
+ raise ValueError(
+ 'Invalid broadcasting comparison [{other!r}] with '
+ 'block values'.format(other=other))
+
+ raise TypeError('Could not compare [{other!r}] '
+ 'with block values'.format(other=other))
+
+ # transpose if needed
+ result = transf(result)
+
+ # try to cast if requested
+ if try_cast:
+ result = self._try_cast_result(result)
+
+ result = _block_shape(result, ndim=self.ndim)
+ return [self.make_block(result)]
+
def where(self, other, cond, align=True, errors='raise',
try_cast=False, axis=0, transpose=False, mgr=None):
"""
diff --git a/pandas/core/internals/managers.py b/pandas/core/internals/managers.py
index 1cbc09b4ca51a..2f29f1ae2509f 100644
--- a/pandas/core/internals/managers.py
+++ b/pandas/core/internals/managers.py
@@ -373,6 +373,9 @@ def apply(self, f, axes=None, filter=None, do_integrity_check=False,
align_keys = ['new', 'mask']
else:
align_keys = ['mask']
+ elif f == 'eval':
+ align_copy = False
+ align_keys = ['other']
elif f == 'fillna':
# fillna internally does putmask, maybe it's better to do this
# at mgr, not block level?
@@ -508,6 +511,9 @@ def isna(self, func, **kwargs):
def where(self, **kwargs):
return self.apply('where', **kwargs)
+ def eval(self, **kwargs):
+ return self.apply('eval', **kwargs)
+
def quantile(self, **kwargs):
return self.reduction('quantile', **kwargs)
diff --git a/pandas/core/ops.py b/pandas/core/ops.py
index e894c763ebe03..20559bca9caed 100644
--- a/pandas/core/ops.py
+++ b/pandas/core/ops.py
@@ -1934,9 +1934,6 @@ def _comp_method_FRAME(cls, func, special):
@Appender('Wrapper for comparison method {name}'.format(name=op_name))
def f(self, other):
-
- other = _align_method_FRAME(self, other, axis=None)
-
if isinstance(other, ABCDataFrame):
# Another DataFrame
if not self._indexed_same(other):
diff --git a/pandas/tests/frame/test_arithmetic.py b/pandas/tests/frame/test_arithmetic.py
index 8156c5ea671c2..d0eb7cd35b268 100644
--- a/pandas/tests/frame/test_arithmetic.py
+++ b/pandas/tests/frame/test_arithmetic.py
@@ -48,20 +48,15 @@ def test_mixed_comparison(self):
assert result.all().all()
def test_df_boolean_comparison_error(self):
- # GH#4576, GH#22880
- # comparing DataFrame against list/tuple with len(obj) matching
- # len(df.columns) is supported as of GH#22800
+ # GH 4576
+ # boolean comparisons with a tuple/list give unexpected results
df = pd.DataFrame(np.arange(6).reshape((3, 2)))
- expected = pd.DataFrame([[False, False],
- [True, False],
- [False, False]])
-
- result = df == (2, 2)
- tm.assert_frame_equal(result, expected)
-
- result = df == [2, 2]
- tm.assert_frame_equal(result, expected)
+ # not shape compatible
+ with pytest.raises(ValueError):
+ df == (2, 2)
+ with pytest.raises(ValueError):
+ df == [2, 2]
def test_df_float_none_comparison(self):
df = pd.DataFrame(np.random.randn(8, 3), index=range(8),
diff --git a/pandas/tests/frame/test_operators.py b/pandas/tests/frame/test_operators.py
index 9c0ef259ab686..433b0f09e13bc 100644
--- a/pandas/tests/frame/test_operators.py
+++ b/pandas/tests/frame/test_operators.py
@@ -752,9 +752,8 @@ def test_comp(func):
result = func(df1, df2)
tm.assert_numpy_array_equal(result.values,
func(df1.values, df2.values))
-
with tm.assert_raises_regex(ValueError,
- 'dim must be <= 2'):
+ 'Wrong number of dimensions'):
func(df1, ndim_5)
result2 = func(self.simple, row)
@@ -805,28 +804,22 @@ def test_boolean_comparison(self):
result = df.values > b
assert_numpy_array_equal(result, expected.values)
- msg1d = 'Unable to coerce to Series, length must be 2: given 3'
- msg2d = 'Unable to coerce to DataFrame, shape must be'
- msg2db = 'operands could not be broadcast together with shapes'
- with tm.assert_raises_regex(ValueError, msg1d):
- # wrong shape
- df > l
+ result = df > l
+ assert_frame_equal(result, expected)
- with tm.assert_raises_regex(ValueError, msg1d):
- # wrong shape
- result = df > tup
+ result = df > tup
+ assert_frame_equal(result, expected)
- # broadcasts like ndarray (GH#23000)
result = df > b_r
assert_frame_equal(result, expected)
result = df.values > b_r
assert_numpy_array_equal(result, expected.values)
- with tm.assert_raises_regex(ValueError, msg2d):
+ with pytest.raises(ValueError):
df > b_c
- with tm.assert_raises_regex(ValueError, msg2db):
+ with pytest.raises(ValueError):
df.values > b_c
# ==
@@ -834,20 +827,19 @@ def test_boolean_comparison(self):
result = df == b
assert_frame_equal(result, expected)
- with tm.assert_raises_regex(ValueError, msg1d):
- result = df == l
+ result = df == l
+ assert_frame_equal(result, expected)
- with tm.assert_raises_regex(ValueError, msg1d):
- result = df == tup
+ result = df == tup
+ assert_frame_equal(result, expected)
- # broadcasts like ndarray (GH#23000)
result = df == b_r
assert_frame_equal(result, expected)
result = df.values == b_r
assert_numpy_array_equal(result, expected.values)
- with tm.assert_raises_regex(ValueError, msg2d):
+ with pytest.raises(ValueError):
df == b_c
assert df.values.shape != b_c.shape
@@ -858,11 +850,11 @@ def test_boolean_comparison(self):
expected.index = df.index
expected.columns = df.columns
- with tm.assert_raises_regex(ValueError, msg1d):
- result = df == l
+ result = df == l
+ assert_frame_equal(result, expected)
- with tm.assert_raises_regex(ValueError, msg1d):
- result = df == tup
+ result = df == tup
+ assert_frame_equal(result, expected)
def test_combine_generic(self):
df1 = self.frame
| Reverts pandas-dev/pandas#22880 | https://api.github.com/repos/pandas-dev/pandas/pulls/23120 | 2018-10-13T07:58:54Z | 2018-10-13T07:59:01Z | 2018-10-13T07:59:01Z | 2018-10-13T07:59:04Z |
Remove offset/DTI caching (disabled since 0.14 | diff --git a/doc/source/whatsnew/v0.24.0.txt b/doc/source/whatsnew/v0.24.0.txt
index 5811a8c4c45ff..dde098be2e5ae 100644
--- a/doc/source/whatsnew/v0.24.0.txt
+++ b/doc/source/whatsnew/v0.24.0.txt
@@ -688,6 +688,7 @@ Other API Changes
- :meth:`DataFrame.corr` and :meth:`Series.corr` now raise a ``ValueError`` along with a helpful error message instead of a ``KeyError`` when supplied with an invalid method (:issue:`22298`)
- :meth:`shift` will now always return a copy, instead of the previous behaviour of returning self when shifting by 0 (:issue:`22397`)
- Slicing a single row of a DataFrame with multiple ExtensionArrays of the same type now preserves the dtype, rather than coercing to object (:issue:`22784`)
+- :class:`DateOffset` attribute `_cacheable` and method `_should_cache` have been removed (:issue:`23118`)
.. _whatsnew_0240.deprecations:
diff --git a/pandas/_libs/tslibs/offsets.pyx b/pandas/_libs/tslibs/offsets.pyx
index 4d611f89bca9c..393c2cdba8568 100644
--- a/pandas/_libs/tslibs/offsets.pyx
+++ b/pandas/_libs/tslibs/offsets.pyx
@@ -282,11 +282,6 @@ class ApplyTypeError(TypeError):
pass
-# TODO: unused. remove?
-class CacheableOffset(object):
- _cacheable = True
-
-
# ---------------------------------------------------------------------
# Base Classes
@@ -296,8 +291,6 @@ class _BaseOffset(object):
and will (after pickle errors are resolved) go into a cdef class.
"""
_typ = "dateoffset"
- _normalize_cache = True
- _cacheable = False
_day_opt = None
_attributes = frozenset(['n', 'normalize'])
@@ -386,10 +379,6 @@ class _BaseOffset(object):
# that allows us to use methods that can go in a `cdef class`
return self * 1
- # TODO: this is never true. fix it or get rid of it
- def _should_cache(self):
- return self.isAnchored() and self._cacheable
-
def __repr__(self):
className = getattr(self, '_outputName', type(self).__name__)
diff --git a/pandas/core/arrays/datetimes.py b/pandas/core/arrays/datetimes.py
index 4c75927135b22..6cc4922788cf3 100644
--- a/pandas/core/arrays/datetimes.py
+++ b/pandas/core/arrays/datetimes.py
@@ -13,7 +13,7 @@
resolution as libresolution)
from pandas.util._decorators import cache_readonly
-from pandas.errors import PerformanceWarning, AbstractMethodError
+from pandas.errors import PerformanceWarning
from pandas import compat
from pandas.core.dtypes.common import (
@@ -268,27 +268,22 @@ def _generate_range(cls, start, end, periods, freq, tz=None,
end, end.tz, start.tz, freq, tz
)
if freq is not None:
- if cls._use_cached_range(freq, _normalized, start, end):
- # Currently always False; never hit
- # Should be reimplemented as a part of GH#17914
- index = cls._cached_range(start, end, periods=periods,
- freq=freq)
- else:
- index = _generate_regular_range(cls, start, end, periods, freq)
-
- if tz is not None and getattr(index, 'tz', None) is None:
- arr = conversion.tz_localize_to_utc(
- ensure_int64(index.values),
- tz, ambiguous=ambiguous)
-
- index = cls(arr)
-
- # index is localized datetime64 array -> have to convert
- # start/end as well to compare
- if start is not None:
- start = start.tz_localize(tz).asm8
- if end is not None:
- end = end.tz_localize(tz).asm8
+ # TODO: consider re-implementing _cached_range; GH#17914
+ index = _generate_regular_range(cls, start, end, periods, freq)
+
+ if tz is not None and getattr(index, 'tz', None) is None:
+ arr = conversion.tz_localize_to_utc(
+ ensure_int64(index.values),
+ tz, ambiguous=ambiguous)
+
+ index = cls(arr)
+
+ # index is localized datetime64 array -> have to convert
+ # start/end as well to compare
+ if start is not None:
+ start = start.tz_localize(tz).asm8
+ if end is not None:
+ end = end.tz_localize(tz).asm8
else:
# Create a linearly spaced date_range in local time
arr = np.linspace(start.value, end.value, periods)
@@ -303,16 +298,6 @@ def _generate_range(cls, start, end, periods, freq, tz=None,
return cls._simple_new(index.values, freq=freq, tz=tz)
- @classmethod
- def _use_cached_range(cls, freq, _normalized, start, end):
- # DatetimeArray is mutable, so is not cached
- return False
-
- @classmethod
- def _cached_range(cls, start=None, end=None,
- periods=None, freq=None, **kwargs):
- raise AbstractMethodError(cls)
-
# -----------------------------------------------------------------
# Descriptive Properties
diff --git a/pandas/core/indexes/datetimes.py b/pandas/core/indexes/datetimes.py
index 70140d2d9a432..e0219acc115b5 100644
--- a/pandas/core/indexes/datetimes.py
+++ b/pandas/core/indexes/datetimes.py
@@ -40,7 +40,7 @@
DatelikeOps, TimelikeOps, DatetimeIndexOpsMixin,
wrap_field_accessor, wrap_array_method)
from pandas.tseries.offsets import (
- generate_range, CDay, prefix_mapping)
+ CDay, prefix_mapping)
from pandas.core.tools.timedeltas import to_timedelta
from pandas.util._decorators import Appender, cache_readonly, Substitution
@@ -326,13 +326,6 @@ def _generate_range(cls, start, end, periods, name=None, freq=None,
out.name = name
return out
- @classmethod
- def _use_cached_range(cls, freq, _normalized, start, end):
- # Note: This always returns False
- return (freq._should_cache() and
- not (freq._normalize_cache and not _normalized) and
- _naive_in_cache_range(start, end))
-
def _convert_for_op(self, value):
""" Convert value to be insertable to ndarray """
if self._has_same_tz(value):
@@ -410,71 +403,6 @@ def nbytes(self):
# for TZ-aware
return self._ndarray_values.nbytes
- @classmethod
- def _cached_range(cls, start=None, end=None, periods=None, freq=None,
- name=None):
- if start is None and end is None:
- # I somewhat believe this should never be raised externally
- raise TypeError('Must specify either start or end.')
- if start is not None:
- start = Timestamp(start)
- if end is not None:
- end = Timestamp(end)
- if (start is None or end is None) and periods is None:
- raise TypeError(
- 'Must either specify period or provide both start and end.')
-
- if freq is None:
- # This can't happen with external-facing code
- raise TypeError('Must provide freq.')
-
- drc = _daterange_cache
- if freq not in _daterange_cache:
- xdr = generate_range(offset=freq, start=_CACHE_START,
- end=_CACHE_END)
-
- arr = tools.to_datetime(list(xdr), box=False)
-
- cachedRange = DatetimeIndex._simple_new(arr)
- cachedRange.freq = freq
- cachedRange = cachedRange.tz_localize(None)
- cachedRange.name = None
- drc[freq] = cachedRange
- else:
- cachedRange = drc[freq]
-
- if start is None:
- if not isinstance(end, Timestamp):
- raise AssertionError('end must be an instance of Timestamp')
-
- end = freq.rollback(end)
-
- endLoc = cachedRange.get_loc(end) + 1
- startLoc = endLoc - periods
- elif end is None:
- if not isinstance(start, Timestamp):
- raise AssertionError('start must be an instance of Timestamp')
-
- start = freq.rollforward(start)
-
- startLoc = cachedRange.get_loc(start)
- endLoc = startLoc + periods
- else:
- if not freq.onOffset(start):
- start = freq.rollforward(start)
-
- if not freq.onOffset(end):
- end = freq.rollback(end)
-
- startLoc = cachedRange.get_loc(start)
- endLoc = cachedRange.get_loc(end) + 1
-
- indexSlice = cachedRange[startLoc:endLoc]
- indexSlice.name = name
- indexSlice.freq = freq
-
- return indexSlice
-
def _mpl_repr(self):
# how to represent ourselves to matplotlib
return libts.ints_to_pydatetime(self.asi8, self.tz)
@@ -832,22 +760,19 @@ def _fast_union(self, other):
else:
left, right = other, self
- left_start, left_end = left[0], left[-1]
+ left_end = left[-1]
right_end = right[-1]
- if not self.freq._should_cache():
- # concatenate dates
- if left_end < right_end:
- loc = right.searchsorted(left_end, side='right')
- right_chunk = right.values[loc:]
- dates = _concat._concat_compat((left.values, right_chunk))
- return self._shallow_copy(dates)
- else:
- return left
+ # TODO: consider re-implementing freq._should_cache for fastpath
+
+ # concatenate dates
+ if left_end < right_end:
+ loc = right.searchsorted(left_end, side='right')
+ right_chunk = right.values[loc:]
+ dates = _concat._concat_compat((left.values, right_chunk))
+ return self._shallow_copy(dates)
else:
- return type(self)(start=left_start,
- end=max(left_end, right_end),
- freq=left.freq)
+ return left
def _wrap_union_result(self, other, result):
name = self.name if self.name == other.name else None
@@ -1724,21 +1649,6 @@ def cdate_range(start=None, end=None, periods=None, freq='C', tz=None,
closed=closed, **kwargs)
-_CACHE_START = Timestamp(datetime(1950, 1, 1))
-_CACHE_END = Timestamp(datetime(2030, 1, 1))
-
-_daterange_cache = {}
-
-
-def _naive_in_cache_range(start, end):
- if start is None or end is None:
- return False
- else:
- if start.tzinfo is not None or end.tzinfo is not None:
- return False
- return start > _CACHE_START and end < _CACHE_END
-
-
def _time_to_micros(time):
seconds = time.hour * 60 * 60 + 60 * time.minute + time.second
return 1000000 * seconds + time.microsecond
diff --git a/pandas/tests/indexes/datetimes/test_date_range.py b/pandas/tests/indexes/datetimes/test_date_range.py
index e0caf671fc390..7481c4a710083 100644
--- a/pandas/tests/indexes/datetimes/test_date_range.py
+++ b/pandas/tests/indexes/datetimes/test_date_range.py
@@ -616,23 +616,6 @@ def test_naive_aware_conflicts(self):
with tm.assert_raises_regex(TypeError, msg):
aware.join(naive)
- def test_cached_range(self):
- DatetimeIndex._cached_range(START, END, freq=BDay())
- DatetimeIndex._cached_range(START, periods=20, freq=BDay())
- DatetimeIndex._cached_range(end=START, periods=20, freq=BDay())
-
- with tm.assert_raises_regex(TypeError, "freq"):
- DatetimeIndex._cached_range(START, END)
-
- with tm.assert_raises_regex(TypeError, "specify period"):
- DatetimeIndex._cached_range(START, freq=BDay())
-
- with tm.assert_raises_regex(TypeError, "specify period"):
- DatetimeIndex._cached_range(end=END, freq=BDay())
-
- with tm.assert_raises_regex(TypeError, "start or end"):
- DatetimeIndex._cached_range(periods=20, freq=BDay())
-
def test_misc(self):
end = datetime(2009, 5, 13)
dr = bdate_range(end=end, periods=20)
@@ -693,29 +676,6 @@ def test_constructor(self):
with tm.assert_raises_regex(TypeError, msg):
bdate_range('2011-1-1', '2012-1-1', 'C')
- def test_cached_range(self):
- DatetimeIndex._cached_range(START, END, freq=CDay())
- DatetimeIndex._cached_range(START, periods=20,
- freq=CDay())
- DatetimeIndex._cached_range(end=START, periods=20,
- freq=CDay())
-
- # with pytest.raises(TypeError):
- with tm.assert_raises_regex(TypeError, "freq"):
- DatetimeIndex._cached_range(START, END)
-
- # with pytest.raises(TypeError):
- with tm.assert_raises_regex(TypeError, "specify period"):
- DatetimeIndex._cached_range(START, freq=CDay())
-
- # with pytest.raises(TypeError):
- with tm.assert_raises_regex(TypeError, "specify period"):
- DatetimeIndex._cached_range(end=END, freq=CDay())
-
- # with pytest.raises(TypeError):
- with tm.assert_raises_regex(TypeError, "start or end"):
- DatetimeIndex._cached_range(periods=20, freq=CDay())
-
def test_misc(self):
end = datetime(2009, 5, 13)
dr = bdate_range(end=end, periods=20, freq='C')
diff --git a/pandas/tests/tseries/offsets/test_offsets.py b/pandas/tests/tseries/offsets/test_offsets.py
index bda4d71d58e82..a0cff6f74b979 100644
--- a/pandas/tests/tseries/offsets/test_offsets.py
+++ b/pandas/tests/tseries/offsets/test_offsets.py
@@ -15,11 +15,9 @@
from pandas._libs.tslibs.frequencies import (get_freq_code, get_freq_str,
INVALID_FREQ_ERR_MSG)
from pandas.tseries.frequencies import _offset_map, get_offset
-from pandas.core.indexes.datetimes import (
- _to_m8, DatetimeIndex, _daterange_cache)
+from pandas.core.indexes.datetimes import _to_m8, DatetimeIndex
from pandas.core.indexes.timedeltas import TimedeltaIndex
import pandas._libs.tslibs.offsets as liboffsets
-from pandas._libs.tslibs.offsets import CacheableOffset
from pandas.tseries.offsets import (BDay, CDay, BQuarterEnd, BMonthEnd,
BusinessHour, WeekOfMonth, CBMonthEnd,
CustomBusinessHour,
@@ -28,7 +26,7 @@
BYearBegin, QuarterBegin, BQuarterBegin,
BMonthBegin, DateOffset, Week, YearBegin,
YearEnd, Day,
- QuarterEnd, BusinessMonthEnd, FY5253,
+ QuarterEnd, FY5253,
Nano, Easter, FY5253Quarter,
LastWeekOfMonth, Tick, CalendarDay)
import pandas.tseries.offsets as offsets
@@ -2830,70 +2828,6 @@ def test_freq_offsets():
assert (off.freqstr == 'B-30Min')
-def get_all_subclasses(cls):
- ret = set()
- this_subclasses = cls.__subclasses__()
- ret = ret | set(this_subclasses)
- for this_subclass in this_subclasses:
- ret | get_all_subclasses(this_subclass)
- return ret
-
-
-class TestCaching(object):
-
- # as of GH 6479 (in 0.14.0), offset caching is turned off
- # as of v0.12.0 only BusinessMonth/Quarter were actually caching
-
- def setup_method(self, method):
- _daterange_cache.clear()
- _offset_map.clear()
-
- def run_X_index_creation(self, cls):
- inst1 = cls()
- if not inst1.isAnchored():
- assert not inst1._should_cache(), cls
- return
-
- assert inst1._should_cache(), cls
-
- DatetimeIndex(start=datetime(2013, 1, 31), end=datetime(2013, 3, 31),
- freq=inst1, normalize=True)
- assert cls() in _daterange_cache, cls
-
- def test_should_cache_month_end(self):
- assert not MonthEnd()._should_cache()
-
- def test_should_cache_bmonth_end(self):
- assert not BusinessMonthEnd()._should_cache()
-
- def test_should_cache_week_month(self):
- assert not WeekOfMonth(weekday=1, week=2)._should_cache()
-
- def test_all_cacheableoffsets(self):
- for subclass in get_all_subclasses(CacheableOffset):
- if subclass.__name__[0] == "_" \
- or subclass in TestCaching.no_simple_ctr:
- continue
- self.run_X_index_creation(subclass)
-
- def test_month_end_index_creation(self):
- DatetimeIndex(start=datetime(2013, 1, 31), end=datetime(2013, 3, 31),
- freq=MonthEnd(), normalize=True)
- assert not MonthEnd() in _daterange_cache
-
- def test_bmonth_end_index_creation(self):
- DatetimeIndex(start=datetime(2013, 1, 31), end=datetime(2013, 3, 29),
- freq=BusinessMonthEnd(), normalize=True)
- assert not BusinessMonthEnd() in _daterange_cache
-
- def test_week_of_month_index_creation(self):
- inst1 = WeekOfMonth(weekday=1, week=2)
- DatetimeIndex(start=datetime(2013, 1, 31), end=datetime(2013, 3, 29),
- freq=inst1, normalize=True)
- inst2 = WeekOfMonth(weekday=1, week=2)
- assert inst2 not in _daterange_cache
-
-
class TestReprNames(object):
def test_str_for_named_is_name(self):
diff --git a/pandas/tseries/offsets.py b/pandas/tseries/offsets.py
index 0a9931c46bbd5..e6d73fc45c502 100644
--- a/pandas/tseries/offsets.py
+++ b/pandas/tseries/offsets.py
@@ -806,7 +806,6 @@ class CustomBusinessDay(_CustomMixin, BusinessDay):
passed to ``numpy.busdaycalendar``
calendar : pd.HolidayCalendar or np.busdaycalendar
"""
- _cacheable = False
_prefix = 'C'
_attributes = frozenset(['n', 'normalize',
'weekmask', 'holidays', 'calendar', 'offset'])
@@ -958,7 +957,6 @@ class _CustomBusinessMonth(_CustomMixin, BusinessMixin, MonthOffset):
passed to ``numpy.busdaycalendar``
calendar : pd.HolidayCalendar or np.busdaycalendar
"""
- _cacheable = False
_attributes = frozenset(['n', 'normalize',
'weekmask', 'holidays', 'calendar', 'offset'])
| - [x] closes #23080
- [ ] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/23118 | 2018-10-13T01:38:56Z | 2018-10-14T19:36:33Z | 2018-10-14T19:36:33Z | 2018-10-14T19:36:49Z |
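As a standalone note on the record above: the PR collapses `_generate_range` to always take the regular-range path and localize afterwards, with no caching branch. The observable behavior that path preserves can be sketched with the public API (a minimal sketch, assuming a recent pandas is installed):

```python
import pandas as pd

# The simplified _generate_range path always builds a regular naive range
# and then localizes it; either way the wall-clock values are identical:
naive = pd.date_range("2013-01-31", periods=3, freq="D")
aware = pd.date_range("2013-01-31", periods=3, freq="D", tz="UTC")

# Stripping the timezone from the localized range recovers the naive one.
assert list(aware.tz_localize(None)) == list(naive)
```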
TST: add test cases for reset_index method | diff --git a/pandas/tests/frame/test_alter_axes.py b/pandas/tests/frame/test_alter_axes.py
index 4e61c9c62266d..3faf62b4ec28b 100644
--- a/pandas/tests/frame/test_alter_axes.py
+++ b/pandas/tests/frame/test_alter_axes.py
@@ -744,6 +744,15 @@ def test_reset_index(self, float_frame):
xp = xp.set_index(['B'], append=True)
tm.assert_frame_equal(rs, xp, check_names=False)
+ def test_reset_index_name(self):
+ df = DataFrame([[1, 2, 3, 4], [5, 6, 7, 8]],
+ columns=['A', 'B', 'C', 'D'],
+ index=Index(range(2), name='x'))
+ assert df.reset_index().index.name is None
+ assert df.reset_index(drop=True).index.name is None
+ df.reset_index(inplace=True)
+ assert df.index.name is None
+
def test_reset_index_level(self):
df = DataFrame([[1, 2, 3, 4], [5, 6, 7, 8]],
columns=['A', 'B', 'C', 'D'])
diff --git a/pandas/tests/series/test_alter_axes.py b/pandas/tests/series/test_alter_axes.py
index c3e4cb8bc3abc..7d4aa2d4df6fc 100644
--- a/pandas/tests/series/test_alter_axes.py
+++ b/pandas/tests/series/test_alter_axes.py
@@ -144,6 +144,11 @@ def test_reset_index(self):
tm.assert_index_equal(rs.index, Index(index.get_level_values(1)))
assert isinstance(rs, Series)
+ def test_reset_index_name(self):
+ s = Series([1, 2, 3], index=Index(range(3), name='x'))
+ assert s.reset_index().index.name is None
+ assert s.reset_index(drop=True).index.name is None
+
def test_reset_index_level(self):
df = DataFrame([[1, 2, 3], [4, 5, 6]],
columns=['A', 'B', 'C'])
diff --git a/pandas/tests/test_multilevel.py b/pandas/tests/test_multilevel.py
index 0dbbe60283cac..882221bfce4aa 100644
--- a/pandas/tests/test_multilevel.py
+++ b/pandas/tests/test_multilevel.py
@@ -740,13 +740,16 @@ def test_delevel_infer_dtype(self):
def test_reset_index_with_drop(self):
deleveled = self.ymd.reset_index(drop=True)
assert len(deleveled.columns) == len(self.ymd.columns)
+ assert deleveled.index.name == self.ymd.index.name
deleveled = self.series.reset_index()
assert isinstance(deleveled, DataFrame)
assert len(deleveled.columns) == len(self.series.index.levels) + 1
+ assert deleveled.index.name == self.series.index.name
deleveled = self.series.reset_index(drop=True)
assert isinstance(deleveled, Series)
+ assert deleveled.index.name == self.series.index.name
def test_count_level(self):
def _check_counts(frame, axis=0):
| - [x] closes #17067
- [x] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
The fix appears to work, but it breaks existing `assert_frame_equal` checks that assume `index.name == None` after a `reset_index()` call.
The question remains whether issue #17067 is actually a bug that requires a fix at all; the user can simply do `series.index.name = 'new name'` to handle this problem. | https://api.github.com/repos/pandas-dev/pandas/pulls/23116 | 2018-10-12T21:31:03Z | 2018-10-24T12:23:37Z | 2018-10-24T12:23:37Z | 2018-10-24T12:30:31Z
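As a standalone illustration of the behavior the new `test_reset_index_name` cases pin down (a minimal sketch, assuming a recent pandas is installed):

```python
import pandas as pd

df = pd.DataFrame({"A": [1, 2]}, index=pd.Index(range(2), name="x"))

# reset_index moves the named index into a column and leaves the fresh
# RangeIndex unnamed -- exactly what the added tests assert:
assert df.reset_index().index.name is None
assert "x" in df.reset_index().columns
assert df.reset_index(drop=True).index.name is None
```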
BUG: Fix ndarray + DataFrame ops | diff --git a/doc/source/whatsnew/v0.24.0.txt b/doc/source/whatsnew/v0.24.0.txt
index 700bf4ddc3a37..5811a8c4c45ff 100644
--- a/doc/source/whatsnew/v0.24.0.txt
+++ b/doc/source/whatsnew/v0.24.0.txt
@@ -844,7 +844,7 @@ Numeric
- Bug in :class:`DataFrame` multiplication between boolean dtype and integer returning ``object`` dtype instead of integer dtype (:issue:`22047`, :issue:`22163`)
- Bug in :meth:`DataFrame.apply` where, when supplied with a string argument and additional positional or keyword arguments (e.g. ``df.apply('sum', min_count=1)``), a ``TypeError`` was wrongly raised (:issue:`22376`)
- Bug in :meth:`DataFrame.astype` to extension dtype may raise ``AttributeError`` (:issue:`22578`)
-
+- Bug in :class:`DataFrame` with ``timedelta64[ns]`` dtype arithmetic operations with ``ndarray`` with integer dtype incorrectly treating the narray as ``timedelta64[ns]`` dtype (:issue:`23114`)
Strings
^^^^^^^
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 1158a025b1319..ba050bfc8db77 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -1795,6 +1795,10 @@ def __round__(self, decimals=0):
# ----------------------------------------------------------------------
# Array Interface
+ # This is also set in IndexOpsMixin
+ # GH#23114 Ensure ndarray.__op__(DataFrame) returns NotImplemented
+ __array_priority__ = 1000
+
def __array__(self, dtype=None):
return com.values_from_object(self)
diff --git a/pandas/tests/arithmetic/test_numeric.py b/pandas/tests/arithmetic/test_numeric.py
index 0449212713048..e9316221b125b 100644
--- a/pandas/tests/arithmetic/test_numeric.py
+++ b/pandas/tests/arithmetic/test_numeric.py
@@ -156,7 +156,7 @@ def test_numeric_arr_rdiv_tdscalar(self, three_days, numeric_idx, box):
if box is not pd.Index and broken:
# np.timedelta64(3, 'D') / 2 == np.timedelta64(1, 'D')
raise pytest.xfail("timedelta64 not converted to nanos; "
- "Tick division not imlpemented")
+ "Tick division not implemented")
expected = TimedeltaIndex(['3 Days', '36 Hours'])
diff --git a/pandas/tests/arithmetic/test_object.py b/pandas/tests/arithmetic/test_object.py
index 64d7cbc47fddd..511d74a2e790c 100644
--- a/pandas/tests/arithmetic/test_object.py
+++ b/pandas/tests/arithmetic/test_object.py
@@ -140,9 +140,6 @@ def test_objarr_radd_str_invalid(self, dtype, data, box):
operator.sub, ops.rsub])
def test_objarr_add_invalid(self, op, box):
# invalid ops
- if box is pd.DataFrame and op is ops.radd:
- pytest.xfail(reason="DataFrame op incorrectly casts the np.array"
- "case to M8[ns]")
obj_ser = tm.makeObjectSeries()
obj_ser.name = 'objects'
diff --git a/pandas/tests/arithmetic/test_timedelta64.py b/pandas/tests/arithmetic/test_timedelta64.py
index a8e61b3fd9d3a..fa1a2d9df9a58 100644
--- a/pandas/tests/arithmetic/test_timedelta64.py
+++ b/pandas/tests/arithmetic/test_timedelta64.py
@@ -364,13 +364,13 @@ def test_td64arr_add_str_invalid(self, box):
operator.sub, ops.rsub],
ids=lambda x: x.__name__)
def test_td64arr_add_sub_float(self, box, op, other):
+ if box is pd.DataFrame and isinstance(other, np.ndarray):
+ pytest.xfail("Tries to broadcast, raising "
+ "ValueError instead of TypeError")
+
tdi = TimedeltaIndex(['-1 days', '-1 days'])
tdi = tm.box_expected(tdi, box)
- if box is pd.DataFrame and op in [operator.add, operator.sub]:
- pytest.xfail(reason="Tries to align incorrectly, "
- "raises ValueError")
-
with pytest.raises(TypeError):
op(tdi, other)
@@ -1126,9 +1126,6 @@ def test_td64arr_floordiv_tdscalar(self, box, scalar_td):
def test_td64arr_rfloordiv_tdscalar(self, box, scalar_td):
# GH#18831
- if box is pd.DataFrame and isinstance(scalar_td, np.timedelta64):
- pytest.xfail(reason="raises TypeError, not sure why")
-
td1 = Series([timedelta(minutes=5, seconds=3)] * 3)
td1.iloc[2] = np.nan
diff --git a/pandas/tests/frame/test_analytics.py b/pandas/tests/frame/test_analytics.py
index 66bbc1f1a649b..8864e5fffeb12 100644
--- a/pandas/tests/frame/test_analytics.py
+++ b/pandas/tests/frame/test_analytics.py
@@ -2046,8 +2046,11 @@ def test_matmul(self):
# np.array @ DataFrame
result = operator.matmul(a.values, b)
+ assert isinstance(result, DataFrame)
+ assert result.columns.equals(b.columns)
+ assert result.index.equals(pd.Index(range(3)))
expected = np.dot(a.values, b.values)
- tm.assert_almost_equal(result, expected)
+ tm.assert_almost_equal(result.values, expected)
# nested list @ DataFrame (__rmatmul__)
result = operator.matmul(a.values.tolist(), b)
| - [X] closes #22537
- [x] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/23114 | 2018-10-12T16:20:21Z | 2018-10-14T16:48:41Z | 2018-10-14T16:48:41Z | 2018-10-23T03:28:44Z |
CLN: Move to_period, to_perioddelta up to EA subclasses | diff --git a/pandas/core/arrays/datetimes.py b/pandas/core/arrays/datetimes.py
index 0f07a9cf3c0e0..ac90483513af5 100644
--- a/pandas/core/arrays/datetimes.py
+++ b/pandas/core/arrays/datetimes.py
@@ -820,6 +820,25 @@ def to_period(self, freq=None):
return PeriodArrayMixin(self.values, freq=freq)
+ def to_perioddelta(self, freq):
+ """
+ Calculate TimedeltaArray of difference between index
+ values and index converted to PeriodArray at specified
+ freq. Used for vectorized offsets
+
+ Parameters
+ ----------
+ freq: Period frequency
+
+ Returns
+ -------
+ TimedeltaArray/Index
+ """
+ # TODO: consider privatizing (discussion in GH#23113)
+ from pandas.core.arrays.timedeltas import TimedeltaArrayMixin
+ i8delta = self.asi8 - self.to_period(freq).to_timestamp().asi8
+ return TimedeltaArrayMixin(i8delta)
+
# -----------------------------------------------------------------
# Properties - Vectorized Timestamp Properties/Methods
diff --git a/pandas/core/indexes/datetimes.py b/pandas/core/indexes/datetimes.py
index e0219acc115b5..7fb7e24cce94e 100644
--- a/pandas/core/indexes/datetimes.py
+++ b/pandas/core/indexes/datetimes.py
@@ -42,7 +42,6 @@
from pandas.tseries.offsets import (
CDay, prefix_mapping)
-from pandas.core.tools.timedeltas import to_timedelta
from pandas.util._decorators import Appender, cache_readonly, Substitution
import pandas.core.common as com
import pandas.tseries.offsets as offsets
@@ -545,13 +544,6 @@ def to_series(self, keep_tz=False, index=None, name=None):
return Series(values, index=index, name=name)
- @Appender(DatetimeArrayMixin.to_period.__doc__)
- def to_period(self, freq=None):
- from pandas.core.indexes.period import PeriodIndex
-
- result = DatetimeArrayMixin.to_period(self, freq=freq)
- return PeriodIndex(result, name=self.name)
-
def snap(self, freq='S'):
"""
Snap time stamps to nearest occurring frequency
@@ -623,23 +615,6 @@ def union(self, other):
result.freq = to_offset(result.inferred_freq)
return result
- def to_perioddelta(self, freq):
- """
- Calculate TimedeltaIndex of difference between index
- values and index converted to periodIndex at specified
- freq. Used for vectorized offsets
-
- Parameters
- ----------
- freq: Period frequency
-
- Returns
- -------
- y: TimedeltaIndex
- """
- return to_timedelta(self.asi8 - self.to_period(freq)
- .to_timestamp().asi8)
-
def union_many(self, others):
"""
A bit of a hack to accelerate unioning a collection of indexes
@@ -1168,6 +1143,9 @@ def slice_indexer(self, start=None, end=None, step=None, kind=None):
is_year_end = wrap_field_accessor(DatetimeArrayMixin.is_year_end)
is_leap_year = wrap_field_accessor(DatetimeArrayMixin.is_leap_year)
+ to_perioddelta = wrap_array_method(DatetimeArrayMixin.to_perioddelta,
+ False)
+ to_period = wrap_array_method(DatetimeArrayMixin.to_period, True)
normalize = wrap_array_method(DatetimeArrayMixin.normalize, True)
to_julian_date = wrap_array_method(DatetimeArrayMixin.to_julian_date,
False)
diff --git a/pandas/core/indexes/period.py b/pandas/core/indexes/period.py
index cbcd39317e17e..6c68ba2c35919 100644
--- a/pandas/core/indexes/period.py
+++ b/pandas/core/indexes/period.py
@@ -289,9 +289,9 @@ def __array_wrap__(self, result, context=None):
"""
if isinstance(context, tuple) and len(context) > 0:
func = context[0]
- if (func is np.add):
+ if func is np.add:
pass
- elif (func is np.subtract):
+ elif func is np.subtract:
name = self.name
left = context[1][0]
right = context[1][1]
@@ -312,7 +312,7 @@ def __array_wrap__(self, result, context=None):
return result
# the result is object dtype array of Period
# cannot pass _simple_new as it is
- return self._shallow_copy(result, freq=self.freq, name=self.name)
+ return type(self)(result, freq=self.freq, name=self.name)
@property
def size(self):
diff --git a/pandas/tests/arrays/test_datetimelike.py b/pandas/tests/arrays/test_datetimelike.py
index bfce5fb1462d9..ac7692c4afa74 100644
--- a/pandas/tests/arrays/test_datetimelike.py
+++ b/pandas/tests/arrays/test_datetimelike.py
@@ -67,6 +67,20 @@ def test_astype_object(self, tz_naive_fixture):
assert asobj.dtype == 'O'
assert list(asobj) == list(dti)
+ @pytest.mark.parametrize('freqstr', ['D', 'B', 'W', 'M', 'Q', 'Y'])
+ def test_to_perioddelta(self, datetime_index, freqstr):
+ # GH#23113
+ dti = datetime_index
+ arr = DatetimeArrayMixin(dti)
+
+ expected = dti.to_perioddelta(freq=freqstr)
+ result = arr.to_perioddelta(freq=freqstr)
+ assert isinstance(result, TimedeltaArrayMixin)
+
+ # placeholder until these become actual EA subclasses and we can use
+ # an EA-specific tm.assert_ function
+ tm.assert_index_equal(pd.Index(result), pd.Index(expected))
+
@pytest.mark.parametrize('freqstr', ['D', 'B', 'W', 'M', 'Q', 'Y'])
def test_to_period(self, datetime_index, freqstr):
dti = datetime_index
| Should be orthogonal to other outstanding datetimelike EA PRs. | https://api.github.com/repos/pandas-dev/pandas/pulls/23113 | 2018-10-12T16:14:28Z | 2018-10-22T07:08:43Z | 2018-10-22T07:08:43Z | 2018-12-10T16:34:49Z |
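As a standalone note on the record above: the moved `to_perioddelta` is defined in the diff as `asi8 - to_period(freq).to_timestamp().asi8`, i.e. the offset of each timestamp from the start of its period. That relationship can be sketched via the stable public API (a minimal sketch, assuming a recent pandas is installed):

```python
import pandas as pd

dti = pd.date_range("2018-01-15", periods=3, freq="D")

# Equivalent of to_perioddelta("M"): subtract each timestamp's
# month-period start, yielding a TimedeltaIndex of offsets.
delta = dti - dti.to_period("M").to_timestamp()

assert isinstance(delta, pd.TimedeltaIndex)
assert delta[0] == pd.Timedelta(days=14)  # 2018-01-15 minus 2018-01-01
```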
Deprecate read_feather nthreads argument + update feather-format to pyarrow.feather | diff --git a/ci/azure-windows-36.yaml b/ci/azure-windows-36.yaml
index 979443661f99b..af42545af7971 100644
--- a/ci/azure-windows-36.yaml
+++ b/ci/azure-windows-36.yaml
@@ -7,7 +7,6 @@ dependencies:
- bottleneck
- boost-cpp<1.67
- fastparquet
- - feather-format
- matplotlib
- numexpr
- numpy=1.14*
diff --git a/ci/requirements-optional-conda.txt b/ci/requirements-optional-conda.txt
index 04abfede67163..c9dc385b87986 100644
--- a/ci/requirements-optional-conda.txt
+++ b/ci/requirements-optional-conda.txt
@@ -2,7 +2,6 @@ beautifulsoup4>=4.2.1
blosc
bottleneck>=1.2.0
fastparquet
-feather-format
gcsfs
html5lib
ipython>=5.6.0
@@ -13,7 +12,7 @@ matplotlib>=2.0.0
nbsphinx
numexpr>=2.6.1
openpyxl
-pyarrow
+pyarrow>=0.4.1
pymysql
pytables>=3.4.2
pytest-cov
diff --git a/ci/requirements-optional-pip.txt b/ci/requirements-optional-pip.txt
index 0153bdb6edf04..347ea0d9832b0 100644
--- a/ci/requirements-optional-pip.txt
+++ b/ci/requirements-optional-pip.txt
@@ -4,7 +4,6 @@ beautifulsoup4>=4.2.1
blosc
bottleneck>=1.2.0
fastparquet
-feather-format
gcsfs
html5lib
ipython>=5.6.0
@@ -15,7 +14,7 @@ matplotlib>=2.0.0
nbsphinx
numexpr>=2.6.1
openpyxl
-pyarrow
+pyarrow>=0.4.1
pymysql
tables
pytest-cov
@@ -28,4 +27,4 @@ statsmodels
xarray
xlrd
xlsxwriter
-xlwt
\ No newline at end of file
+xlwt
diff --git a/ci/travis-27.yaml b/ci/travis-27.yaml
index 8955bea1fc010..9641a76152d7b 100644
--- a/ci/travis-27.yaml
+++ b/ci/travis-27.yaml
@@ -7,7 +7,6 @@ dependencies:
- bottleneck
- cython=0.28.2
- fastparquet
- - feather-format
- gcsfs
- html5lib
- ipython
diff --git a/ci/travis-36-doc.yaml b/ci/travis-36-doc.yaml
index f1f64546374af..ce095b887f189 100644
--- a/ci/travis-36-doc.yaml
+++ b/ci/travis-36-doc.yaml
@@ -8,7 +8,6 @@ dependencies:
- bottleneck
- cython>=0.28.2
- fastparquet
- - feather-format
- html5lib
- hypothesis>=3.58.0
- ipykernel
@@ -24,6 +23,7 @@ dependencies:
- numpy=1.13*
- openpyxl
- pandoc
+ - pyarrow
- pyqt
- pytables
- python-dateutil
diff --git a/ci/travis-36.yaml b/ci/travis-36.yaml
index 257f830ec6c48..352717a842214 100644
--- a/ci/travis-36.yaml
+++ b/ci/travis-36.yaml
@@ -7,7 +7,6 @@ dependencies:
- cython>=0.28.2
- dask
- fastparquet
- - feather-format
- flake8>=3.5
- flake8-comprehensions
- gcsfs
@@ -23,7 +22,7 @@ dependencies:
- numpy
- openpyxl
- psycopg2
- - pyarrow
+ - pyarrow=0.9.0
- pymysql
- pytables
- python-snappy
diff --git a/ci/travis-37.yaml b/ci/travis-37.yaml
index 4f2138d8555e3..7dbd85ac27df6 100644
--- a/ci/travis-37.yaml
+++ b/ci/travis-37.yaml
@@ -9,6 +9,7 @@ dependencies:
- numpy
- python-dateutil
- nomkl
+ - pyarrow
- pytz
- pytest
- pytest-xdist
diff --git a/doc/source/install.rst b/doc/source/install.rst
index 843384b680cf8..b32c5b1145e85 100644
--- a/doc/source/install.rst
+++ b/doc/source/install.rst
@@ -258,7 +258,7 @@ Optional Dependencies
* `SciPy <http://www.scipy.org>`__: miscellaneous statistical functions, Version 0.18.1 or higher
* `xarray <http://xarray.pydata.org>`__: pandas like handling for > 2 dims, needed for converting Panels to xarray objects. Version 0.7.0 or higher is recommended.
* `PyTables <http://www.pytables.org>`__: necessary for HDF5-based storage, Version 3.4.2 or higher
-* `Feather Format <https://github.com/wesm/feather>`__: necessary for feather-based storage, version 0.3.1 or higher.
+* `pyarrow <http://arrow.apache.org/docs/python/>`__ (>= 0.4.1): necessary for feather-based storage.
* `Apache Parquet <https://parquet.apache.org/>`__, either `pyarrow <http://arrow.apache.org/docs/python/>`__ (>= 0.4.1) or `fastparquet <https://fastparquet.readthedocs.io/en/latest>`__ (>= 0.0.6) for parquet-based storage. The `snappy <https://pypi.org/project/python-snappy>`__ and `brotli <https://pypi.org/project/brotlipy>`__ are available for compression support.
* `SQLAlchemy <http://www.sqlalchemy.org>`__: for SQL database support. Version 0.8.1 or higher recommended. Besides SQLAlchemy, you also need a database specific driver. You can find an overview of supported drivers for each SQL dialect in the `SQLAlchemy docs <http://docs.sqlalchemy.org/en/latest/dialects/index.html>`__. Some common drivers are:
diff --git a/doc/source/whatsnew/v0.24.0.txt b/doc/source/whatsnew/v0.24.0.txt
index c7820a8cb9de1..14dd4e8a19a45 100644
--- a/doc/source/whatsnew/v0.24.0.txt
+++ b/doc/source/whatsnew/v0.24.0.txt
@@ -269,6 +269,9 @@ If installed, we now require:
| scipy | 0.18.1 | |
+-----------------+-----------------+----------+
+Additionally we no longer depend on `feather-format` for feather based storage
+and replaced it with references to `pyarrow` (:issue:`21639` and :issue:`23053`).
+
.. _whatsnew_0240.api_breaking.csv_line_terminator:
`os.linesep` is used for ``line_terminator`` of ``DataFrame.to_csv``
@@ -954,6 +957,8 @@ Deprecations
- The ``fastpath`` keyword of the different Index constructors is deprecated (:issue:`23110`).
- :meth:`Timestamp.tz_localize`, :meth:`DatetimeIndex.tz_localize`, and :meth:`Series.tz_localize` have deprecated the ``errors`` argument in favor of the ``nonexistent`` argument (:issue:`8917`)
- The class ``FrozenNDArray`` has been deprecated. When unpickling, ``FrozenNDArray`` will be unpickled to ``np.ndarray`` once this class is removed (:issue:`9031`)
+- Deprecated the `nthreads` keyword of :func:`pandas.read_feather` in favor of
+ `use_threads` to reflect the changes in pyarrow 0.11.0. (:issue:`23053`)
.. _whatsnew_0240.deprecations.datetimelike_int_ops:
diff --git a/pandas/io/feather_format.py b/pandas/io/feather_format.py
index 8d2715fe5beed..ea2d96cd896d9 100644
--- a/pandas/io/feather_format.py
+++ b/pandas/io/feather_format.py
@@ -3,6 +3,7 @@
from distutils.version import LooseVersion
from pandas.compat import range
+from pandas.util._decorators import deprecate_kwarg
from pandas import DataFrame, Int64Index, RangeIndex
@@ -10,31 +11,27 @@
def _try_import():
- # since pandas is a dependency of feather
+ # since pandas is a dependency of pyarrow
# we need to import on first use
-
try:
- import feather
+ import pyarrow
+ from pyarrow import feather
except ImportError:
-
# give a nice error message
- raise ImportError("the feather-format library is not installed\n"
+ raise ImportError("pyarrow is not installed\n\n"
"you can install via conda\n"
- "conda install feather-format -c conda-forge\n"
+ "conda install pyarrow -c conda-forge\n"
"or via pip\n"
- "pip install -U feather-format\n")
+ "pip install -U pyarrow\n")
- try:
- LooseVersion(feather.__version__) >= LooseVersion('0.3.1')
- except AttributeError:
- raise ImportError("the feather-format library must be >= "
- "version 0.3.1\n"
+ if LooseVersion(pyarrow.__version__) < LooseVersion('0.4.1'):
+ raise ImportError("pyarrow >= 0.4.1 required for feather support\n\n"
"you can install via conda\n"
- "conda install feather-format -c conda-forge"
+ "conda install pyarrow -c conda-forge"
"or via pip\n"
- "pip install -U feather-format\n")
+ "pip install -U pyarrow\n")
- return feather
+ return feather, pyarrow
def to_feather(df, path):
@@ -51,7 +48,7 @@ def to_feather(df, path):
if not isinstance(df, DataFrame):
raise ValueError("feather only support IO with DataFrames")
- feather = _try_import()
+ feather = _try_import()[0]
valid_types = {'string', 'unicode'}
# validate index
@@ -83,10 +80,11 @@ def to_feather(df, path):
if df.columns.inferred_type not in valid_types:
raise ValueError("feather must have string column names")
- feather.write_dataframe(df, path)
+ feather.write_feather(df, path)
-def read_feather(path, nthreads=1):
+@deprecate_kwarg(old_arg_name='nthreads', new_arg_name='use_threads')
+def read_feather(path, use_threads=True):
"""
Load a feather-format object from the file path
@@ -99,6 +97,11 @@ def read_feather(path, nthreads=1):
Number of CPU threads to use when reading to pandas.DataFrame
.. versionadded 0.21.0
+ .. deprecated 0.24.0
+ use_threads: bool, default True
+ Whether to parallelize reading using multiple threads
+
+ .. versionadded 0.24.0
Returns
-------
@@ -106,10 +109,13 @@ def read_feather(path, nthreads=1):
"""
- feather = _try_import()
+ feather, pyarrow = _try_import()
path = _stringify_path(path)
- if LooseVersion(feather.__version__) < LooseVersion('0.4.0'):
- return feather.read_dataframe(path)
+ if LooseVersion(pyarrow.__version__) < LooseVersion('0.11.0'):
+ int_use_threads = int(use_threads)
+ if int_use_threads < 1:
+ int_use_threads = 1
+ return feather.read_feather(path, nthreads=int_use_threads)
- return feather.read_dataframe(path, nthreads=nthreads)
+ return feather.read_feather(path, use_threads=bool(use_threads))
diff --git a/pandas/tests/io/test_common.py b/pandas/tests/io/test_common.py
index 88a2fded3500c..73e29e6eb9a6a 100644
--- a/pandas/tests/io/test_common.py
+++ b/pandas/tests/io/test_common.py
@@ -135,9 +135,7 @@ def test_iterator(self):
(pd.read_csv, 'os', FileNotFoundError, 'csv'),
(pd.read_fwf, 'os', FileNotFoundError, 'txt'),
(pd.read_excel, 'xlrd', FileNotFoundError, 'xlsx'),
- pytest.param(
- pd.read_feather, 'feather', Exception, 'feather',
- marks=pytest.mark.xfail(reason="failing for pyarrow < 0.11.0")),
+ (pd.read_feather, 'feather', Exception, 'feather'),
(pd.read_hdf, 'tables', FileNotFoundError, 'h5'),
(pd.read_stata, 'os', FileNotFoundError, 'dta'),
(pd.read_sas, 'os', FileNotFoundError, 'sas7bdat'),
@@ -162,10 +160,7 @@ def test_read_non_existant_read_table(self):
(pd.read_csv, 'os', ('io', 'data', 'iris.csv')),
(pd.read_fwf, 'os', ('io', 'data', 'fixed_width_format.txt')),
(pd.read_excel, 'xlrd', ('io', 'data', 'test1.xlsx')),
- pytest.param(
- pd.read_feather, 'feather',
- ('io', 'data', 'feather-0_3_1.feather'),
- marks=pytest.mark.xfail(reason="failing for pyarrow < 0.11.0")),
+ (pd.read_feather, 'feather', ('io', 'data', 'feather-0_3_1.feather')),
(pd.read_hdf, 'tables', ('io', 'data', 'legacy_hdf',
'datetimetz_object.h5')),
(pd.read_stata, 'os', ('io', 'data', 'stata10_115.dta')),
diff --git a/pandas/tests/io/test_feather.py b/pandas/tests/io/test_feather.py
index 82f9f7253e65c..16b59526c8233 100644
--- a/pandas/tests/io/test_feather.py
+++ b/pandas/tests/io/test_feather.py
@@ -1,6 +1,5 @@
""" test feather-format compat """
from distutils.version import LooseVersion
-from warnings import catch_warnings
import numpy as np
@@ -9,15 +8,13 @@
from pandas.util.testing import assert_frame_equal, ensure_clean
import pytest
-feather = pytest.importorskip('feather')
-from feather import FeatherError # noqa:E402
+pyarrow = pytest.importorskip('pyarrow')
from pandas.io.feather_format import to_feather, read_feather # noqa:E402
-fv = LooseVersion(feather.__version__)
+pyarrow_version = LooseVersion(pyarrow.__version__)
-@pytest.mark.xfail(reason="failing for pyarrow < 0.11.0")
@pytest.mark.single
class TestFeather(object):
@@ -34,8 +31,7 @@ def check_round_trip(self, df, **kwargs):
with ensure_clean() as path:
to_feather(df, path)
- with catch_warnings(record=True):
- result = read_feather(path, **kwargs)
+ result = read_feather(path, **kwargs)
assert_frame_equal(result, df)
def test_error(self):
@@ -65,13 +61,6 @@ def test_basic(self):
assert df.dttz.dtype.tz.zone == 'US/Eastern'
self.check_round_trip(df)
- @pytest.mark.skipif(fv >= LooseVersion('0.4.0'), reason='fixed in 0.4.0')
- def test_strided_data_issues(self):
-
- # strided data issuehttps://github.com/wesm/feather/issues/97
- df = pd.DataFrame(np.arange(12).reshape(4, 3), columns=list('abc'))
- self.check_error_on_write(df, FeatherError)
-
def test_duplicate_columns(self):
# https://github.com/wesm/feather/issues/53
@@ -85,17 +74,6 @@ def test_stringify_columns(self):
df = pd.DataFrame(np.arange(12).reshape(4, 3)).copy()
self.check_error_on_write(df, ValueError)
- @pytest.mark.skipif(fv >= LooseVersion('0.4.0'), reason='fixed in 0.4.0')
- def test_unsupported(self):
-
- # timedelta
- df = pd.DataFrame({'a': pd.timedelta_range('1 day', periods=3)})
- self.check_error_on_write(df, FeatherError)
-
- # non-strings
- df = pd.DataFrame({'a': ['a', 1, 2.0]})
- self.check_error_on_write(df, ValueError)
-
def test_unsupported_other(self):
# period
@@ -103,11 +81,26 @@ def test_unsupported_other(self):
# Some versions raise ValueError, others raise ArrowInvalid.
self.check_error_on_write(df, Exception)
- @pytest.mark.skipif(fv < LooseVersion('0.4.0'), reason='new in 0.4.0')
def test_rw_nthreads(self):
-
df = pd.DataFrame({'A': np.arange(100000)})
- self.check_round_trip(df, nthreads=2)
+ expected_warning = (
+ "the 'nthreads' keyword is deprecated, "
+ "use 'use_threads' instead"
+ )
+ with tm.assert_produces_warning(FutureWarning) as w:
+ self.check_round_trip(df, nthreads=2)
+ assert len(w) == 1
+ assert expected_warning in str(w[0])
+
+ with tm.assert_produces_warning(FutureWarning) as w:
+ self.check_round_trip(df, nthreads=1)
+ assert len(w) == 1
+ assert expected_warning in str(w[0])
+
+ def test_rw_use_threads(self):
+ df = pd.DataFrame({'A': np.arange(100000)})
+ self.check_round_trip(df, use_threads=True)
+ self.check_round_trip(df, use_threads=False)
def test_write_with_index(self):
| - [x] closes #23053, closes https://github.com/pandas-dev/pandas/issues/21639
- [x] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/23112 | 2018-10-12T15:56:18Z | 2018-11-01T12:02:41Z | 2018-11-01T12:02:41Z | 2021-04-27T14:29:33Z |
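The PR above remaps the deprecated `nthreads` keyword to `use_threads` with pandas' `deprecate_kwarg` decorator. A minimal self-contained sketch of that decorator pattern is below; the names and the simplified behavior (no collision check when both keywords are passed) are illustrative, not pandas' actual implementation:

```python
import functools
import warnings


def deprecate_kwarg(old_arg_name, new_arg_name):
    """Remap a deprecated keyword to its replacement, emitting a
    FutureWarning whenever the old name is used."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            if old_arg_name in kwargs:
                warnings.warn(
                    f"the '{old_arg_name}' keyword is deprecated, "
                    f"use '{new_arg_name}' instead",
                    FutureWarning, stacklevel=2)
                # forward the value under the new name
                kwargs[new_arg_name] = kwargs.pop(old_arg_name)
            return func(*args, **kwargs)
        return wrapper
    return decorator


@deprecate_kwarg(old_arg_name='nthreads', new_arg_name='use_threads')
def read_feather(path, use_threads=True):
    # stand-in body: just echo what the reader function would receive
    return (path, use_threads)
```

With this in place, `read_feather('f.feather', nthreads=2)` warns once and forwards the value as `use_threads=2`, which mirrors the assertions in the `test_rw_nthreads` test added by the diff.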
CI: Some Azure Pipelines cleanups | diff --git a/azure-pipelines.yml b/azure-pipelines.yml
index 409b1ac8c9df3..1e4e43ac03815 100644
--- a/azure-pipelines.yml
+++ b/azure-pipelines.yml
@@ -1,28 +1,20 @@
# Adapted from https://github.com/numba/numba/blob/master/azure-pipelines.yml
jobs:
-# Mac and Linux could potentially use the same template
-# except it isn't clear how to use a different build matrix
-# for each, so for now they are separate
-- template: ci/azure/macos.yml
+# Mac and Linux use the same template
+- template: ci/azure/posix.yml
parameters:
name: macOS
vmImage: xcode9-macos10.13
-- template: ci/azure/linux.yml
+- template: ci/azure/posix.yml
parameters:
name: Linux
vmImage: ubuntu-16.04
-# Windows Python 2.7 needs VC 9.0 installed, and not sure
-# how to make that a conditional task, so for now these are
-# separate templates as well
+# Windows Python 2.7 needs VC 9.0 installed, handled in the template
- template: ci/azure/windows.yml
parameters:
name: Windows
vmImage: vs2017-win2016
-- template: ci/azure/windows-py27.yml
- parameters:
- name: WindowsPy27
- vmImage: vs2017-win2016
- job: 'Checks_and_doc'
pool:
diff --git a/ci/azure/linux.yml b/ci/azure/linux.yml
deleted file mode 100644
index fe64307e9d08f..0000000000000
--- a/ci/azure/linux.yml
+++ /dev/null
@@ -1,79 +0,0 @@
-parameters:
- name: ''
- vmImage: ''
-
-jobs:
-- job: ${{ parameters.name }}
- pool:
- vmImage: ${{ parameters.vmImage }}
- strategy:
- maxParallel: 11
- matrix:
- py27_np_120:
- ENV_FILE: ci/deps/azure-27-compat.yaml
- CONDA_PY: "27"
- PATTERN: "not slow and not network"
-
- py37_locale:
- ENV_FILE: ci/deps/azure-37-locale.yaml
- CONDA_PY: "37"
- PATTERN: "not slow and not network"
- LOCALE_OVERRIDE: "zh_CN.UTF-8"
-
- py36_locale_slow:
- ENV_FILE: ci/deps/azure-36-locale_slow.yaml
- CONDA_PY: "36"
- PATTERN: "not slow and not network"
- LOCALE_OVERRIDE: "it_IT.UTF-8"
-
- steps:
- - script: |
- if [ "$(uname)" == "Linux" ]; then sudo apt-get install -y libc6-dev-i386; fi
- echo "Installing Miniconda"{
- ci/incremental/install_miniconda.sh
- export PATH=$HOME/miniconda3/bin:$PATH
- echo "Setting up Conda environment"
- ci/incremental/setup_conda_environment.sh
- displayName: 'Before Install'
- - script: |
- export PATH=$HOME/miniconda3/bin:$PATH
- source activate pandas-dev
- ci/incremental/build.sh
- displayName: 'Build'
- - script: |
- export PATH=$HOME/miniconda3/bin:$PATH
- source activate pandas-dev
- ci/run_tests.sh
- displayName: 'Test'
- - script: |
- export PATH=$HOME/miniconda3/bin:$PATH
- source activate pandas-dev && pushd /tmp && python -c "import pandas; pandas.show_versions();" && popd
- - task: PublishTestResults@2
- inputs:
- testResultsFiles: 'test-data-*.xml'
- testRunTitle: 'Linux'
- - powershell: |
- $junitXml = "test-data-single.xml"
- $(Get-Content $junitXml | Out-String) -match 'failures="(.*?)"'
- if ($matches[1] -eq 0)
- {
- Write-Host "No test failures in test-data-single"
- }
- else
- {
- # note that this will produce $LASTEXITCODE=1
- Write-Error "$($matches[1]) tests failed"
- }
-
- $junitXmlMulti = "test-data-multiple.xml"
- $(Get-Content $junitXmlMulti | Out-String) -match 'failures="(.*?)"'
- if ($matches[1] -eq 0)
- {
- Write-Host "No test failures in test-data-multi"
- }
- else
- {
- # note that this will produce $LASTEXITCODE=1
- Write-Error "$($matches[1]) tests failed"
- }
- displayName: Check for test failures
diff --git a/ci/azure/macos.yml b/ci/azure/posix.yml
similarity index 69%
rename from ci/azure/macos.yml
rename to ci/azure/posix.yml
index 98409576a5a87..374a82a5ed7d0 100644
--- a/ci/azure/macos.yml
+++ b/ci/azure/posix.yml
@@ -7,12 +7,30 @@ jobs:
pool:
vmImage: ${{ parameters.vmImage }}
strategy:
- maxParallel: 11
matrix:
- py35_np_120:
- ENV_FILE: ci/deps/azure-macos-35.yaml
- CONDA_PY: "35"
- PATTERN: "not slow and not network"
+ ${{ if eq(parameters.name, 'macOS') }}:
+ py35_np_120:
+ ENV_FILE: ci/deps/azure-macos-35.yaml
+ CONDA_PY: "35"
+ PATTERN: "not slow and not network"
+
+ ${{ if eq(parameters.name, 'Linux') }}:
+ py27_np_120:
+ ENV_FILE: ci/deps/azure-27-compat.yaml
+ CONDA_PY: "27"
+ PATTERN: "not slow and not network"
+
+ py37_locale:
+ ENV_FILE: ci/deps/azure-37-locale.yaml
+ CONDA_PY: "37"
+ PATTERN: "not slow and not network"
+ LOCALE_OVERRIDE: "zh_CN.UTF-8"
+
+ py36_locale_slow:
+ ENV_FILE: ci/deps/azure-36-locale_slow.yaml
+ CONDA_PY: "36"
+ PATTERN: "not slow and not network"
+ LOCALE_OVERRIDE: "it_IT.UTF-8"
steps:
- script: |
@@ -39,7 +57,7 @@ jobs:
- task: PublishTestResults@2
inputs:
testResultsFiles: 'test-data-*.xml'
- testRunTitle: 'MacOS-35'
+ testRunTitle: ${{ format('{0}-$(CONDA_PY)', parameters.name) }}
- powershell: |
$junitXml = "test-data-single.xml"
$(Get-Content $junitXml | Out-String) -match 'failures="(.*?)"'
diff --git a/ci/azure/windows-py27.yml b/ci/azure/windows-py27.yml
deleted file mode 100644
index 0d9aea816c4ad..0000000000000
--- a/ci/azure/windows-py27.yml
+++ /dev/null
@@ -1,58 +0,0 @@
-parameters:
- name: ''
- vmImage: ''
-
-jobs:
-- job: ${{ parameters.name }}
- pool:
- vmImage: ${{ parameters.vmImage }}
- strategy:
- maxParallel: 11
- matrix:
- py36_np121:
- ENV_FILE: ci/deps/azure-windows-27.yaml
- CONDA_PY: "27"
-
- steps:
- - task: CondaEnvironment@1
- inputs:
- updateConda: no
- packageSpecs: ''
-
- # Need to install VC 9.0 only for Python 2.7
- # Once we understand how to do tasks conditional on build matrix variables
- # we could merge this into azure-windows.yml
- - powershell: |
- $wc = New-Object net.webclient
- $wc.Downloadfile("https://download.microsoft.com/download/7/9/6/796EF2E4-801B-4FC4-AB28-B59FBF6D907B/VCForPython27.msi", "VCForPython27.msi")
- Start-Process "VCForPython27.msi" /qn -Wait
- displayName: 'Install VC 9.0'
-
- - script: |
- ci\\incremental\\setup_conda_environment.cmd
- displayName: 'Before Install'
- - script: |
- call activate pandas-dev
- ci\\incremental\\build.cmd
- displayName: 'Build'
- - script: |
- call activate pandas-dev
- pytest -m "not slow and not network" --junitxml=test-data.xml pandas -n 2 -r sxX --strict --durations=10 %*
- displayName: 'Test'
- - task: PublishTestResults@2
- inputs:
- testResultsFiles: 'test-data.xml'
- testRunTitle: 'Windows 27'
- - powershell: |
- $junitXml = "test-data.xml"
- $(Get-Content $junitXml | Out-String) -match 'failures="(.*?)"'
- if ($matches[1] -eq 0)
- {
- Write-Host "No test failures in test-data"
- }
- else
- {
- # note that this will produce $LASTEXITCODE=1
- Write-Error "$($matches[1]) tests failed"
- }
- displayName: Check for test failures
diff --git a/ci/azure/windows.yml b/ci/azure/windows.yml
index b69c210ca27ba..cece002024936 100644
--- a/ci/azure/windows.yml
+++ b/ci/azure/windows.yml
@@ -7,18 +7,28 @@ jobs:
pool:
vmImage: ${{ parameters.vmImage }}
strategy:
- maxParallel: 11
matrix:
py36_np14:
ENV_FILE: ci/deps/azure-windows-36.yaml
CONDA_PY: "36"
+ py27_np121:
+ ENV_FILE: ci/deps/azure-windows-27.yaml
+ CONDA_PY: "27"
+
steps:
- task: CondaEnvironment@1
inputs:
updateConda: no
packageSpecs: ''
+ - powershell: |
+ $wc = New-Object net.webclient
+ $wc.Downloadfile("https://download.microsoft.com/download/7/9/6/796EF2E4-801B-4FC4-AB28-B59FBF6D907B/VCForPython27.msi", "VCForPython27.msi")
+ Start-Process "VCForPython27.msi" /qn -Wait
+ displayName: 'Install VC 9.0 only for Python 2.7'
+ condition: eq(variables.CONDA_PY, '27')
+
- script: |
ci\\incremental\\setup_conda_environment.cmd
displayName: 'Before Install'
@@ -33,7 +43,7 @@ jobs:
- task: PublishTestResults@2
inputs:
testResultsFiles: 'test-data.xml'
- testRunTitle: 'Windows 36'
+ testRunTitle: 'Windows-$(CONDA_PY)'
- powershell: |
$junitXml = "test-data.xml"
$(Get-Content $junitXml | Out-String) -match 'failures="(.*?)"'
| Hey all, I'm a PM on Azure Pipelines. I noticed you had some comments in your `azure-pipelines.yml` regarding conditional steps and possibly adding a Linux flavor. I've taken the liberty of addressing those issues.
One thing I didn't completely finish: getting Linux to publish test results. The problem is that there are directories already sitting in `/tmp` that the publish task can't read. I think the right solution is to use `mktemp -d` and then use that directory in the test scripts. But that felt like a pretty invasive change that you'd want to discuss before I just powered through and did it :)
Also, the contents of `azure-Linux-35.yaml` are identical to macOS right now, because I wasn't sure what needed to differ between those environments. | https://api.github.com/repos/pandas-dev/pandas/pulls/23111 | 2018-10-12T15:46:52Z | 2018-12-11T15:08:57Z | 2018-12-11T15:08:57Z | 2018-12-11T15:08:57Z |
DEPR: deprecate fastpath keyword in Index constructors | diff --git a/doc/source/whatsnew/v0.24.0.txt b/doc/source/whatsnew/v0.24.0.txt
index 9b70dd4ba549f..4da7ebb3eff36 100644
--- a/doc/source/whatsnew/v0.24.0.txt
+++ b/doc/source/whatsnew/v0.24.0.txt
@@ -664,6 +664,7 @@ Deprecations
many ``Series``, ``Index`` or 1-dimensional ``np.ndarray``, or alternatively, only scalar values. (:issue:`21950`)
- :meth:`FrozenNDArray.searchsorted` has deprecated the ``v`` parameter in favor of ``value`` (:issue:`14645`)
- :func:`DatetimeIndex.shift` and :func:`PeriodIndex.shift` now accept ``periods`` argument instead of ``n`` for consistency with :func:`Index.shift` and :func:`Series.shift`. Using ``n`` throws a deprecation warning (:issue:`22458`, :issue:`22912`)
+- The ``fastpath`` keyword of the different Index constructors is deprecated (:issue:`23110`).
.. _whatsnew_0240.prior_deprecations:
diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index 51c84d6e28cb4..e5a379166e581 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -252,13 +252,17 @@ class Index(IndexOpsMixin, PandasObject):
str = CachedAccessor("str", StringMethods)
def __new__(cls, data=None, dtype=None, copy=False, name=None,
- fastpath=False, tupleize_cols=True, **kwargs):
+ fastpath=None, tupleize_cols=True, **kwargs):
if name is None and hasattr(data, 'name'):
name = data.name
- if fastpath:
- return cls._simple_new(data, name)
+ if fastpath is not None:
+ warnings.warn("The 'fastpath' keyword is deprecated, and will be "
+ "removed in a future version.",
+ FutureWarning, stacklevel=2)
+ if fastpath:
+ return cls._simple_new(data, name)
from .range import RangeIndex
diff --git a/pandas/core/indexes/category.py b/pandas/core/indexes/category.py
index 45703c220a4be..715b8eae12656 100644
--- a/pandas/core/indexes/category.py
+++ b/pandas/core/indexes/category.py
@@ -1,4 +1,5 @@
import operator
+import warnings
import numpy as np
from pandas._libs import index as libindex
@@ -87,10 +88,14 @@ class CategoricalIndex(Index, accessor.PandasDelegate):
_attributes = ['name']
def __new__(cls, data=None, categories=None, ordered=None, dtype=None,
- copy=False, name=None, fastpath=False):
-
- if fastpath:
- return cls._simple_new(data, name=name, dtype=dtype)
+ copy=False, name=None, fastpath=None):
+
+ if fastpath is not None:
+ warnings.warn("The 'fastpath' keyword is deprecated, and will be "
+ "removed in a future version.",
+ FutureWarning, stacklevel=2)
+ if fastpath:
+ return cls._simple_new(data, name=name, dtype=dtype)
if name is None and hasattr(data, 'name'):
name = data.name
diff --git a/pandas/core/indexes/numeric.py b/pandas/core/indexes/numeric.py
index 7f64fb744c682..5d9e99b19bee0 100644
--- a/pandas/core/indexes/numeric.py
+++ b/pandas/core/indexes/numeric.py
@@ -1,3 +1,5 @@
+import warnings
+
import numpy as np
from pandas._libs import (index as libindex,
join as libjoin)
@@ -35,10 +37,14 @@ class NumericIndex(Index):
_is_numeric_dtype = True
def __new__(cls, data=None, dtype=None, copy=False, name=None,
- fastpath=False):
-
- if fastpath:
- return cls._simple_new(data, name=name)
+ fastpath=None):
+
+ if fastpath is not None:
+ warnings.warn("The 'fastpath' keyword is deprecated, and will be "
+ "removed in a future version.",
+ FutureWarning, stacklevel=2)
+ if fastpath:
+ return cls._simple_new(data, name=name)
# is_scalar, generators handled in coerce_to_ndarray
data = cls._coerce_to_ndarray(data)
diff --git a/pandas/core/indexes/period.py b/pandas/core/indexes/period.py
index f151389b02463..5afa23e2b11e2 100644
--- a/pandas/core/indexes/period.py
+++ b/pandas/core/indexes/period.py
@@ -306,7 +306,7 @@ def __contains__(self, key):
@cache_readonly
def _int64index(self):
- return Int64Index(self.asi8, name=self.name, fastpath=True)
+ return Int64Index._simple_new(self.asi8, name=self.name)
@property
def values(self):
diff --git a/pandas/core/indexes/range.py b/pandas/core/indexes/range.py
index a0c3243ddbc3c..eae1f03f22de0 100644
--- a/pandas/core/indexes/range.py
+++ b/pandas/core/indexes/range.py
@@ -1,6 +1,7 @@
import operator
from datetime import timedelta
from sys import getsizeof
+import warnings
import numpy as np
@@ -68,10 +69,14 @@ class RangeIndex(Int64Index):
_engine_type = libindex.Int64Engine
def __new__(cls, start=None, stop=None, step=None,
- dtype=None, copy=False, name=None, fastpath=False):
+ dtype=None, copy=False, name=None, fastpath=None):
- if fastpath:
- return cls._simple_new(start, stop, step, name=name)
+ if fastpath is not None:
+ warnings.warn("The 'fastpath' keyword is deprecated, and will be "
+ "removed in a future version.",
+ FutureWarning, stacklevel=2)
+ if fastpath:
+ return cls._simple_new(start, stop, step, name=name)
cls._validate_dtype(dtype)
@@ -174,7 +179,7 @@ def _data(self):
@cache_readonly
def _int64index(self):
- return Int64Index(self._data, name=self.name, fastpath=True)
+ return Int64Index._simple_new(self._data, name=self.name)
def _get_data_as_items(self):
""" return a list of tuples of start, stop, step """
@@ -262,8 +267,8 @@ def tolist(self):
@Appender(_index_shared_docs['_shallow_copy'])
def _shallow_copy(self, values=None, **kwargs):
if values is None:
- return RangeIndex(name=self.name, fastpath=True,
- **dict(self._get_data_as_items()))
+ return RangeIndex._simple_new(
+ name=self.name, **dict(self._get_data_as_items()))
else:
kwargs.setdefault('name', self.name)
return self._int64index._shallow_copy(values, **kwargs)
@@ -273,8 +278,8 @@ def copy(self, name=None, deep=False, dtype=None, **kwargs):
self._validate_dtype(dtype)
if name is None:
name = self.name
- return RangeIndex(name=name, fastpath=True,
- **dict(self._get_data_as_items()))
+ return RangeIndex._simple_new(
+ name=name, **dict(self._get_data_as_items()))
def _minmax(self, meth):
no_steps = len(self) - 1
@@ -374,7 +379,7 @@ def intersection(self, other):
tmp_start = first._start + (second._start - first._start) * \
first._step // gcd * s
new_step = first._step * second._step // gcd
- new_index = RangeIndex(tmp_start, int_high, new_step, fastpath=True)
+ new_index = RangeIndex._simple_new(tmp_start, int_high, new_step)
# adjust index to limiting interval
new_index._start = new_index._min_fitting_element(int_low)
@@ -552,7 +557,7 @@ def __getitem__(self, key):
stop = self._start + self._step * stop
step = self._step * step
- return RangeIndex(start, stop, step, name=self.name, fastpath=True)
+ return RangeIndex._simple_new(start, stop, step, name=self.name)
# fall back to Int64Index
return super_getitem(key)
@@ -565,12 +570,12 @@ def __floordiv__(self, other):
start = self._start // other
step = self._step // other
stop = start + len(self) * step
- return RangeIndex(start, stop, step, name=self.name,
- fastpath=True)
+ return RangeIndex._simple_new(
+ start, stop, step, name=self.name)
if len(self) == 1:
start = self._start // other
- return RangeIndex(start, start + 1, 1, name=self.name,
- fastpath=True)
+ return RangeIndex._simple_new(
+ start, start + 1, 1, name=self.name)
return self._int64index // other
@classmethod
diff --git a/pandas/tests/indexes/test_base.py b/pandas/tests/indexes/test_base.py
index a753e925b0ed8..2eaa43e049f62 100644
--- a/pandas/tests/indexes/test_base.py
+++ b/pandas/tests/indexes/test_base.py
@@ -2530,3 +2530,32 @@ def test_index_subclass_constructor_wrong_kwargs(index_maker):
# GH #19348
with tm.assert_raises_regex(TypeError, 'unexpected keyword argument'):
index_maker(foo='bar')
+
+
+def test_deprecated_fastpath():
+
+ with tm.assert_produces_warning(FutureWarning):
+ idx = pd.Index(
+ np.array(['a', 'b'], dtype=object), name='test', fastpath=True)
+
+ expected = pd.Index(['a', 'b'], name='test')
+ tm.assert_index_equal(idx, expected)
+
+ with tm.assert_produces_warning(FutureWarning):
+ idx = pd.Int64Index(
+ np.array([1, 2, 3], dtype='int64'), name='test', fastpath=True)
+
+ expected = pd.Index([1, 2, 3], name='test', dtype='int64')
+ tm.assert_index_equal(idx, expected)
+
+ with tm.assert_produces_warning(FutureWarning):
+ idx = pd.RangeIndex(0, 5, 2, name='test', fastpath=True)
+
+ expected = pd.RangeIndex(0, 5, 2, name='test')
+ tm.assert_index_equal(idx, expected)
+
+ with tm.assert_produces_warning(FutureWarning):
+ idx = pd.CategoricalIndex(['a', 'b', 'c'], name='test', fastpath=True)
+
+ expected = pd.CategoricalIndex(['a', 'b', 'c'], name='test')
+ tm.assert_index_equal(idx, expected)
| From all that looking at Index constructors ...
Apparently it was hardly even used internally by the Index classes (except a little in RangeIndex).
Part of https://github.com/pandas-dev/pandas/issues/20110
| https://api.github.com/repos/pandas-dev/pandas/pulls/23110 | 2018-10-12T15:12:06Z | 2018-10-18T17:25:37Z | 2018-10-18T17:25:37Z | 2018-10-18T18:05:08Z |
Ignoring W504 (line break after binary operator) in linting | diff --git a/.pep8speaks.yml b/.pep8speaks.yml
index c3a85d595eb59..79101a59ac767 100644
--- a/.pep8speaks.yml
+++ b/.pep8speaks.yml
@@ -11,6 +11,7 @@ pycodestyle:
max-line-length: 79
ignore:
- W503, # line break before binary operator
+ - W504, # line break after binary operator
- E402, # module level import not at top of file
- E722, # do not use bare except
- E731, # do not assign a lambda expression, use a def
diff --git a/setup.cfg b/setup.cfg
index 29392d7f15345..1981d2209240e 100644
--- a/setup.cfg
+++ b/setup.cfg
@@ -15,6 +15,7 @@ parentdir_prefix = pandas-
max-line-length = 79
ignore =
W503, # line break before binary operator
+ W504, # line break after binary operator
E402, # module level import not at top of file
E722, # do not use bare except
E731, # do not assign a lambda expression, use a def
| Based on the discussion in #23073, `flake8` is not linting for `W504`. `pep8speaks` does, and future versions of `flake8` are likely to do so. So, I think we need to update our settings to ignore it explicitly, so `pep8speaks` is consistent with the CI, and updates to `flake8` do not start to generate unwanted errors.
| https://api.github.com/repos/pandas-dev/pandas/pulls/23101 | 2018-10-12T09:31:32Z | 2018-10-12T12:46:47Z | 2018-10-12T12:46:46Z | 2018-10-12T12:46:49Z |
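For context, W503 and W504 are complementary pycodestyle checks: any binary expression wrapped across lines violates one or the other, so a project has to ignore at least one of them. Both layouts are shown below with arbitrary values:

```python
first, second = 1, 2

# W503 would flag this layout (line break *before* the operator):
total_before = (first
                + second)

# W504 would flag this layout (line break *after* the operator):
total_after = (first +
               second)

# Both spellings are of course semantically identical.
assert total_before == total_after == 3
```

Ignoring both, as this PR does, leaves the choice of wrap style to the author.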
DOC: Improve the docstring of pd.Index.contains and closes PR #20211 | diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index 88510e84a29a5..0e393476bdf5f 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -3812,19 +3812,44 @@ def _is_memory_usage_qualified(self):
def is_type_compatible(self, kind):
return kind == self.inferred_type
- _index_shared_docs['__contains__'] = """
- Return a boolean if this key is IN the index.
+ _index_shared_docs['contains'] = """
+ Return a boolean indicating whether the provided key is in the index.
Parameters
----------
- key : object
+ key : label
+ The key to check if it is present in the index.
Returns
-------
- boolean
+ bool
+ Whether the key search is in the index.
+
+ See Also
+ --------
+ Index.isin : Returns an ndarray of boolean dtype indicating whether the
+ list-like key is in the index.
+
+ Examples
+ --------
+ >>> idx = pd.Index([1, 2, 3, 4])
+ >>> idx
+ Int64Index([1, 2, 3, 4], dtype='int64')
+
+ >>> idx.contains(2)
+ True
+ >>> idx.contains(6)
+ False
+
+ This is equivalent to:
+
+ >>> 2 in idx
+ True
+ >>> 6 in idx
+ False
"""
- @Appender(_index_shared_docs['__contains__'] % _index_doc_kwargs)
+ @Appender(_index_shared_docs['contains'] % _index_doc_kwargs)
def __contains__(self, key):
hash(key)
try:
@@ -3832,18 +3857,6 @@ def __contains__(self, key):
except (OverflowError, TypeError, ValueError):
return False
- _index_shared_docs['contains'] = """
- Return a boolean if this key is IN the index.
-
- Parameters
- ----------
- key : object
-
- Returns
- -------
- boolean
- """
-
@Appender(_index_shared_docs['contains'] % _index_doc_kwargs)
def contains(self, key):
hash(key)
diff --git a/pandas/core/indexes/category.py b/pandas/core/indexes/category.py
index 6d26894514a9c..9ce4949992f4c 100644
--- a/pandas/core/indexes/category.py
+++ b/pandas/core/indexes/category.py
@@ -359,7 +359,7 @@ def ordered(self):
def _reverse_indexer(self):
return self._data._reverse_indexer()
- @Appender(_index_shared_docs['__contains__'] % _index_doc_kwargs)
+ @Appender(_index_shared_docs['contains'] % _index_doc_kwargs)
def __contains__(self, key):
# if key is a NaN, check if any NaN is in self.
if isna(key):
diff --git a/pandas/core/indexes/datetimelike.py b/pandas/core/indexes/datetimelike.py
index 52127811b584a..dd2537c11a94c 100644
--- a/pandas/core/indexes/datetimelike.py
+++ b/pandas/core/indexes/datetimelike.py
@@ -150,7 +150,7 @@ def _box_values_as_index(self):
from pandas.core.index import Index
return Index(self._box_values(self.asi8), name=self.name, dtype=object)
- @Appender(_index_shared_docs['__contains__'] % _index_doc_kwargs)
+ @Appender(_index_shared_docs['contains'] % _index_doc_kwargs)
def __contains__(self, key):
try:
res = self.get_loc(key)
diff --git a/pandas/core/indexes/multi.py b/pandas/core/indexes/multi.py
index 5e26a3c6c439e..b8712f1678f4e 100644
--- a/pandas/core/indexes/multi.py
+++ b/pandas/core/indexes/multi.py
@@ -755,7 +755,7 @@ def _shallow_copy_with_infer(self, values, **kwargs):
**kwargs)
return self._shallow_copy(values, **kwargs)
- @Appender(_index_shared_docs['__contains__'] % _index_doc_kwargs)
+ @Appender(_index_shared_docs['contains'] % _index_doc_kwargs)
def __contains__(self, key):
hash(key)
try:
diff --git a/pandas/core/indexes/period.py b/pandas/core/indexes/period.py
index 3d69a0a84f7ae..727f819e69056 100644
--- a/pandas/core/indexes/period.py
+++ b/pandas/core/indexes/period.py
@@ -400,7 +400,7 @@ def _mpl_repr(self):
def _engine(self):
return self._engine_type(lambda: self, len(self))
- @Appender(_index_shared_docs['__contains__'])
+ @Appender(_index_shared_docs['contains'])
def __contains__(self, key):
if isinstance(key, Period):
if key.freq != self.freq:
| - [x] closes #20211
- [ ] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
This PR is an update to PR #20211, improving the `Index.contains` docstring.
| https://api.github.com/repos/pandas-dev/pandas/pulls/23100 | 2018-10-12T08:56:08Z | 2018-12-07T14:16:52Z | 2018-12-07T14:16:52Z | 2018-12-07T14:17:06Z |
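The diff above keeps `Index.contains` and `Index.__contains__` documented identically by applying `@Appender(_index_shared_docs['contains'] % _index_doc_kwargs)` to both. A minimal sketch of that docstring-sharing pattern is below; this `Appender` is a simplified stand-in for `pandas.util._decorators.Appender` (which also handles joining and indentation):

```python
class Appender:
    """Decorator that appends shared text to a function's docstring."""

    def __init__(self, addendum):
        self.addendum = addendum

    def __call__(self, func):
        func.__doc__ = (func.__doc__ or '') + self.addendum
        return func


_shared_docs = {
    'contains': ("Return a boolean indicating whether the provided key "
                 "is in the index."),
}


class Index:
    def __init__(self, values):
        self._values = set(values)

    @Appender(_shared_docs['contains'])
    def __contains__(self, key):
        return key in self._values

    @Appender(_shared_docs['contains'])
    def contains(self, key):
        # delegate to __contains__ so both entry points agree
        return key in self
```

Editing the single entry in `_shared_docs` updates both methods at once, which is exactly why the PR collapses the two separate `_index_shared_docs` entries into one.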
REF: de-duplicate datetimelike wrapping code | diff --git a/pandas/core/arrays/datetimelike.py b/pandas/core/arrays/datetimelike.py
index e4ace2bfe1509..73c0c3c5056bc 100644
--- a/pandas/core/arrays/datetimelike.py
+++ b/pandas/core/arrays/datetimelike.py
@@ -41,7 +41,7 @@
from pandas.util._decorators import deprecate_kwarg
-def _make_comparison_op(op, cls):
+def _make_comparison_op(cls, op):
# TODO: share code with indexes.base version? Main difference is that
# the block for MultiIndex was removed here.
def cmp_method(self, other):
@@ -740,6 +740,9 @@ def __isub__(self, other):
# --------------------------------------------------------------
# Comparison Methods
+ # Called by _add_comparison_methods defined in ExtensionOpsMixin
+ _create_comparison_method = classmethod(_make_comparison_op)
+
def _evaluate_compare(self, other, op):
"""
We have been called because a comparison between
@@ -773,21 +776,8 @@ def _evaluate_compare(self, other, op):
result[mask] = filler
return result
- # TODO: get this from ExtensionOpsMixin
- @classmethod
- def _add_comparison_methods(cls):
- """ add in comparison methods """
- # DatetimeArray and TimedeltaArray comparison methods will
- # call these as their super(...) methods
- cls.__eq__ = _make_comparison_op(operator.eq, cls)
- cls.__ne__ = _make_comparison_op(operator.ne, cls)
- cls.__lt__ = _make_comparison_op(operator.lt, cls)
- cls.__gt__ = _make_comparison_op(operator.gt, cls)
- cls.__le__ = _make_comparison_op(operator.le, cls)
- cls.__ge__ = _make_comparison_op(operator.ge, cls)
-
-
-DatetimeLikeArrayMixin._add_comparison_methods()
+
+DatetimeLikeArrayMixin._add_comparison_ops()
# -------------------------------------------------------------------
diff --git a/pandas/core/indexes/datetimelike.py b/pandas/core/indexes/datetimelike.py
index 1ec30ecbb3a3b..8e919ba3599fc 100644
--- a/pandas/core/indexes/datetimelike.py
+++ b/pandas/core/indexes/datetimelike.py
@@ -744,3 +744,60 @@ def wrap_arithmetic_op(self, other, result):
res_name = ops.get_op_result_name(self, other)
result.name = res_name
return result
+
+
+def wrap_array_method(method, pin_name=False):
+ """
+ Wrap a DatetimeArray/TimedeltaArray/PeriodArray method so that the
+ returned object is an Index subclass instead of ndarray or ExtensionArray
+ subclass.
+
+ Parameters
+ ----------
+ method : method of Datetime/Timedelta/Period Array class
+ pin_name : bool
+ Whether to set name=self.name on the output Index
+
+ Returns
+ -------
+ method
+ """
+ def index_method(self, *args, **kwargs):
+ result = method(self, *args, **kwargs)
+
+ # Index.__new__ will choose the appropriate subclass to return
+ result = Index(result)
+ if pin_name:
+ result.name = self.name
+ return result
+
+ index_method.__name__ = method.__name__
+ index_method.__doc__ = method.__doc__
+ return index_method
+
+
+def wrap_field_accessor(prop):
+ """
+ Wrap a DatetimeArray/TimedeltaArray/PeriodArray array-returning property
+ to return an Index subclass instead of ndarray or ExtensionArray subclass.
+
+ Parameters
+ ----------
+ prop : property
+
+ Returns
+ -------
+ new_prop : property
+ """
+ fget = prop.fget
+
+ def f(self):
+ result = fget(self)
+ if is_bool_dtype(result):
+ # return numpy array b/c there is no BoolIndex
+ return result
+ return Index(result, name=self.name)
+
+ f.__name__ = fget.__name__
+ f.__doc__ = fget.__doc__
+ return property(f)
diff --git a/pandas/core/indexes/datetimes.py b/pandas/core/indexes/datetimes.py
index e40ceadc1a083..87009d692689a 100644
--- a/pandas/core/indexes/datetimes.py
+++ b/pandas/core/indexes/datetimes.py
@@ -20,7 +20,6 @@
is_integer_dtype,
is_datetime64_ns_dtype,
is_period_dtype,
- is_bool_dtype,
is_string_like,
is_list_like,
is_scalar,
@@ -34,11 +33,12 @@
from pandas.core.arrays import datetimelike as dtl
from pandas.core.indexes.base import Index, _index_shared_docs
-from pandas.core.indexes.numeric import Int64Index, Float64Index
+from pandas.core.indexes.numeric import Int64Index
import pandas.compat as compat
from pandas.tseries.frequencies import to_offset, Resolution
from pandas.core.indexes.datetimelike import (
- DatelikeOps, TimelikeOps, DatetimeIndexOpsMixin)
+ DatelikeOps, TimelikeOps, DatetimeIndexOpsMixin,
+ wrap_field_accessor, wrap_array_method)
from pandas.tseries.offsets import (
generate_range, CDay, prefix_mapping)
@@ -53,49 +53,6 @@
from pandas._libs.tslibs import (timezones, conversion, fields, parsing,
ccalendar)
-# -------- some conversion wrapper functions
-
-
-def _wrap_field_accessor(name):
- fget = getattr(DatetimeArrayMixin, name).fget
-
- def f(self):
- result = fget(self)
- if is_bool_dtype(result):
- return result
- return Index(result, name=self.name)
-
- f.__name__ = name
- f.__doc__ = fget.__doc__
- return property(f)
-
-
-def _wrap_in_index(name):
- meth = getattr(DatetimeArrayMixin, name)
-
- def func(self, *args, **kwargs):
- result = meth(self, *args, **kwargs)
- return Index(result, name=self.name)
-
- func.__doc__ = meth.__doc__
- func.__name__ = name
- return func
-
-
-def _dt_index_cmp(cls, op):
- """
- Wrap comparison operations to convert datetime-like to datetime64
- """
- opname = '__{name}__'.format(name=op.__name__)
-
- def wrapper(self, other):
- result = getattr(DatetimeArrayMixin, opname)(self, other)
- if is_bool_dtype(result):
- return result
- return Index(result)
-
- return compat.set_function_name(wrapper, opname, cls)
-
def _new_DatetimeIndex(cls, d):
""" This is called upon unpickling, rather than the default which doesn't
@@ -233,16 +190,6 @@ def _join_i8_wrapper(joinf, **kwargs):
_left_indexer_unique = _join_i8_wrapper(
libjoin.left_join_indexer_unique_int64, with_indexers=False)
- @classmethod
- def _add_comparison_methods(cls):
- """ add in comparison methods """
- cls.__eq__ = _dt_index_cmp(cls, operator.eq)
- cls.__ne__ = _dt_index_cmp(cls, operator.ne)
- cls.__lt__ = _dt_index_cmp(cls, operator.lt)
- cls.__gt__ = _dt_index_cmp(cls, operator.gt)
- cls.__le__ = _dt_index_cmp(cls, operator.le)
- cls.__ge__ = _dt_index_cmp(cls, operator.ge)
-
_engine_type = libindex.DatetimeEngine
tz = None
@@ -1273,38 +1220,38 @@ def slice_indexer(self, start=None, end=None, step=None, kind=None):
else:
raise
- year = _wrap_field_accessor('year')
- month = _wrap_field_accessor('month')
- day = _wrap_field_accessor('day')
- hour = _wrap_field_accessor('hour')
- minute = _wrap_field_accessor('minute')
- second = _wrap_field_accessor('second')
- microsecond = _wrap_field_accessor('microsecond')
- nanosecond = _wrap_field_accessor('nanosecond')
- weekofyear = _wrap_field_accessor('weekofyear')
+ year = wrap_field_accessor(DatetimeArrayMixin.year)
+ month = wrap_field_accessor(DatetimeArrayMixin.month)
+ day = wrap_field_accessor(DatetimeArrayMixin.day)
+ hour = wrap_field_accessor(DatetimeArrayMixin.hour)
+ minute = wrap_field_accessor(DatetimeArrayMixin.minute)
+ second = wrap_field_accessor(DatetimeArrayMixin.second)
+ microsecond = wrap_field_accessor(DatetimeArrayMixin.microsecond)
+ nanosecond = wrap_field_accessor(DatetimeArrayMixin.nanosecond)
+ weekofyear = wrap_field_accessor(DatetimeArrayMixin.weekofyear)
week = weekofyear
- dayofweek = _wrap_field_accessor('dayofweek')
+ dayofweek = wrap_field_accessor(DatetimeArrayMixin.dayofweek)
weekday = dayofweek
- weekday_name = _wrap_field_accessor('weekday_name')
+ weekday_name = wrap_field_accessor(DatetimeArrayMixin.weekday_name)
- dayofyear = _wrap_field_accessor('dayofyear')
- quarter = _wrap_field_accessor('quarter')
- days_in_month = _wrap_field_accessor('days_in_month')
+ dayofyear = wrap_field_accessor(DatetimeArrayMixin.dayofyear)
+ quarter = wrap_field_accessor(DatetimeArrayMixin.quarter)
+ days_in_month = wrap_field_accessor(DatetimeArrayMixin.days_in_month)
daysinmonth = days_in_month
- is_month_start = _wrap_field_accessor('is_month_start')
- is_month_end = _wrap_field_accessor('is_month_end')
- is_quarter_start = _wrap_field_accessor('is_quarter_start')
- is_quarter_end = _wrap_field_accessor('is_quarter_end')
- is_year_start = _wrap_field_accessor('is_year_start')
- is_year_end = _wrap_field_accessor('is_year_end')
- is_leap_year = _wrap_field_accessor('is_leap_year')
-
- @Appender(DatetimeArrayMixin.normalize.__doc__)
- def normalize(self):
- result = DatetimeArrayMixin.normalize(self)
- result.name = self.name
- return result
+ is_month_start = wrap_field_accessor(DatetimeArrayMixin.is_month_start)
+ is_month_end = wrap_field_accessor(DatetimeArrayMixin.is_month_end)
+ is_quarter_start = wrap_field_accessor(DatetimeArrayMixin.is_quarter_start)
+ is_quarter_end = wrap_field_accessor(DatetimeArrayMixin.is_quarter_end)
+ is_year_start = wrap_field_accessor(DatetimeArrayMixin.is_year_start)
+ is_year_end = wrap_field_accessor(DatetimeArrayMixin.is_year_end)
+ is_leap_year = wrap_field_accessor(DatetimeArrayMixin.is_leap_year)
+
+ normalize = wrap_array_method(DatetimeArrayMixin.normalize, True)
+ to_julian_date = wrap_array_method(DatetimeArrayMixin.to_julian_date,
+ False)
+ month_name = wrap_array_method(DatetimeArrayMixin.month_name, True)
+ day_name = wrap_array_method(DatetimeArrayMixin.day_name, True)
@Substitution(klass='DatetimeIndex')
@Appender(_shared_docs['searchsorted'])
@@ -1492,20 +1439,8 @@ def indexer_between_time(self, start_time, end_time, include_start=True,
return mask.nonzero()[0]
- def to_julian_date(self):
- """
- Convert DatetimeIndex to Float64Index of Julian Dates.
- 0 Julian date is noon January 1, 4713 BC.
- http://en.wikipedia.org/wiki/Julian_day
- """
- result = DatetimeArrayMixin.to_julian_date(self)
- return Float64Index(result)
-
- month_name = _wrap_in_index("month_name")
- day_name = _wrap_in_index("day_name")
-
-DatetimeIndex._add_comparison_methods()
+DatetimeIndex._add_comparison_ops()
DatetimeIndex._add_numeric_methods_disabled()
DatetimeIndex._add_logical_methods_disabled()
DatetimeIndex._add_datetimelike_methods()
diff --git a/pandas/core/indexes/period.py b/pandas/core/indexes/period.py
index f151389b02463..bfb69b2440286 100644
--- a/pandas/core/indexes/period.py
+++ b/pandas/core/indexes/period.py
@@ -20,7 +20,9 @@
from pandas.tseries.frequencies import get_freq_code as _gfc
from pandas.core.indexes.datetimes import DatetimeIndex, Int64Index, Index
-from pandas.core.indexes.datetimelike import DatelikeOps, DatetimeIndexOpsMixin
+from pandas.core.indexes.datetimelike import (
+ DatelikeOps, DatetimeIndexOpsMixin,
+ wrap_array_method, wrap_field_accessor)
from pandas.core.tools.datetimes import parse_time_string
from pandas._libs.lib import infer_dtype
@@ -43,19 +45,6 @@
_index_doc_kwargs.update(
dict(target_klass='PeriodIndex or list of Periods'))
-
-def _wrap_field_accessor(name):
- fget = getattr(PeriodArrayMixin, name).fget
-
- def f(self):
- result = fget(self)
- return Index(result, name=self.name)
-
- f.__name__ = name
- f.__doc__ = fget.__doc__
- return property(f)
-
-
# --- Period index sketch
@@ -431,22 +420,24 @@ def is_full(self):
values = self.asi8
return ((values[1:] - values[:-1]) < 2).all()
- year = _wrap_field_accessor('year')
- month = _wrap_field_accessor('month')
- day = _wrap_field_accessor('day')
- hour = _wrap_field_accessor('hour')
- minute = _wrap_field_accessor('minute')
- second = _wrap_field_accessor('second')
- weekofyear = _wrap_field_accessor('week')
+ year = wrap_field_accessor(PeriodArrayMixin.year)
+ month = wrap_field_accessor(PeriodArrayMixin.month)
+ day = wrap_field_accessor(PeriodArrayMixin.day)
+ hour = wrap_field_accessor(PeriodArrayMixin.hour)
+ minute = wrap_field_accessor(PeriodArrayMixin.minute)
+ second = wrap_field_accessor(PeriodArrayMixin.second)
+ weekofyear = wrap_field_accessor(PeriodArrayMixin.week)
week = weekofyear
- dayofweek = _wrap_field_accessor('dayofweek')
+ dayofweek = wrap_field_accessor(PeriodArrayMixin.dayofweek)
weekday = dayofweek
- dayofyear = day_of_year = _wrap_field_accessor('dayofyear')
- quarter = _wrap_field_accessor('quarter')
- qyear = _wrap_field_accessor('qyear')
- days_in_month = _wrap_field_accessor('days_in_month')
+ dayofyear = day_of_year = wrap_field_accessor(PeriodArrayMixin.dayofyear)
+ quarter = wrap_field_accessor(PeriodArrayMixin.quarter)
+ qyear = wrap_field_accessor(PeriodArrayMixin.qyear)
+ days_in_month = wrap_field_accessor(PeriodArrayMixin.days_in_month)
daysinmonth = days_in_month
+ to_timestamp = wrap_array_method(PeriodArrayMixin.to_timestamp, True)
+
@property
@Appender(PeriodArrayMixin.start_time.__doc__)
def start_time(self):
@@ -461,11 +452,6 @@ def _mpl_repr(self):
# how to represent ourselves to matplotlib
return self.astype(object).values
- @Appender(PeriodArrayMixin.to_timestamp.__doc__)
- def to_timestamp(self, freq=None, how='start'):
- result = PeriodArrayMixin.to_timestamp(self, freq=freq, how=how)
- return DatetimeIndex(result, name=self.name)
-
@property
def inferred_type(self):
# b/c data is represented as ints make sure we can't have ambiguous
diff --git a/pandas/core/indexes/timedeltas.py b/pandas/core/indexes/timedeltas.py
index ee604f44b98e0..56b6dc7051d9f 100644
--- a/pandas/core/indexes/timedeltas.py
+++ b/pandas/core/indexes/timedeltas.py
@@ -1,5 +1,4 @@
""" implement the TimedeltaIndex """
-import operator
from datetime import datetime
import numpy as np
@@ -7,7 +6,6 @@
_TD_DTYPE,
is_integer,
is_float,
- is_bool_dtype,
is_list_like,
is_scalar,
is_timedelta64_dtype,
@@ -31,41 +29,14 @@
import pandas.core.dtypes.concat as _concat
from pandas.util._decorators import Appender, Substitution
from pandas.core.indexes.datetimelike import (
- TimelikeOps, DatetimeIndexOpsMixin, wrap_arithmetic_op)
+ TimelikeOps, DatetimeIndexOpsMixin, wrap_arithmetic_op,
+ wrap_array_method, wrap_field_accessor)
from pandas.core.tools.timedeltas import (
to_timedelta, _coerce_scalar_to_timedelta_type)
from pandas._libs import (lib, index as libindex,
join as libjoin, Timedelta, NaT)
-def _wrap_field_accessor(name):
- fget = getattr(TimedeltaArrayMixin, name).fget
-
- def f(self):
- result = fget(self)
- return Index(result, name=self.name)
-
- f.__name__ = name
- f.__doc__ = fget.__doc__
- return property(f)
-
-
-def _td_index_cmp(cls, op):
- """
- Wrap comparison operations to convert timedelta-like to timedelta64
- """
- opname = '__{name}__'.format(name=op.__name__)
-
- def wrapper(self, other):
- result = getattr(TimedeltaArrayMixin, opname)(self, other)
- if is_bool_dtype(result):
- # support of bool dtype indexers
- return result
- return Index(result)
-
- return compat.set_function_name(wrapper, opname, cls)
-
-
class TimedeltaIndex(TimedeltaArrayMixin, DatetimeIndexOpsMixin,
TimelikeOps, Int64Index):
"""
@@ -153,16 +124,6 @@ def _join_i8_wrapper(joinf, **kwargs):
_datetimelike_methods = ["to_pytimedelta", "total_seconds",
"round", "floor", "ceil"]
- @classmethod
- def _add_comparison_methods(cls):
- """ add in comparison methods """
- cls.__eq__ = _td_index_cmp(cls, operator.eq)
- cls.__ne__ = _td_index_cmp(cls, operator.ne)
- cls.__lt__ = _td_index_cmp(cls, operator.lt)
- cls.__gt__ = _td_index_cmp(cls, operator.gt)
- cls.__le__ = _td_index_cmp(cls, operator.le)
- cls.__ge__ = _td_index_cmp(cls, operator.ge)
-
_engine_type = libindex.TimedeltaEngine
_comparables = ['name', 'freq']
@@ -269,15 +230,12 @@ def _format_native_types(self, na_rep=u'NaT', date_format=None, **kwargs):
nat_rep=na_rep,
justify='all').get_result()
- days = _wrap_field_accessor("days")
- seconds = _wrap_field_accessor("seconds")
- microseconds = _wrap_field_accessor("microseconds")
- nanoseconds = _wrap_field_accessor("nanoseconds")
+ days = wrap_field_accessor(TimedeltaArrayMixin.days)
+ seconds = wrap_field_accessor(TimedeltaArrayMixin.seconds)
+ microseconds = wrap_field_accessor(TimedeltaArrayMixin.microseconds)
+ nanoseconds = wrap_field_accessor(TimedeltaArrayMixin.nanoseconds)
- @Appender(TimedeltaArrayMixin.total_seconds.__doc__)
- def total_seconds(self):
- result = TimedeltaArrayMixin.total_seconds(self)
- return Index(result, name=self.name)
+ total_seconds = wrap_array_method(TimedeltaArrayMixin.total_seconds, True)
@Appender(_index_shared_docs['astype'])
def astype(self, dtype, copy=True):
@@ -708,7 +666,7 @@ def delete(self, loc):
return TimedeltaIndex(new_tds, name=self.name, freq=freq)
-TimedeltaIndex._add_comparison_methods()
+TimedeltaIndex._add_comparison_ops()
TimedeltaIndex._add_numeric_methods()
TimedeltaIndex._add_logical_methods_disabled()
TimedeltaIndex._add_datetimelike_methods()
| Should be orthogonal to other outstanding datetimelike index/array PRs | https://api.github.com/repos/pandas-dev/pandas/pulls/23099 | 2018-10-12T00:42:06Z | 2018-10-12T12:41:39Z | 2018-10-12T12:41:39Z | 2018-10-12T15:19:51Z |
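The diff above replaces per-index `_wrap_field_accessor(name)` helpers (which looked attributes up by string) with a shared `wrap_field_accessor(prop)` that takes the property object itself and re-exposes it on the Index class. A minimal, self-contained sketch of that wrapping pattern, using hypothetical `ToyArray`/`ToyIndex` classes in place of `DatetimeArrayMixin`/`DatetimeIndex` and a tuple in place of the real `Index(...)` wrapping:

```python
def wrap_field_accessor(prop):
    """Re-wrap an array-returning property so the subclass converts its result.

    Mirrors the pandas helper in the diff: grab the original fget, call it,
    then wrap the raw result before returning it to the caller.
    """
    fget = prop.fget

    def f(self):
        result = fget(self)
        # Stand-in for the real `Index(result, name=self.name)` conversion.
        return ("wrapped", self.name, result)

    # Preserve introspection metadata, as the pandas version does.
    f.__name__ = fget.__name__
    f.__doc__ = fget.__doc__
    return property(f)


class ToyArray:
    """Hypothetical array mixin with a raw, list-returning property."""

    def __init__(self, data, name=None):
        self._data = data
        self.name = name

    @property
    def doubled(self):
        """Return each element doubled."""
        return [x * 2 for x in self._data]


class ToyIndex(ToyArray):
    # One-line re-export per accessor, mirroring e.g.
    #   year = wrap_field_accessor(DatetimeArrayMixin.year)
    doubled = wrap_field_accessor(ToyArray.doubled)
```

The advantage over the string-based `_wrap_field_accessor('year')` form is that the wrapped property is named explicitly at the definition site, so typos fail at import time rather than at first attribute access.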
Add isort config | diff --git a/ci/code_checks.sh b/ci/code_checks.sh
index f2188e6bb56b8..00f17d5c91537 100755
--- a/ci/code_checks.sh
+++ b/ci/code_checks.sh
@@ -56,6 +56,11 @@ if [[ -z "$CHECK" || "$CHECK" == "lint" ]]; then
cpplint --quiet --extensions=c,h --headers=h --recursive --filter=-readability/casting,-runtime/int,-build/include_subdir pandas/_libs/src/*.h pandas/_libs/src/parser pandas/_libs/ujson pandas/_libs/tslibs/src/datetime
RET=$(($RET + $?)) ; echo $MSG "DONE"
+ # Imports - Check formatting using isort see setup.cfg for settings
+ MSG='Check import format using isort ' ; echo $MSG
+ isort --recursive --check-only pandas
+ RET=$(($RET + $?)) ; echo $MSG "DONE"
+
fi
### PATTERNS ###
diff --git a/ci/environment-dev.yaml b/ci/environment-dev.yaml
index f3323face4144..3e69b1f725b24 100644
--- a/ci/environment-dev.yaml
+++ b/ci/environment-dev.yaml
@@ -8,6 +8,7 @@ dependencies:
- flake8
- flake8-comprehensions
- hypothesis>=3.58.0
+ - isort
- moto
- pytest>=3.6
- python-dateutil>=2.5.0
diff --git a/ci/requirements_dev.txt b/ci/requirements_dev.txt
index 68fffe5d0df09..6a8b8d64d943b 100644
--- a/ci/requirements_dev.txt
+++ b/ci/requirements_dev.txt
@@ -5,6 +5,7 @@ NumPy
flake8
flake8-comprehensions
hypothesis>=3.58.0
+isort
moto
pytest>=3.6
python-dateutil>=2.5.0
diff --git a/ci/travis-36.yaml b/ci/travis-36.yaml
index 90c892709d9f6..7aa27beacf976 100644
--- a/ci/travis-36.yaml
+++ b/ci/travis-36.yaml
@@ -14,6 +14,7 @@ dependencies:
- geopandas
- html5lib
- ipython
+ - isort
- jinja2
- lxml
- matplotlib
diff --git a/pandas/core/indexes/range.py b/pandas/core/indexes/range.py
index a0c3243ddbc3c..ef19e9c88b072 100644
--- a/pandas/core/indexes/range.py
+++ b/pandas/core/indexes/range.py
@@ -4,27 +4,21 @@
import numpy as np
+import pandas.core.common as com
+import pandas.core.indexes.base as ibase
from pandas import compat
-
from pandas._libs import index as libindex
-from pandas.util._decorators import Appender, cache_readonly
-
-from pandas.compat import lrange, range, get_range_parameters
+from pandas.compat import get_range_parameters, lrange, range
from pandas.compat.numpy import function as nv
-
+from pandas.core import ops
+from pandas.core.dtypes import concat as _concat
from pandas.core.dtypes.common import (
- is_integer,
- is_scalar,
- is_timedelta64_dtype,
- is_int64_dtype)
+ is_int64_dtype, is_integer, is_scalar, is_timedelta64_dtype
+)
from pandas.core.dtypes.generic import ABCSeries, ABCTimedeltaIndex
-from pandas.core.dtypes import concat as _concat
-
-import pandas.core.common as com
-import pandas.core.indexes.base as ibase
-from pandas.core import ops
from pandas.core.indexes.base import Index, _index_shared_docs
from pandas.core.indexes.numeric import Int64Index
+from pandas.util._decorators import Appender, cache_readonly
class RangeIndex(Int64Index):
diff --git a/pandas/core/series.py b/pandas/core/series.py
index b4566ebd36d13..7ebbe0dfb4bb7 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -3,87 +3,67 @@
"""
from __future__ import division
-# pylint: disable=E1101,E1103
-# pylint: disable=W0703,W0622,W0613,W0201
-
import warnings
from textwrap import dedent
import numpy as np
import numpy.ma as ma
+import pandas.core.algorithms as algorithms
+import pandas.core.common as com
+import pandas.core.indexes.base as ibase
+import pandas.core.nanops as nanops
+import pandas.core.ops as ops
+import pandas.io.formats.format as fmt
+import pandas.plotting._core as gfx
+from pandas import compat
+from pandas._libs import iNaT, index as libindex, lib, tslibs
+from pandas.compat import (
+ PY36, OrderedDict, StringIO, get_range_parameters, range, u, zip
+)
+from pandas.compat.numpy import function as nv
+from pandas.core import base, generic
from pandas.core.accessor import CachedAccessor
from pandas.core.arrays import ExtensionArray
+from pandas.core.arrays.categorical import Categorical, CategoricalAccessor
+from pandas.core.config import get_option
+from pandas.core.dtypes.cast import (
+ construct_1d_arraylike_from_scalar, construct_1d_ndarray_preserving_na,
+ construct_1d_object_array_from_listlike, infer_dtype_from_scalar,
+ maybe_cast_to_datetime, maybe_cast_to_integer_array, maybe_castable,
+ maybe_convert_platform, maybe_upcast
+)
from pandas.core.dtypes.common import (
- is_categorical_dtype,
- is_string_like,
- is_bool,
- is_integer, is_integer_dtype,
- is_float_dtype,
- is_extension_type,
- is_extension_array_dtype,
- is_datetimelike,
- is_datetime64tz_dtype,
- is_timedelta64_dtype,
- is_object_dtype,
- is_list_like,
- is_hashable,
- is_iterator,
- is_dict_like,
- is_scalar,
- _is_unorderable_exception,
- ensure_platform_int,
- pandas_dtype)
+ _is_unorderable_exception, ensure_platform_int, is_bool,
+ is_categorical_dtype, is_datetime64tz_dtype, is_datetimelike, is_dict_like,
+ is_extension_array_dtype, is_extension_type, is_float_dtype, is_hashable,
+ is_integer, is_integer_dtype, is_iterator, is_list_like, is_object_dtype,
+ is_scalar, is_string_like, is_timedelta64_dtype, pandas_dtype
+)
from pandas.core.dtypes.generic import (
- ABCSparseArray, ABCDataFrame, ABCIndexClass,
- ABCSeries, ABCSparseSeries)
-from pandas.core.dtypes.cast import (
- maybe_upcast, infer_dtype_from_scalar,
- maybe_convert_platform,
- maybe_cast_to_datetime, maybe_castable,
- construct_1d_arraylike_from_scalar,
- construct_1d_ndarray_preserving_na,
- construct_1d_object_array_from_listlike,
- maybe_cast_to_integer_array)
+ ABCDataFrame, ABCIndexClass, ABCSeries, ABCSparseArray, ABCSparseSeries
+)
from pandas.core.dtypes.missing import (
- isna,
- notna,
- remove_na_arraylike,
- na_value_for_dtype)
-
-from pandas.core.index import (Index, MultiIndex, InvalidIndexError,
- Float64Index, ensure_index)
-from pandas.core.indexing import check_bool_indexer, maybe_convert_indices
-from pandas.core import generic, base
-from pandas.core.internals import SingleBlockManager
-from pandas.core.arrays.categorical import Categorical, CategoricalAccessor
+ isna, na_value_for_dtype, notna, remove_na_arraylike
+)
+from pandas.core.index import (
+ Float64Index, Index, InvalidIndexError, MultiIndex, ensure_index
+)
from pandas.core.indexes.accessors import CombinedDatetimelikeProperties
from pandas.core.indexes.datetimes import DatetimeIndex
-from pandas.core.indexes.timedeltas import TimedeltaIndex
from pandas.core.indexes.period import PeriodIndex
-from pandas import compat
+from pandas.core.indexes.timedeltas import TimedeltaIndex
+from pandas.core.indexing import check_bool_indexer, maybe_convert_indices
+from pandas.core.internals import SingleBlockManager
+from pandas.core.strings import StringMethods
+from pandas.core.tools.datetimes import to_datetime
from pandas.io.formats.terminal import get_terminal_size
-from pandas.compat import (
- zip, u, OrderedDict, StringIO, range, get_range_parameters, PY36)
-from pandas.compat.numpy import function as nv
-
-import pandas.core.ops as ops
-import pandas.core.algorithms as algorithms
-
-import pandas.core.common as com
-import pandas.core.nanops as nanops
-import pandas.core.indexes.base as ibase
-
-import pandas.io.formats.format as fmt
-from pandas.util._decorators import Appender, deprecate, Substitution
+from pandas.util._decorators import Appender, Substitution, deprecate
from pandas.util._validators import validate_bool_kwarg
-from pandas._libs import index as libindex, tslibs, lib, iNaT
-from pandas.core.config import get_option
-from pandas.core.strings import StringMethods
-from pandas.core.tools.datetimes import to_datetime
+# pylint: disable=E1101,E1103
+# pylint: disable=W0703,W0622,W0613,W0201
-import pandas.plotting._core as gfx
__all__ = ['Series']
diff --git a/pandas/io/packers.py b/pandas/io/packers.py
index 135f9e89eaaef..73b9e1dfc24e7 100644
--- a/pandas/io/packers.py
+++ b/pandas/io/packers.py
@@ -40,39 +40,33 @@
import os
import warnings
-from datetime import datetime, date, timedelta
+from datetime import date, datetime, timedelta
from textwrap import dedent
import numpy as np
from dateutil.parser import parse
-from pandas import compat
-from pandas import (Timestamp, Period, Series, DataFrame, # noqa:F401
- Index, MultiIndex, Float64Index, Int64Index,
- Panel, RangeIndex, PeriodIndex, DatetimeIndex, NaT,
- Categorical, CategoricalIndex, IntervalIndex, Interval,
- TimedeltaIndex)
-
-from pandas.util._move import (
- BadMove as _BadMove,
- move_into_mutable_buffer as _move_into_mutable_buffer,
+from pandas import ( # noqa:F401
+ Categorical, CategoricalIndex, DataFrame, DatetimeIndex, Float64Index,
+ Index, Int64Index, Interval, IntervalIndex, MultiIndex, NaT, Panel, Period,
+ PeriodIndex, RangeIndex, Series, TimedeltaIndex, Timestamp, compat
)
-from pandas.errors import PerformanceWarning
-
from pandas.compat import u, u_safe
-
-from pandas.core.dtypes.common import (
- is_categorical_dtype, is_object_dtype,
- needs_i8_conversion, pandas_dtype)
-
from pandas.core import internals
from pandas.core.arrays import IntervalArray
-from pandas.core.generic import NDFrame
-from pandas.core.internals import BlockManager, make_block, _safe_reshape
-from pandas.core.sparse.api import SparseSeries, SparseDataFrame
from pandas.core.arrays.sparse import BlockIndex, IntIndex
-from pandas.io.common import get_filepath_or_buffer, _stringify_path
-from pandas.io.msgpack import Unpacker as _Unpacker, Packer as _Packer, ExtType
+from pandas.core.dtypes.common import (
+ is_categorical_dtype, is_object_dtype, needs_i8_conversion, pandas_dtype
+)
+from pandas.core.generic import NDFrame
+from pandas.core.internals import BlockManager, _safe_reshape, make_block
+from pandas.core.sparse.api import SparseDataFrame, SparseSeries
+from pandas.errors import PerformanceWarning
+from pandas.io.common import _stringify_path, get_filepath_or_buffer
+from pandas.io.msgpack import ExtType, Packer as _Packer, Unpacker as _Unpacker
+from pandas.util._move import (
+ BadMove as _BadMove, move_into_mutable_buffer as _move_into_mutable_buffer
+)
# check which compression libs we have installed
try:
diff --git a/pandas/io/pytables.py b/pandas/io/pytables.py
index 9cceff30c9e0e..f9595af711621 100644
--- a/pandas/io/pytables.py
+++ b/pandas/io/pytables.py
@@ -10,48 +10,41 @@
import re
import time
import warnings
-
-from datetime import datetime, date
+from datetime import date, datetime
from distutils.version import LooseVersion
import numpy as np
-from pandas import (Series, DataFrame, Panel, Index,
- MultiIndex, Int64Index, isna, concat, to_datetime,
- SparseSeries, SparseDataFrame, PeriodIndex,
- DatetimeIndex, TimedeltaIndex)
-from pandas import compat
+import pandas.core.common as com
+from pandas import (
+ DataFrame, DatetimeIndex, Index, Int64Index, MultiIndex, Panel,
+ PeriodIndex, Series, SparseDataFrame, SparseSeries, TimedeltaIndex, compat,
+ concat, isna, to_datetime
+)
from pandas._libs import algos, lib, writers as libwriters
from pandas._libs.tslibs import timezones
-
-from pandas.errors import PerformanceWarning
-from pandas.compat import PY3, range, lrange, string_types, filter
-
-from pandas.core.dtypes.common import (
- is_list_like,
- is_categorical_dtype,
- is_timedelta64_dtype,
- is_datetime64tz_dtype,
- is_datetime64_dtype,
- ensure_object,
- ensure_int64,
- ensure_platform_int)
-from pandas.core.dtypes.missing import array_equivalent
-
+from pandas.compat import PY3, filter, lrange, range, string_types
from pandas.core import config
-import pandas.core.common as com
from pandas.core.algorithms import match, unique
-from pandas.core.arrays.categorical import (Categorical,
- _factorize_from_iterables)
+from pandas.core.arrays.categorical import (
+ Categorical, _factorize_from_iterables
+)
+from pandas.core.arrays.sparse import BlockIndex, IntIndex
from pandas.core.base import StringMixin
from pandas.core.computation.pytables import Expr, maybe_expression
from pandas.core.config import get_option
+from pandas.core.dtypes.common import (
+ ensure_int64, ensure_object, ensure_platform_int, is_categorical_dtype,
+ is_datetime64_dtype, is_datetime64tz_dtype, is_list_like,
+ is_timedelta64_dtype
+)
+from pandas.core.dtypes.missing import array_equivalent
from pandas.core.index import ensure_index
-from pandas.core.internals import (BlockManager, make_block,
- _block2d_to_blocknd,
- _factor_indexer, _block_shape)
-from pandas.core.arrays.sparse import BlockIndex, IntIndex
-
+from pandas.core.internals import (
+ BlockManager, _block2d_to_blocknd, _block_shape, _factor_indexer,
+ make_block
+)
+from pandas.errors import PerformanceWarning
from pandas.io.common import _stringify_path
from pandas.io.formats.printing import adjoin, pprint_thing
diff --git a/pandas/tests/arrays/categorical/test_subclass.py b/pandas/tests/arrays/categorical/test_subclass.py
index 4060d2ebf633a..08ebb7d1b05f7 100644
--- a/pandas/tests/arrays/categorical/test_subclass.py
+++ b/pandas/tests/arrays/categorical/test_subclass.py
@@ -1,8 +1,7 @@
# -*- coding: utf-8 -*-
-from pandas import Categorical
-
import pandas.util.testing as tm
+from pandas import Categorical
class TestCategoricalSubclassing(object):
diff --git a/pandas/tests/tslibs/test_liboffsets.py b/pandas/tests/tslibs/test_liboffsets.py
index a31a79d2f68ed..50d8f546d8e58 100644
--- a/pandas/tests/tslibs/test_liboffsets.py
+++ b/pandas/tests/tslibs/test_liboffsets.py
@@ -6,9 +6,8 @@
import pytest
-from pandas import Timestamp
-
import pandas._libs.tslibs.offsets as liboffsets
+from pandas import Timestamp
from pandas._libs.tslibs.offsets import roll_qtrday
diff --git a/pandas/tseries/api.py b/pandas/tseries/api.py
index 2094791ecdc60..982a0e715e360 100644
--- a/pandas/tseries/api.py
+++ b/pandas/tseries/api.py
@@ -4,5 +4,5 @@
# flake8: noqa
-from pandas.tseries.frequencies import infer_freq
import pandas.tseries.offsets as offsets
+from pandas.tseries.frequencies import infer_freq
diff --git a/setup.cfg b/setup.cfg
index 6c6f7e78eded7..84f19e56ad3bc 100644
--- a/setup.cfg
+++ b/setup.cfg
@@ -76,3 +76,560 @@ exclude_lines =
[coverage:html]
directory = coverage_html_report
+
+[isort]
+known_first_party=pandas
+known_third_party=Cython,numpy,python-dateutil,pytz
+multi_line_output=5
+force_grid_wrap=0
+combine_as_imports=True
+skip=
+ pandas/lib.py,
+ pandas/tslib.py,
+ pandas/testing.py,
+ pandas/conftest.py,
+ pandas/_version.py,
+ pandas/parser.py,
+ pandas/util/_depr_module.py,
+ pandas/util/testing.py,
+ pandas/util/_doctools.py,
+ pandas/util/decorators.py,
+ pandas/util/_print_versions.py,
+ pandas/util/_decorators.py,
+ pandas/util/_test_decorators.py,
+ pandas/io/s3.py,
+ pandas/io/parquet.py,
+ pandas/io/feather_format.py,
+ pandas/io/api.py,
+ pandas/io/sql.py,
+ pandas/io/clipboards.py,
+ pandas/io/excel.py,
+ pandas/io/date_converters.py,
+ pandas/io/testing.py,
+ pandas/io/common.py,
+ pandas/io/parsers.py,
+ pandas/io/html.py,
+ pandas/io/pickle.py,
+ pandas/io/stata.py,
+ pandas/io/sas/sas_xport.py,
+ pandas/io/sas/sas7bdat.py,
+ pandas/io/formats/console.py,
+ pandas/io/formats/excel.py,
+ pandas/io/formats/style.py,
+ pandas/io/formats/printing.py,
+ pandas/io/formats/latex.py,
+ pandas/io/formats/csvs.py,
+ pandas/io/formats/html.py,
+ pandas/io/formats/terminal.py,
+ pandas/io/formats/format.py,
+ pandas/io/json/normalize.py,
+ pandas/io/json/json.py,
+ pandas/io/json/table_schema.py,
+ pandas/io/clipboard/windows.py,
+ pandas/io/clipboard/clipboards.py,
+ pandas/compat/pickle_compat.py,
+ pandas/compat/numpy/function.py,
+ pandas/core/ops.py,
+ pandas/core/categorical.py,
+ pandas/core/api.py,
+ pandas/core/window.py,
+ pandas/core/indexing.py,
+ pandas/core/apply.py,
+ pandas/core/generic.py,
+ pandas/core/sorting.py,
+ pandas/core/frame.py,
+ pandas/core/nanops.py,
+ pandas/core/algorithms.py,
+ pandas/core/strings.py,
+ pandas/core/panel.py,
+ pandas/core/config.py,
+ pandas/core/resample.py,
+ pandas/core/base.py,
+ pandas/core/common.py,
+ pandas/core/missing.py,
+ pandas/core/config_init.py,
+ pandas/core/indexes/category.py,
+ pandas/core/indexes/api.py,
+ pandas/core/indexes/numeric.py,
+ pandas/core/indexes/interval.py,
+ pandas/core/indexes/multi.py,
+ pandas/core/indexes/timedeltas.py,
+ pandas/core/indexes/datetimelike.py,
+ pandas/core/indexes/datetimes.py,
+ pandas/core/indexes/base.py,
+ pandas/core/indexes/accessors.py,
+ pandas/core/indexes/period.py,
+ pandas/core/indexes/frozen.py,
+ pandas/core/arrays/categorical.py,
+ pandas/core/arrays/integer.py,
+ pandas/core/arrays/interval.py,
+ pandas/core/arrays/timedeltas.py,
+ pandas/core/arrays/datetimelike.py,
+ pandas/core/arrays/datetimes.py,
+ pandas/core/arrays/base.py,
+ pandas/core/arrays/period.py,
+ pandas/core/util/hashing.py,
+ pandas/core/tools/numeric.py,
+ pandas/core/tools/timedeltas.py,
+ pandas/core/tools/datetimes.py,
+ pandas/core/dtypes/concat.py,
+ pandas/core/dtypes/cast.py,
+ pandas/core/dtypes/api.py,
+ pandas/core/dtypes/dtypes.py,
+ pandas/core/dtypes/base.py,
+ pandas/core/dtypes/common.py,
+ pandas/core/dtypes/missing.py,
+ pandas/core/dtypes/inference.py,
+ pandas/core/internals/concat.py,
+ pandas/core/internals/managers.py,
+ pandas/core/internals/blocks.py,
+ pandas/core/groupby/ops.py,
+ pandas/core/groupby/categorical.py,
+ pandas/core/groupby/generic.py,
+ pandas/core/groupby/groupby.py,
+ pandas/core/groupby/grouper.py,
+ pandas/core/groupby/base.py,
+ pandas/core/reshape/concat.py,
+ pandas/core/reshape/tile.py,
+ pandas/core/reshape/melt.py,
+ pandas/core/reshape/api.py,
+ pandas/core/reshape/util.py,
+ pandas/core/reshape/merge.py,
+ pandas/core/reshape/reshape.py,
+ pandas/core/reshape/pivot.py,
+ pandas/core/sparse/array.py,
+ pandas/core/arrays/sparse.py,
+ pandas/core/sparse/api.py,
+ pandas/core/sparse/series.py,
+ pandas/core/sparse/frame.py,
+ pandas/core/sparse/scipy_sparse.py,
+ pandas/core/computation/check.py,
+ pandas/core/computation/ops.py,
+ pandas/core/computation/pytables.py,
+ pandas/core/computation/eval.py,
+ pandas/core/computation/expressions.py,
+ pandas/core/computation/common.py,
+ pandas/core/computation/engines.py,
+ pandas/core/computation/expr.py,
+ pandas/core/computation/align.py,
+ pandas/core/computation/scope.py,
+ pandas/tests/test_errors.py,
+ pandas/tests/test_base.py,
+ pandas/tests/test_register_accessor.py,
+ pandas/tests/test_window.py,
+ pandas/tests/test_downstream.py,
+ pandas/tests/test_multilevel.py,
+ pandas/tests/test_common.py,
+ pandas/tests/test_compat.py,
+ pandas/tests/test_sorting.py,
+ pandas/tests/test_resample.py,
+ pandas/tests/test_algos.py,
+ pandas/tests/test_expressions.py,
+ pandas/tests/test_strings.py,
+ pandas/tests/test_lib.py,
+ pandas/tests/test_join.py,
+ pandas/tests/test_panel.py,
+ pandas/tests/test_take.py,
+ pandas/tests/test_nanops.py,
+ pandas/tests/test_config.py,
+ pandas/tests/indexes/test_frozen.py,
+ pandas/tests/indexes/test_base.py,
+ pandas/tests/indexes/test_numeric.py,
+ pandas/tests/indexes/test_range.py,
+ pandas/tests/indexes/datetimelike.py,
+ pandas/tests/indexes/test_category.py,
+ pandas/tests/indexes/common.py,
+ pandas/tests/indexes/conftest.py,
+ pandas/tests/indexes/datetimes/test_indexing.py,
+ pandas/tests/indexes/datetimes/test_construction.py,
+ pandas/tests/indexes/datetimes/test_datetimelike.py,
+ pandas/tests/indexes/datetimes/test_setops.py,
+ pandas/tests/indexes/datetimes/test_timezones.py,
+ pandas/tests/indexes/datetimes/test_datetime.py,
+ pandas/tests/indexes/datetimes/test_tools.py,
+ pandas/tests/indexes/datetimes/test_arithmetic.py,
+ pandas/tests/indexes/datetimes/test_astype.py,
+ pandas/tests/indexes/datetimes/test_date_range.py,
+ pandas/tests/indexes/datetimes/test_misc.py,
+ pandas/tests/indexes/datetimes/test_scalar_compat.py,
+ pandas/tests/indexes/datetimes/test_partial_slicing.py,
+ pandas/tests/indexes/datetimes/test_missing.py,
+ pandas/tests/indexes/datetimes/test_ops.py,
+ pandas/tests/indexes/datetimes/test_formats.py,
+ pandas/tests/indexes/multi/test_duplicates.py,
+ pandas/tests/indexes/multi/test_partial_indexing.py,
+ pandas/tests/indexes/multi/test_indexing.py,
+ pandas/tests/indexes/multi/test_get_set.py,
+ pandas/tests/indexes/multi/test_copy.py,
+ pandas/tests/indexes/multi/test_constructor.py,
+ pandas/tests/indexes/multi/test_names.py,
+ pandas/tests/indexes/multi/test_equivalence.py,
+ pandas/tests/indexes/multi/test_reshape.py,
+ pandas/tests/indexes/multi/test_compat.py,
+ pandas/tests/indexes/multi/test_contains.py,
+ pandas/tests/indexes/multi/test_sorting.py,
+ pandas/tests/indexes/multi/test_format.py,
+ pandas/tests/indexes/multi/test_set_ops.py,
+ pandas/tests/indexes/multi/test_monotonic.py,
+ pandas/tests/indexes/multi/test_reindex.py,
+ pandas/tests/indexes/multi/test_drop.py,
+ pandas/tests/indexes/multi/test_integrity.py,
+ pandas/tests/indexes/multi/test_astype.py,
+ pandas/tests/indexes/multi/test_analytics.py,
+ pandas/tests/indexes/multi/test_missing.py,
+ pandas/tests/indexes/multi/conftest.py,
+ pandas/tests/indexes/multi/test_join.py,
+ pandas/tests/indexes/multi/test_conversion.py,
+ pandas/tests/indexes/period/test_indexing.py,
+ pandas/tests/indexes/period/test_construction.py,
+ pandas/tests/indexes/period/test_asfreq.py,
+ pandas/tests/indexes/period/test_setops.py,
+ pandas/tests/indexes/period/test_period.py,
+ pandas/tests/indexes/period/test_tools.py,
+ pandas/tests/indexes/period/test_period_range.py,
+ pandas/tests/indexes/period/test_arithmetic.py,
+ pandas/tests/indexes/period/test_astype.py,
+ pandas/tests/indexes/period/test_scalar_compat.py,
+ pandas/tests/indexes/period/test_partial_slicing.py,
+ pandas/tests/indexes/period/test_ops.py,
+ pandas/tests/indexes/period/test_formats.py,
+ pandas/tests/indexes/interval/test_construction.py,
+ pandas/tests/indexes/interval/test_interval_new.py,
+ pandas/tests/indexes/interval/test_interval.py,
+ pandas/tests/indexes/interval/test_interval_range.py,
+ pandas/tests/indexes/interval/test_astype.py,
+ pandas/tests/indexes/interval/test_interval_tree.py,
+ pandas/tests/indexes/timedeltas/test_indexing.py,
+ pandas/tests/indexes/timedeltas/test_construction.py,
+ pandas/tests/indexes/timedeltas/test_setops.py,
+ pandas/tests/indexes/timedeltas/test_timedelta.py,
+ pandas/tests/indexes/timedeltas/test_tools.py,
+ pandas/tests/indexes/timedeltas/test_arithmetic.py,
+ pandas/tests/indexes/timedeltas/test_astype.py,
+ pandas/tests/indexes/timedeltas/test_scalar_compat.py,
+ pandas/tests/indexes/timedeltas/test_partial_slicing.py,
+ pandas/tests/indexes/timedeltas/test_timedelta_range.py,
+ pandas/tests/indexes/timedeltas/test_ops.py,
+ pandas/tests/series/test_duplicates.py,
+ pandas/tests/series/test_internals.py,
+ pandas/tests/series/test_quantile.py,
+ pandas/tests/series/test_period.py,
+ pandas/tests/series/test_io.py,
+ pandas/tests/series/test_validate.py,
+ pandas/tests/series/test_timezones.py,
+ pandas/tests/series/test_datetime_values.py,
+ pandas/tests/series/test_sorting.py,
+ pandas/tests/series/test_subclass.py,
+ pandas/tests/series/test_operators.py,
+ pandas/tests/series/test_asof.py,
+ pandas/tests/series/test_apply.py,
+ pandas/tests/series/test_arithmetic.py,
+ pandas/tests/series/test_replace.py,
+ pandas/tests/series/test_dtypes.py,
+ pandas/tests/series/test_timeseries.py,
+ pandas/tests/series/test_repr.py,
+ pandas/tests/series/test_analytics.py,
+ pandas/tests/series/test_combine_concat.py,
+ pandas/tests/series/common.py,
+ pandas/tests/series/test_missing.py,
+ pandas/tests/series/conftest.py,
+ pandas/tests/series/test_api.py,
+ pandas/tests/series/test_constructors.py,
+ pandas/tests/series/test_alter_axes.py,
+ pandas/tests/series/test_rank.py,
+ pandas/tests/series/indexing/test_indexing.py,
+ pandas/tests/series/indexing/test_alter_index.py,
+ pandas/tests/series/indexing/test_numeric.py,
+ pandas/tests/series/indexing/test_boolean.py,
+ pandas/tests/series/indexing/test_callable.py,
+ pandas/tests/series/indexing/test_datetime.py,
+ pandas/tests/series/indexing/test_iloc.py,
+ pandas/tests/series/indexing/test_loc.py,
+ pandas/tests/arrays/test_datetimelike.py,
+ pandas/tests/arrays/test_integer.py,
+ pandas/tests/arrays/test_interval.py,
+ pandas/tests/arrays/categorical/test_indexing.py,
+ pandas/tests/arrays/categorical/test_sorting.py,
+ pandas/tests/arrays/categorical/test_operators.py,
+ pandas/tests/arrays/categorical/test_algos.py,
+ pandas/tests/arrays/categorical/test_dtypes.py,
+ pandas/tests/arrays/categorical/test_repr.py,
+ pandas/tests/arrays/categorical/test_analytics.py,
+ pandas/tests/arrays/categorical/test_missing.py,
+ pandas/tests/arrays/categorical/test_api.py,
+ pandas/tests/arrays/categorical/test_constructors.py,
+ pandas/tests/util/test_testing.py,
+ pandas/tests/util/test_util.py,
+ pandas/tests/util/test_hashing.py,
+ pandas/tests/extension/test_common.py,
+ pandas/tests/extension/test_integer.py,
+ pandas/tests/extension/test_external_block.py,
+ pandas/tests/extension/test_interval.py,
+ pandas/tests/extension/test_categorical.py,
+ pandas/tests/extension/base/ops.py,
+ pandas/tests/extension/base/reshaping.py,
+ pandas/tests/extension/base/getitem.py,
+ pandas/tests/extension/base/groupby.py,
+ pandas/tests/extension/base/constructors.py,
+ pandas/tests/extension/base/interface.py,
+ pandas/tests/extension/base/dtype.py,
+ pandas/tests/extension/base/casting.py,
+ pandas/tests/extension/base/methods.py,
+ pandas/tests/extension/base/missing.py,
+ pandas/tests/extension/base/setitem.py,
+ pandas/tests/extension/arrow/test_bool.py,
+ pandas/tests/extension/arrow/bool.py,
+ pandas/tests/extension/decimal/array.py,
+ pandas/tests/extension/decimal/test_decimal.py,
+ pandas/tests/extension/json/array.py,
+ pandas/tests/extension/json/test_json.py,
+ pandas/tests/io/test_clipboard.py,
+ pandas/tests/io/test_compression.py,
+ pandas/tests/io/test_pytables.py,
+ pandas/tests/io/test_parquet.py,
+ pandas/tests/io/generate_legacy_storage_files.py,
+ pandas/tests/io/test_common.py,
+ pandas/tests/io/test_excel.py,
+ pandas/tests/io/test_feather.py,
+ pandas/tests/io/test_s3.py,
+ pandas/tests/io/test_html.py,
+ pandas/tests/io/test_sql.py,
+ pandas/tests/io/test_packers.py,
+ pandas/tests/io/test_stata.py,
+ pandas/tests/io/conftest.py,
+ pandas/tests/io/test_pickle.py,
+ pandas/tests/io/test_gbq.py,
+ pandas/tests/io/test_gcs.py,
+ pandas/tests/io/sas/test_sas.py,
+ pandas/tests/io/sas/test_sas7bdat.py,
+ pandas/tests/io/sas/test_xport.py,
+ pandas/tests/io/formats/test_eng_formatting.py,
+ pandas/tests/io/formats/test_to_excel.py,
+ pandas/tests/io/formats/test_to_html.py,
+ pandas/tests/io/formats/test_style.py,
+ pandas/tests/io/formats/test_format.py,
+ pandas/tests/io/formats/test_to_csv.py,
+ pandas/tests/io/formats/test_css.py,
+ pandas/tests/io/formats/test_to_latex.py,
+ pandas/tests/io/formats/test_printing.py,
+ pandas/tests/io/parser/skiprows.py,
+ pandas/tests/io/parser/test_textreader.py,
+ pandas/tests/io/parser/converters.py,
+ pandas/tests/io/parser/na_values.py,
+ pandas/tests/io/parser/comment.py,
+ pandas/tests/io/parser/test_network.py,
+ pandas/tests/io/parser/dtypes.py,
+ pandas/tests/io/parser/parse_dates.py,
+ pandas/tests/io/parser/quoting.py,
+ pandas/tests/io/parser/multithread.py,
+ pandas/tests/io/parser/index_col.py,
+ pandas/tests/io/parser/test_read_fwf.py,
+ pandas/tests/io/parser/test_unsupported.py,
+ pandas/tests/io/parser/python_parser_only.py,
+ pandas/tests/io/parser/test_parsers.py,
+ pandas/tests/io/parser/c_parser_only.py,
+ pandas/tests/io/parser/dialect.py,
+ pandas/tests/io/parser/common.py,
+ pandas/tests/io/parser/compression.py,
+ pandas/tests/io/parser/usecols.py,
+ pandas/tests/io/parser/mangle_dupes.py,
+ pandas/tests/io/parser/header.py,
+ pandas/tests/io/msgpack/test_buffer.py,
+ pandas/tests/io/msgpack/test_read_size.py,
+ pandas/tests/io/msgpack/test_pack.py,
+ pandas/tests/io/msgpack/test_except.py,
+ pandas/tests/io/msgpack/test_unpack_raw.py,
+ pandas/tests/io/msgpack/test_unpack.py,
+ pandas/tests/io/msgpack/test_newspec.py,
+ pandas/tests/io/msgpack/common.py,
+ pandas/tests/io/msgpack/test_limits.py,
+ pandas/tests/io/msgpack/test_extension.py,
+ pandas/tests/io/msgpack/test_sequnpack.py,
+ pandas/tests/io/msgpack/test_subtype.py,
+ pandas/tests/io/msgpack/test_seq.py,
+ pandas/tests/io/json/test_compression.py,
+ pandas/tests/io/json/test_ujson.py,
+ pandas/tests/io/json/test_normalize.py,
+ pandas/tests/io/json/test_readlines.py,
+ pandas/tests/io/json/test_pandas.py,
+ pandas/tests/io/json/test_json_table_schema.py,
+ pandas/tests/api/test_types.py,
+ pandas/tests/api/test_api.py,
+ pandas/tests/tools/test_numeric.py,
+ pandas/tests/dtypes/test_concat.py,
+ pandas/tests/dtypes/test_generic.py,
+ pandas/tests/dtypes/test_common.py,
+ pandas/tests/dtypes/test_cast.py,
+ pandas/tests/dtypes/test_dtypes.py,
+ pandas/tests/dtypes/test_inference.py,
+ pandas/tests/dtypes/test_missing.py,
+ pandas/tests/arithmetic/test_numeric.py,
+ pandas/tests/arithmetic/test_object.py,
+ pandas/tests/arithmetic/test_period.py,
+ pandas/tests/arithmetic/test_datetime64.py,
+ pandas/tests/arithmetic/conftest.py,
+ pandas/tests/arithmetic/test_timedelta64.py,
+ pandas/tests/scalar/test_nat.py,
+ pandas/tests/scalar/timestamp/test_rendering.py,
+ pandas/tests/scalar/timestamp/test_timestamp.py,
+ pandas/tests/scalar/timestamp/test_timezones.py,
+ pandas/tests/scalar/timestamp/test_unary_ops.py,
+ pandas/tests/scalar/timestamp/test_arithmetic.py,
+ pandas/tests/scalar/timestamp/test_comparisons.py,
+ pandas/tests/scalar/period/test_asfreq.py,
+ pandas/tests/scalar/period/test_period.py,
+ pandas/tests/scalar/timedelta/test_construction.py,
+ pandas/tests/scalar/timedelta/test_timedelta.py,
+ pandas/tests/scalar/timedelta/test_arithmetic.py,
+ pandas/tests/scalar/interval/test_interval.py,
+ pandas/tests/tslibs/test_tslib.py,
+ pandas/tests/tslibs/test_period_asfreq.py,
+ pandas/tests/tslibs/test_timezones.py,
+ pandas/tests/tslibs/test_libfrequencies.py,
+ pandas/tests/tslibs/test_parsing.py,
+ pandas/tests/tslibs/test_array_to_datetime.py,
+ pandas/tests/tslibs/test_conversion.py,
+ pandas/tests/internals/test_internals.py,
+ pandas/tests/groupby/test_value_counts.py,
+ pandas/tests/groupby/test_filters.py,
+ pandas/tests/groupby/test_nth.py,
+ pandas/tests/groupby/test_timegrouper.py,
+ pandas/tests/groupby/test_transform.py,
+ pandas/tests/groupby/test_bin_groupby.py,
+ pandas/tests/groupby/test_index_as_string.py,
+ pandas/tests/groupby/test_groupby.py,
+ pandas/tests/groupby/test_whitelist.py,
+ pandas/tests/groupby/test_function.py,
+ pandas/tests/groupby/test_apply.py,
+ pandas/tests/groupby/conftest.py,
+ pandas/tests/groupby/test_counting.py,
+ pandas/tests/groupby/test_categorical.py,
+ pandas/tests/groupby/test_grouping.py,
+ pandas/tests/groupby/test_rank.py,
+ pandas/tests/groupby/aggregate/test_cython.py,
+ pandas/tests/groupby/aggregate/test_other.py,
+ pandas/tests/groupby/aggregate/test_aggregate.py,
+ pandas/tests/tseries/test_frequencies.py,
+ pandas/tests/tseries/test_holiday.py,
+ pandas/tests/tseries/offsets/test_offsets_properties.py,
+ pandas/tests/tseries/offsets/test_yqm_offsets.py,
+ pandas/tests/tseries/offsets/test_offsets.py,
+ pandas/tests/tseries/offsets/test_ticks.py,
+ pandas/tests/tseries/offsets/conftest.py,
+ pandas/tests/tseries/offsets/test_fiscal.py,
+ pandas/tests/plotting/test_datetimelike.py,
+ pandas/tests/plotting/test_series.py,
+ pandas/tests/plotting/test_groupby.py,
+ pandas/tests/plotting/test_converter.py,
+ pandas/tests/plotting/test_misc.py,
+ pandas/tests/plotting/test_frame.py,
+ pandas/tests/plotting/test_hist_method.py,
+ pandas/tests/plotting/common.py,
+ pandas/tests/plotting/test_boxplot_method.py,
+ pandas/tests/plotting/test_deprecated.py,
+ pandas/tests/frame/test_duplicates.py,
+ pandas/tests/frame/test_quantile.py,
+ pandas/tests/frame/test_indexing.py,
+ pandas/tests/frame/test_nonunique_indexes.py,
+ pandas/tests/frame/test_sort_values_level_as_str.py,
+ pandas/tests/frame/test_period.py,
+ pandas/tests/frame/test_validate.py,
+ pandas/tests/frame/test_timezones.py,
+ pandas/tests/frame/test_reshape.py,
+ pandas/tests/frame/test_sorting.py,
+ pandas/tests/frame/test_to_csv.py,
+ pandas/tests/frame/test_subclass.py,
+ pandas/tests/frame/test_operators.py,
+ pandas/tests/frame/test_asof.py,
+ pandas/tests/frame/test_apply.py,
+ pandas/tests/frame/test_arithmetic.py,
+ pandas/tests/frame/test_axis_select_reindex.py,
+ pandas/tests/frame/test_replace.py,
+ pandas/tests/frame/test_dtypes.py,
+ pandas/tests/frame/test_timeseries.py,
+ pandas/tests/frame/test_analytics.py,
+ pandas/tests/frame/test_repr_info.py,
+ pandas/tests/frame/test_combine_concat.py,
+ pandas/tests/frame/common.py,
+ pandas/tests/frame/test_block_internals.py,
+ pandas/tests/frame/test_missing.py,
+ pandas/tests/frame/conftest.py,
+ pandas/tests/frame/test_query_eval.py,
+ pandas/tests/frame/test_api.py,
+ pandas/tests/frame/test_convert_to.py,
+ pandas/tests/frame/test_join.py,
+ pandas/tests/frame/test_constructors.py,
+ pandas/tests/frame/test_mutate_columns.py,
+ pandas/tests/frame/test_alter_axes.py,
+ pandas/tests/frame/test_rank.py,
+ pandas/tests/generic/test_generic.py,
+ pandas/tests/generic/test_label_or_level_utils.py,
+ pandas/tests/generic/test_series.py,
+ pandas/tests/generic/test_frame.py,
+ pandas/tests/generic/test_panel.py,
+ pandas/tests/reshape/test_concat.py,
+ pandas/tests/reshape/test_util.py,
+ pandas/tests/reshape/test_reshape.py,
+ pandas/tests/reshape/test_tile.py,
+ pandas/tests/reshape/test_pivot.py,
+ pandas/tests/reshape/test_melt.py,
+ pandas/tests/reshape/test_union_categoricals.py,
+ pandas/tests/reshape/merge/test_merge_index_as_string.py,
+ pandas/tests/reshape/merge/test_merge.py,
+ pandas/tests/reshape/merge/test_merge_asof.py,
+ pandas/tests/reshape/merge/test_join.py,
+ pandas/tests/reshape/merge/test_merge_ordered.py,
+ pandas/tests/indexing/test_multiindex.py,
+ pandas/tests/indexing/test_indexing.py,
+ pandas/tests/indexing/test_scalar.py,
+ pandas/tests/indexing/test_timedelta.py,
+ pandas/tests/indexing/test_callable.py,
+ pandas/tests/indexing/test_datetime.py,
+ pandas/tests/indexing/test_ix.py,
+ pandas/tests/indexing/test_iloc.py,
+ pandas/tests/indexing/test_partial.py,
+ pandas/tests/indexing/test_indexing_slow.py,
+ pandas/tests/indexing/test_loc.py,
+ pandas/tests/indexing/test_floats.py,
+ pandas/tests/indexing/test_coercion.py,
+ pandas/tests/indexing/common.py,
+ pandas/tests/indexing/test_chaining_and_caching.py,
+ pandas/tests/indexing/test_panel.py,
+ pandas/tests/indexing/test_categorical.py,
+ pandas/tests/indexing/interval/test_interval_new.py,
+ pandas/tests/indexing/interval/test_interval.py,
+ pandas/tests/sparse/test_indexing.py,
+ pandas/tests/arrays/sparse/test_libsparse.py,
+ pandas/tests/arrays/sparse/test_array.py,
+ pandas/tests/arrays/sparse/test_dtype.py,
+ pandas/tests/extension/test_sparse.py,
+ pandas/tests/extension/base/reduce.py,
+ pandas/tests/sparse/test_reshape.py,
+ pandas/tests/sparse/test_pivot.py,
+ pandas/tests/sparse/test_format.py,
+ pandas/tests/sparse/test_groupby.py,
+ pandas/tests/arrays/sparse/test_arithmetics.py,
+ pandas/tests/sparse/test_combine_concat.py,
+ pandas/tests/sparse/series/test_indexing.py,
+ pandas/tests/sparse/series/test_series.py,
+ pandas/tests/sparse/frame/test_indexing.py,
+ pandas/tests/sparse/frame/test_to_from_scipy.py,
+ pandas/tests/sparse/frame/test_to_csv.py,
+ pandas/tests/sparse/frame/test_apply.py,
+ pandas/tests/sparse/frame/test_analytics.py,
+ pandas/tests/sparse/frame/test_frame.py,
+ pandas/tests/sparse/frame/conftest.py,
+ pandas/tests/computation/test_compat.py,
+ pandas/tests/computation/test_eval.py,
+ pandas/tseries/holiday.py,
+ pandas/tseries/converter.py,
+ pandas/tseries/offsets.py,
+ pandas/tseries/frequencies.py,
+ pandas/plotting/_core.py,
+ pandas/plotting/_style.py,
+ pandas/plotting/_timeseries.py,
+ pandas/plotting/_tools.py,
+ pandas/plotting/_converter.py,
+ pandas/plotting/_misc.py,
+ pandas/types/common.py,
+ pandas/plotting/_compat.py,
| - [x] closes #23048
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
Confirming that CI fails when imports are incorrectly formatted. See here:
https://travis-ci.org/pandas-dev/pandas/jobs/440557769
Sample output:
Doctests top-level reshaping functions DONE
Check import format using isort
ERROR: /home/travis/build/pandas-dev/pandas/pandas/core/series.py Imports are incorrectly sorted.
ERROR: /home/travis/build/pandas-dev/pandas/pandas/core/indexes/range.py Imports are incorrectly sorted.
ERROR: /home/travis/build/pandas-dev/pandas/pandas/io/packers.py Imports are incorrectly sorted.
ERROR: /home/travis/build/pandas-dev/pandas/pandas/io/pytables.py Imports are incorrectly sorted.
Check import format using isort DONE
The command "ci/code_checks.sh" exited with 1.
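The ordering the check enforces can be illustrated with a toy sketch (hypothetical helper, not the real tool — isort additionally separates stdlib, third-party, and local imports into groups):

```python
def toy_sort(import_lines):
    # Only shows the alphabetical ordering isort expects within one
    # import group; real isort handles grouping, `from` imports, etc.
    return sorted(import_lines, key=str.lower)

lines = ["import sys", "import os", "import collections"]
print(toy_sort(lines))  # -> ['import collections', 'import os', 'import sys']
```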
| https://api.github.com/repos/pandas-dev/pandas/pulls/23096 | 2018-10-11T22:39:34Z | 2018-10-17T13:59:26Z | 2018-10-17T13:59:25Z | 2019-12-25T20:21:52Z |
REF: Simplify Period/Datetime Array/Index constructors | diff --git a/pandas/core/arrays/datetimelike.py b/pandas/core/arrays/datetimelike.py
index e4ace2bfe1509..5aa9d7aa1be75 100644
--- a/pandas/core/arrays/datetimelike.py
+++ b/pandas/core/arrays/datetimelike.py
@@ -18,6 +18,7 @@
from pandas.tseries.offsets import Tick, DateOffset
from pandas.core.dtypes.common import (
+ pandas_dtype,
needs_i8_conversion,
is_list_like,
is_offsetlike,
@@ -911,3 +912,34 @@ def validate_tz_from_dtype(dtype, tz):
except TypeError:
pass
return tz
+
+
+def validate_dtype_freq(dtype, freq):
+ """
+ If both a dtype and a freq are available, ensure they match. If only
+ dtype is available, extract the implied freq.
+
+ Parameters
+ ----------
+ dtype : dtype
+ freq : DateOffset or None
+
+ Returns
+ -------
+ freq : DateOffset
+
+ Raises
+ ------
+ ValueError : non-period dtype
+ IncompatibleFrequency : mismatch between dtype and freq
+ """
+ if dtype is not None:
+ dtype = pandas_dtype(dtype)
+ if not is_period_dtype(dtype):
+ raise ValueError('dtype must be PeriodDtype')
+ if freq is None:
+ freq = dtype.freq
+ elif freq != dtype.freq:
+ raise IncompatibleFrequency('specified freq and dtype '
+ 'are different')
+ return freq
diff --git a/pandas/core/arrays/datetimes.py b/pandas/core/arrays/datetimes.py
index 7daaa8de1734f..4c75927135b22 100644
--- a/pandas/core/arrays/datetimes.py
+++ b/pandas/core/arrays/datetimes.py
@@ -222,6 +222,12 @@ def __new__(cls, values, freq=None, tz=None, dtype=None):
@classmethod
def _generate_range(cls, start, end, periods, freq, tz=None,
normalize=False, ambiguous='raise', closed=None):
+
+ periods = dtl.validate_periods(periods)
+ if freq is None and any(x is None for x in [periods, start, end]):
+ raise ValueError('Must provide freq argument if no data is '
+ 'supplied')
+
if com.count_not_none(start, end, periods, freq) != 3:
raise ValueError('Of the four parameters: start, end, periods, '
'and freq, exactly three must be specified')
@@ -264,7 +270,7 @@ def _generate_range(cls, start, end, periods, freq, tz=None,
if freq is not None:
if cls._use_cached_range(freq, _normalized, start, end):
# Currently always False; never hit
- # Should be reimplemented as apart of GH 17914
+ # Should be reimplemented as a part of GH#17914
index = cls._cached_range(start, end, periods=periods,
freq=freq)
else:
diff --git a/pandas/core/arrays/period.py b/pandas/core/arrays/period.py
index 6d13fb9ecaa39..d32ff76c0819b 100644
--- a/pandas/core/arrays/period.py
+++ b/pandas/core/arrays/period.py
@@ -27,6 +27,7 @@
from pandas.tseries import frequencies
from pandas.tseries.offsets import Tick, DateOffset
+from pandas.core.arrays import datetimelike as dtl
from pandas.core.arrays.datetimelike import DatetimeLikeArrayMixin
@@ -132,7 +133,7 @@ def __new__(cls, values, freq=None, **kwargs):
# TODO: what if it has tz?
values = dt64arr_to_periodarr(values, freq)
- return cls._simple_new(values, freq, **kwargs)
+ return cls._simple_new(values, freq=freq, **kwargs)
@classmethod
def _simple_new(cls, values, freq=None, **kwargs):
@@ -141,21 +142,27 @@ def _simple_new(cls, values, freq=None, **kwargs):
Ordinals in an ndarray are fastpath-ed to `_from_ordinals`
"""
+ if is_period_dtype(values):
+ freq = dtl.validate_dtype_freq(values.dtype, freq)
+ values = values.asi8
+
if not is_integer_dtype(values):
values = np.array(values, copy=False)
if len(values) > 0 and is_float_dtype(values):
raise TypeError("{cls} can't take floats"
.format(cls=cls.__name__))
- return cls(values, freq=freq)
+ return cls(values, freq=freq, **kwargs)
- return cls._from_ordinals(values, freq)
+ return cls._from_ordinals(values, freq=freq, **kwargs)
@classmethod
- def _from_ordinals(cls, values, freq=None):
+ def _from_ordinals(cls, values, freq=None, **kwargs):
"""
Values should be int ordinals
`__new__` & `_simple_new` cooerce to ordinals and call this method
"""
+ # **kwargs are included so that the signature matches PeriodIndex,
+ # letting us share _simple_new
values = np.array(values, dtype='int64', copy=False)
diff --git a/pandas/core/arrays/timedeltas.py b/pandas/core/arrays/timedeltas.py
index df9e57cb5f0e1..4904a90ab7b2b 100644
--- a/pandas/core/arrays/timedeltas.py
+++ b/pandas/core/arrays/timedeltas.py
@@ -131,15 +131,6 @@ def __new__(cls, values, freq=None, start=None, end=None, periods=None,
freq, freq_infer = dtl.maybe_infer_freq(freq)
- if values is None:
- # TODO: Remove this block and associated kwargs; GH#20535
- if freq is None and com._any_none(periods, start, end):
- raise ValueError('Must provide freq argument if no data is '
- 'supplied')
- periods = dtl.validate_periods(periods)
- return cls._generate_range(start, end, periods, freq,
- closed=closed)
-
result = cls._simple_new(values, freq=freq)
if freq_infer:
inferred = result.inferred_freq
@@ -151,6 +142,12 @@ def __new__(cls, values, freq=None, start=None, end=None, periods=None,
@classmethod
def _generate_range(cls, start, end, periods, freq, closed=None, **kwargs):
# **kwargs are for compat with TimedeltaIndex, which includes `name`
+
+ periods = dtl.validate_periods(periods)
+ if freq is None and any(x is None for x in [periods, start, end]):
+ raise ValueError('Must provide freq argument if no data is '
+ 'supplied')
+
if com.count_not_none(start, end, periods, freq) != 3:
raise ValueError('Of the four parameters: start, end, periods, '
'and freq, exactly three must be specified')
diff --git a/pandas/core/indexes/datetimes.py b/pandas/core/indexes/datetimes.py
index e40ceadc1a083..f5b426bee74c8 100644
--- a/pandas/core/indexes/datetimes.py
+++ b/pandas/core/indexes/datetimes.py
@@ -294,10 +294,6 @@ def __new__(cls, data=None,
if data is None:
# TODO: Remove this block and associated kwargs; GH#20535
- if freq is None and com._any_none(periods, start, end):
- raise ValueError('Must provide freq argument if no data is '
- 'supplied')
- periods = dtl.validate_periods(periods)
return cls._generate_range(start, end, periods, name, freq,
tz=tz, normalize=normalize,
closed=closed, ambiguous=ambiguous)
diff --git a/pandas/core/indexes/period.py b/pandas/core/indexes/period.py
index 7833dd851db34..ebecf62ecf754 100644
--- a/pandas/core/indexes/period.py
+++ b/pandas/core/indexes/period.py
@@ -8,7 +8,6 @@
is_integer,
is_float,
is_integer_dtype,
- is_float_dtype,
is_scalar,
is_datetime64_dtype,
is_datetime64_any_dtype,
@@ -181,15 +180,7 @@ def __new__(cls, data=None, ordinal=None, freq=None, start=None, end=None,
if name is None and hasattr(data, 'name'):
name = data.name
- if dtype is not None:
- dtype = pandas_dtype(dtype)
- if not is_period_dtype(dtype):
- raise ValueError('dtype must be PeriodDtype')
- if freq is None:
- freq = dtype.freq
- elif freq != dtype.freq:
- msg = 'specified freq and dtype are different'
- raise IncompatibleFrequency(msg)
+ freq = dtl.validate_dtype_freq(dtype, freq)
# coerce freq to freq object, otherwise it can be coerced elementwise
# which is slow
@@ -202,7 +193,7 @@ def __new__(cls, data=None, ordinal=None, freq=None, start=None, end=None,
else:
data, freq = cls._generate_range(start, end, periods,
freq, fields)
- return cls._from_ordinals(data, name=name, freq=freq)
+ return cls._simple_new(data, name=name, freq=freq)
if isinstance(data, PeriodIndex):
if freq is None or freq == data.freq: # no freq change
@@ -218,7 +209,7 @@ def __new__(cls, data=None, ordinal=None, freq=None, start=None, end=None,
# not array / index
if not isinstance(data, (np.ndarray, PeriodIndex,
DatetimeIndex, Int64Index)):
- if is_scalar(data) or isinstance(data, Period):
+ if is_scalar(data):
cls._scalar_data_error(data)
# other iterable of some kind
@@ -230,7 +221,7 @@ def __new__(cls, data=None, ordinal=None, freq=None, start=None, end=None,
# datetime other than period
if is_datetime64_dtype(data.dtype):
data = dt64arr_to_periodarr(data, freq, tz)
- return cls._from_ordinals(data, name=name, freq=freq)
+ return cls._simple_new(data, name=name, freq=freq)
# check not floats
if infer_dtype(data) == 'floating' and len(data) > 0:
@@ -241,33 +232,15 @@ def __new__(cls, data=None, ordinal=None, freq=None, start=None, end=None,
data = ensure_object(data)
freq = freq or period.extract_freq(data)
data = period.extract_ordinals(data, freq)
- return cls._from_ordinals(data, name=name, freq=freq)
+ return cls._simple_new(data, name=name, freq=freq)
@cache_readonly
def _engine(self):
return self._engine_type(lambda: self, len(self))
@classmethod
- def _simple_new(cls, values, name=None, freq=None, **kwargs):
- """
- Values can be any type that can be coerced to Periods.
- Ordinals in an ndarray are fastpath-ed to `_from_ordinals`
- """
- if not is_integer_dtype(values):
- values = np.array(values, copy=False)
- if len(values) > 0 and is_float_dtype(values):
- raise TypeError("PeriodIndex can't take floats")
- return cls(values, name=name, freq=freq, **kwargs)
-
- return cls._from_ordinals(values, name, freq, **kwargs)
-
- @classmethod
- def _from_ordinals(cls, values, name=None, freq=None, **kwargs):
- """
- Values should be int ordinals
- `__new__` & `_simple_new` cooerce to ordinals and call this method
- """
- result = super(PeriodIndex, cls)._from_ordinals(values, freq)
+ def _simple_new(cls, values, freq=None, name=None, **kwargs):
+ result = super(PeriodIndex, cls)._simple_new(values, freq)
result.name = name
result._reset_identity()
@@ -703,8 +676,8 @@ def _wrap_union_result(self, other, result):
def _apply_meta(self, rawarr):
if not isinstance(rawarr, PeriodIndex):
- rawarr = PeriodIndex._from_ordinals(rawarr, freq=self.freq,
- name=self.name)
+ rawarr = PeriodIndex._simple_new(rawarr, freq=self.freq,
+ name=self.name)
return rawarr
def _format_native_types(self, na_rep=u'NaT', date_format=None, **kwargs):
diff --git a/pandas/io/pytables.py b/pandas/io/pytables.py
index ff37036533b4f..956b72d30893d 100644
--- a/pandas/io/pytables.py
+++ b/pandas/io/pytables.py
@@ -2476,7 +2476,7 @@ def _get_index_factory(self, klass):
if klass == DatetimeIndex:
def f(values, freq=None, tz=None):
# data are already in UTC, localize and convert if tz present
- result = DatetimeIndex._simple_new(values.values, None,
+ result = DatetimeIndex._simple_new(values.values, name=None,
freq=freq)
if tz is not None:
result = result.tz_localize('UTC').tz_convert(tz)
@@ -2484,7 +2484,7 @@ def f(values, freq=None, tz=None):
return f
elif klass == PeriodIndex:
def f(values, freq=None, tz=None):
- return PeriodIndex._simple_new(values, None, freq=freq)
+ return PeriodIndex._simple_new(values, name=None, freq=freq)
return f
return klass
diff --git a/pandas/tests/indexes/period/test_construction.py b/pandas/tests/indexes/period/test_construction.py
index be741592ec7a2..d54dac5867845 100644
--- a/pandas/tests/indexes/period/test_construction.py
+++ b/pandas/tests/indexes/period/test_construction.py
@@ -264,20 +264,20 @@ def test_constructor_mixed(self):
def test_constructor_simple_new(self):
idx = period_range('2007-01', name='p', periods=2, freq='M')
- result = idx._simple_new(idx, 'p', freq=idx.freq)
+ result = idx._simple_new(idx, name='p', freq=idx.freq)
tm.assert_index_equal(result, idx)
- result = idx._simple_new(idx.astype('i8'), 'p', freq=idx.freq)
+ result = idx._simple_new(idx.astype('i8'), name='p', freq=idx.freq)
tm.assert_index_equal(result, idx)
result = idx._simple_new([pd.Period('2007-01', freq='M'),
pd.Period('2007-02', freq='M')],
- 'p', freq=idx.freq)
+ name='p', freq=idx.freq)
tm.assert_index_equal(result, idx)
result = idx._simple_new(np.array([pd.Period('2007-01', freq='M'),
pd.Period('2007-02', freq='M')]),
- 'p', freq=idx.freq)
+ name='p', freq=idx.freq)
tm.assert_index_equal(result, idx)
def test_constructor_simple_new_empty(self):
| Split off from the same branch that spawned #23083.
| https://api.github.com/repos/pandas-dev/pandas/pulls/23093 | 2018-10-11T15:19:19Z | 2018-10-12T21:50:47Z | 2018-10-12T21:50:47Z | 2018-10-14T01:41:14Z |
clean CategoricalIndex.get_loc and improve error message | diff --git a/pandas/core/indexes/category.py b/pandas/core/indexes/category.py
index 45703c220a4be..7eab123497282 100644
--- a/pandas/core/indexes/category.py
+++ b/pandas/core/indexes/category.py
@@ -426,6 +426,10 @@ def get_loc(self, key, method=None):
-------
loc : int if unique index, slice if monotonic index, else mask
+ Raises
+ ------
+ KeyError : if the key is not in the index
+
Examples
---------
>>> unique_index = pd.CategoricalIndex(list('abc'))
@@ -440,10 +444,11 @@ def get_loc(self, key, method=None):
>>> non_monotonic_index.get_loc('b')
array([False, True, False, True], dtype=bool)
"""
- codes = self.categories.get_loc(key)
- if (codes == -1):
+ code = self.categories.get_loc(key)
+ try:
+ return self._engine.get_loc(code)
+ except KeyError:
raise KeyError(key)
- return self._engine.get_loc(codes)
def get_value(self, series, key):
"""
| This is an offspring from #21699 to do a cleanup in a contained PR.
``self.categories.get_loc(key)`` can never return -1, so the ``if (codes == -1): raise KeyError(key)`` is unnecessary. Instead, if key is not in self.categories, a KeyError is raised straight away.
```python
>>> ci = pd.CategoricalIndex(['a', 'b'], categories=['a', 'b', 'c'])
>>> ci.get_loc('d')
KeyError: 'd' # both master and this PR
```
A slightly confusing issue is that if the key is found in categories, but the code is not found in self._engine, the code is shown in the KeyError message. It would probably be more intuitive that the key would be shown:
```python
>>> ci.get_loc('c')
KeyError: 2 # master, confusing IMO
KeyError: 'c' # this PR
```
As the error type is unchanged, does this change warrant a whatsnew entry?
| https://api.github.com/repos/pandas-dev/pandas/pulls/23091 | 2018-10-11T13:26:22Z | 2018-10-18T16:13:22Z | 2018-10-18T16:13:22Z | 2018-10-18T17:43:21Z |
BUG: DataFrame(ndarray, dtype=categoricaldtype) | diff --git a/doc/source/whatsnew/v1.3.0.rst b/doc/source/whatsnew/v1.3.0.rst
index 049c4fe653107..9981b2c4b395e 100644
--- a/doc/source/whatsnew/v1.3.0.rst
+++ b/doc/source/whatsnew/v1.3.0.rst
@@ -184,8 +184,8 @@ Categorical
^^^^^^^^^^^
- Bug in :class:`CategoricalIndex` incorrectly failing to raise ``TypeError`` when scalar data is passed (:issue:`38614`)
- Bug in ``CategoricalIndex.reindex`` failed when ``Index`` passed with elements all in category (:issue:`28690`)
-- Bug where construcing a :class:`Categorical` from an object-dtype array of ``date`` objects did not round-trip correctly with ``astype`` (:issue:`38552`)
-
+- Bug where constructing a :class:`Categorical` from an object-dtype array of ``date`` objects did not round-trip correctly with ``astype`` (:issue:`38552`)
+- Bug in constructing a :class:`DataFrame` from an ``ndarray`` and a :class:`CategoricalDtype` (:issue:`38857`)
Datetimelike
^^^^^^^^^^^^
diff --git a/pandas/core/internals/construction.py b/pandas/core/internals/construction.py
index d59cfc436f13d..f1cd221bae15c 100644
--- a/pandas/core/internals/construction.py
+++ b/pandas/core/internals/construction.py
@@ -21,7 +21,6 @@
maybe_upcast,
)
from pandas.core.dtypes.common import (
- is_categorical_dtype,
is_datetime64tz_dtype,
is_dtype_equal,
is_extension_array_dtype,
@@ -160,21 +159,7 @@ def init_ndarray(values, index, columns, dtype: Optional[DtypeObj], copy: bool):
if not len(values) and columns is not None and len(columns):
values = np.empty((0, 1), dtype=object)
- # we could have a categorical type passed or coerced to 'category'
- # recast this to an arrays_to_mgr
- if is_categorical_dtype(getattr(values, "dtype", None)) or is_categorical_dtype(
- dtype
- ):
-
- if not hasattr(values, "dtype"):
- values = _prep_ndarray(values, copy=copy)
- values = values.ravel()
- elif copy:
- values = values.copy()
-
- index, columns = _get_axes(len(values), 1, index, columns)
- return arrays_to_mgr([values], columns, index, columns, dtype=dtype)
- elif is_extension_array_dtype(values) or is_extension_array_dtype(dtype):
+ if is_extension_array_dtype(values) or is_extension_array_dtype(dtype):
# GH#19157
if isinstance(values, np.ndarray) and values.ndim > 1:
@@ -308,6 +293,7 @@ def nested_data_to_arrays(
if isinstance(data[0], ABCSeries):
index = _get_names_from_index(data)
elif isinstance(data[0], Categorical):
+ # GH#38845 hit in test_constructor_categorical
index = ibase.default_index(len(data[0]))
else:
index = ibase.default_index(len(data))
@@ -486,7 +472,9 @@ def _get_names_from_index(data):
return index
-def _get_axes(N, K, index, columns) -> Tuple[Index, Index]:
+def _get_axes(
+ N: int, K: int, index: Optional[Index], columns: Optional[Index]
+) -> Tuple[Index, Index]:
# helper to create the axes as indexes
# return axes or defaults
diff --git a/pandas/tests/frame/test_constructors.py b/pandas/tests/frame/test_constructors.py
index 4d57b43df2387..f408a3ddde04e 100644
--- a/pandas/tests/frame/test_constructors.py
+++ b/pandas/tests/frame/test_constructors.py
@@ -1890,6 +1890,16 @@ def test_constructor_lists_to_object_dtype(self):
assert d["a"].dtype == np.object_
assert not d["a"][1]
+ def test_constructor_ndarray_categorical_dtype(self):
+ cat = Categorical(["A", "B", "C"])
+ arr = np.array(cat).reshape(-1, 1)
+ arr = np.broadcast_to(arr, (3, 4))
+
+ result = DataFrame(arr, dtype=cat.dtype)
+
+ expected = DataFrame({0: cat, 1: cat, 2: cat, 3: cat})
+ tm.assert_frame_equal(result, expected)
+
def test_constructor_categorical(self):
# GH8626
@@ -1913,11 +1923,13 @@ def test_constructor_categorical(self):
expected = Series(list("abc"), dtype="category", name=0)
tm.assert_series_equal(df[0], expected)
+ def test_construct_from_1item_list_of_categorical(self):
# ndim != 1
df = DataFrame([Categorical(list("abc"))])
expected = DataFrame({0: Series(list("abc"), dtype="category")})
tm.assert_frame_equal(df, expected)
+ def test_construct_from_list_of_categoricals(self):
df = DataFrame([Categorical(list("abc")), Categorical(list("abd"))])
expected = DataFrame(
{
@@ -1928,6 +1940,7 @@ def test_constructor_categorical(self):
)
tm.assert_frame_equal(df, expected)
+ def test_from_nested_listlike_mixed_types(self):
# mixed
df = DataFrame([Categorical(list("abc")), list("def")])
expected = DataFrame(
@@ -1935,11 +1948,14 @@ def test_constructor_categorical(self):
)
tm.assert_frame_equal(df, expected)
+ def test_construct_from_listlikes_mismatched_lengths(self):
# invalid (shape)
msg = r"Shape of passed values is \(6, 2\), indices imply \(3, 2\)"
with pytest.raises(ValueError, match=msg):
DataFrame([Categorical(list("abc")), Categorical(list("abdefg"))])
+ def test_categorical_1d_only(self):
+ # TODO: belongs in Categorical tests
# ndim > 1
msg = "> 1 ndim Categorical are not supported at this time"
with pytest.raises(NotImplementedError, match=msg):
diff --git a/pandas/tests/series/test_constructors.py b/pandas/tests/series/test_constructors.py
index ca7a171947ca0..c7bd38bbd00b9 100644
--- a/pandas/tests/series/test_constructors.py
+++ b/pandas/tests/series/test_constructors.py
@@ -326,13 +326,16 @@ def test_constructor_categorical(self):
expected = Series([1, 2, 3], dtype="int64")
tm.assert_series_equal(result, expected)
+ def test_construct_from_categorical_with_dtype(self):
# GH12574
cat = Series(Categorical([1, 2, 3]), dtype="category")
assert is_categorical_dtype(cat)
assert is_categorical_dtype(cat.dtype)
- s = Series([1, 2, 3], dtype="category")
- assert is_categorical_dtype(s)
- assert is_categorical_dtype(s.dtype)
+
+ def test_construct_intlist_values_category_dtype(self):
+ ser = Series([1, 2, 3], dtype="category")
+ assert is_categorical_dtype(ser)
+ assert is_categorical_dtype(ser.dtype)
def test_constructor_categorical_with_coercion(self):
factor = Categorical(["a", "b", "b", "a", "a", "c", "c", "c"])
| - [ ] closes #xxxx
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/38857 | 2020-12-31T17:01:44Z | 2021-01-02T02:00:55Z | 2021-01-02T02:00:55Z | 2021-01-02T02:17:50Z |
TST: GH30999 address all bare pytest.raises in pandas/tests/series and add EMPTY_STRING_PATTERN to tm | diff --git a/pandas/_testing/__init__.py b/pandas/_testing/__init__.py
index 24359809065b1..897f4ab59c370 100644
--- a/pandas/_testing/__init__.py
+++ b/pandas/_testing/__init__.py
@@ -130,6 +130,7 @@
NULL_OBJECTS = [None, np.nan, pd.NaT, float("nan"), pd.NA]
+EMPTY_STRING_PATTERN = re.compile("^$")
# set testing_mode
_testing_mode_warnings = (DeprecationWarning, ResourceWarning)
diff --git a/pandas/tests/series/apply/test_series_apply.py b/pandas/tests/series/apply/test_series_apply.py
index 5935d0c81af88..1242126c19527 100644
--- a/pandas/tests/series/apply/test_series_apply.py
+++ b/pandas/tests/series/apply/test_series_apply.py
@@ -454,7 +454,8 @@ def test_agg_cython_table_transform(self, series, func, expected):
)
def test_agg_cython_table_raises(self, series, func, expected):
# GH21224
- with pytest.raises(expected):
+ msg = r"[Cc]ould not convert|can't multiply sequence by non-int of type"
+ with pytest.raises(expected, match=msg):
# e.g. Series('a b'.split()).cumprod() will raise
series.agg(func)
@@ -714,7 +715,7 @@ def test_map_categorical(self):
tm.assert_series_equal(result, exp)
assert result.dtype == object
- with pytest.raises(NotImplementedError):
+ with pytest.raises(NotImplementedError, match=tm.EMPTY_STRING_PATTERN):
s.map(lambda x: x, na_action="ignore")
def test_map_datetimetz(self):
@@ -737,7 +738,7 @@ def test_map_datetimetz(self):
exp = Series(list(range(24)) + [0], name="XX", dtype=np.int64)
tm.assert_series_equal(result, exp)
- with pytest.raises(NotImplementedError):
+ with pytest.raises(NotImplementedError, match=tm.EMPTY_STRING_PATTERN):
s.map(lambda x: x, na_action="ignore")
# not vectorized
diff --git a/pandas/tests/series/test_ufunc.py b/pandas/tests/series/test_ufunc.py
index 271ac31d303ae..a5aa319e4772f 100644
--- a/pandas/tests/series/test_ufunc.py
+++ b/pandas/tests/series/test_ufunc.py
@@ -300,5 +300,5 @@ def test_outer():
s = pd.Series([1, 2, 3])
o = np.array([1, 2, 3])
- with pytest.raises(NotImplementedError):
+ with pytest.raises(NotImplementedError, match=tm.EMPTY_STRING_PATTERN):
np.subtract.outer(s, o)
| This PR is to address xref #30999 in pandas/tests/series
Three of the four errors raised within `pytest.raises` were `NotImplementedError` with an empty string as the error message. Rather than individually making a regular expression for the empty string, I added one as a constant to `pandas/_testing/__init__.py`. There are two more similar cases in other modules, so I wanted to put it somewhere that all the testing modules import. But if you don't like it I can revert it and instead use an individual empty-string pattern in each case.
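For reference, `pytest.raises(..., match=...)` applies the pattern with `re.search` against the stringified exception, so a compiled `^$` pattern matches only a fully empty message. A minimal sketch of the constant's behavior:

```python
import re

# Mirrors the constant added to pandas/_testing/__init__.py
EMPTY_STRING_PATTERN = re.compile("^$")

# With re.search (what pytest.raises uses), the ^ and $ anchors
# make the pattern match only a completely empty message.
assert EMPTY_STRING_PATTERN.search("") is not None
assert EMPTY_STRING_PATTERN.search("some error text") is None
```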
- [ ] closes #xxxx
- [ ] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/38855 | 2020-12-31T16:04:09Z | 2020-12-31T18:47:38Z | 2020-12-31T18:47:38Z | 2020-12-31T18:47:41Z |
Fix broken link in docs of DataFrame.to_hdf | diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index c3db8ef58deb6..eee5f72a05738 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -2576,7 +2576,7 @@ def to_hdf(
See Also
--------
- DataFrame.read_hdf : Read from HDF file.
+ read_hdf : Read from HDF file.
DataFrame.to_parquet : Write a DataFrame to the binary parquet format.
DataFrame.to_sql : Write to a sql table.
DataFrame.to_feather : Write out feather-format for DataFrames.
| The link in https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.to_hdf.html in "See also" seems to be broken. This should fix it.
- [ ] closes #xxxx
- [ ] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/38854 | 2020-12-31T15:58:31Z | 2020-12-31T18:48:28Z | 2020-12-31T18:48:28Z | 2021-01-01T12:14:28Z |
Backport PR #38850 on branch 1.2.x (DOC: sphinx error in 1.2.1 release notes) | diff --git a/doc/source/whatsnew/v1.2.1.rst b/doc/source/whatsnew/v1.2.1.rst
index c83a2ff7c1d22..a1612117072a5 100644
--- a/doc/source/whatsnew/v1.2.1.rst
+++ b/doc/source/whatsnew/v1.2.1.rst
@@ -21,8 +21,8 @@ Fixed regressions
- Fixed regression in :meth:`DataFrame.groupby()` with :class:`Categorical` grouping column not showing unused categories for ``grouped.indices`` (:issue:`38642`)
- Fixed regression in :meth:`DataFrame.any` and :meth:`DataFrame.all` not returning a result for tz-aware ``datetime64`` columns (:issue:`38723`)
- Fixed regression in :meth:`.GroupBy.sem` where the presence of non-numeric columns would cause an error instead of being dropped (:issue:`38774`)
-- :func:`read_excel` does not work for non-rawbyte file handles (issue:`38788`)
-- Bug in :meth:`read_csv` with ``float_precision="high"`` caused segfault or wrong parsing of long exponent strings (:issue:`38753`)
+- Fixed regression in :func:`read_excel` with non-rawbyte file handles (:issue:`38788`)
+- Bug in :meth:`read_csv` with ``float_precision="high"`` caused segfault or wrong parsing of long exponent strings. This resulted in a regression in some cases as the default for ``float_precision`` was changed in pandas 1.2.0 (:issue:`38753`)
-
.. ---------------------------------------------------------------------------
| Backport PR #38850: DOC: sphinx error in 1.2.1 release notes | https://api.github.com/repos/pandas-dev/pandas/pulls/38853 | 2020-12-31T15:51:36Z | 2020-12-31T17:19:33Z | 2020-12-31T17:19:33Z | 2020-12-31T17:19:33Z |
BLD: move metadata to setup.cfg | diff --git a/MANIFEST.in b/MANIFEST.in
index cf6a1835433a4..494ad69efbc56 100644
--- a/MANIFEST.in
+++ b/MANIFEST.in
@@ -1,9 +1,4 @@
-include MANIFEST.in
-include LICENSE
include RELEASE.md
-include README.md
-include setup.py
-include pyproject.toml
graft doc
prune doc/build
@@ -16,10 +11,12 @@ global-exclude *.bz2
global-exclude *.csv
global-exclude *.dta
global-exclude *.feather
+global-exclude *.tar
global-exclude *.gz
global-exclude *.h5
global-exclude *.html
global-exclude *.json
+global-exclude *.jsonl
global-exclude *.pickle
global-exclude *.png
global-exclude *.pyc
@@ -40,6 +37,11 @@ global-exclude .DS_Store
global-exclude .git*
global-exclude \#*
+# GH 39321
+# csv_dir_path fixture checks the existence of the directory
+# exclude the whole directory to avoid running related tests in sdist
+prune pandas/tests/io/parser/data
+
include versioneer.py
include pandas/_version.py
include pandas/io/formats/templates/*.tpl
diff --git a/conda.recipe/meta.yaml b/conda.recipe/meta.yaml
index e833ea1f1f398..53ee212360475 100644
--- a/conda.recipe/meta.yaml
+++ b/conda.recipe/meta.yaml
@@ -19,7 +19,7 @@ requirements:
- pip
- cython
- numpy
- - setuptools >=3.3
+ - setuptools >=38.6.0
- python-dateutil >=2.7.3
- pytz
run:
diff --git a/doc/source/getting_started/install.rst b/doc/source/getting_started/install.rst
index 06e1af75053d3..1ee8e3401e7f4 100644
--- a/doc/source/getting_started/install.rst
+++ b/doc/source/getting_started/install.rst
@@ -219,7 +219,7 @@ Dependencies
================================================================ ==========================
Package Minimum supported version
================================================================ ==========================
-`setuptools <https://setuptools.readthedocs.io/en/latest/>`__ 24.2.0
+`setuptools <https://setuptools.readthedocs.io/en/latest/>`__ 38.6.0
`NumPy <https://numpy.org>`__ 1.16.5
`python-dateutil <https://dateutil.readthedocs.io/en/stable/>`__ 2.7.3
`pytz <https://pypi.org/project/pytz/>`__ 2017.3
diff --git a/doc/source/whatsnew/v1.3.0.rst b/doc/source/whatsnew/v1.3.0.rst
index 76bd95c1c5d9d..f6d79cea84839 100644
--- a/doc/source/whatsnew/v1.3.0.rst
+++ b/doc/source/whatsnew/v1.3.0.rst
@@ -164,6 +164,8 @@ If installed, we now require:
+-----------------+-----------------+----------+---------+
| mypy (dev) | 0.800 | | X |
+-----------------+-----------------+----------+---------+
+| setuptools | 38.6.0 | | X |
++-----------------+-----------------+----------+---------+
For `optional libraries <https://pandas.pydata.org/docs/getting_started/install.html>`_ the general recommendation is to use the latest version.
The following table lists the lowest version per library that is currently being tested throughout the development of pandas.
diff --git a/pyproject.toml b/pyproject.toml
index 2b78147e9294d..9f11475234566 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -1,16 +1,17 @@
[build-system]
# Minimum requirements for the build system to execute.
-# See https://github.com/scipy/scipy/pull/10431 for the AIX issue.
+# See https://github.com/scipy/scipy/pull/12940 for the AIX issue.
requires = [
- "setuptools",
+ "setuptools>=38.6.0",
"wheel",
"Cython>=0.29.21,<3", # Note: sync with setup.py
- "numpy==1.16.5; python_version=='3.7' and platform_system!='AIX'",
- "numpy==1.17.3; python_version=='3.8' and platform_system!='AIX'",
- "numpy==1.16.5; python_version=='3.7' and platform_system=='AIX'",
- "numpy==1.17.3; python_version=='3.8' and platform_system=='AIX'",
+ "numpy==1.16.5; python_version=='3.7'",
+ "numpy==1.17.3; python_version=='3.8'",
"numpy; python_version>='3.9'",
]
+# uncomment to enable pep517 after versioneer problem is fixed.
+# https://github.com/python-versioneer/python-versioneer/issues/193
+# build-backend = "setuptools.build_meta"
[tool.black]
target-version = ['py37', 'py38']
diff --git a/setup.cfg b/setup.cfg
index a6d636704664e..5093ff81ad17f 100644
--- a/setup.cfg
+++ b/setup.cfg
@@ -1,11 +1,65 @@
+[metadata]
+name = pandas
+description = Powerful data structures for data analysis, time series, and statistics
+long_description = file: README.md
+long_description_content_type = text/markdown
+url = https://pandas.pydata.org
+author = The Pandas Development Team
+author_email = pandas-dev@python.org
+license = BSD-3-Clause
+license_file = LICENSE
+platforms = any
+classifiers =
+ Development Status :: 5 - Production/Stable
+ Environment :: Console
+ Intended Audience :: Science/Research
+ License :: OSI Approved :: BSD License
+ Operating System :: OS Independent
+ Programming Language :: Cython
+ Programming Language :: Python
+ Programming Language :: Python :: 3
+ Programming Language :: Python :: 3 :: Only
+ Programming Language :: Python :: 3.7
+ Programming Language :: Python :: 3.8
+ Programming Language :: Python :: 3.9
+ Topic :: Scientific/Engineering
+project_urls =
+ Bug Tracker = https://github.com/pandas-dev/pandas/issues
+ Documentation = https://pandas.pydata.org/pandas-docs/stable
+ Source Code = https://github.com/pandas-dev/pandas
+
+[options]
+packages = find:
+install_requires =
+ numpy>=1.16.5
+ python-dateutil>=2.7.3
+ pytz>=2017.3
+python_requires = >=3.7.1
+include_package_data = True
+zip_safe = False
+
+[options.entry_points]
+pandas_plotting_backends =
+ matplotlib = pandas:plotting._matplotlib
+
+[options.extras_require]
+test =
+ hypothesis>=3.58
+ pytest>=5.0.1
+ pytest-xdist
+
+[options.package_data]
+* = templates/*, _libs/**/*.dll
[build_ext]
-inplace = 1
+inplace = True
+
+[options.packages.find]
+include = pandas, pandas.*
# See the docstring in versioneer.py for instructions. Note that you must
# re-run 'versioneer.py setup' after changing this section, and commit the
# resulting files.
-
[versioneer]
VCS = git
style = pep440
@@ -38,16 +92,16 @@ bootstrap =
import pandas as pd
np # avoiding error when importing again numpy or pandas
pd # (in some cases we want to do it to show users)
-ignore = E203, # space before : (needed for how black formats slicing)
- E402, # module level import not at top of file
- W503, # line break before binary operator
- # Classes/functions in different blocks can generate those errors
- E302, # expected 2 blank lines, found 0
- E305, # expected 2 blank lines after class or function definition, found 0
- # We use semicolon at the end to avoid displaying plot objects
- E703, # statement ends with a semicolon
- E711, # comparison to none should be 'if cond is none:'
-
+ignore =
+ E203, # space before : (needed for how black formats slicing)
+ E402, # module level import not at top of file
+ W503, # line break before binary operator
+ # Classes/functions in different blocks can generate those errors
+ E302, # expected 2 blank lines, found 0
+ E305, # expected 2 blank lines after class or function definition, found 0
+ # We use semicolon at the end to avoid displaying plot objects
+ E703, # statement ends with a semicolon
+ E711, # comparison to none should be 'if cond is none:'
exclude =
doc/source/development/contributing_docstring.rst,
# work around issue of undefined variable warnings
@@ -64,18 +118,18 @@ xfail_strict = True
filterwarnings =
error:Sparse:FutureWarning
error:The SparseArray:FutureWarning
-junit_family=xunit2
+junit_family = xunit2
[codespell]
-ignore-words-list=ba,blocs,coo,hist,nd,ser
-ignore-regex=https://(\w+\.)+
+ignore-words-list = ba,blocs,coo,hist,nd,ser
+ignore-regex = https://(\w+\.)+
[coverage:run]
branch = False
omit =
- */tests/*
- pandas/_typing.py
- pandas/_version.py
+ */tests/*
+ pandas/_typing.py
+ pandas/_version.py
plugins = Cython.Coverage
[coverage:report]
@@ -130,10 +184,10 @@ warn_unused_ignores = True
show_error_codes = True
[mypy-pandas.tests.*]
-check_untyped_defs=False
+check_untyped_defs = False
[mypy-pandas._version]
-check_untyped_defs=False
+check_untyped_defs = False
[mypy-pandas.io.clipboard]
-check_untyped_defs=False
+check_untyped_defs = False
diff --git a/setup.py b/setup.py
index f9c4a1158fee0..34c80925a80a8 100755
--- a/setup.py
+++ b/setup.py
@@ -18,7 +18,7 @@
import sys
import numpy
-from setuptools import Command, Extension, find_packages, setup
+from setuptools import Command, Extension, setup
from setuptools.command.build_ext import build_ext as _build_ext
import versioneer
@@ -34,7 +34,6 @@ def is_platform_mac():
return sys.platform == "darwin"
-min_numpy_ver = "1.16.5"
min_cython_ver = "0.29.21" # note: sync with pyproject.toml
try:
@@ -99,96 +98,6 @@ def build_extensions(self):
super().build_extensions()
-DESCRIPTION = "Powerful data structures for data analysis, time series, and statistics"
-LONG_DESCRIPTION = """
-**pandas** is a Python package that provides fast, flexible, and expressive data
-structures designed to make working with structured (tabular, multidimensional,
-potentially heterogeneous) and time series data both easy and intuitive. It
-aims to be the fundamental high-level building block for doing practical,
-**real world** data analysis in Python. Additionally, it has the broader goal
-of becoming **the most powerful and flexible open source data analysis /
-manipulation tool available in any language**. It is already well on its way
-toward this goal.
-
-pandas is well suited for many different kinds of data:
-
- - Tabular data with heterogeneously-typed columns, as in an SQL table or
- Excel spreadsheet
- - Ordered and unordered (not necessarily fixed-frequency) time series data.
- - Arbitrary matrix data (homogeneously typed or heterogeneous) with row and
- column labels
- - Any other form of observational / statistical data sets. The data actually
- need not be labeled at all to be placed into a pandas data structure
-
-The two primary data structures of pandas, Series (1-dimensional) and DataFrame
-(2-dimensional), handle the vast majority of typical use cases in finance,
-statistics, social science, and many areas of engineering. For R users,
-DataFrame provides everything that R's ``data.frame`` provides and much
-more. pandas is built on top of `NumPy <https://www.numpy.org>`__ and is
-intended to integrate well within a scientific computing environment with many
-other 3rd party libraries.
-
-Here are just a few of the things that pandas does well:
-
- - Easy handling of **missing data** (represented as NaN) in floating point as
- well as non-floating point data
- - Size mutability: columns can be **inserted and deleted** from DataFrame and
- higher dimensional objects
- - Automatic and explicit **data alignment**: objects can be explicitly
- aligned to a set of labels, or the user can simply ignore the labels and
- let `Series`, `DataFrame`, etc. automatically align the data for you in
- computations
- - Powerful, flexible **group by** functionality to perform
- split-apply-combine operations on data sets, for both aggregating and
- transforming data
- - Make it **easy to convert** ragged, differently-indexed data in other
- Python and NumPy data structures into DataFrame objects
- - Intelligent label-based **slicing**, **fancy indexing**, and **subsetting**
- of large data sets
- - Intuitive **merging** and **joining** data sets
- - Flexible **reshaping** and pivoting of data sets
- - **Hierarchical** labeling of axes (possible to have multiple labels per
- tick)
- - Robust IO tools for loading data from **flat files** (CSV and delimited),
- Excel files, databases, and saving / loading data from the ultrafast **HDF5
- format**
- - **Time series**-specific functionality: date range generation and frequency
- conversion, moving window statistics, date shifting and lagging.
-
-Many of these principles are here to address the shortcomings frequently
-experienced using other languages / scientific research environments. For data
-scientists, working with data is typically divided into multiple stages:
-munging and cleaning data, analyzing / modeling it, then organizing the results
-of the analysis into a form suitable for plotting or tabular display. pandas is
-the ideal tool for all of these tasks.
-"""
-
-DISTNAME = "pandas"
-LICENSE = "BSD"
-AUTHOR = "The PyData Development Team"
-EMAIL = "pydata@googlegroups.com"
-URL = "https://pandas.pydata.org"
-DOWNLOAD_URL = ""
-PROJECT_URLS = {
- "Bug Tracker": "https://github.com/pandas-dev/pandas/issues",
- "Documentation": "https://pandas.pydata.org/pandas-docs/stable/",
- "Source Code": "https://github.com/pandas-dev/pandas",
-}
-CLASSIFIERS = [
- "Development Status :: 5 - Production/Stable",
- "Environment :: Console",
- "Operating System :: OS Independent",
- "Intended Audience :: Science/Research",
- "Programming Language :: Python",
- "Programming Language :: Python :: 3",
- "Programming Language :: Python :: 3.7",
- "Programming Language :: Python :: 3.8",
- "Programming Language :: Python :: 3.9",
- "Programming Language :: Cython",
- "Topic :: Scientific/Engineering",
-]
-
-
class CleanCommand(Command):
"""Custom distutils command to clean the .so and .pyc files."""
@@ -711,51 +620,11 @@ def srcpath(name=None, suffix=".pyx", subdir="src"):
# ----------------------------------------------------------------------
-def setup_package():
- setuptools_kwargs = {
- "install_requires": [
- "python-dateutil >= 2.7.3",
- "pytz >= 2017.3",
- f"numpy >= {min_numpy_ver}",
- ],
- "setup_requires": [f"numpy >= {min_numpy_ver}"],
- "zip_safe": False,
- }
-
+if __name__ == "__main__":
+ # Freeze to support parallel compilation when using spawn instead of fork
+ multiprocessing.freeze_support()
setup(
- name=DISTNAME,
- maintainer=AUTHOR,
version=versioneer.get_version(),
- packages=find_packages(include=["pandas", "pandas.*"]),
- package_data={"": ["templates/*", "_libs/**/*.dll"]},
ext_modules=maybe_cythonize(extensions, compiler_directives=directives),
- maintainer_email=EMAIL,
- description=DESCRIPTION,
- license=LICENSE,
cmdclass=cmdclass,
- url=URL,
- download_url=DOWNLOAD_URL,
- project_urls=PROJECT_URLS,
- long_description=LONG_DESCRIPTION,
- classifiers=CLASSIFIERS,
- platforms="any",
- python_requires=">=3.7.1",
- extras_require={
- "test": [
- # sync with setup.cfg minversion & install.rst
- "pytest>=5.0.1",
- "pytest-xdist",
- "hypothesis>=3.58",
- ]
- },
- entry_points={
- "pandas_plotting_backends": ["matplotlib = pandas:plotting._matplotlib"]
- },
- **setuptools_kwargs,
)
-
-
-if __name__ == "__main__":
- # Freeze to support parallel compilation when using spawn instead of fork
- multiprocessing.freeze_support()
- setup_package()
| Move metadata to setup.cfg to separate it from the build logic.
| https://api.github.com/repos/pandas-dev/pandas/pulls/38852 | 2020-12-31T15:28:54Z | 2021-02-16T09:51:44Z | 2021-02-16T09:51:44Z | 2021-06-12T15:40:36Z |
DOC: sphinx error in 1.2.1 release notes | diff --git a/doc/source/whatsnew/v1.2.1.rst b/doc/source/whatsnew/v1.2.1.rst
index c83a2ff7c1d22..a1612117072a5 100644
--- a/doc/source/whatsnew/v1.2.1.rst
+++ b/doc/source/whatsnew/v1.2.1.rst
@@ -21,8 +21,8 @@ Fixed regressions
- Fixed regression in :meth:`DataFrame.groupby()` with :class:`Categorical` grouping column not showing unused categories for ``grouped.indices`` (:issue:`38642`)
- Fixed regression in :meth:`DataFrame.any` and :meth:`DataFrame.all` not returning a result for tz-aware ``datetime64`` columns (:issue:`38723`)
- Fixed regression in :meth:`.GroupBy.sem` where the presence of non-numeric columns would cause an error instead of being dropped (:issue:`38774`)
-- :func:`read_excel` does not work for non-rawbyte file handles (issue:`38788`)
-- Bug in :meth:`read_csv` with ``float_precision="high"`` caused segfault or wrong parsing of long exponent strings (:issue:`38753`)
+- Fixed regression in :func:`read_excel` with non-rawbyte file handles (:issue:`38788`)
+- Bug in :meth:`read_csv` with ``float_precision="high"`` caused segfault or wrong parsing of long exponent strings. This resulted in a regression in some cases as the default for ``float_precision`` was changed in pandas 1.2.0 (:issue:`38753`)
-
.. ---------------------------------------------------------------------------
| https://api.github.com/repos/pandas-dev/pandas/pulls/38850 | 2020-12-31T12:17:45Z | 2020-12-31T15:49:30Z | 2020-12-31T15:49:30Z | 2020-12-31T15:59:40Z | |
TST: GH30999 address all bare pytest.raises in pandas/tests/arrays/boolean/test_arithmetic.py | diff --git a/pandas/tests/arrays/boolean/test_arithmetic.py b/pandas/tests/arrays/boolean/test_arithmetic.py
index 01de64568a011..f8f1af4c3da51 100644
--- a/pandas/tests/arrays/boolean/test_arithmetic.py
+++ b/pandas/tests/arrays/boolean/test_arithmetic.py
@@ -46,8 +46,11 @@ def test_add_mul(left_array, right_array, opname, exp):
def test_sub(left_array, right_array):
- with pytest.raises(TypeError):
- # numpy points to ^ operator or logical_xor function instead
+ msg = (
+ r"numpy boolean subtract, the `-` operator, is (?:deprecated|not supported), "
+ r"use the bitwise_xor, the `\^` operator, or the logical_xor function instead\."
+ )
+ with pytest.raises(TypeError, match=msg):
left_array - right_array
@@ -92,13 +95,27 @@ def test_error_invalid_values(data, all_arithmetic_operators):
ops = getattr(s, op)
# invalid scalars
- with pytest.raises(TypeError):
+ msg = (
+ "did not contain a loop with signature matching types|"
+ "BooleanArray cannot perform the operation|"
+ "not supported for the input types, and the inputs could not be safely coerced "
+ "to any supported types according to the casting rule ''safe''"
+ )
+ with pytest.raises(TypeError, match=msg):
ops("foo")
- with pytest.raises(TypeError):
+ msg = (
+ r"unsupported operand type\(s\) for|"
+ "Concatenation operation is not implemented for NumPy arrays"
+ )
+ with pytest.raises(TypeError, match=msg):
ops(pd.Timestamp("20180101"))
# invalid array-likes
if op not in ("__mul__", "__rmul__"):
# TODO(extension) numpy's mul with object array sees booleans as numbers
- with pytest.raises(TypeError):
+ msg = (
+ r"unsupported operand type\(s\) for|can only concatenate str|"
+ "not all arguments converted during string formatting"
+ )
+ with pytest.raises(TypeError, match=msg):
ops(pd.Series("foo", index=s.index))
| xref #30999. I thought I was done with the simple bare `pytest.raises` instances, but I somehow missed this test module. Four instances: three fixed by adding `match` and one by using `tm.external_error_raised`, because I believe the error comes from numpy.
- [ ] closes #xxxx
- [ ] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/38849 | 2020-12-31T12:14:36Z | 2021-01-01T00:03:54Z | 2021-01-01T00:03:54Z | 2021-01-01T00:03:54Z |
Backport PR #38806: DOC: fix sphinx directive error in 1.2.1 release notes | diff --git a/doc/source/whatsnew/v1.2.1.rst b/doc/source/whatsnew/v1.2.1.rst
index 72f76b4749c54..f275cfa895269 100644
--- a/doc/source/whatsnew/v1.2.1.rst
+++ b/doc/source/whatsnew/v1.2.1.rst
@@ -15,8 +15,8 @@ including other versions of pandas.
Fixed regressions
~~~~~~~~~~~~~~~~~
- The deprecated attributes ``_AXIS_NAMES`` and ``_AXIS_NUMBERS`` of :class:`DataFrame` and :class:`Series` will no longer show up in ``dir`` or ``inspect.getmembers`` calls (:issue:`38740`)
-- :meth:`to_csv` created corrupted zip files when there were more rows than ``chunksize`` (issue:`38714`)
-- Bug in repr of float-like strings of an ``object`` dtype having trailing 0's truncated after the decimal (:issue:`38708`)
+- Fixed regression in :meth:`to_csv` that created corrupted zip files when there were more rows than ``chunksize`` (:issue:`38714`)
+- Fixed regression in repr of float-like strings of an ``object`` dtype having trailing 0's truncated after the decimal (:issue:`38708`)
- Fixed regression in :meth:`DataFrame.groupby()` with :class:`Categorical` grouping column not showing unused categories for ``grouped.indices`` (:issue:`38642`)
- Fixed regression in :meth:`DataFrame.any` and :meth:`DataFrame.all` not returning a result for tz-aware ``datetime64`` columns (:issue:`38723`)
- Fixed regression in :meth:`.GroupBy.sem` where the presence of non-numeric columns would cause an error instead of being dropped (:issue:`38774`)
| Backport PR #38806 | https://api.github.com/repos/pandas-dev/pandas/pulls/38848 | 2020-12-31T10:35:30Z | 2020-12-31T12:00:36Z | 2020-12-31T12:00:36Z | 2020-12-31T12:00:42Z |
CI,STYLE: narrow down ignore-words-list of codespell | diff --git a/pandas/core/arrays/sparse/accessor.py b/pandas/core/arrays/sparse/accessor.py
index 12f29faab9574..c0bc88dc54e43 100644
--- a/pandas/core/arrays/sparse/accessor.py
+++ b/pandas/core/arrays/sparse/accessor.py
@@ -333,18 +333,18 @@ def to_coo(self):
if isinstance(dtype, SparseDtype):
dtype = dtype.subtype
- cols, rows, datas = [], [], []
+ cols, rows, data = [], [], []
for col, name in enumerate(self._parent):
s = self._parent[name]
row = s.array.sp_index.to_int_index().indices
cols.append(np.repeat(col, len(row)))
rows.append(row)
- datas.append(s.array.sp_values.astype(dtype, copy=False))
+ data.append(s.array.sp_values.astype(dtype, copy=False))
cols = np.concatenate(cols)
rows = np.concatenate(rows)
- datas = np.concatenate(datas)
- return coo_matrix((datas, (rows, cols)), shape=self._parent.shape)
+ data = np.concatenate(data)
+ return coo_matrix((data, (rows, cols)), shape=self._parent.shape)
@property
def density(self) -> float:
diff --git a/pandas/core/strings/accessor.py b/pandas/core/strings/accessor.py
index 339ad2653fcfb..eb14f75ab56f7 100644
--- a/pandas/core/strings/accessor.py
+++ b/pandas/core/strings/accessor.py
@@ -1547,40 +1547,40 @@ def slice(self, start=None, stop=None, step=None):
Examples
--------
- >>> s = pd.Series(["koala", "fox", "chameleon"])
+ >>> s = pd.Series(["koala", "dog", "chameleon"])
>>> s
0 koala
- 1 fox
+ 1 dog
2 chameleon
dtype: object
>>> s.str.slice(start=1)
0 oala
- 1 ox
+ 1 og
2 hameleon
dtype: object
>>> s.str.slice(start=-1)
0 a
- 1 x
+ 1 g
2 n
dtype: object
>>> s.str.slice(stop=2)
0 ko
- 1 fo
+ 1 do
2 ch
dtype: object
>>> s.str.slice(step=2)
0 kaa
- 1 fx
+ 1 dg
2 caeen
dtype: object
>>> s.str.slice(start=0, stop=5, step=3)
0 kl
- 1 f
+ 1 d
2 cm
dtype: object
@@ -1588,7 +1588,7 @@ def slice(self, start=None, stop=None, step=None):
>>> s.str[0:5:3]
0 kl
- 1 f
+ 1 d
2 cm
dtype: object
"""
diff --git a/setup.cfg b/setup.cfg
index 56b2fa190ac99..226f5353d8afc 100644
--- a/setup.cfg
+++ b/setup.cfg
@@ -64,7 +64,7 @@ filterwarnings =
junit_family=xunit2
[codespell]
-ignore-words-list=ba,blocs,coo,datas,fo,hist,nd,ser
+ignore-words-list=ba,blocs,coo,hist,nd,ser
[coverage:run]
branch = False
| - [x] xref #38820
- [ ] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
See issue #38802, task 4.
remove docs from packages | diff --git a/MANIFEST.in b/MANIFEST.in
index 494ad69efbc56..d0d93f2cdba8c 100644
--- a/MANIFEST.in
+++ b/MANIFEST.in
@@ -17,8 +17,10 @@ global-exclude *.h5
global-exclude *.html
global-exclude *.json
global-exclude *.jsonl
+global-exclude *.pdf
global-exclude *.pickle
global-exclude *.png
+global-exclude *.pptx
global-exclude *.pyc
global-exclude *.pyd
global-exclude *.ods
diff --git a/doc/source/whatsnew/v1.3.0.rst b/doc/source/whatsnew/v1.3.0.rst
index 5e95cd6e5ee10..4ac95aef0a2c5 100644
--- a/doc/source/whatsnew/v1.3.0.rst
+++ b/doc/source/whatsnew/v1.3.0.rst
@@ -518,6 +518,11 @@ Other API changes
- Accessing ``_constructor_expanddim`` on a :class:`DataFrame` and ``_constructor_sliced`` on a :class:`Series` now raise an ``AttributeError``. Previously a ``NotImplementedError`` was raised (:issue:`38782`)
-
+Build
+=====
+
+- Documentation in ``.pptx`` and ``.pdf`` formats are no longer included in wheels or source distributions. (:issue:`30741`)
+
.. ---------------------------------------------------------------------------
.. _whatsnew_130.deprecations:
| - ~closes #xxxx~
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
#30741 discusses removing documentation and tests from package distributions. This is valuable because it makes `pandas` easier to use in storage-sensitive environments such as AWS Lambda (https://github.com/pandas-dev/pandas/issues/30741#issuecomment-638158062, https://github.com/pandas-dev/pandas/issues/30741#issuecomment-752751976).
This PR does not close that issue, but it is a first step: it removes the contents of `doc/` from package distributions, trimming about 1MB (compressed) and 6MB (uncompressed) from the `sdist` package.
| | `master` | this PR |
|:-----------------------:|:----------:|:--------:|
| sdist (compressed) | 5.7M | 4.5M |
| sdist (uncompressed) | 25M | 19M |
<details><summary>how I checked these sizes</summary>
```shell
rm -rf pandas.egg-info
rm -rf dist/
rm -rf __pycache__
echo ""
echo "building source distribution"
echo ""
python setup.py sdist > /dev/null
cp pandas.egg-info/SOURCES.txt ~/PANDAS-SOURCES.txt
pushd dist/
echo ""
echo "sdist compressed size"
echo ""
du -a -h .
tar -xf pandas*.tar.gz
rm pandas*.tar.gz
ls .
echo ""
echo "sdist uncompressed size"
echo ""
du -sh .
popd
```
</details>
To confirm that the documentation files were removed correctly by these changes, I ran the following
```shell
python setup.py sdist
cat pandas.egg-info/SOURCES.txt | grep -E "^doc"
```
Thanks for your time and consideration. | https://api.github.com/repos/pandas-dev/pandas/pulls/38846 | 2020-12-31T04:15:58Z | 2021-04-09T00:04:51Z | 2021-04-09T00:04:51Z | 2021-04-09T00:10:24Z |
BUG: inconsistent concat casting EA vs non-EA | diff --git a/doc/source/whatsnew/v1.3.0.rst b/doc/source/whatsnew/v1.3.0.rst
index 049c4fe653107..cae04af1ac2c4 100644
--- a/doc/source/whatsnew/v1.3.0.rst
+++ b/doc/source/whatsnew/v1.3.0.rst
@@ -293,10 +293,11 @@ Groupby/resample/rolling
Reshaping
^^^^^^^^^
-
- Bug in :meth:`DataFrame.unstack` with missing levels led to incorrect index names (:issue:`37510`)
+- Bug in :func:`concat` incorrectly casting to ``object`` dtype in some cases when one or more of the operands is empty (:issue:`38843`)
-
+
Sparse
^^^^^^
diff --git a/pandas/core/dtypes/concat.py b/pandas/core/dtypes/concat.py
index aea9029972de6..b8d2e7e1a40a9 100644
--- a/pandas/core/dtypes/concat.py
+++ b/pandas/core/dtypes/concat.py
@@ -128,7 +128,7 @@ def is_nonempty(x) -> bool:
# marginal given that it would still require shape & dtype calculation and
# np.concatenate which has them both implemented is compiled.
non_empties = [x for x in to_concat if is_nonempty(x)]
- if non_empties and axis == 0:
+ if non_empties:
to_concat = non_empties
typs = _get_dtype_kinds(to_concat)
diff --git a/pandas/tests/indexing/test_partial.py b/pandas/tests/indexing/test_partial.py
index 0251fb4a0ebd6..d8dd08ea13341 100644
--- a/pandas/tests/indexing/test_partial.py
+++ b/pandas/tests/indexing/test_partial.py
@@ -170,11 +170,21 @@ def test_partial_setting_mixed_dtype(self):
with pytest.raises(ValueError, match=msg):
df.loc[0] = [1, 2, 3]
- # TODO: #15657, these are left as object and not coerced
+ @pytest.mark.parametrize("dtype", [None, "int64", "Int64"])
+ def test_loc_setitem_expanding_empty(self, dtype):
df = DataFrame(columns=["A", "B"])
- df.loc[3] = [6, 7]
- exp = DataFrame([[6, 7]], index=[3], columns=["A", "B"], dtype="object")
+ value = [6, 7]
+ if dtype == "int64":
+ value = np.array(value, dtype=dtype)
+ elif dtype == "Int64":
+ value = pd.array(value, dtype=dtype)
+
+ df.loc[3] = value
+
+ exp = DataFrame([[6, 7]], index=[3], columns=["A", "B"], dtype=dtype)
+ if dtype is not None:
+ exp = exp.astype(dtype)
tm.assert_frame_equal(df, exp)
def test_series_partial_set(self):
diff --git a/pandas/tests/reshape/concat/test_concat.py b/pandas/tests/reshape/concat/test_concat.py
index 16c4e9456aa05..4750f9b0c40a3 100644
--- a/pandas/tests/reshape/concat/test_concat.py
+++ b/pandas/tests/reshape/concat/test_concat.py
@@ -474,11 +474,12 @@ def test_concat_will_upcast(dt, pdt):
assert x.values.dtype == "float64"
-def test_concat_empty_and_non_empty_frame_regression():
+@pytest.mark.parametrize("dtype", ["int64", "Int64"])
+def test_concat_empty_and_non_empty_frame_regression(dtype):
# GH 18178 regression test
- df1 = DataFrame({"foo": [1]})
+ df1 = DataFrame({"foo": [1]}).astype(dtype)
df2 = DataFrame({"foo": []})
- expected = DataFrame({"foo": [1.0]})
+ expected = df1
result = pd.concat([df1, df2])
tm.assert_frame_equal(result, expected)
diff --git a/pandas/tests/reshape/concat/test_empty.py b/pandas/tests/reshape/concat/test_empty.py
index a97e9265b4f99..dea04e98088e8 100644
--- a/pandas/tests/reshape/concat/test_empty.py
+++ b/pandas/tests/reshape/concat/test_empty.py
@@ -202,12 +202,15 @@ def test_concat_empty_series_dtypes_sparse(self):
expected = pd.SparseDtype("object")
assert result.dtype == expected
- def test_concat_empty_df_object_dtype(self):
+ @pytest.mark.parametrize("dtype", ["int64", "Int64"])
+ def test_concat_empty_df_object_dtype(self, dtype):
# GH 9149
df_1 = DataFrame({"Row": [0, 1, 1], "EmptyCol": np.nan, "NumberCol": [1, 2, 3]})
+ df_1["Row"] = df_1["Row"].astype(dtype)
df_2 = DataFrame(columns=df_1.columns)
result = pd.concat([df_1, df_2], axis=0)
- expected = df_1.astype(object)
+ expected = df_1.copy()
+ expected["EmptyCol"] = expected["EmptyCol"].astype(object) # TODO: why?
tm.assert_frame_equal(result, expected)
def test_concat_empty_dataframe_dtypes(self):
| - [ ] closes #xxxx
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
Because we don't have 2D EAs, we end up passing axis=0 to concat_compat from internals.concat. As a result, we drop empty arrays only when EAs are present, which leads to dtypes being preserved in EA cases while being cast to object in non-EA cases.
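A minimal sketch of the behavior this change standardizes on: once the empty operand is dropped, the nullable extension dtype survives the concat (the column name here is illustrative, not from the issue):

```python
import pandas as pd

# Both operands share the nullable Int64 extension dtype; dropping the
# empty operand means the extension dtype is preserved by the concat
df1 = pd.DataFrame({"foo": pd.array([1, 2], dtype="Int64")})
df2 = pd.DataFrame({"foo": pd.array([], dtype="Int64")})

result = pd.concat([df1, df2])
print(result["foo"].dtype)
```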
One solution is just to support 2D EAs. But since I like the dropping-empties behavior better anyway, this is good too. | https://api.github.com/repos/pandas-dev/pandas/pulls/38843 | 2020-12-31T03:04:11Z | 2021-01-01T23:03:25Z | 2021-01-01T23:03:25Z | 2021-01-05T22:14:19Z |
Update conf.py to execute imports during pdf building | diff --git a/doc/source/conf.py b/doc/source/conf.py
index 951a6d4043786..8ab1c8c2f3428 100644
--- a/doc/source/conf.py
+++ b/doc/source/conf.py
@@ -427,7 +427,7 @@
ipython_warning_is_error = False
-ipython_exec_lines = [
+ipython_execlines = [
"import numpy as np",
"import pandas as pd",
# This ensures correct rendering on system with console encoding != utf8
| closes #38451
According to https://ipython.readthedocs.io/en/stable/sphinxext.html, the option is `ipython_execlines`, not `ipython_exec_lines`. Should close issue #38451.
| https://api.github.com/repos/pandas-dev/pandas/pulls/38841 | 2020-12-31T02:32:07Z | 2021-01-05T12:57:09Z | 2021-01-05T12:57:09Z | 2021-01-06T04:38:01Z |
TYP: follow-ups to recent PRs | diff --git a/pandas/core/dtypes/concat.py b/pandas/core/dtypes/concat.py
index a9355e30cd3c2..aea9029972de6 100644
--- a/pandas/core/dtypes/concat.py
+++ b/pandas/core/dtypes/concat.py
@@ -89,7 +89,7 @@ def _cast_to_common_type(arr: ArrayLike, dtype: DtypeObj) -> ArrayLike:
# wrap datetime-likes in EA to ensure astype(object) gives Timestamp/Timedelta
# this can happen when concat_compat is called directly on arrays (when arrays
# are not coming from Index/Series._values), eg in BlockManager.quantile
- arr = array(arr)
+ arr = ensure_wrapped_if_datetimelike(arr)
if is_extension_array_dtype(dtype):
if isinstance(arr, np.ndarray):
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index cc89823cd7817..03d439bd461da 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -9600,7 +9600,7 @@ def _from_nested_dict(data) -> collections.defaultdict:
return new_data
-def _reindex_for_setitem(value, index: Index):
+def _reindex_for_setitem(value: FrameOrSeriesUnion, index: Index) -> ArrayLike:
# reindex if necessary
if value.index.equals(index) or not len(index):
@@ -9608,7 +9608,7 @@ def _reindex_for_setitem(value, index: Index):
# GH#4107
try:
- value = value.reindex(index)._values
+ reindexed_value = value.reindex(index)._values
except ValueError as err:
# raised in MultiIndex.from_tuples, see test_insert_error_msmgs
if not value.index.is_unique:
@@ -9618,7 +9618,7 @@ def _reindex_for_setitem(value, index: Index):
raise TypeError(
"incompatible index of inserted column with frame index"
) from err
- return value
+ return reindexed_value
def _maybe_atleast_2d(value):
diff --git a/pandas/core/internals/blocks.py b/pandas/core/internals/blocks.py
index 5363ea2f031b1..7f4e16dc236ac 100644
--- a/pandas/core/internals/blocks.py
+++ b/pandas/core/internals/blocks.py
@@ -1274,22 +1274,6 @@ def shift(self, periods: int, axis: int = 0, fill_value=None):
return [self.make_block(new_values)]
- def _maybe_reshape_where_args(self, values, other, cond, axis):
- transpose = self.ndim == 2
-
- cond = _extract_bool_array(cond)
-
- # If the default broadcasting would go in the wrong direction, then
- # explicitly reshape other instead
- if getattr(other, "ndim", 0) >= 1:
- if values.ndim - 1 == other.ndim and axis == 1:
- other = other.reshape(tuple(other.shape + (1,)))
- elif transpose and values.ndim == self.ndim - 1:
- # TODO(EA2D): not neceesssary with 2D EAs
- cond = cond.T
-
- return other, cond
-
def where(self, other, cond, errors="raise", axis: int = 0) -> List["Block"]:
"""
evaluate the block; return result block(s) from the result
@@ -1319,7 +1303,7 @@ def where(self, other, cond, errors="raise", axis: int = 0) -> List["Block"]:
if transpose:
values = values.T
- other, cond = self._maybe_reshape_where_args(values, other, cond, axis)
+ cond = _extract_bool_array(cond)
if cond.ravel("K").all():
result = values
@@ -2072,7 +2056,7 @@ def where(self, other, cond, errors="raise", axis: int = 0) -> List["Block"]:
# TODO(EA2D): reshape unnecessary with 2D EAs
arr = self.array_values().reshape(self.shape)
- other, cond = self._maybe_reshape_where_args(arr, other, cond, axis)
+ cond = _extract_bool_array(cond)
try:
res_values = arr.T.where(cond, other).T
@@ -2572,23 +2556,20 @@ def _block_shape(values: ArrayLike, ndim: int = 1) -> ArrayLike:
return values
-def safe_reshape(arr, new_shape: Shape):
+def safe_reshape(arr: ArrayLike, new_shape: Shape) -> ArrayLike:
"""
- If possible, reshape `arr` to have shape `new_shape`,
- with a couple of exceptions (see gh-13012):
-
- 1) If `arr` is a ExtensionArray or Index, `arr` will be
- returned as is.
- 2) If `arr` is a Series, the `_values` attribute will
- be reshaped and returned.
+ Reshape `arr` to have shape `new_shape`, unless it is an ExtensionArray,
+ in which case it will be returned unchanged (see gh-13012).
Parameters
----------
- arr : array-like, object to be reshaped
- new_shape : int or tuple of ints, the new shape
+ arr : np.ndarray or ExtensionArray
+ new_shape : Tuple[int]
+
+ Returns
+ -------
+ np.ndarray or ExtensionArray
"""
- if isinstance(arr, ABCSeries):
- arr = arr._values
if not is_extension_array_dtype(arr.dtype):
# Note: this will include TimedeltaArray and tz-naive DatetimeArray
# TODO(EA2D): special case will be unnecessary with 2D EAs
diff --git a/pandas/core/internals/concat.py b/pandas/core/internals/concat.py
index dd3a04ccb38e2..013e52248f5c4 100644
--- a/pandas/core/internals/concat.py
+++ b/pandas/core/internals/concat.py
@@ -5,7 +5,7 @@
import numpy as np
from pandas._libs import NaT, internals as libinternals
-from pandas._typing import DtypeObj, Shape
+from pandas._typing import ArrayLike, DtypeObj, Shape
from pandas.util._decorators import cache_readonly
from pandas.core.dtypes.cast import maybe_promote
@@ -29,11 +29,12 @@
from pandas.core.internals.managers import BlockManager
if TYPE_CHECKING:
+ from pandas import Index
from pandas.core.arrays.sparse.dtype import SparseDtype
def concatenate_block_managers(
- mgrs_indexers, axes, concat_axis: int, copy: bool
+ mgrs_indexers, axes: List["Index"], concat_axis: int, copy: bool
) -> BlockManager:
"""
Concatenate block managers into one.
@@ -96,7 +97,7 @@ def concatenate_block_managers(
return BlockManager(blocks, axes)
-def _get_mgr_concatenation_plan(mgr, indexers):
+def _get_mgr_concatenation_plan(mgr: BlockManager, indexers: Dict[int, np.ndarray]):
"""
Construct concatenation plan for given block manager and indexers.
@@ -235,7 +236,7 @@ def is_na(self) -> bool:
return isna_all(values_flat)
- def get_reindexed_values(self, empty_dtype: DtypeObj, upcasted_na):
+ def get_reindexed_values(self, empty_dtype: DtypeObj, upcasted_na) -> ArrayLike:
if upcasted_na is None:
# No upcasting is necessary
fill_value = self.block.fill_value
@@ -307,7 +308,9 @@ def get_reindexed_values(self, empty_dtype: DtypeObj, upcasted_na):
return values
-def _concatenate_join_units(join_units, concat_axis, copy):
+def _concatenate_join_units(
+ join_units: List[JoinUnit], concat_axis: int, copy: bool
+) -> ArrayLike:
"""
Concatenate values from several join units along selected axis.
"""
@@ -513,7 +516,7 @@ def _is_uniform_reindex(join_units) -> bool:
)
-def _trim_join_unit(join_unit, length):
+def _trim_join_unit(join_unit: JoinUnit, length: int) -> JoinUnit:
"""
Reduce join_unit's shape along item axis to length.
@@ -540,7 +543,7 @@ def _trim_join_unit(join_unit, length):
return JoinUnit(block=extra_block, indexers=extra_indexers, shape=extra_shape)
-def _combine_concat_plans(plans, concat_axis):
+def _combine_concat_plans(plans, concat_axis: int):
"""
Combine multiple concatenation plans into one.
diff --git a/pandas/core/tools/numeric.py b/pandas/core/tools/numeric.py
index 1389aba9525d3..d8a5855d05dfd 100644
--- a/pandas/core/tools/numeric.py
+++ b/pandas/core/tools/numeric.py
@@ -171,7 +171,7 @@ def to_numeric(arg, errors="raise", downcast=None):
if is_numeric_dtype(values_dtype):
pass
elif is_datetime_or_timedelta_dtype(values_dtype):
- values = values.astype(np.int64)
+ values = values.view(np.int64)
else:
values = ensure_object(values)
coerce_numeric = errors not in ("ignore", "raise")
diff --git a/pandas/tests/plotting/frame/test_frame.py b/pandas/tests/plotting/frame/test_frame.py
index 31a16e21b7ac4..a6aa6c02d1a79 100644
--- a/pandas/tests/plotting/frame/test_frame.py
+++ b/pandas/tests/plotting/frame/test_frame.py
@@ -156,8 +156,8 @@ def test_nullable_int_plot(self):
"A": [1, 2, 3, 4, 5],
"B": [1.0, 2.0, 3.0, 4.0, 5.0],
"C": [7, 5, np.nan, 3, 2],
- "D": pd.to_datetime(dates, format="%Y"),
- "E": pd.to_datetime(dates, format="%Y", utc=True),
+ "D": pd.to_datetime(dates, format="%Y").view("i8"),
+ "E": pd.to_datetime(dates, format="%Y", utc=True).view("i8"),
},
dtype=np.int64,
)
| https://api.github.com/repos/pandas-dev/pandas/pulls/38840 | 2020-12-31T02:31:36Z | 2020-12-31T18:44:22Z | 2020-12-31T18:44:22Z | 2020-12-31T19:10:02Z | |
DOC: fix includes | diff --git a/doc/source/conf.py b/doc/source/conf.py
index 951a6d4043786..7f7309bae7031 100644
--- a/doc/source/conf.py
+++ b/doc/source/conf.py
@@ -68,7 +68,12 @@
"contributors", # custom pandas extension
]
-exclude_patterns = ["**.ipynb_checkpoints"]
+exclude_patterns = [
+ "**.ipynb_checkpoints",
+ # to ensure that include files (partial pages) aren't built, exclude them
+ # https://github.com/sphinx-doc/sphinx/issues/1965#issuecomment-124732907
+ "**/includes/**",
+]
try:
import nbconvert
except ImportError:
diff --git a/doc/source/getting_started/intro_tutorials/02_read_write.rst b/doc/source/getting_started/intro_tutorials/02_read_write.rst
index 3457ed142510b..d69a48def0287 100644
--- a/doc/source/getting_started/intro_tutorials/02_read_write.rst
+++ b/doc/source/getting_started/intro_tutorials/02_read_write.rst
@@ -17,7 +17,7 @@
<ul class="list-group list-group-flush">
<li class="list-group-item">
-.. include:: titanic.rst
+.. include:: includes/titanic.rst
.. raw:: html
diff --git a/doc/source/getting_started/intro_tutorials/03_subset_data.rst b/doc/source/getting_started/intro_tutorials/03_subset_data.rst
index 083e4f9d8373e..fe3eae6c42959 100644
--- a/doc/source/getting_started/intro_tutorials/03_subset_data.rst
+++ b/doc/source/getting_started/intro_tutorials/03_subset_data.rst
@@ -17,7 +17,7 @@
<ul class="list-group list-group-flush">
<li class="list-group-item">
-.. include:: titanic.rst
+.. include:: includes/titanic.rst
.. ipython:: python
diff --git a/doc/source/getting_started/intro_tutorials/04_plotting.rst b/doc/source/getting_started/intro_tutorials/04_plotting.rst
index ef0e5592f6f93..615b944fd395f 100644
--- a/doc/source/getting_started/intro_tutorials/04_plotting.rst
+++ b/doc/source/getting_started/intro_tutorials/04_plotting.rst
@@ -18,7 +18,7 @@
<ul class="list-group list-group-flush">
<li class="list-group-item">
-. include:: air_quality_no2.rst
+.. include:: includes/air_quality_no2.rst
.. ipython:: python
diff --git a/doc/source/getting_started/intro_tutorials/05_add_columns.rst b/doc/source/getting_started/intro_tutorials/05_add_columns.rst
index fc7dfc7dcc29d..dc18be935b973 100644
--- a/doc/source/getting_started/intro_tutorials/05_add_columns.rst
+++ b/doc/source/getting_started/intro_tutorials/05_add_columns.rst
@@ -17,7 +17,7 @@
<ul class="list-group list-group-flush">
<li class="list-group-item">
-. include:: air_quality_no2.rst
+.. include:: includes/air_quality_no2.rst
.. ipython:: python
diff --git a/doc/source/getting_started/intro_tutorials/06_calculate_statistics.rst b/doc/source/getting_started/intro_tutorials/06_calculate_statistics.rst
index 2420544c28bef..fcf754e340ab2 100644
--- a/doc/source/getting_started/intro_tutorials/06_calculate_statistics.rst
+++ b/doc/source/getting_started/intro_tutorials/06_calculate_statistics.rst
@@ -17,7 +17,7 @@
<ul class="list-group list-group-flush">
<li class="list-group-item">
-.. include:: titanic.rst
+.. include:: includes/titanic.rst
.. ipython:: python
diff --git a/doc/source/getting_started/intro_tutorials/07_reshape_table_layout.rst b/doc/source/getting_started/intro_tutorials/07_reshape_table_layout.rst
index 0f550cbeb2154..bd4a617fe753b 100644
--- a/doc/source/getting_started/intro_tutorials/07_reshape_table_layout.rst
+++ b/doc/source/getting_started/intro_tutorials/07_reshape_table_layout.rst
@@ -17,7 +17,7 @@
<ul class="list-group list-group-flush">
<li class="list-group-item">
-.. include:: titanic.rst
+.. include:: includes/titanic.rst
.. ipython:: python
diff --git a/doc/source/getting_started/intro_tutorials/10_text_data.rst b/doc/source/getting_started/intro_tutorials/10_text_data.rst
index 2df8b1cb29770..63db920164ac3 100644
--- a/doc/source/getting_started/intro_tutorials/10_text_data.rst
+++ b/doc/source/getting_started/intro_tutorials/10_text_data.rst
@@ -16,7 +16,7 @@
</div>
<ul class="list-group list-group-flush">
<li class="list-group-item">
-.. include:: titanic.rst
+.. include:: includes/titanic.rst
.. ipython:: python
diff --git a/doc/source/getting_started/intro_tutorials/air_quality_no2.rst b/doc/source/getting_started/intro_tutorials/includes/air_quality_no2.rst
similarity index 98%
rename from doc/source/getting_started/intro_tutorials/air_quality_no2.rst
rename to doc/source/getting_started/intro_tutorials/includes/air_quality_no2.rst
index 7515e004a4177..a5a5442330e43 100644
--- a/doc/source/getting_started/intro_tutorials/air_quality_no2.rst
+++ b/doc/source/getting_started/intro_tutorials/includes/air_quality_no2.rst
@@ -1,5 +1,3 @@
-:orphan:
-
.. raw:: html
<div data-toggle="collapse" href="#collapsedata" role="button" aria-expanded="false" aria-controls="collapsedata">
diff --git a/doc/source/getting_started/intro_tutorials/titanic.rst b/doc/source/getting_started/intro_tutorials/includes/titanic.rst
similarity index 99%
rename from doc/source/getting_started/intro_tutorials/titanic.rst
rename to doc/source/getting_started/intro_tutorials/includes/titanic.rst
index e73f18a6f2669..7032b70b3f1cf 100644
--- a/doc/source/getting_started/intro_tutorials/titanic.rst
+++ b/doc/source/getting_started/intro_tutorials/includes/titanic.rst
@@ -1,5 +1,3 @@
-:orphan:
-
.. raw:: html
<div data-toggle="collapse" href="#collapsedata" role="button" aria-expanded="false" aria-controls="collapsedata">
| - Added missing `.` to the `.. include`s
- Put the includes in a folder to exclude from Sphinx, to avoid `:orphan:` being written to the HTML
Before:
<img width="1156" alt="Screen_Shot_2020-12-30_at_7_08_52_PM" src="https://user-images.githubusercontent.com/86842/103387595-99de4380-4ad2-11eb-85bf-8ac8da85676a.png">
After:
<img width="1186" alt="Screen Shot 2020-12-30 at 7 07 41 PM" src="https://user-images.githubusercontent.com/86842/103387540-5f74a680-4ad2-11eb-992c-44de074462ff.png">
---
- [ ] ~~closes #xxxx~~
- [ ] tests added / passed
- [ ] ~~passes `black pandas`~~
- [ ] ~~passes `git diff upstream/master -u -- "*.py" | flake8 --diff`~~
- [ ] ~~whatsnew entry~~
| https://api.github.com/repos/pandas-dev/pandas/pulls/38839 | 2020-12-31T00:10:27Z | 2020-12-31T16:39:16Z | 2020-12-31T16:39:16Z | 2021-01-01T21:14:25Z |
Backport PR #38816 on branch 1.2.x (REGR: groupby.sem with nuisance columns) | diff --git a/doc/source/whatsnew/v1.2.1.rst b/doc/source/whatsnew/v1.2.1.rst
index 5fdc06f1fc6a3..72f76b4749c54 100644
--- a/doc/source/whatsnew/v1.2.1.rst
+++ b/doc/source/whatsnew/v1.2.1.rst
@@ -19,6 +19,7 @@ Fixed regressions
- Bug in repr of float-like strings of an ``object`` dtype having trailing 0's truncated after the decimal (:issue:`38708`)
- Fixed regression in :meth:`DataFrame.groupby()` with :class:`Categorical` grouping column not showing unused categories for ``grouped.indices`` (:issue:`38642`)
- Fixed regression in :meth:`DataFrame.any` and :meth:`DataFrame.all` not returning a result for tz-aware ``datetime64`` columns (:issue:`38723`)
+- Fixed regression in :meth:`.GroupBy.sem` where the presence of non-numeric columns would cause an error instead of being dropped (:issue:`38774`)
- :func:`read_excel` does not work for non-rawbyte file handles (issue:`38788`)
- Bug in :meth:`read_csv` with ``float_precision="high"`` caused segfault or wrong parsing of long exponent strings (:issue:`38753`)
-
diff --git a/pandas/core/groupby/groupby.py b/pandas/core/groupby/groupby.py
index 23f0e178130be..1272ea7547209 100644
--- a/pandas/core/groupby/groupby.py
+++ b/pandas/core/groupby/groupby.py
@@ -1606,12 +1606,11 @@ def sem(self, ddof: int = 1):
if result.ndim == 1:
result /= np.sqrt(self.count())
else:
- cols = result.columns.get_indexer_for(
- result.columns.difference(self.exclusions).unique()
- )
- result.iloc[:, cols] = result.iloc[:, cols] / np.sqrt(
- self.count().iloc[:, cols]
- )
+ cols = result.columns.difference(self.exclusions).unique()
+ counts = self.count()
+ result_ilocs = result.columns.get_indexer_for(cols)
+ count_ilocs = counts.columns.get_indexer_for(cols)
+ result.iloc[:, result_ilocs] /= np.sqrt(counts.iloc[:, count_ilocs])
return result
@final
diff --git a/pandas/tests/groupby/test_groupby.py b/pandas/tests/groupby/test_groupby.py
index 7c179a79513fa..59d49ad8bdae4 100644
--- a/pandas/tests/groupby/test_groupby.py
+++ b/pandas/tests/groupby/test_groupby.py
@@ -842,6 +842,14 @@ def test_omit_nuisance(df):
grouped.agg(lambda x: x.sum(0, numeric_only=False))
+def test_omit_nuisance_sem(df):
+ # GH 38774 - sem should work with nuisance columns
+ grouped = df.groupby("A")
+ result = grouped.sem()
+ expected = df.loc[:, ["A", "C", "D"]].groupby("A").sem()
+ tm.assert_frame_equal(result, expected)
+
+
def test_omit_nuisance_python_multiple(three_group):
grouped = three_group.groupby(["A", "B"])
| Backport PR #38816: REGR: groupby.sem with nuisance columns | https://api.github.com/repos/pandas-dev/pandas/pulls/38838 | 2020-12-30T23:39:23Z | 2020-12-31T03:02:40Z | 2020-12-31T03:02:40Z | 2020-12-31T03:02:41Z |
DEPR: try_cast kwarg in mask, where | diff --git a/doc/source/whatsnew/v1.3.0.rst b/doc/source/whatsnew/v1.3.0.rst
index 5197fd2b23dab..95629e9d1ea29 100644
--- a/doc/source/whatsnew/v1.3.0.rst
+++ b/doc/source/whatsnew/v1.3.0.rst
@@ -155,7 +155,7 @@ Deprecations
- Deprecating allowing scalars passed to the :class:`Categorical` constructor (:issue:`38433`)
- Deprecated allowing subclass-specific keyword arguments in the :class:`Index` constructor, use the specific subclass directly instead (:issue:`14093`,:issue:`21311`,:issue:`22315`,:issue:`26974`)
- Deprecated ``astype`` of datetimelike (``timedelta64[ns]``, ``datetime64[ns]``, ``Datetime64TZDtype``, ``PeriodDtype``) to integer dtypes, use ``values.view(...)`` instead (:issue:`38544`)
--
+- Deprecated keyword ``try_cast`` in :meth:`Series.where`, :meth:`Series.mask`, :meth:`DataFrame.where`, :meth:`DataFrame.mask`; cast results manually if desired (:issue:`38836`)
-
.. ---------------------------------------------------------------------------
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index a7dfdb3cfbd97..c3db8ef58deb6 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -8781,7 +8781,6 @@ def _where(
axis=None,
level=None,
errors="raise",
- try_cast=False,
):
"""
Equivalent to public method `where`, except that `other` is not
@@ -8932,7 +8931,6 @@ def _where(
cond=cond,
align=align,
errors=errors,
- try_cast=try_cast,
axis=block_axis,
)
result = self._constructor(new_data)
@@ -8954,7 +8952,7 @@ def where(
axis=None,
level=None,
errors="raise",
- try_cast=False,
+ try_cast=lib.no_default,
):
"""
Replace values where the condition is {cond_rev}.
@@ -8986,9 +8984,12 @@ def where(
- 'raise' : allow exceptions to be raised.
- 'ignore' : suppress exceptions. On error return original object.
- try_cast : bool, default False
+ try_cast : bool, default None
Try to cast the result back to the input type (if possible).
+ .. deprecated:: 1.3.0
+ Manually cast back if necessary.
+
Returns
-------
Same type as caller or None if ``inplace=True``.
@@ -9077,9 +9078,16 @@ def where(
4 True True
"""
other = com.apply_if_callable(other, self)
- return self._where(
- cond, other, inplace, axis, level, errors=errors, try_cast=try_cast
- )
+
+ if try_cast is not lib.no_default:
+ warnings.warn(
+ "try_cast keyword is deprecated and will be removed in a "
+ "future version",
+ FutureWarning,
+ stacklevel=2,
+ )
+
+ return self._where(cond, other, inplace, axis, level, errors=errors)
@final
@doc(
@@ -9098,12 +9106,20 @@ def mask(
axis=None,
level=None,
errors="raise",
- try_cast=False,
+ try_cast=lib.no_default,
):
inplace = validate_bool_kwarg(inplace, "inplace")
cond = com.apply_if_callable(cond, self)
+ if try_cast is not lib.no_default:
+ warnings.warn(
+ "try_cast keyword is deprecated and will be removed in a "
+ "future version",
+ FutureWarning,
+ stacklevel=2,
+ )
+
# see gh-21891
if not hasattr(cond, "__invert__"):
cond = np.array(cond)
@@ -9114,7 +9130,6 @@ def mask(
inplace=inplace,
axis=axis,
level=level,
- try_cast=try_cast,
errors=errors,
)
diff --git a/pandas/core/internals/blocks.py b/pandas/core/internals/blocks.py
index 38976ee632419..5363ea2f031b1 100644
--- a/pandas/core/internals/blocks.py
+++ b/pandas/core/internals/blocks.py
@@ -1290,9 +1290,7 @@ def _maybe_reshape_where_args(self, values, other, cond, axis):
return other, cond
- def where(
- self, other, cond, errors="raise", try_cast: bool = False, axis: int = 0
- ) -> List["Block"]:
+ def where(self, other, cond, errors="raise", axis: int = 0) -> List["Block"]:
"""
evaluate the block; return result block(s) from the result
@@ -1303,7 +1301,6 @@ def where(
errors : str, {'raise', 'ignore'}, default 'raise'
- ``raise`` : allow exceptions to be raised
- ``ignore`` : suppress exceptions. On error return original object
- try_cast: bool, default False
axis : int, default 0
Returns
@@ -1342,9 +1339,7 @@ def where(
# we cannot coerce, return a compat dtype
# we are explicitly ignoring errors
block = self.coerce_to_target_dtype(other)
- blocks = block.where(
- orig_other, cond, errors=errors, try_cast=try_cast, axis=axis
- )
+ blocks = block.where(orig_other, cond, errors=errors, axis=axis)
return self._maybe_downcast(blocks, "infer")
if not (
@@ -1825,9 +1820,7 @@ def shift(
)
]
- def where(
- self, other, cond, errors="raise", try_cast: bool = False, axis: int = 0
- ) -> List["Block"]:
+ def where(self, other, cond, errors="raise", axis: int = 0) -> List["Block"]:
cond = _extract_bool_array(cond)
assert not isinstance(other, (ABCIndex, ABCSeries, ABCDataFrame))
@@ -2075,9 +2068,7 @@ def to_native_types(self, na_rep="NaT", **kwargs):
result = arr._format_native_types(na_rep=na_rep, **kwargs)
return self.make_block(result)
- def where(
- self, other, cond, errors="raise", try_cast: bool = False, axis: int = 0
- ) -> List["Block"]:
+ def where(self, other, cond, errors="raise", axis: int = 0) -> List["Block"]:
# TODO(EA2D): reshape unnecessary with 2D EAs
arr = self.array_values().reshape(self.shape)
@@ -2086,9 +2077,7 @@ def where(
try:
res_values = arr.T.where(cond, other).T
except (ValueError, TypeError):
- return super().where(
- other, cond, errors=errors, try_cast=try_cast, axis=axis
- )
+ return super().where(other, cond, errors=errors, axis=axis)
# TODO(EA2D): reshape not needed with 2D EAs
res_values = res_values.reshape(self.values.shape)
diff --git a/pandas/core/internals/managers.py b/pandas/core/internals/managers.py
index 7dde952636a79..d44a3df45587a 100644
--- a/pandas/core/internals/managers.py
+++ b/pandas/core/internals/managers.py
@@ -542,9 +542,7 @@ def get_axe(block, qs, axes):
def isna(self, func) -> "BlockManager":
return self.apply("apply", func=func)
- def where(
- self, other, cond, align: bool, errors: str, try_cast: bool, axis: int
- ) -> "BlockManager":
+ def where(self, other, cond, align: bool, errors: str, axis: int) -> "BlockManager":
if align:
align_keys = ["other", "cond"]
else:
@@ -557,7 +555,6 @@ def where(
other=other,
cond=cond,
errors=errors,
- try_cast=try_cast,
axis=axis,
)
diff --git a/pandas/tests/frame/indexing/test_mask.py b/pandas/tests/frame/indexing/test_mask.py
index 23f3a18881782..8050769f56f6c 100644
--- a/pandas/tests/frame/indexing/test_mask.py
+++ b/pandas/tests/frame/indexing/test_mask.py
@@ -83,3 +83,16 @@ def test_mask_dtype_conversion(self):
expected = bools.astype(float).mask(mask)
result = bools.mask(mask)
tm.assert_frame_equal(result, expected)
+
+
+def test_mask_try_cast_deprecated(frame_or_series):
+
+ obj = DataFrame(np.random.randn(4, 3))
+ if frame_or_series is not DataFrame:
+ obj = obj[0]
+
+ mask = obj > 0
+
+ with tm.assert_produces_warning(FutureWarning):
+ # try_cast keyword deprecated
+ obj.mask(mask, -1, try_cast=True)
diff --git a/pandas/tests/frame/indexing/test_where.py b/pandas/tests/frame/indexing/test_where.py
index 87d2fd37ab023..2f098426efaf9 100644
--- a/pandas/tests/frame/indexing/test_where.py
+++ b/pandas/tests/frame/indexing/test_where.py
@@ -672,3 +672,15 @@ def test_where_ea_other(self):
expected["B"] = expected["B"].astype(object)
result = df.where(mask, ser2, axis=1)
tm.assert_frame_equal(result, expected)
+
+
+def test_where_try_cast_deprecated(frame_or_series):
+ obj = DataFrame(np.random.randn(4, 3))
+ if frame_or_series is not DataFrame:
+ obj = obj[0]
+
+ mask = obj > 0
+
+ with tm.assert_produces_warning(FutureWarning):
+ # try_cast keyword deprecated
+ obj.where(mask, -1, try_cast=False)
| - [ ] closes #xxxx
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
It is silently ignored ATM. | https://api.github.com/repos/pandas-dev/pandas/pulls/38836 | 2020-12-30T23:23:04Z | 2020-12-31T15:51:41Z | 2020-12-31T15:51:41Z | 2020-12-31T15:56:06Z |
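The deprecation pattern used in the diff above — a sentinel default so the warning fires only when the caller passes the keyword explicitly — can be sketched in plain Python; `_no_default` here is a stand-in for pandas' internal `lib.no_default`, not the real object:

```python
import warnings

_no_default = object()  # stand-in for pandas' internal lib.no_default sentinel


def where(cond, other, try_cast=_no_default):
    # Warn only when the caller passed try_cast explicitly; note that
    # passing the old default of False now warns too, which is exactly
    # what test_where_try_cast_deprecated in the diff exercises.
    if try_cast is not _no_default:
        warnings.warn(
            "try_cast keyword is deprecated and will be removed in a future version",
            FutureWarning,
            stacklevel=2,
        )
    return other


with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    where(True, 0)                  # keyword omitted -> silent
    where(True, 0, try_cast=False)  # explicit keyword -> FutureWarning
```

Using an `object()` sentinel instead of `None` or `False` as the default is what makes "explicitly passed the old default" distinguishable from "did not pass it at all".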
BUG: astype_nansafe with copy=False | diff --git a/doc/source/whatsnew/v1.3.0.rst b/doc/source/whatsnew/v1.3.0.rst
index b6d5493aefaa9..d6269fabf0f00 100644
--- a/doc/source/whatsnew/v1.3.0.rst
+++ b/doc/source/whatsnew/v1.3.0.rst
@@ -296,7 +296,7 @@ Sparse
^^^^^^
- Bug in :meth:`DataFrame.sparse.to_coo` raising ``KeyError`` with columns that are a numeric :class:`Index` without a 0 (:issue:`18414`)
--
+- Bug in :meth:`SparseArray.astype` with ``copy=False`` producing incorrect results when going from integer dtype to floating dtype (:issue:`34456`)
-
ExtensionArray
diff --git a/pandas/core/dtypes/cast.py b/pandas/core/dtypes/cast.py
index 08e193acdf5ea..0915043f8fd46 100644
--- a/pandas/core/dtypes/cast.py
+++ b/pandas/core/dtypes/cast.py
@@ -1133,11 +1133,11 @@ def astype_nansafe(
)
raise ValueError(msg)
- if copy or is_object_dtype(arr) or is_object_dtype(dtype):
+ if copy or is_object_dtype(arr.dtype) or is_object_dtype(dtype):
# Explicit copy, or required since NumPy can't view from / to object.
return arr.astype(dtype, copy=True)
- return arr.view(dtype)
+ return arr.astype(dtype, copy=copy)
def soft_convert_objects(
diff --git a/pandas/tests/arrays/sparse/test_array.py b/pandas/tests/arrays/sparse/test_array.py
index 46edde62b510e..7f1b0f49ebd1e 100644
--- a/pandas/tests/arrays/sparse/test_array.py
+++ b/pandas/tests/arrays/sparse/test_array.py
@@ -563,6 +563,14 @@ def test_astype_nan_raises(self):
with pytest.raises(ValueError, match="Cannot convert non-finite"):
arr.astype(int)
+ def test_astype_copy_false(self):
+ # GH#34456 bug caused by using .view instead of .astype in astype_nansafe
+ arr = SparseArray([1, 2, 3])
+
+ result = arr.astype(float, copy=False)
+ expected = SparseArray([1.0, 2.0, 3.0], fill_value=0.0)
+ tm.assert_sp_array_equal(result, expected)
+
def test_set_fill_value(self):
arr = SparseArray([1.0, np.nan, 2.0], fill_value=np.nan)
arr.fill_value = 2
diff --git a/pandas/tests/dtypes/test_common.py b/pandas/tests/dtypes/test_common.py
index 54bac7deded6c..8df61394e8e7e 100644
--- a/pandas/tests/dtypes/test_common.py
+++ b/pandas/tests/dtypes/test_common.py
@@ -724,6 +724,17 @@ def test_astype_nansafe(val, typ):
astype_nansafe(arr, dtype=typ)
+def test_astype_nansafe_copy_false(any_int_dtype):
+ # GH#34457 use astype, not view
+ arr = np.array([1, 2, 3], dtype=any_int_dtype)
+
+ dtype = np.dtype("float64")
+ result = astype_nansafe(arr, dtype, copy=False)
+
+ expected = np.array([1.0, 2.0, 3.0], dtype=dtype)
+ tm.assert_numpy_array_equal(result, expected)
+
+
@pytest.mark.parametrize("from_type", [np.datetime64, np.timedelta64])
@pytest.mark.parametrize(
"to_type",
| - [x] closes #34456
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
Opens up possibility of addressing #23092 | https://api.github.com/repos/pandas-dev/pandas/pulls/38835 | 2020-12-30T23:13:03Z | 2020-12-31T01:49:35Z | 2020-12-31T01:49:34Z | 2020-12-31T01:57:55Z |
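The bug being fixed — `arr.view(dtype)` reinterprets the raw bytes rather than converting values — is easy to reproduce with plain NumPy (values here are illustrative):

```python
import numpy as np

arr = np.array([1, 2, 3], dtype=np.int64)

# view() reinterprets the int64 bit patterns as float64 -- the int64 value 1
# becomes a tiny subnormal float, not 1.0
viewed = arr.view(np.float64)

# astype(..., copy=False) performs a real value conversion, copying only
# when the dtypes actually require it
cast = arr.astype(np.float64, copy=False)

assert cast.tolist() == [1.0, 2.0, 3.0]
assert not np.array_equal(viewed, cast)
```

This is why the patch replaces `arr.view(dtype)` with `arr.astype(dtype, copy=copy)` in `astype_nansafe`.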
BUG: IntervalIndex.intersection returning duplicates | diff --git a/doc/source/whatsnew/v1.3.0.rst b/doc/source/whatsnew/v1.3.0.rst
index 83bff6d7bfb2d..19abd082604b6 100644
--- a/doc/source/whatsnew/v1.3.0.rst
+++ b/doc/source/whatsnew/v1.3.0.rst
@@ -226,7 +226,7 @@ Strings
Interval
^^^^^^^^
- Bug in :meth:`IntervalIndex.intersection` and :meth:`IntervalIndex.symmetric_difference` always returning object-dtype when operating with :class:`CategoricalIndex` (:issue:`38653`, :issue:`38741`)
--
+- Bug in :meth:`IntervalIndex.intersection` returning duplicates when at least one of both Indexes has duplicates which are present in the other (:issue:`38743`)
-
Indexing
diff --git a/pandas/core/indexes/interval.py b/pandas/core/indexes/interval.py
index 824d78d1a8d05..054b21d2857ff 100644
--- a/pandas/core/indexes/interval.py
+++ b/pandas/core/indexes/interval.py
@@ -40,7 +40,7 @@
)
from pandas.core.dtypes.dtypes import IntervalDtype
-from pandas.core.algorithms import take_1d
+from pandas.core.algorithms import take_1d, unique
from pandas.core.arrays.interval import IntervalArray, _interval_shared_docs
import pandas.core.common as com
from pandas.core.indexers import is_valid_positional_slice
@@ -964,6 +964,7 @@ def _intersection_unique(self, other: "IntervalIndex") -> "IntervalIndex":
match = (lindexer == rindexer) & (lindexer != -1)
indexer = lindexer.take(match.nonzero()[0])
+ indexer = unique(indexer)
return self.take(indexer)
diff --git a/pandas/tests/indexes/interval/test_setops.py b/pandas/tests/indexes/interval/test_setops.py
index 4b7901407d94a..b5d69758ab01f 100644
--- a/pandas/tests/indexes/interval/test_setops.py
+++ b/pandas/tests/indexes/interval/test_setops.py
@@ -78,13 +78,6 @@ def test_intersection(self, closed, sort):
result = index.intersection(other)
tm.assert_index_equal(result, expected)
- # GH 26225: duplicate element
- index = IntervalIndex.from_tuples([(1, 2), (1, 2), (2, 3), (3, 4)])
- other = IntervalIndex.from_tuples([(1, 2), (2, 3)])
- expected = IntervalIndex.from_tuples([(1, 2), (1, 2), (2, 3)])
- result = index.intersection(other)
- tm.assert_index_equal(result, expected)
-
# GH 26225
index = IntervalIndex.from_tuples([(0, 3), (0, 2)])
other = IntervalIndex.from_tuples([(0, 2), (1, 3)])
@@ -118,6 +111,14 @@ def test_intersection_empty_result(self, closed, sort):
result = index.intersection(other, sort=sort)
tm.assert_index_equal(result, expected)
+ def test_intersection_duplicates(self):
+ # GH#38743
+ index = IntervalIndex.from_tuples([(1, 2), (1, 2), (2, 3), (3, 4)])
+ other = IntervalIndex.from_tuples([(1, 2), (2, 3)])
+ expected = IntervalIndex.from_tuples([(1, 2), (2, 3)])
+ result = index.intersection(other)
+ tm.assert_index_equal(result, expected)
+
def test_difference(self, closed, sort):
index = IntervalIndex.from_arrays([1, 0, 3, 2], [1, 2, 3, 4], closed=closed)
result = index.difference(index[:1], sort=sort)
diff --git a/pandas/tests/indexes/test_setops.py b/pandas/tests/indexes/test_setops.py
index 1035ac1f0e60b..912743e45975a 100644
--- a/pandas/tests/indexes/test_setops.py
+++ b/pandas/tests/indexes/test_setops.py
@@ -466,3 +466,19 @@ def test_setop_with_categorical(index, sort, method):
result = getattr(index, method)(other[:5], sort=sort)
expected = getattr(index, method)(index[:5], sort=sort)
tm.assert_index_equal(result, expected)
+
+
+def test_intersection_duplicates_all_indexes(index):
+ # GH#38743
+ if index.empty:
+ # No duplicates in empty indexes
+ return
+
+ def check_intersection_commutative(left, right):
+ assert left.intersection(right).equals(right.intersection(left))
+
+ idx = index
+ idx_non_unique = idx[[0, 0, 1, 2]]
+
+ check_intersection_commutative(idx, idx_non_unique)
+ assert idx.intersection(idx_non_unique).is_unique
| - [x] closes #38743
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
Moved the incorrect test code into a new test with the adjusted behavior
| https://api.github.com/repos/pandas-dev/pandas/pulls/38834 | 2020-12-30T22:46:08Z | 2020-12-31T01:50:22Z | 2020-12-31T01:50:22Z | 2021-01-01T21:33:37Z |
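The one-line fix — deduplicating the positional indexer before `take` — can be illustrated with plain NumPy, mirroring the `_intersection_unique` logic in the diff (the indexer values below are made up for the example):

```python
import numpy as np

# lindexer/rindexer mimic positional lookups of the other index's values;
# a value present twice yields a duplicated position in the indexer
lindexer = np.array([0, 1, 1, 2, -1])
rindexer = np.array([0, 1, 1, 2, 5])

match = (lindexer == rindexer) & (lindexer != -1)
indexer = lindexer.take(match.nonzero()[0])  # [0, 1, 1, 2] -- duplicated
indexer = np.unique(indexer)                 # the added deduplication step
assert indexer.tolist() == [0, 1, 2]
```

Without the `unique` call, `self.take(indexer)` would return the duplicated interval twice, which is the behavior the removed test block had been asserting as expected.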
REF: move putmask internals in array_algos.putmask | diff --git a/pandas/core/array_algos/putmask.py b/pandas/core/array_algos/putmask.py
index 32c84b6eb234f..2a1b6f784a1f2 100644
--- a/pandas/core/array_algos/putmask.py
+++ b/pandas/core/array_algos/putmask.py
@@ -120,3 +120,37 @@ def _putmask_preserve(new_values: np.ndarray, new, mask: np.ndarray):
except (IndexError, ValueError):
new_values[mask] = new
return new_values
+
+
+def putmask_without_repeat(values: np.ndarray, mask: np.ndarray, new: Any) -> None:
+ """
+ np.putmask will truncate or repeat if `new` is a listlike with
+ len(new) != len(values). We require an exact match.
+
+ Parameters
+ ----------
+ values : np.ndarray
+ mask : np.ndarray[bool]
+ new : Any
+ """
+ if getattr(new, "ndim", 0) >= 1:
+ new = new.astype(values.dtype, copy=False)
+
+ # TODO: this prob needs some better checking for 2D cases
+ nlocs = mask.sum()
+ if nlocs > 0 and is_list_like(new) and getattr(new, "ndim", 1) == 1:
+ if nlocs == len(new):
+ # GH#30567
+ # If length of ``new`` is less than the length of ``values``,
+ # `np.putmask` would first repeat the ``new`` array and then
+ # assign the masked values hence produces incorrect result.
+ # `np.place` on the other hand uses the ``new`` values at it is
+ # to place in the masked locations of ``values``
+ np.place(values, mask, new)
+ # i.e. values[mask] = new
+ elif mask.shape[-1] == len(new) or len(new) == 1:
+ np.putmask(values, mask, new)
+ else:
+ raise ValueError("cannot assign mismatch length to masked array")
+ else:
+ np.putmask(values, mask, new)
diff --git a/pandas/core/internals/blocks.py b/pandas/core/internals/blocks.py
index 76b30dc17711e..38976ee632419 100644
--- a/pandas/core/internals/blocks.py
+++ b/pandas/core/internals/blocks.py
@@ -54,7 +54,11 @@
from pandas.core.dtypes.missing import is_valid_nat_for_dtype, isna
import pandas.core.algorithms as algos
-from pandas.core.array_algos.putmask import putmask_inplace, putmask_smart
+from pandas.core.array_algos.putmask import (
+ putmask_inplace,
+ putmask_smart,
+ putmask_without_repeat,
+)
from pandas.core.array_algos.replace import compare_or_regex_search, replace_regex
from pandas.core.array_algos.transforms import shift
from pandas.core.arrays import (
@@ -1030,38 +1034,7 @@ def putmask(self, mask, new, axis: int = 0) -> List["Block"]:
if transpose:
new_values = new_values.T
- # If the default repeat behavior in np.putmask would go in the
- # wrong direction, then explicitly repeat and reshape new instead
- if getattr(new, "ndim", 0) >= 1:
- new = new.astype(new_values.dtype, copy=False)
-
- # we require exact matches between the len of the
- # values we are setting (or is compat). np.putmask
- # doesn't check this and will simply truncate / pad
- # the output, but we want sane error messages
- #
- # TODO: this prob needs some better checking
- # for 2D cases
- if (
- is_list_like(new)
- and np.any(mask[mask])
- and getattr(new, "ndim", 1) == 1
- ):
- if mask[mask].shape[-1] == len(new):
- # GH 30567
- # If length of ``new`` is less than the length of ``new_values``,
- # `np.putmask` would first repeat the ``new`` array and then
- # assign the masked values hence produces incorrect result.
- # `np.place` on the other hand uses the ``new`` values at it is
- # to place in the masked locations of ``new_values``
- np.place(new_values, mask, new)
- # i.e. new_values[mask] = new
- elif mask.shape[-1] == len(new) or len(new) == 1:
- np.putmask(new_values, mask, new)
- else:
- raise ValueError("cannot assign mismatch length to masked array")
- else:
- np.putmask(new_values, mask, new)
+ putmask_without_repeat(new_values, mask, new)
# maybe upcast me
elif mask.any():
| - [ ] closes #xxxx
- [ ] tests added / passed
- [ ] passes `black pandas`
- [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/38833 | 2020-12-30T22:25:53Z | 2020-12-31T01:49:08Z | 2020-12-31T01:49:08Z | 2020-12-31T01:59:44Z |
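The GH#30567 distinction that `putmask_without_repeat` encodes — `np.putmask` tiles a too-short replacement array to the length of `values`, while `np.place` consumes it positionally at the masked slots — shows up directly in NumPy:

```python
import numpy as np

values = np.array([10, 20, 30, 40])
mask = np.array([True, False, True, True])  # three masked slots
new = np.array([1, 2, 3])                   # exactly three replacements

placed = values.copy()
np.place(placed, mask, new)   # consumes new in order at the masked slots
assert placed.tolist() == [1, 20, 2, 3]

put = values.copy()
np.putmask(put, mask, new)    # tiles new elementwise: put[i] = new[i % 3]
assert put.tolist() == [1, 20, 3, 1]
```

Hence the branch structure in the refactored helper: `np.place` when `nlocs == len(new)`, `np.putmask` only when the shapes line up (or `len(new) == 1`), and a `ValueError` otherwise.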
Backport PR #38819 on branch 1.2.x (REGR: read_excel does not work for most file handles) | diff --git a/doc/source/whatsnew/v1.2.1.rst b/doc/source/whatsnew/v1.2.1.rst
index 31c5b770b1f35..5fdc06f1fc6a3 100644
--- a/doc/source/whatsnew/v1.2.1.rst
+++ b/doc/source/whatsnew/v1.2.1.rst
@@ -19,6 +19,7 @@ Fixed regressions
- Bug in repr of float-like strings of an ``object`` dtype having trailing 0's truncated after the decimal (:issue:`38708`)
- Fixed regression in :meth:`DataFrame.groupby()` with :class:`Categorical` grouping column not showing unused categories for ``grouped.indices`` (:issue:`38642`)
- Fixed regression in :meth:`DataFrame.any` and :meth:`DataFrame.all` not returning a result for tz-aware ``datetime64`` columns (:issue:`38723`)
+- :func:`read_excel` does not work for non-rawbyte file handles (issue:`38788`)
- Bug in :meth:`read_csv` with ``float_precision="high"`` caused segfault or wrong parsing of long exponent strings (:issue:`38753`)
-
diff --git a/pandas/io/excel/_base.py b/pandas/io/excel/_base.py
index 221e8b9ccfb14..5be8dbf152309 100644
--- a/pandas/io/excel/_base.py
+++ b/pandas/io/excel/_base.py
@@ -1051,16 +1051,11 @@ def __init__(
xlrd_version = LooseVersion(xlrd.__version__)
- if isinstance(path_or_buffer, (BufferedIOBase, RawIOBase, bytes)):
- ext = inspect_excel_format(
- content=path_or_buffer, storage_options=storage_options
- )
- elif xlrd_version is not None and isinstance(path_or_buffer, xlrd.Book):
+ if xlrd_version is not None and isinstance(path_or_buffer, xlrd.Book):
ext = "xls"
else:
- # path_or_buffer is path-like, use stringified path
ext = inspect_excel_format(
- path=str(self._io), storage_options=storage_options
+ content=path_or_buffer, storage_options=storage_options
)
if engine is None:
diff --git a/pandas/tests/io/excel/test_readers.py b/pandas/tests/io/excel/test_readers.py
index df1250cee8b00..8b1a96f694e71 100644
--- a/pandas/tests/io/excel/test_readers.py
+++ b/pandas/tests/io/excel/test_readers.py
@@ -657,6 +657,22 @@ def test_read_from_s3_url(self, read_ext, s3_resource, s3so):
local_table = pd.read_excel("test1" + read_ext)
tm.assert_frame_equal(url_table, local_table)
+ def test_read_from_s3_object(self, read_ext, s3_resource, s3so):
+ # GH 38788
+ # Bucket "pandas-test" created in tests/io/conftest.py
+ with open("test1" + read_ext, "rb") as f:
+ s3_resource.Bucket("pandas-test").put_object(Key="test1" + read_ext, Body=f)
+
+ import s3fs
+
+ s3 = s3fs.S3FileSystem(**s3so)
+
+ with s3.open("s3://pandas-test/test1" + read_ext) as f:
+ url_table = pd.read_excel(f)
+
+ local_table = pd.read_excel("test1" + read_ext)
+ tm.assert_frame_equal(url_table, local_table)
+
@pytest.mark.slow
def test_read_from_file_url(self, read_ext, datapath):
| Backport PR #38819: REGR: read_excel does not work for most file handles | https://api.github.com/repos/pandas-dev/pandas/pulls/38832 | 2020-12-30T21:12:57Z | 2020-12-30T23:18:37Z | 2020-12-30T23:18:37Z | 2020-12-30T23:18:37Z |
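Content-based sniffing — what `inspect_excel_format` does in the non-`xlrd.Book` branch above — works on arbitrary file handles because modern Excel files are zip containers with a fixed magic number. A stdlib-only sketch (the in-memory container here is a minimal stand-in, not a real workbook):

```python
import io
import zipfile

# Build an in-memory zip standing in for a minimal .xlsx container
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("xl/workbook.xml", "<workbook/>")

data = buf.getvalue()
# Sniffing the leading bytes needs only a readable, rewindable handle,
# not a filesystem path -- which is why the path-based branch was removable
assert data[:4] == b"PK\x03\x04"
assert zipfile.is_zipfile(io.BytesIO(data))
```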
Parametrized groupby allowlist test | diff --git a/pandas/tests/groupby/test_allowlist.py b/pandas/tests/groupby/test_allowlist.py
index 34729c771eac9..57ccf6ebd24bd 100644
--- a/pandas/tests/groupby/test_allowlist.py
+++ b/pandas/tests/groupby/test_allowlist.py
@@ -341,18 +341,9 @@ def test_groupby_function_rename(mframe):
assert f.__name__ == name
-@pytest.mark.filterwarnings("ignore:tshift is deprecated:FutureWarning")
-def test_groupby_selection_with_methods(df):
- # some methods which require DatetimeIndex
- rng = date_range("2014", periods=len(df))
- df.index = rng
-
- g = df.groupby(["A"])[["C"]]
- g_exp = df[["C"]].groupby(df["A"])
- # TODO check groupby with > 1 col ?
-
- # methods which are called as .foo()
- methods = [
+@pytest.mark.parametrize(
+ "method",
+ [
"count",
"corr",
"cummax",
@@ -370,20 +361,45 @@ def test_groupby_selection_with_methods(df):
"ffill",
"bfill",
"pct_change",
- ]
+ ],
+)
+def test_groupby_selection_with_methods(df, method):
+ # some methods which require DatetimeIndex
+ rng = date_range("2014", periods=len(df))
+ df.index = rng
- for m in methods:
- res = getattr(g, m)()
- exp = getattr(g_exp, m)()
+ g = df.groupby(["A"])[["C"]]
+ g_exp = df[["C"]].groupby(df["A"])
+ # TODO check groupby with > 1 col ?
- # should always be frames!
- tm.assert_frame_equal(res, exp)
+ res = getattr(g, method)()
+ exp = getattr(g_exp, method)()
+
+ # should always be frames!
+ tm.assert_frame_equal(res, exp)
+
+
+@pytest.mark.filterwarnings("ignore:tshift is deprecated:FutureWarning")
+def test_groupby_selection_tshift_raises(df):
+ rng = date_range("2014", periods=len(df))
+ df.index = rng
+
+ g = df.groupby(["A"])[["C"]]
# check that the index cache is cleared
with pytest.raises(ValueError, match="Freq was not set in the index"):
# GH#35937
g.tshift()
+
+def test_groupby_selection_other_methods(df):
+ # some methods which require DatetimeIndex
+ rng = date_range("2014", periods=len(df))
+ df.index = rng
+
+ g = df.groupby(["A"])[["C"]]
+ g_exp = df[["C"]].groupby(df["A"])
+
# methods which aren't just .foo()
tm.assert_frame_equal(g.fillna(0), g_exp.fillna(0))
tm.assert_frame_equal(g.dtypes, g_exp.dtypes)
Still exploring a few things to remove setting the ``.data`` pointer in Cython - this test is liable to fail but is very tough to debug with the current loop
@jbrockmendel | https://api.github.com/repos/pandas-dev/pandas/pulls/38829 | 2020-12-30T19:52:09Z | 2020-12-31T01:48:53Z | 2020-12-31T01:48:53Z | 2023-04-12T20:17:38Z |
Backport PR #38789 on branch 1.2.x (BUG: Fix precise_xstrtod segfault on long exponent) | diff --git a/doc/source/whatsnew/v1.2.1.rst b/doc/source/whatsnew/v1.2.1.rst
index 0bc01c683e0ad..31c5b770b1f35 100644
--- a/doc/source/whatsnew/v1.2.1.rst
+++ b/doc/source/whatsnew/v1.2.1.rst
@@ -19,6 +19,7 @@ Fixed regressions
- Bug in repr of float-like strings of an ``object`` dtype having trailing 0's truncated after the decimal (:issue:`38708`)
- Fixed regression in :meth:`DataFrame.groupby()` with :class:`Categorical` grouping column not showing unused categories for ``grouped.indices`` (:issue:`38642`)
- Fixed regression in :meth:`DataFrame.any` and :meth:`DataFrame.all` not returning a result for tz-aware ``datetime64`` columns (:issue:`38723`)
+- Bug in :meth:`read_csv` with ``float_precision="high"`` caused segfault or wrong parsing of long exponent strings (:issue:`38753`)
-
.. ---------------------------------------------------------------------------
diff --git a/pandas/_libs/src/parser/tokenizer.c b/pandas/_libs/src/parser/tokenizer.c
index 88144330c1fe9..4ddbd6cf3ae60 100644
--- a/pandas/_libs/src/parser/tokenizer.c
+++ b/pandas/_libs/src/parser/tokenizer.c
@@ -1733,7 +1733,7 @@ double precise_xstrtod(const char *str, char **endptr, char decimal,
// Process string of digits.
num_digits = 0;
n = 0;
- while (isdigit_ascii(*p)) {
+ while (num_digits < max_digits && isdigit_ascii(*p)) {
n = n * 10 + (*p - '0');
num_digits++;
p++;
@@ -1754,10 +1754,13 @@ double precise_xstrtod(const char *str, char **endptr, char decimal,
} else if (exponent > 0) {
number *= e[exponent];
} else if (exponent < -308) { // Subnormal
- if (exponent < -616) // Prevent invalid array access.
+ if (exponent < -616) { // Prevent invalid array access.
number = 0.;
- number /= e[-308 - exponent];
- number /= e[308];
+ } else {
+ number /= e[-308 - exponent];
+ number /= e[308];
+ }
+
} else {
number /= e[-exponent];
}
diff --git a/pandas/tests/io/parser/conftest.py b/pandas/tests/io/parser/conftest.py
index e8893b4c02238..ec098353960d7 100644
--- a/pandas/tests/io/parser/conftest.py
+++ b/pandas/tests/io/parser/conftest.py
@@ -97,6 +97,33 @@ def python_parser_only(request):
return request.param
+def _get_all_parser_float_precision_combinations():
+ """
+ Return all allowable parser and float precision
+ combinations and corresponding ids.
+ """
+ params = []
+ ids = []
+ for parser, parser_id in zip(_all_parsers, _all_parser_ids):
+ for precision in parser.float_precision_choices:
+ params.append((parser, precision))
+ ids.append(f"{parser_id}-{precision}")
+
+ return {"params": params, "ids": ids}
+
+
+@pytest.fixture(
+ params=_get_all_parser_float_precision_combinations()["params"],
+ ids=_get_all_parser_float_precision_combinations()["ids"],
+)
+def all_parsers_all_precisions(request):
+ """
+ Fixture for all allowable combinations of parser
+ and float precision
+ """
+ return request.param
+
+
_utf_values = [8, 16, 32]
_encoding_seps = ["", "-", "_"]
diff --git a/pandas/tests/io/parser/test_common.py b/pandas/tests/io/parser/test_common.py
index c8ed0d75b13a2..d42bd7a004584 100644
--- a/pandas/tests/io/parser/test_common.py
+++ b/pandas/tests/io/parser/test_common.py
@@ -15,6 +15,7 @@
import pytest
from pandas._libs.tslib import Timestamp
+from pandas.compat import is_platform_linux
from pandas.errors import DtypeWarning, EmptyDataError, ParserError
import pandas.util._test_decorators as td
@@ -1258,15 +1259,14 @@ def test_float_parser(all_parsers):
tm.assert_frame_equal(result, expected)
-def test_scientific_no_exponent(all_parsers):
+def test_scientific_no_exponent(all_parsers_all_precisions):
# see gh-12215
df = DataFrame.from_dict({"w": ["2e"], "x": ["3E"], "y": ["42e"], "z": ["632E"]})
data = df.to_csv(index=False)
- parser = all_parsers
+ parser, precision = all_parsers_all_precisions
- for precision in parser.float_precision_choices:
- df_roundtrip = parser.read_csv(StringIO(data), float_precision=precision)
- tm.assert_frame_equal(df_roundtrip, df)
+ df_roundtrip = parser.read_csv(StringIO(data), float_precision=precision)
+ tm.assert_frame_equal(df_roundtrip, df)
@pytest.mark.parametrize("conv", [None, np.int64, np.uint64])
@@ -1350,6 +1350,35 @@ def test_numeric_range_too_wide(all_parsers, exp_data):
tm.assert_frame_equal(result, expected)
+@pytest.mark.parametrize("neg_exp", [-617, -100000, -99999999999999999])
+def test_very_negative_exponent(all_parsers_all_precisions, neg_exp):
+ # GH#38753
+ parser, precision = all_parsers_all_precisions
+ data = f"data\n10E{neg_exp}"
+ result = parser.read_csv(StringIO(data), float_precision=precision)
+ expected = DataFrame({"data": [0.0]})
+ tm.assert_frame_equal(result, expected)
+
+
+@pytest.mark.parametrize("exp", [999999999999999999, -999999999999999999])
+def test_too_many_exponent_digits(all_parsers_all_precisions, exp, request):
+ # GH#38753
+ parser, precision = all_parsers_all_precisions
+ data = f"data\n10E{exp}"
+ result = parser.read_csv(StringIO(data), float_precision=precision)
+ if precision == "round_trip":
+ if exp == 999999999999999999 and is_platform_linux():
+ mark = pytest.mark.xfail(reason="GH38794, on Linux gives object result")
+ request.node.add_marker(mark)
+
+ value = np.inf if exp > 0 else 0.0
+ expected = DataFrame({"data": [value]})
+ else:
+ expected = DataFrame({"data": [f"10E{exp}"]})
+
+ tm.assert_frame_equal(result, expected)
+
+
@pytest.mark.parametrize("iterator", [True, False])
def test_empty_with_nrows_chunksize(all_parsers, iterator):
# see gh-9535
| Backport PR #38789: BUG: Fix precise_xstrtod segfault on long exponent | https://api.github.com/repos/pandas-dev/pandas/pulls/38828 | 2020-12-30T18:41:31Z | 2020-12-30T20:28:25Z | 2020-12-30T20:28:25Z | 2020-12-30T20:28:25Z |
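The exponent ranges exercised by the new tests line up with IEEE-754 double limits, which Python's own string-to-float parser also respects: an exponent below roughly -616 for a two-digit mantissa underflows past the smallest subnormal to `0.0`, and a huge positive exponent saturates to `inf` rather than overflowing a counter — the failure modes the `precise_xstrtod` patch guards against.

```python
# Double precision: smallest subnormal ~4.9e-324, largest finite ~1.8e308
assert float("10E-617") == 0.0                         # underflow -> 0.0
assert float("10E-99999999999999999") == 0.0           # very long negative exponent
assert float("10E999999999999999999") == float("inf")  # overflow saturates to inf
```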
CLN: add typing to dtype arg in selection of files in core/arrays (GH38808) | diff --git a/pandas/_typing.py b/pandas/_typing.py
index 64452bf337361..0b50dd69f7abb 100644
--- a/pandas/_typing.py
+++ b/pandas/_typing.py
@@ -94,8 +94,9 @@
Axes = Collection
# dtypes
+NpDtype = Union[str, np.dtype]
Dtype = Union[
- "ExtensionDtype", str, np.dtype, Type[Union[str, float, int, complex, bool, object]]
+ "ExtensionDtype", NpDtype, Type[Union[str, float, int, complex, bool, object]]
]
# DtypeArg specifies all allowable dtypes in a functions its dtype argument
DtypeArg = Union[Dtype, Dict[Label, Dtype]]
diff --git a/pandas/core/arrays/base.py b/pandas/core/arrays/base.py
index 918d96cd03112..9a8b37e0785e0 100644
--- a/pandas/core/arrays/base.py
+++ b/pandas/core/arrays/base.py
@@ -25,7 +25,7 @@
import numpy as np
from pandas._libs import lib
-from pandas._typing import ArrayLike, Shape
+from pandas._typing import ArrayLike, Dtype, Shape
from pandas.compat import set_function_name
from pandas.compat.numpy import function as nv
from pandas.errors import AbstractMethodError
@@ -189,7 +189,7 @@ class ExtensionArray:
# ------------------------------------------------------------------------
@classmethod
- def _from_sequence(cls, scalars, *, dtype=None, copy=False):
+ def _from_sequence(cls, scalars, *, dtype: Optional[Dtype] = None, copy=False):
"""
Construct a new ExtensionArray from a sequence of scalars.
@@ -211,7 +211,9 @@ def _from_sequence(cls, scalars, *, dtype=None, copy=False):
raise AbstractMethodError(cls)
@classmethod
- def _from_sequence_of_strings(cls, strings, *, dtype=None, copy=False):
+ def _from_sequence_of_strings(
+ cls, strings, *, dtype: Optional[Dtype] = None, copy=False
+ ):
"""
Construct a new ExtensionArray from a sequence of strings.
@@ -391,7 +393,10 @@ def __ne__(self, other: Any) -> ArrayLike:
return ~(self == other)
def to_numpy(
- self, dtype=None, copy: bool = False, na_value=lib.no_default
+ self,
+ dtype: Optional[Dtype] = None,
+ copy: bool = False,
+ na_value=lib.no_default,
) -> np.ndarray:
"""
Convert to a NumPy ndarray.
@@ -1065,7 +1070,7 @@ def copy(self: ExtensionArrayT) -> ExtensionArrayT:
"""
raise AbstractMethodError(self)
- def view(self, dtype=None) -> ArrayLike:
+ def view(self, dtype: Optional[Dtype] = None) -> ArrayLike:
"""
Return a view on the array.
diff --git a/pandas/core/arrays/boolean.py b/pandas/core/arrays/boolean.py
index ea2ca1f70d414..bbbc0911b4846 100644
--- a/pandas/core/arrays/boolean.py
+++ b/pandas/core/arrays/boolean.py
@@ -1,11 +1,11 @@
import numbers
-from typing import TYPE_CHECKING, List, Tuple, Type, Union
+from typing import TYPE_CHECKING, List, Optional, Tuple, Type, Union
import warnings
import numpy as np
from pandas._libs import lib, missing as libmissing
-from pandas._typing import ArrayLike
+from pandas._typing import ArrayLike, Dtype
from pandas.compat.numpy import function as nv
from pandas.core.dtypes.common import (
@@ -273,7 +273,7 @@ def dtype(self) -> BooleanDtype:
@classmethod
def _from_sequence(
- cls, scalars, *, dtype=None, copy: bool = False
+ cls, scalars, *, dtype: Optional[Dtype] = None, copy: bool = False
) -> "BooleanArray":
if dtype:
assert dtype == "boolean"
@@ -282,7 +282,7 @@ def _from_sequence(
@classmethod
def _from_sequence_of_strings(
- cls, strings: List[str], *, dtype=None, copy: bool = False
+ cls, strings: List[str], *, dtype: Optional[Dtype] = None, copy: bool = False
) -> "BooleanArray":
def map_string(s):
if isna(s):
diff --git a/pandas/core/arrays/categorical.py b/pandas/core/arrays/categorical.py
index 982349ea345ca..8b350fef27fb1 100644
--- a/pandas/core/arrays/categorical.py
+++ b/pandas/core/arrays/categorical.py
@@ -7,6 +7,7 @@
Dict,
Hashable,
List,
+ Optional,
Sequence,
Type,
TypeVar,
@@ -21,7 +22,7 @@
from pandas._libs import NaT, algos as libalgos, hashtable as htable
from pandas._libs.lib import no_default
-from pandas._typing import ArrayLike, Dtype, Ordered, Scalar
+from pandas._typing import ArrayLike, Dtype, NpDtype, Ordered, Scalar
from pandas.compat.numpy import function as nv
from pandas.util._decorators import cache_readonly, deprecate_kwarg
from pandas.util._validators import validate_bool_kwarg, validate_fillna_kwargs
@@ -318,7 +319,7 @@ def __init__(
values,
categories=None,
ordered=None,
- dtype=None,
+ dtype: Optional[Dtype] = None,
fastpath=False,
copy: bool = True,
):
@@ -423,7 +424,7 @@ def _constructor(self) -> Type["Categorical"]:
return Categorical
@classmethod
- def _from_sequence(cls, scalars, *, dtype=None, copy=False):
+ def _from_sequence(cls, scalars, *, dtype: Optional[Dtype] = None, copy=False):
return Categorical(scalars, dtype=dtype, copy=copy)
def astype(self, dtype: Dtype, copy: bool = True) -> ArrayLike:
@@ -558,7 +559,9 @@ def _from_inferred_categories(
return cls(codes, dtype=dtype, fastpath=True)
@classmethod
- def from_codes(cls, codes, categories=None, ordered=None, dtype=None):
+ def from_codes(
+ cls, codes, categories=None, ordered=None, dtype: Optional[Dtype] = None
+ ):
"""
Make a Categorical type from codes and categories or dtype.
@@ -1294,7 +1297,7 @@ def _validate_fill_value(self, fill_value):
# -------------------------------------------------------------
- def __array__(self, dtype=None) -> np.ndarray:
+ def __array__(self, dtype: Optional[NpDtype] = None) -> np.ndarray:
"""
The numpy array interface.
diff --git a/pandas/core/arrays/datetimelike.py b/pandas/core/arrays/datetimelike.py
index c5946fa4ddc46..b31bc0934fe60 100644
--- a/pandas/core/arrays/datetimelike.py
+++ b/pandas/core/arrays/datetimelike.py
@@ -36,7 +36,7 @@
integer_op_not_supported,
round_nsint64,
)
-from pandas._typing import DatetimeLikeScalar, DtypeObj
+from pandas._typing import DatetimeLikeScalar, Dtype, DtypeObj, NpDtype
from pandas.compat.numpy import function as nv
from pandas.errors import AbstractMethodError, NullFrequencyError, PerformanceWarning
from pandas.util._decorators import Appender, Substitution, cache_readonly
@@ -107,7 +107,7 @@ class DatetimeLikeArrayMixin(OpsMixin, NDArrayBackedExtensionArray):
_recognized_scalars: Tuple[Type, ...]
_data: np.ndarray
- def __init__(self, data, dtype=None, freq=None, copy=False):
+ def __init__(self, data, dtype: Optional[Dtype] = None, freq=None, copy=False):
raise AbstractMethodError(self)
@classmethod
@@ -115,7 +115,7 @@ def _simple_new(
cls: Type[DatetimeLikeArrayT],
values: np.ndarray,
freq: Optional[BaseOffset] = None,
- dtype=None,
+ dtype: Optional[Dtype] = None,
) -> DatetimeLikeArrayT:
raise AbstractMethodError(cls)
@@ -265,7 +265,7 @@ def _formatter(self, boxed=False):
# ----------------------------------------------------------------
# Array-Like / EA-Interface Methods
- def __array__(self, dtype=None) -> np.ndarray:
+ def __array__(self, dtype: Optional[NpDtype] = None) -> np.ndarray:
# used for Timedelta/DatetimeArray, overwritten by PeriodArray
if is_object_dtype(dtype):
return np.array(list(self), dtype=object)
@@ -383,7 +383,7 @@ def astype(self, dtype, copy=True):
else:
return np.asarray(self, dtype=dtype)
- def view(self, dtype=None):
+ def view(self, dtype: Optional[Dtype] = None):
if dtype is None or dtype is self.dtype:
return type(self)(self._ndarray, dtype=self.dtype)
return self._ndarray.view(dtype=dtype)
diff --git a/pandas/core/arrays/integer.py b/pandas/core/arrays/integer.py
index d01f84b224a89..f8378fb7d1500 100644
--- a/pandas/core/arrays/integer.py
+++ b/pandas/core/arrays/integer.py
@@ -5,7 +5,7 @@
import numpy as np
from pandas._libs import iNaT, lib, missing as libmissing
-from pandas._typing import ArrayLike, DtypeObj
+from pandas._typing import ArrayLike, Dtype, DtypeObj
from pandas.compat.numpy import function as nv
from pandas.util._decorators import cache_readonly
@@ -304,14 +304,14 @@ def __abs__(self):
@classmethod
def _from_sequence(
- cls, scalars, *, dtype=None, copy: bool = False
+ cls, scalars, *, dtype: Optional[Dtype] = None, copy: bool = False
) -> "IntegerArray":
values, mask = coerce_to_array(scalars, dtype=dtype, copy=copy)
return IntegerArray(values, mask)
@classmethod
def _from_sequence_of_strings(
- cls, strings, *, dtype=None, copy: bool = False
+ cls, strings, *, dtype: Optional[Dtype] = None, copy: bool = False
) -> "IntegerArray":
scalars = to_numeric(strings, errors="raise")
return cls._from_sequence(scalars, dtype=dtype, copy=copy)
diff --git a/pandas/core/arrays/period.py b/pandas/core/arrays/period.py
index ea97ec387f192..e0e40a666896d 100644
--- a/pandas/core/arrays/period.py
+++ b/pandas/core/arrays/period.py
@@ -26,7 +26,7 @@
get_period_field_arr,
period_asfreq_arr,
)
-from pandas._typing import AnyArrayLike
+from pandas._typing import AnyArrayLike, Dtype
from pandas.util._decorators import cache_readonly, doc
from pandas.core.dtypes.common import (
@@ -198,10 +198,10 @@ def _from_sequence(
cls: Type["PeriodArray"],
scalars: Union[Sequence[Optional[Period]], AnyArrayLike],
*,
- dtype: Optional[PeriodDtype] = None,
+ dtype: Optional[Dtype] = None,
copy: bool = False,
) -> "PeriodArray":
- if dtype:
+ if dtype and isinstance(dtype, PeriodDtype):
freq = dtype.freq
else:
freq = None
diff --git a/pandas/core/dtypes/dtypes.py b/pandas/core/dtypes/dtypes.py
index 4cfba314c719c..123196f43ef2a 100644
--- a/pandas/core/dtypes/dtypes.py
+++ b/pandas/core/dtypes/dtypes.py
@@ -22,7 +22,7 @@
from pandas._libs.interval import Interval
from pandas._libs.tslibs import NaT, Period, Timestamp, dtypes, timezones, to_offset
from pandas._libs.tslibs.offsets import BaseOffset
-from pandas._typing import DtypeObj, Ordered
+from pandas._typing import Dtype, DtypeObj, Ordered
from pandas.core.dtypes.base import ExtensionDtype, register_extension_dtype
from pandas.core.dtypes.generic import ABCCategoricalIndex, ABCIndex
@@ -185,7 +185,7 @@ def _from_values_or_dtype(
values=None,
categories=None,
ordered: Optional[bool] = None,
- dtype: Optional["CategoricalDtype"] = None,
+ dtype: Optional[Dtype] = None,
) -> "CategoricalDtype":
"""
Construct dtype from the input parameters used in :class:`Categorical`.
@@ -272,7 +272,7 @@ def _from_values_or_dtype(
# ordered=None.
dtype = CategoricalDtype(categories, ordered)
- return dtype
+ return cast(CategoricalDtype, dtype)
@classmethod
def construct_from_string(cls, string: str_type) -> "CategoricalDtype":
| Follow-on PR for #38808
- [ ] closes #xxxx
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/38826 | 2020-12-30T18:27:55Z | 2020-12-31T03:08:30Z | 2020-12-31T03:08:30Z | 2020-12-31T03:08:33Z |
TST: GH30999 Change all pytest.raises in pandas/tests/indexing to tm.external_error_raised | diff --git a/pandas/tests/indexing/multiindex/test_partial.py b/pandas/tests/indexing/multiindex/test_partial.py
index 9c356b81b85db..c203d986efd23 100644
--- a/pandas/tests/indexing/multiindex/test_partial.py
+++ b/pandas/tests/indexing/multiindex/test_partial.py
@@ -178,9 +178,9 @@ def test_partial_loc_missing(self, multiindex_year_month_day_dataframe_random_da
# assert (self.ymd.loc[2000]['A'] == 0).all()
# Pretty sure the second (and maybe even the first) is already wrong.
- with pytest.raises(Exception):
+ with pytest.raises(KeyError, match="6"):
ymd.loc[(2000, 6)]
- with pytest.raises(Exception):
+ with pytest.raises(KeyError, match="(2000, 6)"):
ymd.loc[(2000, 6), 0]
# ---------------------------------------------------------------------
diff --git a/pandas/tests/indexing/test_coercion.py b/pandas/tests/indexing/test_coercion.py
index 15f58006426f4..41f967ce32796 100644
--- a/pandas/tests/indexing/test_coercion.py
+++ b/pandas/tests/indexing/test_coercion.py
@@ -331,7 +331,8 @@ def test_setitem_index_float64(self, val, exp_dtype, request):
if exp_dtype is IndexError:
# float + int -> int
temp = obj.copy()
- with pytest.raises(exp_dtype):
+ msg = "index 5 is out of bounds for axis 0 with size 4"
+ with pytest.raises(exp_dtype, match=msg):
temp[5] = 5
mark = pytest.mark.xfail(reason="TODO_GH12747 The result must be float")
request.node.add_marker(mark)
| Addresses xref #30999 for pandas/tests/indexing by changing three bare `pytest.raises` instances to `tm.external_error_raised`
This is the last of the simpler PRs for #30999 - after that it gets more complex.
- [ ] closes #xxxx
- [ ] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
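For context, the difference between a bare `pytest.raises` and the stricter forms can be sketched with a minimal stdlib stand-in (the `raises` helper below is hypothetical; the real tests use `pytest.raises(..., match=...)` and pandas' internal `tm.external_error_raised`):

```python
import re
from contextlib import contextmanager

@contextmanager
def raises(exc_type, match=None):
    # Hypothetical stand-in for pytest.raises(..., match=...), to show
    # what matching on the message buys over the bare form.
    try:
        yield
    except exc_type as err:
        if match is not None and not re.search(match, str(err)):
            raise AssertionError(f"pattern {match!r} not found in {str(err)!r}")
    else:
        raise AssertionError(f"{exc_type.__name__} was not raised")

# Bare form: any KeyError at all passes, even one raised for the wrong reason.
with raises(KeyError):
    {}["missing"]

# Matched form: the error message itself must fit the expected regex,
# so the test fails if the code starts raising a different KeyError.
with raises(KeyError, match="missing"):
    {}["missing"]
```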
| https://api.github.com/repos/pandas-dev/pandas/pulls/38825 | 2020-12-30T18:18:50Z | 2020-12-31T16:40:14Z | 2020-12-31T16:40:14Z | 2020-12-31T16:40:17Z |
REGR: read_excel does not work for most file handles | diff --git a/doc/source/whatsnew/v1.2.1.rst b/doc/source/whatsnew/v1.2.1.rst
index 3ecea674fd34c..df7a8b8775501 100644
--- a/doc/source/whatsnew/v1.2.1.rst
+++ b/doc/source/whatsnew/v1.2.1.rst
@@ -20,6 +20,7 @@ Fixed regressions
- Bug in repr of float-like strings of an ``object`` dtype having trailing 0's truncated after the decimal (:issue:`38708`)
- Fixed regression in :meth:`DataFrame.groupby()` with :class:`Categorical` grouping column not showing unused categories for ``grouped.indices`` (:issue:`38642`)
- Fixed regression in :meth:`DataFrame.any` and :meth:`DataFrame.all` not returning a result for tz-aware ``datetime64`` columns (:issue:`38723`)
+- :func:`read_excel` does not work for non-rawbyte file handles (issue:`38788`)
- Bug in :meth:`read_csv` with ``float_precision="high"`` caused segfault or wrong parsing of long exponent strings (:issue:`38753`)
-
diff --git a/pandas/io/excel/_base.py b/pandas/io/excel/_base.py
index 221e8b9ccfb14..5be8dbf152309 100644
--- a/pandas/io/excel/_base.py
+++ b/pandas/io/excel/_base.py
@@ -1051,16 +1051,11 @@ def __init__(
xlrd_version = LooseVersion(xlrd.__version__)
- if isinstance(path_or_buffer, (BufferedIOBase, RawIOBase, bytes)):
- ext = inspect_excel_format(
- content=path_or_buffer, storage_options=storage_options
- )
- elif xlrd_version is not None and isinstance(path_or_buffer, xlrd.Book):
+ if xlrd_version is not None and isinstance(path_or_buffer, xlrd.Book):
ext = "xls"
else:
- # path_or_buffer is path-like, use stringified path
ext = inspect_excel_format(
- path=str(self._io), storage_options=storage_options
+ content=path_or_buffer, storage_options=storage_options
)
if engine is None:
diff --git a/pandas/tests/io/excel/test_readers.py b/pandas/tests/io/excel/test_readers.py
index df1250cee8b00..8b1a96f694e71 100644
--- a/pandas/tests/io/excel/test_readers.py
+++ b/pandas/tests/io/excel/test_readers.py
@@ -657,6 +657,22 @@ def test_read_from_s3_url(self, read_ext, s3_resource, s3so):
local_table = pd.read_excel("test1" + read_ext)
tm.assert_frame_equal(url_table, local_table)
+ def test_read_from_s3_object(self, read_ext, s3_resource, s3so):
+ # GH 38788
+ # Bucket "pandas-test" created in tests/io/conftest.py
+ with open("test1" + read_ext, "rb") as f:
+ s3_resource.Bucket("pandas-test").put_object(Key="test1" + read_ext, Body=f)
+
+ import s3fs
+
+ s3 = s3fs.S3FileSystem(**s3so)
+
+ with s3.open("s3://pandas-test/test1" + read_ext) as f:
+ url_table = pd.read_excel(f)
+
+ local_table = pd.read_excel("test1" + read_ext)
+ tm.assert_frame_equal(url_table, local_table)
+
@pytest.mark.slow
def test_read_from_file_url(self, read_ext, datapath):
| - [x] closes #38788
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/38819 | 2020-12-30T16:11:15Z | 2020-12-30T21:12:30Z | 2020-12-30T21:12:30Z | 2020-12-31T16:24:45Z |
TST/CLN: Remove duplicate abc tests | diff --git a/pandas/tests/dtypes/test_generic.py b/pandas/tests/dtypes/test_generic.py
index 1d0c871eaa0a8..3e95a1f2f50ac 100644
--- a/pandas/tests/dtypes/test_generic.py
+++ b/pandas/tests/dtypes/test_generic.py
@@ -22,28 +22,6 @@ class TestABCClasses:
datetime_array = pd.core.arrays.DatetimeArray(datetime_index)
timedelta_array = pd.core.arrays.TimedeltaArray(timedelta_index)
- def test_abc_types(self):
- assert isinstance(pd.Int64Index([1, 2, 3]), gt.ABCInt64Index)
- assert isinstance(pd.UInt64Index([1, 2, 3]), gt.ABCUInt64Index)
- assert isinstance(pd.Float64Index([1, 2, 3]), gt.ABCFloat64Index)
- assert isinstance(self.multi_index, gt.ABCMultiIndex)
- assert isinstance(self.datetime_index, gt.ABCDatetimeIndex)
- assert isinstance(self.timedelta_index, gt.ABCTimedeltaIndex)
- assert isinstance(self.period_index, gt.ABCPeriodIndex)
- assert isinstance(self.categorical_df.index, gt.ABCCategoricalIndex)
- assert isinstance(pd.Index(["a", "b", "c"]), gt.ABCIndex)
- assert isinstance(pd.Int64Index([1, 2, 3]), gt.ABCIndex)
- assert isinstance(pd.Series([1, 2, 3]), gt.ABCSeries)
- assert isinstance(self.df, gt.ABCDataFrame)
- assert isinstance(self.sparse_array, gt.ABCExtensionArray)
- assert isinstance(self.categorical, gt.ABCCategorical)
-
- assert isinstance(self.datetime_array, gt.ABCDatetimeArray)
- assert not isinstance(self.datetime_index, gt.ABCDatetimeArray)
-
- assert isinstance(self.timedelta_array, gt.ABCTimedeltaArray)
- assert not isinstance(self.timedelta_index, gt.ABCTimedeltaArray)
-
abc_pairs = [
("ABCInt64Index", pd.Int64Index([1, 2, 3])),
("ABCUInt64Index", pd.UInt64Index([1, 2, 3])),
| - [ ] closes #xxxx
- [ ] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
From #38588, this test is a proper subset of the tests immediately below it. | https://api.github.com/repos/pandas-dev/pandas/pulls/38818 | 2020-12-30T15:55:30Z | 2020-12-30T18:43:16Z | 2020-12-30T18:43:16Z | 2020-12-30T18:43:34Z |
MAINT: regex char class improve | diff --git a/pandas/tests/test_strings.py b/pandas/tests/test_strings.py
index 538a52d84b73a..ac9b160ab0968 100644
--- a/pandas/tests/test_strings.py
+++ b/pandas/tests/test_strings.py
@@ -1066,7 +1066,7 @@ def test_replace_compiled_regex(self):
values = Series(["fooBAD__barBAD", np.nan])
# test with compiled regex
- pat = re.compile(r"BAD[_]*")
+ pat = re.compile(r"BAD_*")
result = values.str.replace(pat, "", regex=True)
exp = Series(["foobar", np.nan])
tm.assert_series_equal(result, exp)
@@ -1095,7 +1095,7 @@ def test_replace_compiled_regex(self):
# case and flags provided to str.replace will have no effect
# and will produce warnings
values = Series(["fooBAD__barBAD__bad", np.nan])
- pat = re.compile(r"BAD[_]*")
+ pat = re.compile(r"BAD_*")
with pytest.raises(ValueError, match="case and flags cannot be"):
result = values.str.replace(pat, "", flags=re.IGNORECASE)
| * remove superfluous regex character classes
from the codebase (those that contain a single
character incur the overhead of a class with
none of the advantages of a class)
* for more details, see similar change in NumPy:
https://github.com/numpy/numpy/pull/18083
* check performed with some simple [scraping code](https://github.com/tylerjereddy/regex-improve) | https://api.github.com/repos/pandas-dev/pandas/pulls/38817 | 2020-12-30T15:44:33Z | 2020-12-30T18:42:03Z | 2020-12-30T18:42:03Z | 2020-12-30T18:42:07Z |
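The equivalence relied on here is easy to verify: a character class containing a single ordinary character matches exactly the same strings as the bare character, so the rewrite cannot change test behaviour. A quick sanity check:

```python
import re

s = "fooBAD__barBAD"

# `[_]` is a one-character class: it matches exactly what a bare `_`
# matches, so the two patterns are interchangeable (the class just costs
# a little extra work in the regex engine).
pat_class = re.compile(r"BAD[_]*")
pat_plain = re.compile(r"BAD_*")

assert pat_class.sub("", s) == pat_plain.sub("", s) == "foobar"
```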
REGR: groupby.sem with nuisance columns | diff --git a/doc/source/whatsnew/v1.2.1.rst b/doc/source/whatsnew/v1.2.1.rst
index df7a8b8775501..acbfaa5f4d3bf 100644
--- a/doc/source/whatsnew/v1.2.1.rst
+++ b/doc/source/whatsnew/v1.2.1.rst
@@ -20,6 +20,7 @@ Fixed regressions
- Bug in repr of float-like strings of an ``object`` dtype having trailing 0's truncated after the decimal (:issue:`38708`)
- Fixed regression in :meth:`DataFrame.groupby()` with :class:`Categorical` grouping column not showing unused categories for ``grouped.indices`` (:issue:`38642`)
- Fixed regression in :meth:`DataFrame.any` and :meth:`DataFrame.all` not returning a result for tz-aware ``datetime64`` columns (:issue:`38723`)
+- Fixed regression in :meth:`.GroupBy.sem` where the presence of non-numeric columns would cause an error instead of being dropped (:issue:`38774`)
- :func:`read_excel` does not work for non-rawbyte file handles (issue:`38788`)
- Bug in :meth:`read_csv` with ``float_precision="high"`` caused segfault or wrong parsing of long exponent strings (:issue:`38753`)
-
diff --git a/pandas/core/groupby/groupby.py b/pandas/core/groupby/groupby.py
index ff6ff98fb7840..aef4c036abc65 100644
--- a/pandas/core/groupby/groupby.py
+++ b/pandas/core/groupby/groupby.py
@@ -1620,12 +1620,11 @@ def sem(self, ddof: int = 1):
if result.ndim == 1:
result /= np.sqrt(self.count())
else:
- cols = result.columns.get_indexer_for(
- result.columns.difference(self.exclusions).unique()
- )
- result.iloc[:, cols] = result.iloc[:, cols] / np.sqrt(
- self.count().iloc[:, cols]
- )
+ cols = result.columns.difference(self.exclusions).unique()
+ counts = self.count()
+ result_ilocs = result.columns.get_indexer_for(cols)
+ count_ilocs = counts.columns.get_indexer_for(cols)
+ result.iloc[:, result_ilocs] /= np.sqrt(counts.iloc[:, count_ilocs])
return result
@final
diff --git a/pandas/tests/groupby/test_groupby.py b/pandas/tests/groupby/test_groupby.py
index f8a9412d3036d..e5021b7b4dd5f 100644
--- a/pandas/tests/groupby/test_groupby.py
+++ b/pandas/tests/groupby/test_groupby.py
@@ -842,6 +842,14 @@ def test_omit_nuisance(df):
grouped.agg(lambda x: x.sum(0, numeric_only=False))
+def test_omit_nuisance_sem(df):
+ # GH 38774 - sem should work with nuisance columns
+ grouped = df.groupby("A")
+ result = grouped.sem()
+ expected = df.loc[:, ["A", "C", "D"]].groupby("A").sem()
+ tm.assert_frame_equal(result, expected)
+
+
def test_omit_nuisance_python_multiple(three_group):
grouped = three_group.groupby(["A", "B"])
| - [x] closes #38774
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
We were using the same integer locs in the result of `std` and `count` when computing the quotient for `sem`. However, `std` will drop nuisance columns and `count` will not.
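The label-aligned version of that computation can be sketched as follows (a simplified sketch, not the actual groupby internals; `numeric_only=True` is passed to `std` here to reproduce the nuisance-dropping behaviour on recent pandas versions):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "A": ["x", "x", "y", "y"],      # grouping column
    "B": ["a", "b", "c", "d"],      # nuisance (non-numeric) column
    "C": [1.0, 2.0, 3.0, 4.0],
})
grouped = df.groupby("A")

std = grouped.std(numeric_only=True)  # "B" dropped -> columns ["C"]
counts = grouped.count()              # "B" kept   -> columns ["B", "C"]

# Positional alignment would divide std's "C" by counts' "B"; instead,
# resolve integer locations per frame from the shared column labels.
cols = std.columns.intersection(counts.columns)
std_ilocs = std.columns.get_indexer_for(cols)
count_ilocs = counts.columns.get_indexer_for(cols)

sem = std.copy()
sem.iloc[:, std_ilocs] /= np.sqrt(counts.iloc[:, count_ilocs])
```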
I could only find two tests involving nuisance columns, and these only test sum/mean. Created #38815 for a followup. | https://api.github.com/repos/pandas-dev/pandas/pulls/38816 | 2020-12-30T15:40:27Z | 2020-12-30T23:38:49Z | 2020-12-30T23:38:49Z | 2020-12-31T13:33:07Z |
CLN: Add typing for dtype argument in io directory (GH38808) | diff --git a/pandas/io/excel/_base.py b/pandas/io/excel/_base.py
index 221e8b9ccfb14..d54426a437843 100644
--- a/pandas/io/excel/_base.py
+++ b/pandas/io/excel/_base.py
@@ -12,7 +12,7 @@
from pandas._config import config
from pandas._libs.parsers import STR_NA_VALUES
-from pandas._typing import Buffer, FilePathOrBuffer, StorageOptions
+from pandas._typing import Buffer, DtypeArg, FilePathOrBuffer, StorageOptions
from pandas.compat._optional import import_optional_dependency
from pandas.errors import EmptyDataError
from pandas.util._decorators import Appender, deprecate_nonkeyword_arguments, doc
@@ -309,7 +309,7 @@ def read_excel(
index_col=None,
usecols=None,
squeeze=False,
- dtype=None,
+ dtype: Optional[DtypeArg] = None,
engine=None,
converters=None,
true_values=None,
@@ -433,7 +433,7 @@ def parse(
index_col=None,
usecols=None,
squeeze=False,
- dtype=None,
+ dtype: Optional[DtypeArg] = None,
true_values=None,
false_values=None,
skiprows=None,
diff --git a/pandas/io/json/_json.py b/pandas/io/json/_json.py
index e1ac7b1b02f21..dd1c012252683 100644
--- a/pandas/io/json/_json.py
+++ b/pandas/io/json/_json.py
@@ -11,6 +11,8 @@
from pandas._libs.tslibs import iNaT
from pandas._typing import (
CompressionOptions,
+ DtypeArg,
+ FrameOrSeriesUnion,
IndexLabel,
JSONSerializable,
StorageOptions,
@@ -296,7 +298,7 @@ def read_json(
path_or_buf=None,
orient=None,
typ="frame",
- dtype=None,
+ dtype: Optional[DtypeArg] = None,
convert_axes=None,
convert_dates=True,
keep_default_dates: bool = True,
@@ -775,7 +777,7 @@ def __init__(
self,
json,
orient,
- dtype=None,
+ dtype: Optional[DtypeArg] = None,
convert_axes=True,
convert_dates=True,
keep_default_dates=False,
@@ -809,7 +811,7 @@ def __init__(
self.convert_dates = convert_dates
self.date_unit = date_unit
self.keep_default_dates = keep_default_dates
- self.obj = None
+ self.obj: Optional[FrameOrSeriesUnion] = None
def check_keys_split(self, decoded):
"""
diff --git a/pandas/io/parsers.py b/pandas/io/parsers.py
index d670821c98520..68c0bbf0787e6 100644
--- a/pandas/io/parsers.py
+++ b/pandas/io/parsers.py
@@ -31,7 +31,7 @@
import pandas._libs.parsers as parsers
from pandas._libs.parsers import STR_NA_VALUES
from pandas._libs.tslibs import parsing
-from pandas._typing import FilePathOrBuffer, StorageOptions, Union
+from pandas._typing import DtypeArg, FilePathOrBuffer, StorageOptions, Union
from pandas.errors import (
AbstractMethodError,
EmptyDataError,
@@ -546,7 +546,7 @@ def read_csv(
prefix=None,
mangle_dupe_cols=True,
# General Parsing Configuration
- dtype=None,
+ dtype: Optional[DtypeArg] = None,
engine=None,
converters=None,
true_values=None,
@@ -626,7 +626,7 @@ def read_table(
prefix=None,
mangle_dupe_cols=True,
# General Parsing Configuration
- dtype=None,
+ dtype: Optional[DtypeArg] = None,
engine=None,
converters=None,
true_values=None,
@@ -3502,25 +3502,22 @@ def _clean_index_names(columns, index_col, unnamed_cols):
return index_names, columns, index_col
-def _get_empty_meta(columns, index_col, index_names, dtype=None):
+def _get_empty_meta(columns, index_col, index_names, dtype: Optional[DtypeArg] = None):
columns = list(columns)
# Convert `dtype` to a defaultdict of some kind.
# This will enable us to write `dtype[col_name]`
# without worrying about KeyError issues later on.
- if not isinstance(dtype, dict):
+ if not is_dict_like(dtype):
# if dtype == None, default will be object.
default_dtype = dtype or object
dtype = defaultdict(lambda: default_dtype)
else:
- # Save a copy of the dictionary.
- _dtype = dtype.copy()
- dtype = defaultdict(lambda: object)
-
- # Convert column indexes to column names.
- for k, v in _dtype.items():
- col = columns[k] if is_integer(k) else k
- dtype[col] = v
+ dtype = cast(dict, dtype)
+ dtype = defaultdict(
+ lambda: object,
+ {columns[k] if is_integer(k) else k: v for k, v in dtype.items()},
+ )
# Even though we have no data, the "index" of the empty DataFrame
# could for example still be an empty MultiIndex. Thus, we need to
diff --git a/pandas/io/pytables.py b/pandas/io/pytables.py
index d5c5e8edb9efe..341a8a9f90b96 100644
--- a/pandas/io/pytables.py
+++ b/pandas/io/pytables.py
@@ -29,7 +29,14 @@
from pandas._libs import lib, writers as libwriters
from pandas._libs.tslibs import timezones
-from pandas._typing import ArrayLike, FrameOrSeries, FrameOrSeriesUnion, Label, Shape
+from pandas._typing import (
+ ArrayLike,
+ DtypeArg,
+ FrameOrSeries,
+ FrameOrSeriesUnion,
+ Label,
+ Shape,
+)
from pandas.compat._optional import import_optional_dependency
from pandas.compat.pickle_compat import patch_pickle
from pandas.errors import PerformanceWarning
@@ -2259,7 +2266,7 @@ def __init__(
table=None,
meta=None,
metadata=None,
- dtype=None,
+ dtype: Optional[DtypeArg] = None,
data=None,
):
super().__init__(
| incremental PR for issue #38808
- [ ] closes #xxxx
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/38814 | 2020-12-30T15:25:05Z | 2020-12-30T21:24:22Z | 2020-12-30T21:24:22Z | 2020-12-30T21:24:26Z |
Backport PR #38723 on branch 1.2.x (BUG: inconsistency between frame.any/all with dt64 vs dt64tz) | diff --git a/doc/source/whatsnew/v1.2.1.rst b/doc/source/whatsnew/v1.2.1.rst
index 374ac737298d3..0bc01c683e0ad 100644
--- a/doc/source/whatsnew/v1.2.1.rst
+++ b/doc/source/whatsnew/v1.2.1.rst
@@ -18,6 +18,8 @@ Fixed regressions
- :meth:`to_csv` created corrupted zip files when there were more rows than ``chunksize`` (issue:`38714`)
- Bug in repr of float-like strings of an ``object`` dtype having trailing 0's truncated after the decimal (:issue:`38708`)
- Fixed regression in :meth:`DataFrame.groupby()` with :class:`Categorical` grouping column not showing unused categories for ``grouped.indices`` (:issue:`38642`)
+- Fixed regression in :meth:`DataFrame.any` and :meth:`DataFrame.all` not returning a result for tz-aware ``datetime64`` columns (:issue:`38723`)
+-
.. ---------------------------------------------------------------------------
diff --git a/pandas/core/arrays/datetimelike.py b/pandas/core/arrays/datetimelike.py
index be9864731842d..2f2f8efc0c360 100644
--- a/pandas/core/arrays/datetimelike.py
+++ b/pandas/core/arrays/datetimelike.py
@@ -1621,6 +1621,17 @@ def floor(self, freq, ambiguous="raise", nonexistent="raise"):
def ceil(self, freq, ambiguous="raise", nonexistent="raise"):
return self._round(freq, RoundTo.PLUS_INFTY, ambiguous, nonexistent)
+ # --------------------------------------------------------------
+ # Reductions
+
+ def any(self, *, axis: Optional[int] = None, skipna: bool = True):
+ # GH#34479 discussion of desired behavior long-term
+ return nanops.nanany(self._ndarray, axis=axis, skipna=skipna, mask=self.isna())
+
+ def all(self, *, axis: Optional[int] = None, skipna: bool = True):
+ # GH#34479 discussion of desired behavior long-term
+ return nanops.nanall(self._ndarray, axis=axis, skipna=skipna, mask=self.isna())
+
# --------------------------------------------------------------
# Frequency Methods
diff --git a/pandas/tests/frame/test_reductions.py b/pandas/tests/frame/test_reductions.py
index d33d91f2cefca..d843d4b0e9504 100644
--- a/pandas/tests/frame/test_reductions.py
+++ b/pandas/tests/frame/test_reductions.py
@@ -1091,9 +1091,13 @@ def test_any_all_bool_only(self):
(np.all, {"A": Series([0, 1], dtype=int)}, False),
(np.any, {"A": Series([0, 1], dtype=int)}, True),
pytest.param(np.all, {"A": Series([0, 1], dtype="M8[ns]")}, False),
+ pytest.param(np.all, {"A": Series([0, 1], dtype="M8[ns, UTC]")}, False),
pytest.param(np.any, {"A": Series([0, 1], dtype="M8[ns]")}, True),
+ pytest.param(np.any, {"A": Series([0, 1], dtype="M8[ns, UTC]")}, True),
pytest.param(np.all, {"A": Series([1, 2], dtype="M8[ns]")}, True),
+ pytest.param(np.all, {"A": Series([1, 2], dtype="M8[ns, UTC]")}, True),
pytest.param(np.any, {"A": Series([1, 2], dtype="M8[ns]")}, True),
+ pytest.param(np.any, {"A": Series([1, 2], dtype="M8[ns, UTC]")}, True),
pytest.param(np.all, {"A": Series([0, 1], dtype="m8[ns]")}, False),
pytest.param(np.any, {"A": Series([0, 1], dtype="m8[ns]")}, True),
pytest.param(np.all, {"A": Series([1, 2], dtype="m8[ns]")}, True),
diff --git a/pandas/tests/reductions/test_reductions.py b/pandas/tests/reductions/test_reductions.py
index 8c2297699807d..94afa204db891 100644
--- a/pandas/tests/reductions/test_reductions.py
+++ b/pandas/tests/reductions/test_reductions.py
@@ -17,6 +17,7 @@
Timedelta,
TimedeltaIndex,
Timestamp,
+ date_range,
isna,
timedelta_range,
to_timedelta,
@@ -923,6 +924,48 @@ def test_any_axis1_bool_only(self):
expected = Series([True, False])
tm.assert_series_equal(result, expected)
+ def test_any_all_datetimelike(self):
+ # GH#38723 these may not be the desired long-term behavior (GH#34479)
+ # but in the interim should be internally consistent
+ dta = date_range("1995-01-02", periods=3)._data
+ ser = Series(dta)
+ df = DataFrame(ser)
+
+ assert dta.all()
+ assert dta.any()
+
+ assert ser.all()
+ assert ser.any()
+
+ assert df.any().all()
+ assert df.all().all()
+
+ dta = dta.tz_localize("UTC")
+ ser = Series(dta)
+ df = DataFrame(ser)
+
+ assert dta.all()
+ assert dta.any()
+
+ assert ser.all()
+ assert ser.any()
+
+ assert df.any().all()
+ assert df.all().all()
+
+ tda = dta - dta[0]
+ ser = Series(tda)
+ df = DataFrame(ser)
+
+ assert tda.any()
+ assert not tda.all()
+
+ assert ser.any()
+ assert not ser.all()
+
+ assert df.any().all()
+ assert not df.all().any()
+
def test_timedelta64_analytics(self):
# index min/max
| Backport PR #38723: BUG: inconsistency between frame.any/all with dt64 vs dt64tz | https://api.github.com/repos/pandas-dev/pandas/pulls/38807 | 2020-12-30T13:24:07Z | 2020-12-30T14:45:19Z | 2020-12-30T14:45:19Z | 2020-12-30T14:45:19Z |
DOC: fix sphinx directive error in 1.2.1 release notes | diff --git a/doc/source/whatsnew/v1.2.1.rst b/doc/source/whatsnew/v1.2.1.rst
index 804886fb987ad..ac5c83f7c368e 100644
--- a/doc/source/whatsnew/v1.2.1.rst
+++ b/doc/source/whatsnew/v1.2.1.rst
@@ -15,9 +15,9 @@ including other versions of pandas.
Fixed regressions
~~~~~~~~~~~~~~~~~
- The deprecated attributes ``_AXIS_NAMES`` and ``_AXIS_NUMBERS`` of :class:`DataFrame` and :class:`Series` will no longer show up in ``dir`` or ``inspect.getmembers`` calls (:issue:`38740`)
-- :meth:`to_csv` created corrupted zip files when there were more rows than ``chunksize`` (issue:`38714`)
+- Fixed regression in :meth:`to_csv` that created corrupted zip files when there were more rows than ``chunksize`` (:issue:`38714`)
- Fixed a regression in ``groupby().rolling()`` where :class:`MultiIndex` levels were dropped (:issue:`38523`)
-- Bug in repr of float-like strings of an ``object`` dtype having trailing 0's truncated after the decimal (:issue:`38708`)
+- Fixed regression in repr of float-like strings of an ``object`` dtype having trailing 0's truncated after the decimal (:issue:`38708`)
- Fixed regression in :meth:`DataFrame.groupby()` with :class:`Categorical` grouping column not showing unused categories for ``grouped.indices`` (:issue:`38642`)
.. ---------------------------------------------------------------------------
| https://api.github.com/repos/pandas-dev/pandas/pulls/38806 | 2020-12-30T12:12:59Z | 2020-12-31T10:28:24Z | 2020-12-31T10:28:24Z | 2020-12-31T10:37:20Z | |
TST: GH30999 add match=msg to pytest.raises in modules with one simple instance each | diff --git a/pandas/tests/dtypes/test_inference.py b/pandas/tests/dtypes/test_inference.py
index ce9d5b1dca505..e1f339e360c77 100644
--- a/pandas/tests/dtypes/test_inference.py
+++ b/pandas/tests/dtypes/test_inference.py
@@ -131,7 +131,7 @@ def foo():
inference.is_list_like([])
foo()
- with pytest.raises(RecursionError):
+ with tm.external_error_raised(RecursionError):
foo()
diff --git a/pandas/tests/libs/test_hashtable.py b/pandas/tests/libs/test_hashtable.py
index a6fd421911d3e..84398b3e63d31 100644
--- a/pandas/tests/libs/test_hashtable.py
+++ b/pandas/tests/libs/test_hashtable.py
@@ -69,9 +69,8 @@ def test_get_set_contains_len(self, table_type, dtype):
assert table.get_item(index + 1) == 41
assert index + 2 not in table
- with pytest.raises(KeyError) as excinfo:
+ with pytest.raises(KeyError, match=str(index + 2)):
table.get_item(index + 2)
- assert str(index + 2) in str(excinfo.value)
def test_map(self, table_type, dtype):
# PyObjectHashTable has no map-method
diff --git a/pandas/tests/resample/test_resampler_grouper.py b/pandas/tests/resample/test_resampler_grouper.py
index da5bb0eb59f70..86955ac4e4d22 100644
--- a/pandas/tests/resample/test_resampler_grouper.py
+++ b/pandas/tests/resample/test_resampler_grouper.py
@@ -151,7 +151,7 @@ def test_groupby_with_origin():
count_ts = ts.groupby(simple_grouper).agg("count")
count_ts = count_ts[middle:end]
count_ts2 = ts2.groupby(simple_grouper).agg("count")
- with pytest.raises(AssertionError):
+ with pytest.raises(AssertionError, match="Index are different"):
tm.assert_index_equal(count_ts.index, count_ts2.index)
# test origin on 1970-01-01 00:00:00
| This pull request partially addresses xref #30999 to remove bare pytest.raises by adding match=msg. It doesn't close that issue as I have only addressed the following three modules:
pandas/tests/dtypes/test_inference.py
pandas/tests/libs/test_hashtable.py
pandas/tests/resample/test_resampler_grouper.py
They are scattered around the test directory, but each of them had only one instance of a bare pytest.raises that could be fixed with a simple addition of match=msg or a change to tm.external_error_raised.
- [ ] closes #xxxx
- [ ] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
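The resampler change above pins down the message of the `AssertionError` that `tm.assert_index_equal` raises. Outside of pytest, the same check can be sketched like this (the "Index are different" header is what current pandas emits and could change between versions):

```python
import pandas as pd
import pandas._testing as tm

left = pd.Index([1, 2, 3])
right = pd.Index([1, 2, 4])

# assert_index_equal raises an AssertionError whose message starts with
# a header naming what differs; matching on it keeps the test specific.
try:
    tm.assert_index_equal(left, right)
except AssertionError as err:
    assert "Index are different" in str(err)
else:
    raise AssertionError("indexes unexpectedly compared equal")
```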
| https://api.github.com/repos/pandas-dev/pandas/pulls/38805 | 2020-12-30T11:46:47Z | 2020-12-30T13:30:14Z | 2020-12-30T13:30:14Z | 2020-12-30T13:30:18Z |
TST: GH30999 add match=msg to all pytest.raises in pandas/tests/window | diff --git a/pandas/tests/window/moments/test_moments_ewm.py b/pandas/tests/window/moments/test_moments_ewm.py
index eceba7f143ab9..70706c027a21e 100644
--- a/pandas/tests/window/moments/test_moments_ewm.py
+++ b/pandas/tests/window/moments/test_moments_ewm.py
@@ -218,11 +218,12 @@ def test_ewma_halflife_arg(series):
msg = "comass, span, halflife, and alpha are mutually exclusive"
with pytest.raises(ValueError, match=msg):
series.ewm(span=20, halflife=50)
- with pytest.raises(ValueError):
+ with pytest.raises(ValueError, match=msg):
series.ewm(com=9.5, halflife=50)
- with pytest.raises(ValueError):
+ with pytest.raises(ValueError, match=msg):
series.ewm(com=9.5, span=20, halflife=50)
- with pytest.raises(ValueError):
+ msg = "Must pass one of comass, span, halflife, or alpha"
+ with pytest.raises(ValueError, match=msg):
series.ewm()
diff --git a/pandas/tests/window/test_apply.py b/pandas/tests/window/test_apply.py
index 076578f4dc3c4..b47cd71beb6a8 100644
--- a/pandas/tests/window/test_apply.py
+++ b/pandas/tests/window/test_apply.py
@@ -44,7 +44,7 @@ def f(x):
expected = df.iloc[2:].reindex_like(df)
tm.assert_frame_equal(result, expected)
- with pytest.raises(AttributeError):
+ with tm.external_error_raised(AttributeError):
df.rolling(window).apply(f, raw=True)
| This pull request partially addresses xref #30999 to remove bare `pytest.raises` by adding `match=msg`. It doesn't close that issue as I have only addressed test modules in the pandas/tests/window/ directory.
I have added `match=msg` to 3 instances of `pytest.raises` and converted another to a `tm.external_error_raised`. Nothing complicated or controversial, imo.
- [ ] closes #xxxx
- [ ] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/38804 | 2020-12-30T11:27:00Z | 2020-12-30T13:47:31Z | 2020-12-30T13:47:31Z | 2020-12-30T13:47:34Z |
BUG: avoid attribute error with pyarrow >=0.16.0 and <1.0.0 | diff --git a/ci/deps/actions-37-locale.yaml b/ci/deps/actions-37-locale.yaml
index 4f9918ca2f0c0..b18ce37d05ca0 100644
--- a/ci/deps/actions-37-locale.yaml
+++ b/ci/deps/actions-37-locale.yaml
@@ -30,7 +30,7 @@ dependencies:
- openpyxl
- pandas-gbq
- google-cloud-bigquery>=1.27.2 # GH 36436
- - pyarrow>=0.17
+ - pyarrow=0.17 # GH 38803
- pytables>=3.5.1
- scipy
- xarray=0.12.3
diff --git a/doc/source/whatsnew/v1.2.1.rst b/doc/source/whatsnew/v1.2.1.rst
index 804886fb987ad..7d2468400f74e 100644
--- a/doc/source/whatsnew/v1.2.1.rst
+++ b/doc/source/whatsnew/v1.2.1.rst
@@ -32,6 +32,7 @@ I/O
- Bumped minimum fastparquet version to 0.4.0 to avoid ``AttributeError`` from numba (:issue:`38344`)
- Bumped minimum pymysql version to 0.8.1 to avoid test failures (:issue:`38344`)
+- Fixed ``AttributeError`` with PyArrow versions [0.16.0, 1.0.0) (:issue:`38801`)
-
-
diff --git a/pandas/core/arrays/string_arrow.py b/pandas/core/arrays/string_arrow.py
index 3a351bf497662..065d9deac5df6 100644
--- a/pandas/core/arrays/string_arrow.py
+++ b/pandas/core/arrays/string_arrow.py
@@ -29,13 +29,12 @@
except ImportError:
pa = None
else:
- # our min supported version of pyarrow, 0.15.1, does not have a compute
- # module
- try:
+ # PyArrow backed StringArrays are available starting at 1.0.0, but this
+ # file is imported from even if pyarrow is < 1.0.0, before pyarrow.compute
+ # and its compute functions existed. GH38801
+ if LooseVersion(pa.__version__) >= "1.0.0":
import pyarrow.compute as pc
- except ImportError:
- pass
- else:
+
ARROW_CMP_FUNCS = {
"eq": pc.equal,
"ne": pc.not_equal,
diff --git a/pandas/tests/io/test_parquet.py b/pandas/tests/io/test_parquet.py
index 822b412916726..98863c5fe1549 100644
--- a/pandas/tests/io/test_parquet.py
+++ b/pandas/tests/io/test_parquet.py
@@ -896,7 +896,7 @@ def test_timezone_aware_index(self, pa, timezone_aware_date_list):
# this use-case sets the resolution to 1 minute
check_round_trip(df, pa, check_dtype=False)
- @td.skip_if_no("pyarrow", min_version="0.17")
+ @td.skip_if_no("pyarrow", min_version="1.0.0")
def test_filter_row_groups(self, pa):
# https://github.com/pandas-dev/pandas/issues/26551
df = pd.DataFrame({"a": list(range(0, 3))})
| Problem: The minimum pyarrow version [listed](https://pandas.pydata.org/docs/dev/whatsnew/v1.2.0.html#increased-minimum-versions-for-dependencies) as an optional dependency is 0.15.1. Pyarrow added the compute module that is imported in the change here with pyarrow [0.16.0](https://github.com/apache/arrow/commit/27dded680e84f1a628de1bddff1f4eb62fbc5887#diff-3d08757408024228c4443730cc3536ab39c9436cd2e4cb63e5da34c69c18962f), but the attributes imported by pandas are not available in that module until [1.0.0](https://github.com/apache/arrow/commit/dcd17bf36e0f7b18e8b8f466ed2cb3eb396955d8#diff-3d08757408024228c4443730cc3536ab39c9436cd2e4cb63e5da34c69c18962f), so anyone using pyarrow [0.16.0, 1.0.0) will get an `AttributeError`.
This PR gates access to those attributes on the pyarrow version so that they are only accessed when available (i.e. pyarrow >= 1.0.0).
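A minimal sketch of the version-gating idea (using a simplified, hypothetical version parser in place of the `LooseVersion` comparison the diff actually uses; pre-release suffixes are ignored):

```python
def version_tuple(v: str):
    """Parse a plain dotted version like '0.16.0' into a comparable
    tuple. A stand-in for the LooseVersion comparison in the diff."""
    return tuple(int(part) for part in v.split("."))


installed = "0.16.0"  # hypothetical pyarrow.__version__

# Mirror the fix: only import pyarrow.compute (and reference its
# functions) when the running pyarrow is new enough to provide them.
if version_tuple(installed) >= (1, 0, 0):
    compute_available = True
else:
    compute_available = False

assert compute_available is False
```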
- [X] closes #38801
- [X] tests added / passed
- [X] passes `black pandas`
- [X] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [X] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/38803 | 2020-12-30T10:47:20Z | 2021-01-05T12:23:29Z | 2021-01-05T12:23:28Z | 2021-01-05T13:23:45Z |
TST: GH30999 Add match=msg to all pytest.raises in pandas/tests/reshape | diff --git a/pandas/tests/reshape/test_get_dummies.py b/pandas/tests/reshape/test_get_dummies.py
index a32adeb612e7c..42907b3b4e23f 100644
--- a/pandas/tests/reshape/test_get_dummies.py
+++ b/pandas/tests/reshape/test_get_dummies.py
@@ -1,3 +1,5 @@
+import re
+
import numpy as np
import pytest
@@ -30,7 +32,8 @@ def effective_dtype(self, dtype):
return dtype
def test_get_dummies_raises_on_dtype_object(self, df):
- with pytest.raises(ValueError):
+ msg = "dtype=object is not a valid dtype for get_dummies"
+ with pytest.raises(ValueError, match=msg):
get_dummies(df, dtype="object")
def test_get_dummies_basic(self, sparse, dtype):
@@ -296,11 +299,19 @@ def test_dataframe_dummies_prefix_sep(self, df, sparse):
tm.assert_frame_equal(result, expected)
def test_dataframe_dummies_prefix_bad_length(self, df, sparse):
- with pytest.raises(ValueError):
+ msg = re.escape(
+ "Length of 'prefix' (1) did not match the length of the columns being "
+ "encoded (2)"
+ )
+ with pytest.raises(ValueError, match=msg):
get_dummies(df, prefix=["too few"], sparse=sparse)
def test_dataframe_dummies_prefix_sep_bad_length(self, df, sparse):
- with pytest.raises(ValueError):
+ msg = re.escape(
+ "Length of 'prefix_sep' (1) did not match the length of the columns being "
+ "encoded (2)"
+ )
+ with pytest.raises(ValueError, match=msg):
get_dummies(df, prefix_sep=["bad"], sparse=sparse)
def test_dataframe_dummies_prefix_dict(self, sparse):
diff --git a/pandas/tests/reshape/test_union_categoricals.py b/pandas/tests/reshape/test_union_categoricals.py
index b44f4844b8e2d..8c0c0a1f22760 100644
--- a/pandas/tests/reshape/test_union_categoricals.py
+++ b/pandas/tests/reshape/test_union_categoricals.py
@@ -275,7 +275,8 @@ def test_union_categoricals_sort(self):
c1 = Categorical(["b", "a"], categories=["b", "a", "c"], ordered=True)
c2 = Categorical(["a", "c"], categories=["b", "a", "c"], ordered=True)
- with pytest.raises(TypeError):
+ msg = "Cannot use sort_categories=True with ordered Categoricals"
+ with pytest.raises(TypeError, match=msg):
union_categoricals([c1, c2], sort_categories=True)
def test_union_categoricals_sort_false(self):
@@ -344,5 +345,6 @@ def test_union_categorical_unwrap(self):
result = union_categoricals([c1, c2])
tm.assert_categorical_equal(result, expected)
- with pytest.raises(TypeError):
+ msg = "all components to combine must be Categorical"
+ with pytest.raises(TypeError, match=msg):
union_categoricals([c1, ["a", "b", "c"]])
| This pull request partially addresses xref #30999 to remove bare pytest.raises by adding match=msg. It doesn't close that issue as I have only addressed test modules in the pandas/tests/reshape/ directory.
This one was super simple. Fixed 5 bare pytest.raises and didn't do anything complicated or controversial.
- [ ] closes #xxxx
- [ ] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/38800 | 2020-12-30T10:21:53Z | 2020-12-30T13:33:38Z | 2020-12-30T13:33:38Z | 2020-12-30T13:33:42Z |
TST: Add hook for Disallow bare pytest.raises | diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml
index d78c2bacc4e44..2dade8afbf91f 100644
--- a/.pre-commit-config.yaml
+++ b/.pre-commit-config.yaml
@@ -1,4 +1,5 @@
minimum_pre_commit_version: 2.9.2
+exclude: ^LICENSES/|\.(html|csv|svg)$
repos:
- repo: https://github.com/python/black
rev: 20.8b1
@@ -121,6 +122,13 @@ repos:
entry: python scripts/validate_unwanted_patterns.py --validation-type="private_function_across_module"
types: [python]
exclude: ^(asv_bench|pandas/tests|doc)/
+ - id: unwanted-patterns-bare-pytest-raises
+ name: Check for use of bare pytest raises
+ language: python
+ entry: python scripts/validate_unwanted_patterns.py --validation-type="bare_pytest_raises"
+ types: [python]
+ files: ^pandas/tests/
+ exclude: ^pandas/tests/(computation|extension|io)/
- id: inconsistent-namespace-usage
name: 'Check for inconsistent use of pandas namespace in tests'
entry: python scripts/check_for_inconsistent_pandas_namespace.py
@@ -137,7 +145,7 @@ repos:
name: Check for use of foo.__class__ instead of type(foo)
entry: \.__class__
language: pygrep
- files: \.(py|pyx)$
+ types_or: [python, cython]
- id: unwanted-typing
name: Check for use of comment-based annotation syntax and missing error codes
entry: |
@@ -165,9 +173,8 @@ repos:
rev: v3.4.0
hooks:
- id: end-of-file-fixer
- exclude: ^LICENSES/|\.(html|csv|txt|svg|py)$
+ exclude: \.txt$
- id: trailing-whitespace
- exclude: \.(html|svg)$
- repo: https://github.com/codespell-project/codespell
rev: v2.0.0
hooks:
| xref #30999
Adding a hook with lots of excluded folders (for now). This can be made stricter as we go along; there's not that much left.
BUG: GH38672 SeriesGroupBy.value_counts for categorical | diff --git a/doc/source/whatsnew/v1.3.0.rst b/doc/source/whatsnew/v1.3.0.rst
index 5197fd2b23dab..ba3af0a6accb3 100644
--- a/doc/source/whatsnew/v1.3.0.rst
+++ b/doc/source/whatsnew/v1.3.0.rst
@@ -284,7 +284,7 @@ Plotting
Groupby/resample/rolling
^^^^^^^^^^^^^^^^^^^^^^^^
--
+- Bug in :meth:`SeriesGroupBy.value_counts` where unobserved categories in a grouped categorical series were not tallied (:issue:`38672`)
-
Reshaping
diff --git a/pandas/core/groupby/generic.py b/pandas/core/groupby/generic.py
index 16b00735cf694..f2899a7ca704b 100644
--- a/pandas/core/groupby/generic.py
+++ b/pandas/core/groupby/generic.py
@@ -42,6 +42,7 @@
ensure_int64,
ensure_platform_int,
is_bool,
+ is_categorical_dtype,
is_integer_dtype,
is_interval_dtype,
is_numeric_dtype,
@@ -681,9 +682,10 @@ def value_counts(
from pandas.core.reshape.merge import get_join_indexers
from pandas.core.reshape.tile import cut
- if bins is not None and not np.iterable(bins):
- # scalar bins cannot be done at top level
- # in a backward compatible way
+ ids, _, _ = self.grouper.group_info
+ val = self.obj._values
+
+ def apply_series_value_counts():
return self.apply(
Series.value_counts,
normalize=normalize,
@@ -692,8 +694,14 @@ def value_counts(
bins=bins,
)
- ids, _, _ = self.grouper.group_info
- val = self.obj._values
+ if bins is not None:
+ if not np.iterable(bins):
+ # scalar bins cannot be done at top level
+ # in a backward compatible way
+ return apply_series_value_counts()
+ elif is_categorical_dtype(val):
+ # GH38672
+ return apply_series_value_counts()
# groupby removes null keys from groupings
mask = ids != -1
diff --git a/pandas/tests/groupby/test_value_counts.py b/pandas/tests/groupby/test_value_counts.py
index c5d454baa7e7b..afb648d8527ca 100644
--- a/pandas/tests/groupby/test_value_counts.py
+++ b/pandas/tests/groupby/test_value_counts.py
@@ -9,7 +9,16 @@
import numpy as np
import pytest
-from pandas import DataFrame, Grouper, MultiIndex, Series, date_range, to_datetime
+from pandas import (
+ Categorical,
+ CategoricalIndex,
+ DataFrame,
+ Grouper,
+ MultiIndex,
+ Series,
+ date_range,
+ to_datetime,
+)
import pandas._testing as tm
@@ -111,3 +120,30 @@ def test_series_groupby_value_counts_with_grouper():
expected.index.names = result.index.names
tm.assert_series_equal(result, expected)
+
+
+def test_series_groupby_value_counts_on_categorical():
+ # GH38672
+
+ s = Series(Categorical(["a"], categories=["a", "b"]))
+ result = s.groupby([0]).value_counts()
+
+ expected = Series(
+ data=[1, 0],
+ index=MultiIndex.from_arrays(
+ [
+ [0, 0],
+ CategoricalIndex(
+ ["a", "b"], categories=["a", "b"], ordered=False, dtype="category"
+ ),
+ ]
+ ),
+ name=0,
+ )
+
+ # Expected:
+ # 0 a 1
+ # b 0
+ # Name: 0, dtype: int64
+
+ tm.assert_series_equal(result, expected)
| Unobserved categories in a grouped categorical Series were being dropped by `SeriesGroupBy.value_counts`, which was inconsistent with `Series.value_counts`.
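A minimal pure-Python sketch of the intended semantics (a hypothetical stand-in for illustration, not the pandas code path): every category should appear in the tally, with a count of 0 for unobserved ones.

```python
from collections import Counter

# Categorical data: the dtype knows about category "b" even though no
# value in the series takes it.
categories = ["a", "b"]
values = ["a"]

counts = Counter(values)
# value_counts on a categorical should report all categories,
# including the unobserved ones with count 0.
result = {cat: counts.get(cat, 0) for cat in categories}

assert result == {"a": 1, "b": 0}
```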
- [x] closes #38672
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/38796 | 2020-12-30T03:06:51Z | 2020-12-31T16:41:17Z | 2020-12-31T16:41:17Z | 2020-12-31T22:09:33Z |
REF: implement array_algos.putmask | diff --git a/pandas/core/array_algos/putmask.py b/pandas/core/array_algos/putmask.py
new file mode 100644
index 0000000000000..32c84b6eb234f
--- /dev/null
+++ b/pandas/core/array_algos/putmask.py
@@ -0,0 +1,122 @@
+"""
+EA-compatible analogue to to np.putmask
+"""
+from typing import Any
+import warnings
+
+import numpy as np
+
+from pandas._libs import lib
+from pandas._typing import ArrayLike
+
+from pandas.core.dtypes.cast import convert_scalar_for_putitemlike, maybe_promote
+from pandas.core.dtypes.common import is_float_dtype, is_integer_dtype, is_list_like
+from pandas.core.dtypes.missing import isna_compat
+
+
+def putmask_inplace(values: ArrayLike, mask: np.ndarray, value: Any) -> None:
+ """
+ ExtensionArray-compatible implementation of np.putmask. The main
+ difference is we do not handle repeating or truncating like numpy.
+
+ Parameters
+ ----------
+ mask : np.ndarray[bool]
+ We assume _extract_bool_array has already been called.
+ value : Any
+ """
+
+ if lib.is_scalar(value) and isinstance(values, np.ndarray):
+ value = convert_scalar_for_putitemlike(value, values.dtype)
+
+ if not isinstance(values, np.ndarray) or (
+ values.dtype == object and not lib.is_scalar(value)
+ ):
+ # GH#19266 using np.putmask gives unexpected results with listlike value
+ if is_list_like(value) and len(value) == len(values):
+ values[mask] = value[mask]
+ else:
+ values[mask] = value
+ else:
+ # GH#37833 np.putmask is more performant than __setitem__
+ np.putmask(values, mask, value)
+
+
+def putmask_smart(values: np.ndarray, mask: np.ndarray, new) -> np.ndarray:
+ """
+ Return a new ndarray, try to preserve dtype if possible.
+
+ Parameters
+ ----------
+ values : np.ndarray
+ `values`, updated in-place.
+ mask : np.ndarray[bool]
+ Applies to both sides (array like).
+ new : `new values` either scalar or an array like aligned with `values`
+
+ Returns
+ -------
+ values : ndarray with updated values
+ this *may* be a copy of the original
+
+ See Also
+ --------
+ ndarray.putmask
+ """
+ # we cannot use np.asarray() here as we cannot have conversions
+ # that numpy does when numeric are mixed with strings
+
+ # n should be the length of the mask or a scalar here
+ if not is_list_like(new):
+ new = np.repeat(new, len(mask))
+
+ # see if we are only masking values that if putted
+ # will work in the current dtype
+ try:
+ nn = new[mask]
+ except TypeError:
+ # TypeError: only integer scalar arrays can be converted to a scalar index
+ pass
+ else:
+ # make sure that we have a nullable type if we have nulls
+ if not isna_compat(values, nn[0]):
+ pass
+ elif not (is_float_dtype(nn.dtype) or is_integer_dtype(nn.dtype)):
+ # only compare integers/floats
+ pass
+ elif not (is_float_dtype(values.dtype) or is_integer_dtype(values.dtype)):
+ # only compare integers/floats
+ pass
+ else:
+
+ # we ignore ComplexWarning here
+ with warnings.catch_warnings(record=True):
+ warnings.simplefilter("ignore", np.ComplexWarning)
+ nn_at = nn.astype(values.dtype)
+
+ comp = nn == nn_at
+ if is_list_like(comp) and comp.all():
+ nv = values.copy()
+ nv[mask] = nn_at
+ return nv
+
+ new = np.asarray(new)
+
+ if values.dtype.kind == new.dtype.kind:
+ # preserves dtype if possible
+ return _putmask_preserve(values, new, mask)
+
+ # change the dtype if needed
+ dtype, _ = maybe_promote(new.dtype)
+
+ values = values.astype(dtype)
+
+ return _putmask_preserve(values, new, mask)
+
+
+def _putmask_preserve(new_values: np.ndarray, new, mask: np.ndarray):
+ try:
+ new_values[mask] = new[mask]
+ except (IndexError, ValueError):
+ new_values[mask] = new
+ return new_values
diff --git a/pandas/core/internals/blocks.py b/pandas/core/internals/blocks.py
index 0eca13329f4a6..76b30dc17711e 100644
--- a/pandas/core/internals/blocks.py
+++ b/pandas/core/internals/blocks.py
@@ -1,7 +1,6 @@
import inspect
import re
from typing import TYPE_CHECKING, Any, List, Optional, Type, Union, cast
-import warnings
import numpy as np
@@ -42,9 +41,7 @@
is_dtype_equal,
is_extension_array_dtype,
is_float,
- is_float_dtype,
is_integer,
- is_integer_dtype,
is_list_like,
is_object_dtype,
is_re,
@@ -54,9 +51,10 @@
)
from pandas.core.dtypes.dtypes import CategoricalDtype, ExtensionDtype
from pandas.core.dtypes.generic import ABCDataFrame, ABCIndex, ABCPandasArray, ABCSeries
-from pandas.core.dtypes.missing import is_valid_nat_for_dtype, isna, isna_compat
+from pandas.core.dtypes.missing import is_valid_nat_for_dtype, isna
import pandas.core.algorithms as algos
+from pandas.core.array_algos.putmask import putmask_inplace, putmask_smart
from pandas.core.array_algos.replace import compare_or_regex_search, replace_regex
from pandas.core.array_algos.transforms import shift
from pandas.core.arrays import (
@@ -437,7 +435,7 @@ def fillna(
if self._can_hold_element(value):
nb = self if inplace else self.copy()
- nb._putmask_simple(mask, value)
+ putmask_inplace(nb.values, mask, value)
# TODO: should be nb._maybe_downcast?
return self._maybe_downcast([nb], downcast)
@@ -762,7 +760,7 @@ def replace(
)
blk = self if inplace else self.copy()
- blk._putmask_simple(mask, value)
+ putmask_inplace(blk.values, mask, value)
blocks = blk.convert(numeric=False, copy=not inplace)
return blocks
@@ -991,35 +989,6 @@ def setitem(self, indexer, value):
block = self.make_block(values)
return block
- def _putmask_simple(self, mask: np.ndarray, value: Any):
- """
- Like putmask but
-
- a) we do not cast on failure
- b) we do not handle repeating or truncating like numpy.
-
- Parameters
- ----------
- mask : np.ndarray[bool]
- We assume _extract_bool_array has already been called.
- value : Any
- We assume self._can_hold_element(value)
- """
- values = self.values
-
- if lib.is_scalar(value) and isinstance(values, np.ndarray):
- value = convert_scalar_for_putitemlike(value, values.dtype)
-
- if self.is_extension or (self.is_object and not lib.is_scalar(value)):
- # GH#19266 using np.putmask gives unexpected results with listlike value
- if is_list_like(value) and len(value) == len(values):
- values[mask] = value[mask]
- else:
- values[mask] = value
- else:
- # GH#37833 np.putmask is more performant than __setitem__
- np.putmask(values, mask, value)
-
def putmask(self, mask, new, axis: int = 0) -> List["Block"]:
"""
putmask the data to the block; it is possible that we may create a
@@ -1121,7 +1090,7 @@ def f(mask, val, idx):
# we need to explicitly astype here to make a copy
n = n.astype(dtype)
- nv = _putmask_smart(val, mask, n)
+ nv = putmask_smart(val, mask, n)
return nv
new_blocks = self.split_and_operate(mask, f, True)
@@ -1560,7 +1529,7 @@ def _replace_coerce(
nb = self.coerce_to_target_dtype(value)
if nb is self and not inplace:
nb = nb.copy()
- nb._putmask_simple(mask, value)
+ putmask_inplace(nb.values, mask, value)
return [nb]
else:
regex = _should_use_regex(regex, to_replace)
@@ -2665,86 +2634,6 @@ def safe_reshape(arr, new_shape: Shape):
return arr
-def _putmask_smart(v: np.ndarray, mask: np.ndarray, n) -> np.ndarray:
- """
- Return a new ndarray, try to preserve dtype if possible.
-
- Parameters
- ----------
- v : np.ndarray
- `values`, updated in-place.
- mask : np.ndarray[bool]
- Applies to both sides (array like).
- n : `new values` either scalar or an array like aligned with `values`
-
- Returns
- -------
- values : ndarray with updated values
- this *may* be a copy of the original
-
- See Also
- --------
- ndarray.putmask
- """
- # we cannot use np.asarray() here as we cannot have conversions
- # that numpy does when numeric are mixed with strings
-
- # n should be the length of the mask or a scalar here
- if not is_list_like(n):
- n = np.repeat(n, len(mask))
-
- # see if we are only masking values that if putted
- # will work in the current dtype
- try:
- nn = n[mask]
- except TypeError:
- # TypeError: only integer scalar arrays can be converted to a scalar index
- pass
- else:
- # make sure that we have a nullable type
- # if we have nulls
- if not isna_compat(v, nn[0]):
- pass
- elif not (is_float_dtype(nn.dtype) or is_integer_dtype(nn.dtype)):
- # only compare integers/floats
- pass
- elif not (is_float_dtype(v.dtype) or is_integer_dtype(v.dtype)):
- # only compare integers/floats
- pass
- else:
-
- # we ignore ComplexWarning here
- with warnings.catch_warnings(record=True):
- warnings.simplefilter("ignore", np.ComplexWarning)
- nn_at = nn.astype(v.dtype)
-
- comp = nn == nn_at
- if is_list_like(comp) and comp.all():
- nv = v.copy()
- nv[mask] = nn_at
- return nv
-
- n = np.asarray(n)
-
- def _putmask_preserve(nv, n):
- try:
- nv[mask] = n[mask]
- except (IndexError, ValueError):
- nv[mask] = n
- return nv
-
- # preserves dtype if possible
- if v.dtype.kind == n.dtype.kind:
- return _putmask_preserve(v, n)
-
- # change the dtype if needed
- dtype, _ = maybe_promote(n.dtype)
-
- v = v.astype(dtype)
-
- return _putmask_preserve(v, n)
-
-
def _extract_bool_array(mask: ArrayLike) -> np.ndarray:
"""
If we have a SparseArray or BooleanArray, convert it to ndarray[bool].
| - [ ] closes #xxxx
- [ ] tests added / passed
- [ ] passes `black pandas`
- [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/38793 | 2020-12-30T02:37:19Z | 2020-12-30T19:58:14Z | 2020-12-30T19:58:14Z | 2020-12-30T20:03:18Z |
BUG: DataFrame(dt64data, dtype=td64) corner cases | diff --git a/doc/source/whatsnew/v1.3.0.rst b/doc/source/whatsnew/v1.3.0.rst
index f66098633b45e..d2269b8ef78e1 100644
--- a/doc/source/whatsnew/v1.3.0.rst
+++ b/doc/source/whatsnew/v1.3.0.rst
@@ -190,7 +190,8 @@ Datetimelike
^^^^^^^^^^^^
- Bug in :class:`DataFrame` and :class:`Series` constructors sometimes dropping nanoseconds from :class:`Timestamp` (resp. :class:`Timedelta`) ``data``, with ``dtype=datetime64[ns]`` (resp. ``timedelta64[ns]``) (:issue:`38032`)
- Bug in :meth:`DataFrame.first` and :meth:`Series.first` returning two months for offset one month when first day is last calendar day (:issue:`29623`)
-- Bug in constructing a :class:`DataFrame` or :class:`Series` with mismatched ``datetime64`` data and ``timedelta64`` dtype, or vice-versa, failing to raise ``TypeError`` (:issue:`38575`, :issue:`38764`)
+- Bug in constructing a :class:`DataFrame` or :class:`Series` with mismatched ``datetime64`` data and ``timedelta64`` dtype, or vice-versa, failing to raise ``TypeError`` (:issue:`38575`, :issue:`38764`, :issue:`38792`)
+- Bug in constructing a :class:`Series` or :class:`DataFrame` with a ``datetime`` object out of bounds for ``datetime64[ns]`` dtype (:issue:`38792`)
- Bug in :meth:`DatetimeIndex.intersection`, :meth:`DatetimeIndex.symmetric_difference`, :meth:`PeriodIndex.intersection`, :meth:`PeriodIndex.symmetric_difference` always returning object-dtype when operating with :class:`CategoricalIndex` (:issue:`38741`)
- Bug in :meth:`Series.where` incorrectly casting ``datetime64`` values to ``int64`` (:issue:`37682`)
-
diff --git a/pandas/core/construction.py b/pandas/core/construction.py
index 261b13e52777b..54a6f47ae1b38 100644
--- a/pandas/core/construction.py
+++ b/pandas/core/construction.py
@@ -615,9 +615,12 @@ def _try_cast(arr, dtype: Optional[DtypeObj], copy: bool, raise_cast_failure: bo
except OutOfBoundsDatetime:
# in case of out of bound datetime64 -> always raise
raise
- except (ValueError, TypeError):
+ except (ValueError, TypeError) as err:
if dtype is not None and raise_cast_failure:
raise
+ elif "Cannot cast" in str(err):
+ # via _disallow_mismatched_datetimelike
+ raise
else:
subarr = np.array(arr, dtype=object, copy=copy)
return subarr
diff --git a/pandas/core/dtypes/cast.py b/pandas/core/dtypes/cast.py
index 0915043f8fd46..1cfa9957874ac 100644
--- a/pandas/core/dtypes/cast.py
+++ b/pandas/core/dtypes/cast.py
@@ -170,7 +170,7 @@ def maybe_unbox_datetimelike(value: Scalar, dtype: DtypeObj) -> Scalar:
return value
-def _disallow_mismatched_datetimelike(value: DtypeObj, dtype: DtypeObj):
+def _disallow_mismatched_datetimelike(value, dtype: DtypeObj):
"""
numpy allows np.array(dt64values, dtype="timedelta64[ns]") and
vice-versa, but we do not want to allow this, so we need to
@@ -725,7 +725,11 @@ def infer_dtype_from_scalar(val, pandas_dtype: bool = False) -> Tuple[DtypeObj,
dtype = np.dtype(object)
elif isinstance(val, (np.datetime64, datetime)):
- val = Timestamp(val)
+ try:
+ val = Timestamp(val)
+ except OutOfBoundsDatetime:
+ return np.dtype(object), val
+
if val is NaT or val.tz is None:
dtype = np.dtype("M8[ns]")
else:
@@ -1472,6 +1476,8 @@ def maybe_cast_to_datetime(value, dtype: Optional[DtypeObj]):
# we have an array of datetime or timedeltas & nulls
elif np.prod(value.shape) or not is_dtype_equal(value.dtype, dtype):
+ _disallow_mismatched_datetimelike(value, dtype)
+
try:
if is_datetime64:
value = to_datetime(value, errors="raise")
diff --git a/pandas/tests/frame/test_constructors.py b/pandas/tests/frame/test_constructors.py
index 94b2431650359..dcade94e0186c 100644
--- a/pandas/tests/frame/test_constructors.py
+++ b/pandas/tests/frame/test_constructors.py
@@ -2987,12 +2987,11 @@ def test_from_timedelta64_scalar_object(self, constructor, request):
def test_from_scalar_datetimelike_mismatched(self, constructor, cls, request):
node = request.node
params = node.callspec.params
- if params["frame_or_series"] is DataFrame and params["constructor"] is not None:
+ if params["frame_or_series"] is DataFrame and params["constructor"] is dict:
mark = pytest.mark.xfail(
reason="DataFrame incorrectly allows mismatched datetimelike"
)
node.add_marker(mark)
-
scalar = cls("NaT", "ns")
dtype = {np.datetime64: "m8[ns]", np.timedelta64: "M8[ns]"}[cls]
@@ -3002,3 +3001,9 @@ def test_from_scalar_datetimelike_mismatched(self, constructor, cls, request):
scalar = cls(4, "ns")
with pytest.raises(TypeError, match="Cannot cast"):
constructor(scalar, dtype=dtype)
+
+ def test_from_out_of_bounds_datetime(self, constructor):
+ scalar = datetime(9999, 1, 1)
+ result = constructor(scalar)
+
+ assert type(get1(result)) is datetime
| - [ ] closes #xxxx
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/38792 | 2020-12-30T02:29:39Z | 2020-12-31T23:49:08Z | 2020-12-31T23:49:08Z | 2021-01-01T02:04:12Z |
BUG: Categorical with non-nano dt64 | diff --git a/pandas/core/arrays/categorical.py b/pandas/core/arrays/categorical.py
index 1bb5556663c29..4b29663adda23 100644
--- a/pandas/core/arrays/categorical.py
+++ b/pandas/core/arrays/categorical.py
@@ -30,6 +30,7 @@
coerce_indexer_dtype,
maybe_cast_to_extension_array,
maybe_infer_to_datetimelike,
+ sanitize_to_nanoseconds,
)
from pandas.core.dtypes.common import (
ensure_int64,
@@ -366,6 +367,9 @@ def __init__(
values = [values[idx] for idx in np.where(~null_mask)[0]]
values = sanitize_array(values, None, dtype=sanitize_dtype)
+ else:
+ values = sanitize_to_nanoseconds(values)
+
if dtype.categories is None:
try:
codes, categories = factorize(values, sort=True)
diff --git a/pandas/core/dtypes/cast.py b/pandas/core/dtypes/cast.py
index 25259093f9fba..08e193acdf5ea 100644
--- a/pandas/core/dtypes/cast.py
+++ b/pandas/core/dtypes/cast.py
@@ -1521,13 +1521,7 @@ def maybe_cast_to_datetime(value, dtype: Optional[DtypeObj]):
# catch a datetime/timedelta that is not of ns variety
# and no coercion specified
if is_array and value.dtype.kind in ["M", "m"]:
- dtype = value.dtype
-
- if dtype.kind == "M" and dtype != DT64NS_DTYPE:
- value = conversion.ensure_datetime64ns(value)
-
- elif dtype.kind == "m" and dtype != TD64NS_DTYPE:
- value = conversion.ensure_timedelta64ns(value)
+ value = sanitize_to_nanoseconds(value)
# only do this if we have an array and the dtype of the array is not
# setup already we are not an integer/object, so don't bother with this
@@ -1543,6 +1537,20 @@ def maybe_cast_to_datetime(value, dtype: Optional[DtypeObj]):
return value
+def sanitize_to_nanoseconds(values: np.ndarray) -> np.ndarray:
+ """
+ Safely convert non-nanosecond datetime64 or timedelta64 values to nanosecond.
+ """
+ dtype = values.dtype
+ if dtype.kind == "M" and dtype != DT64NS_DTYPE:
+ values = conversion.ensure_datetime64ns(values)
+
+ elif dtype.kind == "m" and dtype != TD64NS_DTYPE:
+ values = conversion.ensure_timedelta64ns(values)
+
+ return values
+
+
def find_common_type(types: List[DtypeObj]) -> DtypeObj:
"""
Find a common data type among the given dtypes.
diff --git a/pandas/tests/arrays/categorical/test_constructors.py b/pandas/tests/arrays/categorical/test_constructors.py
index 924a20c7e6490..556f8c24f2ab1 100644
--- a/pandas/tests/arrays/categorical/test_constructors.py
+++ b/pandas/tests/arrays/categorical/test_constructors.py
@@ -3,6 +3,8 @@
import numpy as np
import pytest
+from pandas.compat import IS64, is_platform_windows
+
from pandas.core.dtypes.common import is_float_dtype, is_integer_dtype
from pandas.core.dtypes.dtypes import CategoricalDtype
@@ -723,3 +725,14 @@ def test_from_sequence_copy(self):
result = Categorical._from_sequence(cat, dtype=None, copy=True)
assert not np.shares_memory(result._codes, cat._codes)
+
+ @pytest.mark.xfail(
+ not IS64 or is_platform_windows(),
+ reason="Incorrectly raising in ensure_datetime64ns",
+ )
+ def test_constructor_datetime64_non_nano(self):
+ categories = np.arange(10).view("M8[D]")
+ values = categories[::2].copy()
+
+ cat = Categorical(values, categories=categories)
+ assert (cat == values).all()
diff --git a/pandas/tests/series/methods/test_drop_duplicates.py b/pandas/tests/series/methods/test_drop_duplicates.py
index 6eb0e09f12658..fe4bcb44d5e61 100644
--- a/pandas/tests/series/methods/test_drop_duplicates.py
+++ b/pandas/tests/series/methods/test_drop_duplicates.py
@@ -67,72 +67,124 @@ def test_drop_duplicates_no_duplicates(any_numpy_dtype, keep, values):
class TestSeriesDropDuplicates:
- @pytest.mark.parametrize(
- "dtype",
- ["int_", "uint", "float_", "unicode_", "timedelta64[h]", "datetime64[D]"],
+ @pytest.fixture(
+ params=["int_", "uint", "float_", "unicode_", "timedelta64[h]", "datetime64[D]"]
)
- def test_drop_duplicates_categorical_non_bool(self, dtype, ordered):
- cat_array = np.array([1, 2, 3, 4, 5], dtype=np.dtype(dtype))
+ def dtype(self, request):
+ return request.param
+ @pytest.fixture
+ def cat_series1(self, dtype, ordered):
# Test case 1
+ cat_array = np.array([1, 2, 3, 4, 5], dtype=np.dtype(dtype))
+
input1 = np.array([1, 2, 3, 3], dtype=np.dtype(dtype))
- tc1 = Series(Categorical(input1, categories=cat_array, ordered=ordered))
- if dtype == "datetime64[D]":
- # pre-empty flaky xfail, tc1 values are seemingly-random
- if not (np.array(tc1) == input1).all():
- pytest.xfail(reason="GH#7996")
+ cat = Categorical(input1, categories=cat_array, ordered=ordered)
+ tc1 = Series(cat)
+ return tc1
+
+ def test_drop_duplicates_categorical_non_bool(self, cat_series1):
+ tc1 = cat_series1
expected = Series([False, False, False, True])
- tm.assert_series_equal(tc1.duplicated(), expected)
- tm.assert_series_equal(tc1.drop_duplicates(), tc1[~expected])
+
+ result = tc1.duplicated()
+ tm.assert_series_equal(result, expected)
+
+ result = tc1.drop_duplicates()
+ tm.assert_series_equal(result, tc1[~expected])
+
sc = tc1.copy()
return_value = sc.drop_duplicates(inplace=True)
assert return_value is None
tm.assert_series_equal(sc, tc1[~expected])
+ def test_drop_duplicates_categorical_non_bool_keeplast(self, cat_series1):
+ tc1 = cat_series1
+
expected = Series([False, False, True, False])
- tm.assert_series_equal(tc1.duplicated(keep="last"), expected)
- tm.assert_series_equal(tc1.drop_duplicates(keep="last"), tc1[~expected])
+
+ result = tc1.duplicated(keep="last")
+ tm.assert_series_equal(result, expected)
+
+ result = tc1.drop_duplicates(keep="last")
+ tm.assert_series_equal(result, tc1[~expected])
+
sc = tc1.copy()
return_value = sc.drop_duplicates(keep="last", inplace=True)
assert return_value is None
tm.assert_series_equal(sc, tc1[~expected])
+ def test_drop_duplicates_categorical_non_bool_keepfalse(self, cat_series1):
+ tc1 = cat_series1
+
expected = Series([False, False, True, True])
- tm.assert_series_equal(tc1.duplicated(keep=False), expected)
- tm.assert_series_equal(tc1.drop_duplicates(keep=False), tc1[~expected])
+
+ result = tc1.duplicated(keep=False)
+ tm.assert_series_equal(result, expected)
+
+ result = tc1.drop_duplicates(keep=False)
+ tm.assert_series_equal(result, tc1[~expected])
+
sc = tc1.copy()
return_value = sc.drop_duplicates(keep=False, inplace=True)
assert return_value is None
tm.assert_series_equal(sc, tc1[~expected])
- # Test case 2
+ @pytest.fixture
+ def cat_series2(self, dtype, ordered):
+ # Test case 2; TODO: better name
+ cat_array = np.array([1, 2, 3, 4, 5], dtype=np.dtype(dtype))
+
input2 = np.array([1, 2, 3, 5, 3, 2, 4], dtype=np.dtype(dtype))
- tc2 = Series(Categorical(input2, categories=cat_array, ordered=ordered))
- if dtype == "datetime64[D]":
- # pre-empty flaky xfail, tc2 values are seemingly-random
- if not (np.array(tc2) == input2).all():
- pytest.xfail(reason="GH#7996")
+ cat = Categorical(input2, categories=cat_array, ordered=ordered)
+ tc2 = Series(cat)
+ return tc2
+
+ def test_drop_duplicates_categorical_non_bool2(self, cat_series2):
+ # Test case 2; TODO: better name
+ tc2 = cat_series2
expected = Series([False, False, False, False, True, True, False])
- tm.assert_series_equal(tc2.duplicated(), expected)
- tm.assert_series_equal(tc2.drop_duplicates(), tc2[~expected])
+
+ result = tc2.duplicated()
+ tm.assert_series_equal(result, expected)
+
+ result = tc2.drop_duplicates()
+ tm.assert_series_equal(result, tc2[~expected])
+
sc = tc2.copy()
return_value = sc.drop_duplicates(inplace=True)
assert return_value is None
tm.assert_series_equal(sc, tc2[~expected])
+ def test_drop_duplicates_categorical_non_bool2_keeplast(self, cat_series2):
+ tc2 = cat_series2
+
expected = Series([False, True, True, False, False, False, False])
- tm.assert_series_equal(tc2.duplicated(keep="last"), expected)
- tm.assert_series_equal(tc2.drop_duplicates(keep="last"), tc2[~expected])
+
+ result = tc2.duplicated(keep="last")
+ tm.assert_series_equal(result, expected)
+
+ result = tc2.drop_duplicates(keep="last")
+ tm.assert_series_equal(result, tc2[~expected])
+
sc = tc2.copy()
return_value = sc.drop_duplicates(keep="last", inplace=True)
assert return_value is None
tm.assert_series_equal(sc, tc2[~expected])
+ def test_drop_duplicates_categorical_non_bool2_keepfalse(self, cat_series2):
+ tc2 = cat_series2
+
expected = Series([False, True, True, False, True, True, False])
- tm.assert_series_equal(tc2.duplicated(keep=False), expected)
- tm.assert_series_equal(tc2.drop_duplicates(keep=False), tc2[~expected])
+
+ result = tc2.duplicated(keep=False)
+ tm.assert_series_equal(result, expected)
+
+ result = tc2.drop_duplicates(keep=False)
+ tm.assert_series_equal(result, tc2[~expected])
+
sc = tc2.copy()
return_value = sc.drop_duplicates(keep=False, inplace=True)
assert return_value is None
| - [ ] closes #xxxx
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
Fixes the flaky xfails in test_drop_duplicates | https://api.github.com/repos/pandas-dev/pandas/pulls/38791 | 2020-12-30T01:49:07Z | 2020-12-30T13:35:00Z | 2020-12-30T13:35:00Z | 2020-12-30T15:42:28Z |
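The refactor above splits one parametrized test into fixtures plus focused tests for `Series.duplicated` / `Series.drop_duplicates` on categorical data. A minimal standalone sketch of the behavior under test (assuming a recent pandas; the variable names here are illustrative, not from the PR):

```python
import numpy as np
import pandas as pd

# Build a categorical Series with one duplicated value, mirroring "Test case 1".
cat_array = np.array([1, 2, 3, 4, 5])
input1 = np.array([1, 2, 3, 3])
ser = pd.Series(pd.Categorical(input1, categories=cat_array, ordered=False))

# keep="first" (default): only the second 3 is flagged as a duplicate.
dup_first = ser.duplicated()
# keep="last": the first 3 is flagged instead.
dup_last = ser.duplicated(keep="last")
# keep=False: both occurrences of 3 are flagged.
dup_none = ser.duplicated(keep=False)

print(list(dup_first))  # [False, False, False, True]
print(list(dup_last))   # [False, False, True, False]
print(list(dup_none))   # [False, False, True, True]

# drop_duplicates keeps the rows where duplicated() is False.
deduped = ser.drop_duplicates()
print(list(deduped))    # [1, 2, 3]
```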
Backport PR #38649 on branch 1.2.x (BUG: Fix regression for groupby.indices in case of unused categories) | diff --git a/doc/source/whatsnew/v1.2.1.rst b/doc/source/whatsnew/v1.2.1.rst
index 649b17e255f3d..374ac737298d3 100644
--- a/doc/source/whatsnew/v1.2.1.rst
+++ b/doc/source/whatsnew/v1.2.1.rst
@@ -17,7 +17,7 @@ Fixed regressions
- The deprecated attributes ``_AXIS_NAMES`` and ``_AXIS_NUMBERS`` of :class:`DataFrame` and :class:`Series` will no longer show up in ``dir`` or ``inspect.getmembers`` calls (:issue:`38740`)
- :meth:`to_csv` created corrupted zip files when there were more rows than ``chunksize`` (issue:`38714`)
- Bug in repr of float-like strings of an ``object`` dtype having trailing 0's truncated after the decimal (:issue:`38708`)
--
+- Fixed regression in :meth:`DataFrame.groupby()` with :class:`Categorical` grouping column not showing unused categories for ``grouped.indices`` (:issue:`38642`)
.. ---------------------------------------------------------------------------
diff --git a/pandas/core/groupby/grouper.py b/pandas/core/groupby/grouper.py
index 40ef7199406fe..17584ffc5b1bf 100644
--- a/pandas/core/groupby/grouper.py
+++ b/pandas/core/groupby/grouper.py
@@ -582,13 +582,8 @@ def indices(self):
if isinstance(self.grouper, ops.BaseGrouper):
return self.grouper.indices
- # Return a dictionary of {group label: [indices belonging to the group label]}
- # respecting whether sort was specified
- codes, uniques = algorithms.factorize(self.grouper, sort=self.sort)
- return {
- category: np.flatnonzero(codes == i)
- for i, category in enumerate(Index(uniques))
- }
+ values = Categorical(self.grouper)
+ return values._reverse_indexer()
@property
def codes(self) -> np.ndarray:
diff --git a/pandas/core/groupby/ops.py b/pandas/core/groupby/ops.py
index 7724e3930f7df..e2ba2768a885a 100644
--- a/pandas/core/groupby/ops.py
+++ b/pandas/core/groupby/ops.py
@@ -53,6 +53,7 @@
is_timedelta64_dtype,
needs_i8_conversion,
)
+from pandas.core.dtypes.generic import ABCCategoricalIndex
from pandas.core.dtypes.missing import isna, maybe_fill
import pandas.core.algorithms as algorithms
@@ -244,6 +245,11 @@ def apply(self, f: F, data: FrameOrSeries, axis: int = 0):
@cache_readonly
def indices(self):
""" dict {group name -> group indices} """
+ if len(self.groupings) == 1 and isinstance(
+ self.result_index, ABCCategoricalIndex
+ ):
+ # This shows unused categories in indices GH#38642
+ return self.groupings[0].indices
codes_list = [ping.codes for ping in self.groupings]
keys = [ping.group_index for ping in self.groupings]
return get_indexer_dict(codes_list, keys)
diff --git a/pandas/tests/groupby/test_categorical.py b/pandas/tests/groupby/test_categorical.py
index 8cf77ca6335f4..f0bc58cbf07bf 100644
--- a/pandas/tests/groupby/test_categorical.py
+++ b/pandas/tests/groupby/test_categorical.py
@@ -1678,3 +1678,23 @@ def test_df_groupby_first_on_categorical_col_grouped_on_2_categoricals(
df_grp = df.groupby(["a", "b"], observed=observed)
result = getattr(df_grp, func)()
tm.assert_frame_equal(result, expected)
+
+
+def test_groupby_categorical_indices_unused_categories():
+ # GH#38642
+ df = DataFrame(
+ {
+ "key": Categorical(["b", "b", "a"], categories=["a", "b", "c"]),
+ "col": range(3),
+ }
+ )
+ grouped = df.groupby("key", sort=False)
+ result = grouped.indices
+ expected = {
+ "b": np.array([0, 1], dtype="int64"),
+ "a": np.array([2], dtype="int64"),
+ "c": np.array([], dtype="int64"),
+ }
+ assert result.keys() == expected.keys()
+ for key in result.keys():
+ tm.assert_numpy_array_equal(result[key], expected[key])
| Backport PR #38649: BUG: Fix regression for groupby.indices in case of unused categories | https://api.github.com/repos/pandas-dev/pandas/pulls/38790 | 2020-12-29T23:16:53Z | 2020-12-30T11:11:20Z | 2020-12-30T11:11:20Z | 2020-12-30T11:11:20Z |
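The backported fix makes `groupby(...).indices` show unused categories again when grouping by a single categorical column. A small sketch of the fixed behavior (assumes pandas >= 1.2.1 with this patch; in much newer pandas releases the `observed` default may change whether `"c"` appears, hence the guard):

```python
import pandas as pd

df = pd.DataFrame(
    {
        "key": pd.Categorical(["b", "b", "a"], categories=["a", "b", "c"]),
        "col": range(3),
    }
)

indices = df.groupby("key", sort=False).indices

# The observed categories map to their row positions.
print(indices["b"])  # array([0, 1])
print(indices["a"])  # array([2])

# With the fix, the unused category "c" is present with an empty index array.
if "c" in indices:
    print(indices["c"])  # array([], dtype=int64)
```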
BUG: Fix precise_xstrtod segfault on long exponent | diff --git a/doc/source/whatsnew/v1.2.1.rst b/doc/source/whatsnew/v1.2.1.rst
index 4c444ea1020dd..3ecea674fd34c 100644
--- a/doc/source/whatsnew/v1.2.1.rst
+++ b/doc/source/whatsnew/v1.2.1.rst
@@ -20,6 +20,7 @@ Fixed regressions
- Bug in repr of float-like strings of an ``object`` dtype having trailing 0's truncated after the decimal (:issue:`38708`)
- Fixed regression in :meth:`DataFrame.groupby()` with :class:`Categorical` grouping column not showing unused categories for ``grouped.indices`` (:issue:`38642`)
- Fixed regression in :meth:`DataFrame.any` and :meth:`DataFrame.all` not returning a result for tz-aware ``datetime64`` columns (:issue:`38723`)
+- Bug in :meth:`read_csv` with ``float_precision="high"`` caused segfault or wrong parsing of long exponent strings (:issue:`38753`)
-
.. ---------------------------------------------------------------------------
diff --git a/pandas/_libs/src/parser/tokenizer.c b/pandas/_libs/src/parser/tokenizer.c
index 965fece370721..1b229171ea879 100644
--- a/pandas/_libs/src/parser/tokenizer.c
+++ b/pandas/_libs/src/parser/tokenizer.c
@@ -1726,7 +1726,7 @@ double precise_xstrtod(const char *str, char **endptr, char decimal,
// Process string of digits.
num_digits = 0;
n = 0;
- while (isdigit_ascii(*p)) {
+ while (num_digits < max_digits && isdigit_ascii(*p)) {
n = n * 10 + (*p - '0');
num_digits++;
p++;
@@ -1747,10 +1747,13 @@ double precise_xstrtod(const char *str, char **endptr, char decimal,
} else if (exponent > 0) {
number *= e[exponent];
} else if (exponent < -308) { // Subnormal
- if (exponent < -616) // Prevent invalid array access.
+ if (exponent < -616) { // Prevent invalid array access.
number = 0.;
- number /= e[-308 - exponent];
- number /= e[308];
+ } else {
+ number /= e[-308 - exponent];
+ number /= e[308];
+ }
+
} else {
number /= e[-exponent];
}
diff --git a/pandas/tests/io/parser/conftest.py b/pandas/tests/io/parser/conftest.py
index e8893b4c02238..ec098353960d7 100644
--- a/pandas/tests/io/parser/conftest.py
+++ b/pandas/tests/io/parser/conftest.py
@@ -97,6 +97,33 @@ def python_parser_only(request):
return request.param
+def _get_all_parser_float_precision_combinations():
+ """
+ Return all allowable parser and float precision
+ combinations and corresponding ids.
+ """
+ params = []
+ ids = []
+ for parser, parser_id in zip(_all_parsers, _all_parser_ids):
+ for precision in parser.float_precision_choices:
+ params.append((parser, precision))
+ ids.append(f"{parser_id}-{precision}")
+
+ return {"params": params, "ids": ids}
+
+
+@pytest.fixture(
+ params=_get_all_parser_float_precision_combinations()["params"],
+ ids=_get_all_parser_float_precision_combinations()["ids"],
+)
+def all_parsers_all_precisions(request):
+ """
+ Fixture for all allowable combinations of parser
+ and float precision
+ """
+ return request.param
+
+
_utf_values = [8, 16, 32]
_encoding_seps = ["", "-", "_"]
diff --git a/pandas/tests/io/parser/test_common.py b/pandas/tests/io/parser/test_common.py
index ce3557e098bfd..31f1581a6184b 100644
--- a/pandas/tests/io/parser/test_common.py
+++ b/pandas/tests/io/parser/test_common.py
@@ -15,6 +15,7 @@
import pytest
from pandas._libs.tslib import Timestamp
+from pandas.compat import is_platform_linux
from pandas.errors import DtypeWarning, EmptyDataError, ParserError
import pandas.util._test_decorators as td
@@ -1259,15 +1260,14 @@ def test_float_parser(all_parsers):
tm.assert_frame_equal(result, expected)
-def test_scientific_no_exponent(all_parsers):
+def test_scientific_no_exponent(all_parsers_all_precisions):
# see gh-12215
df = DataFrame.from_dict({"w": ["2e"], "x": ["3E"], "y": ["42e"], "z": ["632E"]})
data = df.to_csv(index=False)
- parser = all_parsers
+ parser, precision = all_parsers_all_precisions
- for precision in parser.float_precision_choices:
- df_roundtrip = parser.read_csv(StringIO(data), float_precision=precision)
- tm.assert_frame_equal(df_roundtrip, df)
+ df_roundtrip = parser.read_csv(StringIO(data), float_precision=precision)
+ tm.assert_frame_equal(df_roundtrip, df)
@pytest.mark.parametrize("conv", [None, np.int64, np.uint64])
@@ -1351,6 +1351,35 @@ def test_numeric_range_too_wide(all_parsers, exp_data):
tm.assert_frame_equal(result, expected)
+@pytest.mark.parametrize("neg_exp", [-617, -100000, -99999999999999999])
+def test_very_negative_exponent(all_parsers_all_precisions, neg_exp):
+ # GH#38753
+ parser, precision = all_parsers_all_precisions
+ data = f"data\n10E{neg_exp}"
+ result = parser.read_csv(StringIO(data), float_precision=precision)
+ expected = DataFrame({"data": [0.0]})
+ tm.assert_frame_equal(result, expected)
+
+
+@pytest.mark.parametrize("exp", [999999999999999999, -999999999999999999])
+def test_too_many_exponent_digits(all_parsers_all_precisions, exp, request):
+ # GH#38753
+ parser, precision = all_parsers_all_precisions
+ data = f"data\n10E{exp}"
+ result = parser.read_csv(StringIO(data), float_precision=precision)
+ if precision == "round_trip":
+ if exp == 999999999999999999 and is_platform_linux():
+ mark = pytest.mark.xfail(reason="GH38794, on Linux gives object result")
+ request.node.add_marker(mark)
+
+ value = np.inf if exp > 0 else 0.0
+ expected = DataFrame({"data": [value]})
+ else:
+ expected = DataFrame({"data": [f"10E{exp}"]})
+
+ tm.assert_frame_equal(result, expected)
+
+
@pytest.mark.parametrize("iterator", [True, False])
def test_empty_with_nrows_chunksize(all_parsers, iterator):
# see gh-9535
| - [x] closes #38753
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
@snowman2 can you check if this branch restores the behavior you had before 1.2? | https://api.github.com/repos/pandas-dev/pandas/pulls/38789 | 2020-12-29T21:00:46Z | 2020-12-30T18:41:00Z | 2020-12-30T18:40:59Z | 2020-12-30T18:54:26Z |
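A quick sketch of the parsing behavior the fix restores: with `float_precision="high"`, a value with a very negative exponent now underflows cleanly to `0.0` instead of reading past the exponent lookup array (assumes a pandas release containing this patch, i.e. >= 1.2.1):

```python
from io import StringIO

import pandas as pd

# 10E-617 is far below the smallest subnormal double, so the patched
# precise_xstrtod clamps it to 0.0 rather than indexing out of bounds.
data = "data\n10E-617"
result = pd.read_csv(StringIO(data), float_precision="high")

print(result["data"].iloc[0])  # 0.0
```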
Backport PR #38737 on branch 1.2.x (BUG/REG: RollingGroupby MultiIndex levels dropped) | diff --git a/doc/source/whatsnew/v1.2.1.rst b/doc/source/whatsnew/v1.2.1.rst
index f275cfa895269..c83a2ff7c1d22 100644
--- a/doc/source/whatsnew/v1.2.1.rst
+++ b/doc/source/whatsnew/v1.2.1.rst
@@ -16,6 +16,7 @@ Fixed regressions
~~~~~~~~~~~~~~~~~
- The deprecated attributes ``_AXIS_NAMES`` and ``_AXIS_NUMBERS`` of :class:`DataFrame` and :class:`Series` will no longer show up in ``dir`` or ``inspect.getmembers`` calls (:issue:`38740`)
- Fixed regression in :meth:`to_csv` that created corrupted zip files when there were more rows than ``chunksize`` (:issue:`38714`)
+- Fixed a regression in ``groupby().rolling()`` where :class:`MultiIndex` levels were dropped (:issue:`38523`)
- Fixed regression in repr of float-like strings of an ``object`` dtype having trailing 0's truncated after the decimal (:issue:`38708`)
- Fixed regression in :meth:`DataFrame.groupby()` with :class:`Categorical` grouping column not showing unused categories for ``grouped.indices`` (:issue:`38642`)
- Fixed regression in :meth:`DataFrame.any` and :meth:`DataFrame.all` not returning a result for tz-aware ``datetime64`` columns (:issue:`38723`)
diff --git a/pandas/core/shared_docs.py b/pandas/core/shared_docs.py
index 3aeb3b664b27f..4007ef50932fc 100644
--- a/pandas/core/shared_docs.py
+++ b/pandas/core/shared_docs.py
@@ -108,7 +108,7 @@
Note this does not influence the order of observations within each
group. Groupby preserves the order of rows within each group.
group_keys : bool, default True
- When calling apply, add group keys to index to identify pieces.
+ When calling ``groupby().apply()``, add group keys to index to identify pieces.
squeeze : bool, default False
Reduce the dimensionality of the return type if possible,
otherwise return a consistent type.
diff --git a/pandas/core/window/rolling.py b/pandas/core/window/rolling.py
index e6185f8ae0679..e50a907901dc7 100644
--- a/pandas/core/window/rolling.py
+++ b/pandas/core/window/rolling.py
@@ -767,28 +767,22 @@ def _apply(
numba_cache_key,
**kwargs,
)
- # Reconstruct the resulting MultiIndex from tuples
+ # Reconstruct the resulting MultiIndex
# 1st set of levels = group by labels
- # 2nd set of levels = original index
- # Ignore 2nd set of levels if a group by label include an index level
- result_index_names = [
- grouping.name for grouping in self._groupby.grouper._groupings
- ]
- grouped_object_index = None
+ # 2nd set of levels = original DataFrame/Series index
+ grouped_object_index = self.obj.index
+ grouped_index_name = [*grouped_object_index.names]
+ groupby_keys = [grouping.name for grouping in self._groupby.grouper._groupings]
+ result_index_names = groupby_keys + grouped_index_name
- column_keys = [
+ drop_columns = [
key
- for key in result_index_names
+ for key in groupby_keys
if key not in self.obj.index.names or key is None
]
-
- if len(column_keys) == len(result_index_names):
- grouped_object_index = self.obj.index
- grouped_index_name = [*grouped_object_index.names]
- result_index_names += grouped_index_name
- else:
- # Our result will have still kept the column in the result
- result = result.drop(columns=column_keys, errors="ignore")
+ if len(drop_columns) != len(groupby_keys):
+ # Our result will have kept groupby columns which should be dropped
+ result = result.drop(columns=drop_columns, errors="ignore")
codes = self._groupby.grouper.codes
levels = self._groupby.grouper.levels
diff --git a/pandas/tests/window/test_groupby.py b/pandas/tests/window/test_groupby.py
index b89fb35ac3a70..f915da3330ba7 100644
--- a/pandas/tests/window/test_groupby.py
+++ b/pandas/tests/window/test_groupby.py
@@ -556,23 +556,31 @@ def test_groupby_rolling_nans_in_index(self, rollings, key):
with pytest.raises(ValueError, match=f"{key} must be monotonic"):
df.groupby("c").rolling("60min", **rollings)
- def test_groupby_rolling_group_keys(self):
+ @pytest.mark.parametrize("group_keys", [True, False])
+ def test_groupby_rolling_group_keys(self, group_keys):
# GH 37641
+ # GH 38523: GH 37641 actually was not a bug.
+ # group_keys only applies to groupby.apply directly
arrays = [["val1", "val1", "val2"], ["val1", "val1", "val2"]]
index = MultiIndex.from_arrays(arrays, names=("idx1", "idx2"))
s = Series([1, 2, 3], index=index)
- result = s.groupby(["idx1", "idx2"], group_keys=False).rolling(1).mean()
+ result = s.groupby(["idx1", "idx2"], group_keys=group_keys).rolling(1).mean()
expected = Series(
[1.0, 2.0, 3.0],
index=MultiIndex.from_tuples(
- [("val1", "val1"), ("val1", "val1"), ("val2", "val2")],
- names=["idx1", "idx2"],
+ [
+ ("val1", "val1", "val1", "val1"),
+ ("val1", "val1", "val1", "val1"),
+ ("val2", "val2", "val2", "val2"),
+ ],
+ names=["idx1", "idx2", "idx1", "idx2"],
),
)
tm.assert_series_equal(result, expected)
def test_groupby_rolling_index_level_and_column_label(self):
+ # The groupby keys should not appear as a resulting column
arrays = [["val1", "val1", "val2"], ["val1", "val1", "val2"]]
index = MultiIndex.from_arrays(arrays, names=("idx1", "idx2"))
@@ -581,7 +589,12 @@ def test_groupby_rolling_index_level_and_column_label(self):
expected = DataFrame(
{"B": [0.0, 1.0, 2.0]},
index=MultiIndex.from_tuples(
- [("val1", 1), ("val1", 1), ("val2", 2)], names=["idx1", "A"]
+ [
+ ("val1", 1, "val1", "val1"),
+ ("val1", 1, "val1", "val1"),
+ ("val2", 2, "val2", "val2"),
+ ],
+ names=["idx1", "A", "idx1", "idx2"],
),
)
tm.assert_frame_equal(result, expected)
@@ -640,6 +653,30 @@ def test_groupby_rolling_resulting_multiindex(self):
)
tm.assert_index_equal(result.index, expected_index)
+ def test_groupby_level(self):
+ # GH 38523
+ arrays = [
+ ["Falcon", "Falcon", "Parrot", "Parrot"],
+ ["Captive", "Wild", "Captive", "Wild"],
+ ]
+ index = MultiIndex.from_arrays(arrays, names=("Animal", "Type"))
+ df = DataFrame({"Max Speed": [390.0, 350.0, 30.0, 20.0]}, index=index)
+ result = df.groupby(level=0)["Max Speed"].rolling(2).sum()
+ expected = Series(
+ [np.nan, 740.0, np.nan, 50.0],
+ index=MultiIndex.from_tuples(
+ [
+ ("Falcon", "Falcon", "Captive"),
+ ("Falcon", "Falcon", "Wild"),
+ ("Parrot", "Parrot", "Captive"),
+ ("Parrot", "Parrot", "Wild"),
+ ],
+ names=["Animal", "Animal", "Type"],
+ ),
+ name="Max Speed",
+ )
+ tm.assert_series_equal(result, expected)
+
class TestExpanding:
def setup_method(self):
| Backport PR #38737: BUG/REG: RollingGroupby MultiIndex levels dropped | https://api.github.com/repos/pandas-dev/pandas/pulls/38784 | 2020-12-29T18:56:53Z | 2020-12-31T14:38:28Z | 2020-12-31T14:38:28Z | 2020-12-31T14:38:28Z |
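The fix above keeps all original `MultiIndex` levels in a `groupby().rolling()` result instead of dropping them. A standalone sketch mirroring the new `test_groupby_level` case (the computed values are stable; the exact index layout assumes a pandas release containing this fix):

```python
import pandas as pd

arrays = [
    ["Falcon", "Falcon", "Parrot", "Parrot"],
    ["Captive", "Wild", "Captive", "Wild"],
]
index = pd.MultiIndex.from_arrays(arrays, names=("Animal", "Type"))
df = pd.DataFrame({"Max Speed": [390.0, 350.0, 30.0, 20.0]}, index=index)

result = df.groupby(level=0)["Max Speed"].rolling(2).sum()

# Rolling sums within each group; the first row of each group is NaN.
values = result.to_list()
print(values)  # [nan, 740.0, nan, 50.0]

# With the fix, the result index keeps the groupby key *and* both
# original index levels instead of dropping levels.
print(result.index.names)
```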
TYP: ExtensionIndex | diff --git a/pandas/core/arrays/_mixins.py b/pandas/core/arrays/_mixins.py
index b6938931e86af..4d4df96953196 100644
--- a/pandas/core/arrays/_mixins.py
+++ b/pandas/core/arrays/_mixins.py
@@ -325,7 +325,7 @@ def __repr__(self) -> str:
# ------------------------------------------------------------------------
# __array_function__ methods
- def putmask(self, mask, value):
+ def putmask(self: NDArrayBackedExtensionArrayT, mask: np.ndarray, value) -> None:
"""
Analogue to np.putmask(self, mask, value)
@@ -343,7 +343,9 @@ def putmask(self, mask, value):
np.putmask(self._ndarray, mask, value)
- def where(self, mask, value):
+ def where(
+ self: NDArrayBackedExtensionArrayT, mask: np.ndarray, value
+ ) -> NDArrayBackedExtensionArrayT:
"""
Analogue to np.where(mask, self, value)
@@ -361,3 +363,7 @@ def where(self, mask, value):
res_values = np.where(mask, self._ndarray, value)
return self._from_backing_data(res_values)
+
+ def delete(self: NDArrayBackedExtensionArrayT, loc) -> NDArrayBackedExtensionArrayT:
+ res_values = np.delete(self._ndarray, loc)
+ return self._from_backing_data(res_values)
diff --git a/pandas/core/indexes/extension.py b/pandas/core/indexes/extension.py
index 661adde44089c..f597175bf2ae2 100644
--- a/pandas/core/indexes/extension.py
+++ b/pandas/core/indexes/extension.py
@@ -333,7 +333,7 @@ class NDArrayBackedExtensionIndex(ExtensionIndex):
def _get_engine_target(self) -> np.ndarray:
return self._data._ndarray
- def delete(self, loc):
+ def delete(self: _T, loc) -> _T:
"""
Make new Index with passed location(-s) deleted
@@ -341,11 +341,10 @@ def delete(self, loc):
-------
new_index : Index
"""
- new_vals = np.delete(self._data._ndarray, loc)
- arr = self._data._from_backing_data(new_vals)
+ arr = self._data.delete(loc)
return type(self)._simple_new(arr, name=self.name)
- def insert(self, loc: int, item):
+ def insert(self: _T, loc: int, item) -> _T:
"""
Make new Index inserting new item at location. Follows
Python list.append semantics for negative values.
@@ -371,7 +370,7 @@ def insert(self, loc: int, item):
return type(self)._simple_new(new_arr, name=self.name)
@doc(Index.where)
- def where(self, cond, other=None):
+ def where(self: _T, cond: np.ndarray, other=None) -> _T:
res_values = self._data.where(cond, other)
return type(self)._simple_new(res_values, name=self.name)
| https://api.github.com/repos/pandas-dev/pandas/pulls/38783 | 2020-12-29T17:44:25Z | 2020-12-29T20:29:47Z | 2020-12-29T20:29:47Z | 2020-12-29T20:33:09Z | |
BUG: inspect.getmembers(Series) | diff --git a/doc/source/development/extending.rst b/doc/source/development/extending.rst
index d4219296f5795..9a8a95bec66ad 100644
--- a/doc/source/development/extending.rst
+++ b/doc/source/development/extending.rst
@@ -329,21 +329,11 @@ Each data structure has several *constructor properties* for returning a new
data structure as the result of an operation. By overriding these properties,
you can retain subclasses through ``pandas`` data manipulations.
-There are 3 constructor properties to be defined:
+There are 3 possible constructor properties to be defined on a subclass:
-* ``_constructor``: Used when a manipulation result has the same dimensions as the original.
-* ``_constructor_sliced``: Used when a manipulation result has one lower dimension(s) as the original, such as ``DataFrame`` single columns slicing.
-* ``_constructor_expanddim``: Used when a manipulation result has one higher dimension as the original, such as ``Series.to_frame()``.
-
-Following table shows how ``pandas`` data structures define constructor properties by default.
-
-=========================== ======================= =============
-Property Attributes ``Series`` ``DataFrame``
-=========================== ======================= =============
-``_constructor`` ``Series`` ``DataFrame``
-``_constructor_sliced`` ``NotImplementedError`` ``Series``
-``_constructor_expanddim`` ``DataFrame`` ``NotImplementedError``
-=========================== ======================= =============
+* ``DataFrame/Series._constructor``: Used when a manipulation result has the same dimension as the original.
+* ``DataFrame._constructor_sliced``: Used when a ``DataFrame`` (sub-)class manipulation result should be a ``Series`` (sub-)class.
+* ``Series._constructor_expanddim``: Used when a ``Series`` (sub-)class manipulation result should be a ``DataFrame`` (sub-)class, e.g. ``Series.to_frame()``.
Below example shows how to define ``SubclassedSeries`` and ``SubclassedDataFrame`` overriding constructor properties.
diff --git a/doc/source/whatsnew/v1.3.0.rst b/doc/source/whatsnew/v1.3.0.rst
index 17d8c79994dbe..a497f2167ca5d 100644
--- a/doc/source/whatsnew/v1.3.0.rst
+++ b/doc/source/whatsnew/v1.3.0.rst
@@ -219,7 +219,7 @@ See :ref:`install.dependencies` and :ref:`install.optional_dependencies` for mor
Other API changes
^^^^^^^^^^^^^^^^^
- Partially initialized :class:`CategoricalDtype` (i.e. those with ``categories=None`` objects will no longer compare as equal to fully initialized dtype objects.
--
+- Accessing ``_constructor_expanddim`` on a :class:`DataFrame` and ``_constructor_sliced`` on a :class:`Series` now raise an ``AttributeError``. Previously a ``NotImplementedError`` was raised (:issue:`38782`)
-
.. ---------------------------------------------------------------------------
@@ -445,6 +445,7 @@ Other
- Bug in :class:`Index` constructor sometimes silently ignorning a specified ``dtype`` (:issue:`38879`)
- Bug in constructing a :class:`Series` from a list and a :class:`PandasDtype` (:issue:`39357`)
- Bug in :class:`Styler` which caused CSS to duplicate on multiple renders. (:issue:`39395`)
+- ``inspect.getmembers(Series)`` no longer raises an ``AbstractMethodError`` (:issue:`38782`)
- :meth:`Index.where` behavior now mirrors :meth:`Index.putmask` behavior, i.e. ``index.where(mask, other)`` matches ``index.putmask(~mask, other)`` (:issue:`39412`)
- Bug in :func:`pandas.testing.assert_series_equal`, :func:`pandas.testing.assert_frame_equal`, :func:`pandas.testing.assert_index_equal` and :func:`pandas.testing.assert_extension_array_equal` incorrectly raising when an attribute has an unrecognized NA type (:issue:`39461`)
- Bug in :class:`Styler` where ``subset`` arg in methods raised an error for some valid multiindex slices (:issue:`33562`)
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 6357b8feb348b..7136e42c4f638 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -490,23 +490,14 @@ class DataFrame(NDFrame, OpsMixin):
_internal_names_set = {"columns", "index"} | NDFrame._internal_names_set
_typ = "dataframe"
_HANDLED_TYPES = (Series, Index, ExtensionArray, np.ndarray)
+ _accessors: Set[str] = {"sparse"}
+ _hidden_attrs: FrozenSet[str] = NDFrame._hidden_attrs | frozenset([])
@property
def _constructor(self) -> Type[DataFrame]:
return DataFrame
_constructor_sliced: Type[Series] = Series
- _hidden_attrs: FrozenSet[str] = NDFrame._hidden_attrs | frozenset([])
- _accessors: Set[str] = {"sparse"}
-
- @property
- def _constructor_expanddim(self):
- # GH#31549 raising NotImplementedError on a property causes trouble
- # for `inspect`
- def constructor(*args, **kwargs):
- raise NotImplementedError("Not supported for DataFrames!")
-
- return constructor
# ----------------------------------------------------------------------
# Constructors
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 96b35f1aaab9c..330c945ef19cb 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -375,22 +375,6 @@ def _constructor(self: FrameOrSeries) -> Type[FrameOrSeries]:
"""
raise AbstractMethodError(self)
- @property
- def _constructor_sliced(self):
- """
- Used when a manipulation result has one lower dimension(s) as the
- original, such as DataFrame single columns slicing.
- """
- raise AbstractMethodError(self)
-
- @property
- def _constructor_expanddim(self):
- """
- Used when a manipulation result has one higher dimension as the
- original, such as Series.to_frame()
- """
- raise NotImplementedError
-
# ----------------------------------------------------------------------
# Internals
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 8bd325beede65..cb161a1a717cc 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -403,6 +403,10 @@ def _constructor(self) -> Type[Series]:
@property
def _constructor_expanddim(self) -> Type[DataFrame]:
+ """
+ Used when a manipulation result has one higher dimension as the
+ original, such as Series.to_frame()
+ """
from pandas.core.frame import DataFrame
return DataFrame
diff --git a/pandas/tests/frame/test_api.py b/pandas/tests/frame/test_api.py
index 29a2d9c17202e..6b8284908213a 100644
--- a/pandas/tests/frame/test_api.py
+++ b/pandas/tests/frame/test_api.py
@@ -321,12 +321,14 @@ def test_set_flags(self, allows_duplicate_labels, frame_or_series):
result.iloc[key] = 10
assert obj.iloc[key] == 0
- def test_constructor_expanddim_lookup(self):
- # GH#33628 accessing _constructor_expanddim should not
- # raise NotImplementedError
+ def test_constructor_expanddim(self):
+ # GH#33628 accessing _constructor_expanddim should not raise NotImplementedError
+ # GH38782 pandas has no container higher than DataFrame (two-dim), so
+ # DataFrame._constructor_expand_dim, doesn't make sense, so is removed.
df = DataFrame()
- with pytest.raises(NotImplementedError, match="Not supported for DataFrames!"):
+ msg = "'DataFrame' object has no attribute '_constructor_expanddim'"
+ with pytest.raises(AttributeError, match=msg):
df._constructor_expanddim(np.arange(27).reshape(3, 3, 3))
@skip_if_no("jinja2")
diff --git a/pandas/tests/series/test_api.py b/pandas/tests/series/test_api.py
index 2f255d92d86e3..4dd91b942474a 100644
--- a/pandas/tests/series/test_api.py
+++ b/pandas/tests/series/test_api.py
@@ -1,8 +1,11 @@
+import inspect
import pydoc
import numpy as np
import pytest
+from pandas.util._test_decorators import skip_if_no
+
import pandas as pd
from pandas import DataFrame, Index, Series, date_range
import pandas._testing as tm
@@ -167,3 +170,10 @@ def test_attrs(self):
s.attrs["version"] = 1
result = s + 1
assert result.attrs == {"version": 1}
+
+ @skip_if_no("jinja2")
+ def test_inspect_getmembers(self):
+ # GH38782
+ ser = Series()
+ with tm.assert_produces_warning(None):
+ inspect.getmembers(ser)
| Make `inspect.getmembers(Series)` work; previously it raised an `AbstractMethodError`.
xref: #38740
| https://api.github.com/repos/pandas-dev/pandas/pulls/38782 | 2020-12-29T16:33:01Z | 2021-02-07T16:44:02Z | 2021-02-07T16:44:02Z | 2021-02-08T17:23:38Z |
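With `_constructor_sliced` / `_constructor_expanddim` no longer raising from a property, generic introspection tools work on pandas objects. A minimal sketch (assumes a pandas release containing this change):

```python
import inspect

import pandas as pd

# Before this change, accessing the removed constructor properties raised
# an AbstractMethodError, which broke inspect.getmembers on Series objects.
ser = pd.Series(dtype="float64")
members = inspect.getmembers(ser)

# Introspection now returns the full member list without raising.
print(any(name == "mean" for name, _ in members))  # True
```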
Center rolling window for time offset | diff --git a/doc/source/user_guide/window.rst b/doc/source/user_guide/window.rst
index be9c04ae5d4f3..5efb3f40f5018 100644
--- a/doc/source/user_guide/window.rst
+++ b/doc/source/user_guide/window.rst
@@ -157,6 +157,18 @@ By default the labels are set to the right edge of the window, but a
s.rolling(window=5, center=True).mean()
+This can also be applied to datetime-like indices.
+.. versionadded:: 1.3
+.. ipython:: python
+
+ df = pd.DataFrame(
+ {"A": [0, 1, 2, 3, 4]}, index=pd.date_range("2020", periods=5, freq="1D")
+ )
+ df
+ df.rolling("2D", center=False).mean()
+ df.rolling("2D", center=True).mean()
+
+
.. _window.endpoints:
Rolling window endpoints
diff --git a/doc/source/whatsnew/v1.3.0.rst b/doc/source/whatsnew/v1.3.0.rst
index 63902b53ea36d..44266c55c70eb 100644
--- a/doc/source/whatsnew/v1.3.0.rst
+++ b/doc/source/whatsnew/v1.3.0.rst
@@ -134,6 +134,23 @@ a copy will no longer be made (:issue:`32960`)
The default behavior when not passing ``copy`` will remain unchanged, i.e.
a copy will be made.
+Centered Datetime-Like Rolling Windows
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+When performing rolling calculations on :class:`DataFrame` and :class:`Series`
+objects with a datetime-like index, a centered datetime-like window can now be
+used (:issue:`38780`).
+For example:
+
+.. ipython:: python
+
+ df = pd.DataFrame(
+ {"A": [0, 1, 2, 3, 4]}, index=pd.date_range("2020", periods=5, freq="1D")
+ )
+ df
+ df.rolling("2D", center=True).mean()
+
+
.. _whatsnew_130.enhancements.other:
Other enhancements
diff --git a/pandas/_libs/window/indexers.pyx b/pandas/_libs/window/indexers.pyx
index 67b196b7cb179..5e2b137db64a6 100644
--- a/pandas/_libs/window/indexers.pyx
+++ b/pandas/_libs/window/indexers.pyx
@@ -14,7 +14,7 @@ def calculate_variable_window_bounds(
int64_t num_values,
int64_t window_size,
object min_periods, # unused but here to match get_window_bounds signature
- object center, # unused but here to match get_window_bounds signature
+ bint center,
object closed,
const int64_t[:] index
):
@@ -32,8 +32,8 @@ def calculate_variable_window_bounds(
min_periods : object
ignored, exists for compatibility
- center : object
- ignored, exists for compatibility
+ center : bint
+ center the rolling window on the current observation
closed : str
string of side of the window that should be closed
@@ -46,7 +46,8 @@ def calculate_variable_window_bounds(
(ndarray[int64], ndarray[int64])
"""
cdef:
- bint left_closed = False, right_closed = False
+ bint left_closed = False
+ bint right_closed = False
ndarray[int64_t, ndim=1] start, end
int64_t start_bound, end_bound, index_growth_sign = 1
Py_ssize_t i, j
@@ -77,14 +78,27 @@ def calculate_variable_window_bounds(
# right endpoint is open
else:
end[0] = 0
+ if center:
+ for j in range(0, num_values + 1):
+ if (index[j] == index[0] + index_growth_sign * window_size / 2 and
+ right_closed):
+ end[0] = j + 1
+ break
+ elif index[j] >= index[0] + index_growth_sign * window_size / 2:
+ end[0] = j
+ break
with nogil:
# start is start of slice interval (including)
# end is end of slice interval (not including)
for i in range(1, num_values):
- end_bound = index[i]
- start_bound = index[i] - index_growth_sign * window_size
+ if center:
+ end_bound = index[i] + index_growth_sign * window_size / 2
+ start_bound = index[i] - index_growth_sign * window_size / 2
+ else:
+ end_bound = index[i]
+ start_bound = index[i] - index_growth_sign * window_size
# left endpoint is closed
if left_closed:
@@ -98,14 +112,27 @@ def calculate_variable_window_bounds(
start[i] = j
break
+ # for centered window advance the end bound until we are
+ # outside the constraint
+ if center:
+ for j in range(end[i - 1], num_values + 1):
+ if j == num_values:
+ end[i] = j
+ elif ((index[j] - end_bound) * index_growth_sign == 0 and
+ right_closed):
+ end[i] = j + 1
+ break
+ elif (index[j] - end_bound) * index_growth_sign >= 0:
+ end[i] = j
+ break
# end bound is previous end
# or current index
- if (index[end[i - 1]] - end_bound) * index_growth_sign <= 0:
+ elif (index[end[i - 1]] - end_bound) * index_growth_sign <= 0:
end[i] = i + 1
else:
end[i] = end[i - 1]
# right endpoint is open
- if not right_closed:
+ if not right_closed and not center:
end[i] -= 1
return start, end
diff --git a/pandas/core/window/rolling.py b/pandas/core/window/rolling.py
index b482934dd25d2..f11544cb62e97 100644
--- a/pandas/core/window/rolling.py
+++ b/pandas/core/window/rolling.py
@@ -378,7 +378,9 @@ def _get_window_indexer(self) -> BaseIndexer:
return self.window
if self._win_freq_i8 is not None:
return VariableWindowIndexer(
- index_array=self._index_array, window_size=self._win_freq_i8
+ index_array=self._index_array,
+ window_size=self._win_freq_i8,
+ center=self.center,
)
return FixedWindowIndexer(window_size=self.window)
@@ -1470,13 +1472,6 @@ def validate(self):
self._validate_monotonic()
- # we don't allow center
- if self.center:
- raise NotImplementedError(
- "center is not implemented for "
- "datetimelike and offset based windows"
- )
-
# this will raise ValueError on non-fixed freqs
try:
freq = to_offset(self.window)
diff --git a/pandas/tests/window/test_rolling.py b/pandas/tests/window/test_rolling.py
index cfd09d0842418..9abae632e5da3 100644
--- a/pandas/tests/window/test_rolling.py
+++ b/pandas/tests/window/test_rolling.py
@@ -80,7 +80,8 @@ def test_constructor_with_timedelta_window(window):
# GH 15440
n = 10
df = DataFrame(
- {"value": np.arange(n)}, index=date_range("2015-12-24", periods=n, freq="D")
+ {"value": np.arange(n)},
+ index=date_range("2015-12-24", periods=n, freq="D"),
)
expected_data = np.append([0.0, 1.0], np.arange(3.0, 27.0, 3))
@@ -99,7 +100,8 @@ def test_constructor_timedelta_window_and_minperiods(window, raw):
# GH 15305
n = 10
df = DataFrame(
- {"value": np.arange(n)}, index=date_range("2017-08-08", periods=n, freq="D")
+ {"value": np.arange(n)},
+ index=date_range("2017-08-08", periods=n, freq="D"),
)
expected = DataFrame(
{"value": np.append([np.NaN, 1.0], np.arange(3.0, 27.0, 3))},
@@ -130,15 +132,105 @@ def test_closed_fixed(closed, arithmetic_win_operators):
df_fixed = DataFrame({"A": [0, 1, 2, 3, 4]})
df_time = DataFrame({"A": [0, 1, 2, 3, 4]}, index=date_range("2020", periods=5))
- result = getattr(df_fixed.rolling(2, closed=closed, min_periods=1), func_name)()
- expected = getattr(df_time.rolling("2D", closed=closed), func_name)().reset_index(
- drop=True
- )
+ result = getattr(
+ df_fixed.rolling(2, closed=closed, min_periods=1),
+ func_name,
+ )()
+ expected = getattr(
+ df_time.rolling("2D", closed=closed, min_periods=1),
+ func_name,
+ )().reset_index(drop=True)
tm.assert_frame_equal(result, expected)
-def test_closed_fixed_binary_col():
+@pytest.mark.parametrize(
+ "closed, window_selections",
+ [
+ (
+ "both",
+ [
+ [True, True, False, False, False],
+ [True, True, True, False, False],
+ [False, True, True, True, False],
+ [False, False, True, True, True],
+ [False, False, False, True, True],
+ ],
+ ),
+ (
+ "left",
+ [
+ [True, False, False, False, False],
+ [True, True, False, False, False],
+ [False, True, True, False, False],
+ [False, False, True, True, False],
+ [False, False, False, True, True],
+ ],
+ ),
+ (
+ "right",
+ [
+ [True, True, False, False, False],
+ [False, True, True, False, False],
+ [False, False, True, True, False],
+ [False, False, False, True, True],
+ [False, False, False, False, True],
+ ],
+ ),
+ (
+ "neither",
+ [
+ [True, False, False, False, False],
+ [False, True, False, False, False],
+ [False, False, True, False, False],
+ [False, False, False, True, False],
+ [False, False, False, False, True],
+ ],
+ ),
+ ],
+)
+def test_datetimelike_centered_selections(
+ closed, window_selections, arithmetic_win_operators
+):
+ # GH 34315
+ func_name = arithmetic_win_operators
+ df_time = DataFrame(
+ {"A": [0.0, 1.0, 2.0, 3.0, 4.0]}, index=date_range("2020", periods=5)
+ )
+
+ expected = DataFrame(
+ {"A": [getattr(df_time["A"].iloc[s], func_name)() for s in window_selections]},
+ index=date_range("2020", periods=5),
+ )
+
+ if func_name == "sem":
+ kwargs = {"ddof": 0}
+ else:
+ kwargs = {}
+
+ result = getattr(
+ df_time.rolling("2D", closed=closed, min_periods=1, center=True),
+ func_name,
+ )(**kwargs)
+
+ tm.assert_frame_equal(result, expected, check_dtype=False)
+
+
+def test_even_number_window_alignment():
+ # see discussion in GH 38780
+ s = Series(range(3), index=date_range(start="2020-01-01", freq="D", periods=3))
+
+ # behavior of index- and datetime-based windows differs here!
+ # s.rolling(window=2, min_periods=1, center=True).mean()
+
+ result = s.rolling(window="2D", min_periods=1, center=True).mean()
+
+ expected = Series([0.5, 1.5, 2], index=s.index)
+
+ tm.assert_series_equal(result, expected)
+
+
+def test_closed_fixed_binary_col(center):
# GH 34315
data = [0, 1, 1, 0, 0, 1, 0, 1]
df = DataFrame(
@@ -146,13 +238,19 @@ def test_closed_fixed_binary_col():
index=date_range(start="2020-01-01", freq="min", periods=len(data)),
)
- rolling = df.rolling(window=len(df), closed="left", min_periods=1)
- result = rolling.mean()
+ if center:
+ expected_data = [2 / 3, 0.5, 0.4, 0.5, 0.428571, 0.5, 0.571429, 0.5]
+ else:
+ expected_data = [np.nan, 0, 0.5, 2 / 3, 0.5, 0.4, 0.5, 0.428571]
+
expected = DataFrame(
- [np.nan, 0, 0.5, 2 / 3, 0.5, 0.4, 0.5, 0.428571],
+ expected_data,
columns=["binary_col"],
- index=date_range(start="2020-01-01", freq="min", periods=len(data)),
+ index=date_range(start="2020-01-01", freq="min", periods=len(expected_data)),
)
+
+ rolling = df.rolling(window=len(df), closed="left", min_periods=1, center=center)
+ result = rolling.mean()
tm.assert_frame_equal(result, expected)
@@ -180,7 +278,8 @@ def test_closed_one_entry(func):
def test_closed_one_entry_groupby(func):
# GH24718
ser = DataFrame(
- data={"A": [1, 1, 2], "B": [3, 2, 1]}, index=date_range("2000", periods=3)
+ data={"A": [1, 1, 2], "B": [3, 2, 1]},
+ index=date_range("2000", periods=3),
)
result = getattr(
ser.groupby("A", sort=False)["B"].rolling("10D", closed="left"), func
@@ -207,7 +306,8 @@ def test_closed_one_entry_groupby(func):
def test_closed_min_max_datetime(input_dtype, func, closed, expected):
# see gh-21704
ser = Series(
- data=np.arange(10).astype(input_dtype), index=date_range("2000", periods=10)
+ data=np.arange(10).astype(input_dtype),
+ index=date_range("2000", periods=10),
)
result = getattr(ser.rolling("3D", closed=closed), func)()
@@ -397,7 +497,96 @@ def test_rolling_datetime(axis_frame, tz_naive_fixture):
tm.assert_frame_equal(result, expected)
-def test_rolling_window_as_string(using_array_manager):
+@pytest.mark.parametrize(
+ "center, expected_data",
+ [
+ (
+ True,
+ (
+ [88.0] * 7
+ + [97.0] * 9
+ + [98.0]
+ + [99.0] * 21
+ + [95.0] * 16
+ + [93.0] * 5
+ + [89.0] * 5
+ + [96.0] * 21
+ + [94.0] * 14
+ + [90.0] * 13
+ + [88.0] * 2
+ + [90.0] * 9
+ + [96.0] * 21
+ + [95.0] * 6
+ + [91.0]
+ + [87.0] * 6
+ + [92.0] * 21
+ + [83.0] * 2
+ + [86.0] * 10
+ + [87.0] * 5
+ + [98.0] * 21
+ + [97.0] * 14
+ + [93.0] * 7
+ + [87.0] * 4
+ + [86.0] * 4
+ + [95.0] * 21
+ + [85.0] * 14
+ + [83.0] * 2
+ + [76.0] * 5
+ + [81.0] * 2
+ + [98.0] * 21
+ + [95.0] * 14
+ + [91.0] * 7
+ + [86.0]
+ + [93.0] * 3
+ + [95.0] * 29
+ + [77.0] * 2
+ ),
+ ),
+ (
+ False,
+ (
+ [np.nan] * 2
+ + [88.0] * 16
+ + [97.0] * 9
+ + [98.0]
+ + [99.0] * 21
+ + [95.0] * 16
+ + [93.0] * 5
+ + [89.0] * 5
+ + [96.0] * 21
+ + [94.0] * 14
+ + [90.0] * 13
+ + [88.0] * 2
+ + [90.0] * 9
+ + [96.0] * 21
+ + [95.0] * 6
+ + [91.0]
+ + [87.0] * 6
+ + [92.0] * 21
+ + [83.0] * 2
+ + [86.0] * 10
+ + [87.0] * 5
+ + [98.0] * 21
+ + [97.0] * 14
+ + [93.0] * 7
+ + [87.0] * 4
+ + [86.0] * 4
+ + [95.0] * 21
+ + [85.0] * 14
+ + [83.0] * 2
+ + [76.0] * 5
+ + [81.0] * 2
+ + [98.0] * 21
+ + [95.0] * 14
+ + [91.0] * 7
+ + [86.0]
+ + [93.0] * 3
+ + [95.0] * 20
+ ),
+ ),
+ ],
+)
+def test_rolling_window_as_string(center, expected_data, using_array_manager):
# see gh-22590
date_today = datetime.now()
days = date_range(date_today, date_today + timedelta(365), freq="D")
@@ -408,53 +597,15 @@ def test_rolling_window_as_string(using_array_manager):
df = DataFrame({"DateCol": days, "metric": data})
df.set_index("DateCol", inplace=True)
- result = df.rolling(window="21D", min_periods=2, closed="left")["metric"].agg("max")
-
- expData = (
- [np.nan] * 2
- + [88.0] * 16
- + [97.0] * 9
- + [98.0]
- + [99.0] * 21
- + [95.0] * 16
- + [93.0] * 5
- + [89.0] * 5
- + [96.0] * 21
- + [94.0] * 14
- + [90.0] * 13
- + [88.0] * 2
- + [90.0] * 9
- + [96.0] * 21
- + [95.0] * 6
- + [91.0]
- + [87.0] * 6
- + [92.0] * 21
- + [83.0] * 2
- + [86.0] * 10
- + [87.0] * 5
- + [98.0] * 21
- + [97.0] * 14
- + [93.0] * 7
- + [87.0] * 4
- + [86.0] * 4
- + [95.0] * 21
- + [85.0] * 14
- + [83.0] * 2
- + [76.0] * 5
- + [81.0] * 2
- + [98.0] * 21
- + [95.0] * 14
- + [91.0] * 7
- + [86.0]
- + [93.0] * 3
- + [95.0] * 20
- )
+ result = df.rolling(window="21D", min_periods=2, closed="left", center=center)[
+ "metric"
+ ].agg("max")
index = days.rename("DateCol")
if not using_array_manager:
# INFO(ArrayManager) preserves the frequence of the index
index = index._with_freq(None)
- expected = Series(expData, index=index, name="metric")
+ expected = Series(expected_data, index=index, name="metric")
tm.assert_series_equal(result, expected)
@@ -622,8 +773,18 @@ def test_iter_rolling_on_dataframe(expected, window):
3,
1,
),
- (Series([1, 2, 3]), [([1], [0]), ([1, 2], [0, 1]), ([2, 3], [1, 2])], 2, 1),
- (Series([1, 2, 3]), [([1], [0]), ([1, 2], [0, 1]), ([2, 3], [1, 2])], 2, 2),
+ (
+ Series([1, 2, 3]),
+ [([1], [0]), ([1, 2], [0, 1]), ([2, 3], [1, 2])],
+ 2,
+ 1,
+ ),
+ (
+ Series([1, 2, 3]),
+ [([1], [0]), ([1, 2], [0, 1]), ([2, 3], [1, 2])],
+ 2,
+ 2,
+ ),
(Series([1, 2, 3]), [([1], [0]), ([2], [1]), ([3], [2])], 1, 0),
(Series([1, 2, 3]), [([1], [0]), ([2], [1]), ([3], [2])], 1, 1),
(Series([1, 2]), [([1], [0]), ([1, 2], [0, 1])], 2, 0),
@@ -795,7 +956,18 @@ def test_rolling_numerical_too_large_numbers():
ds[2] = -9e33
result = ds.rolling(5).mean()
expected = Series(
- [np.nan, np.nan, np.nan, np.nan, -1.8e33, -1.8e33, -1.8e33, 5.0, 6.0, 7.0],
+ [
+ np.nan,
+ np.nan,
+ np.nan,
+ np.nan,
+ -1.8e33,
+ -1.8e33,
+ -1.8e33,
+ 5.0,
+ 6.0,
+ 7.0,
+ ],
index=dates,
)
tm.assert_series_equal(result, expected)
@@ -811,7 +983,8 @@ def test_rolling_mixed_dtypes_axis_1(func, value):
df["c"] = 1.0
result = getattr(df.rolling(window=2, min_periods=1, axis=1), func)()
expected = DataFrame(
- {"a": [1.0, 1.0], "b": [value, value], "c": [value, value]}, index=[1, 2]
+ {"a": [1.0, 1.0], "b": [value, value], "c": [value, value]},
+ index=[1, 2],
)
tm.assert_frame_equal(result, expected)
@@ -895,7 +1068,7 @@ def test_rolling_sem(frame_or_series):
result = obj.rolling(2, min_periods=1).sem()
if isinstance(result, DataFrame):
result = Series(result[0].values)
- expected = Series([np.nan] + [0.707107] * 2)
+ expected = Series([np.nan] + [0.7071067811865476] * 2)
tm.assert_series_equal(result, expected)
@@ -1009,8 +1182,14 @@ def test_rolling_decreasing_indices(method):
318.0,
],
),
- ("mean", [float("nan"), 7.5, float("nan"), 21.5, 6.0, 9.166667, 13.0, 17.5]),
- ("sum", [float("nan"), 30.0, float("nan"), 86.0, 30.0, 55.0, 91.0, 140.0]),
+ (
+ "mean",
+ [float("nan"), 7.5, float("nan"), 21.5, 6.0, 9.166667, 13.0, 17.5],
+ ),
+ (
+ "sum",
+ [float("nan"), 30.0, float("nan"), 86.0, 30.0, 55.0, 91.0, 140.0],
+ ),
(
"skew",
[
@@ -1074,7 +1253,10 @@ def get_window_bounds(self, num_values, min_periods, center, closed):
@pytest.mark.parametrize(
("index", "window"),
- [([0, 1, 2, 3, 4], 2), (date_range("2001-01-01", freq="D", periods=5), "2D")],
+ [
+ ([0, 1, 2, 3, 4], 2),
+ (date_range("2001-01-01", freq="D", periods=5), "2D"),
+ ],
)
def test_rolling_corr_timedelta_index(index, window):
# GH: 31286
diff --git a/pandas/tests/window/test_timeseries_window.py b/pandas/tests/window/test_timeseries_window.py
index 0782ef2f4ce7b..7cd319480083b 100644
--- a/pandas/tests/window/test_timeseries_window.py
+++ b/pandas/tests/window/test_timeseries_window.py
@@ -83,12 +83,6 @@ def test_invalid_minp(self, minp):
with pytest.raises(ValueError, match=msg):
self.regular.rolling(window="1D", min_periods=minp)
- def test_invalid_center_datetimelike(self):
- # center is not implemented
- msg = "center is not implemented for datetimelike and offset based windows"
- with pytest.raises(NotImplementedError, match=msg):
- self.regular.rolling(window="1D", center=True)
-
def test_on(self):
df = self.regular
| - [x] closes #20012
- [x] tests added / passed
- [x] passes `black pandas`
- [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
Added center functionality for `VariableWindowIndexer`. Note: I am unsure whether the NotImplementedError in lines 1966-1969 of rolling.py still correctly raises for offset-based windows.
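To make the new Cython logic easier to follow, here is a plain-Python sketch of how centered variable-window bounds can be computed (the function name and endpoint handling are simplified assumptions, not the exact pandas implementation): each point `i` sees the half-window interval `[index[i] - window/2, index[i] + window/2]`, open on the left and closed on the right by default.

```python
def centered_bounds(index, window, left_closed=False, right_closed=True):
    """Sketch: point i covers [index[i] - window/2, index[i] + window/2]."""
    n = len(index)
    start, end = [0] * n, [0] * n
    for i in range(n):
        lo = index[i] - window / 2
        hi = index[i] + window / 2
        j = 0
        # default closed="right": left endpoint is open, so skip index == lo
        while j < n and (index[j] < lo or (not left_closed and index[j] == lo)):
            j += 1
        start[i] = j
        k = j
        # right endpoint is included only when right_closed
        while k < n and (index[k] < hi or (right_closed and index[k] == hi)):
            k += 1
        end[i] = k
    return start, end

# three daily points (in "days") with a centered 2-day window, as in the
# even-number-window test in this PR
starts, ends = centered_bounds([0, 1, 2], 2)
values = [0, 1, 2]
means = [sum(values[s:e]) / (e - s) for s, e in zip(starts, ends)]
print(means)  # → [0.5, 1.5, 2.0]
```

The resulting means match the `Series([0.5, 1.5, 2])` expected by `test_even_number_window_alignment` above.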
Finalizes previous PR https://github.com/pandas-dev/pandas/pull/36097 | https://api.github.com/repos/pandas-dev/pandas/pulls/38780 | 2020-12-29T14:59:14Z | 2021-04-09T16:28:12Z | 2021-04-09T16:28:12Z | 2021-04-13T12:11:23Z |
Backport PR #38766 on branch 1.2.x (BLD: fix build failure py3.9.1 on OSX) | diff --git a/doc/source/whatsnew/v1.2.1.rst b/doc/source/whatsnew/v1.2.1.rst
index 37562be17f02e..649b17e255f3d 100644
--- a/doc/source/whatsnew/v1.2.1.rst
+++ b/doc/source/whatsnew/v1.2.1.rst
@@ -41,7 +41,7 @@ I/O
Other
~~~~~
--
+- Fixed build failure on MacOS 11 in Python 3.9.1 (:issue:`38766`)
-
.. ---------------------------------------------------------------------------
diff --git a/setup.py b/setup.py
index a25fe95e025b3..f9c4a1158fee0 100755
--- a/setup.py
+++ b/setup.py
@@ -435,7 +435,7 @@ def run(self):
"MACOSX_DEPLOYMENT_TARGET", current_system
)
if (
- LooseVersion(python_target) < "10.9"
+ LooseVersion(str(python_target)) < "10.9"
and LooseVersion(current_system) >= "10.9"
):
os.environ["MACOSX_DEPLOYMENT_TARGET"] = "10.9"
| Backport PR #38766: BLD: fix build failure py3.9.1 on OSX | https://api.github.com/repos/pandas-dev/pandas/pulls/38777 | 2020-12-29T14:06:02Z | 2020-12-29T15:35:02Z | 2020-12-29T15:35:02Z | 2020-12-29T15:35:02Z |
CI,STYLE: add spell check? | diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml
index 1fd95b8103a41..d78c2bacc4e44 100644
--- a/.pre-commit-config.yaml
+++ b/.pre-commit-config.yaml
@@ -1,4 +1,4 @@
-minimum_pre_commit_version: '2.9.2'
+minimum_pre_commit_version: 2.9.2
repos:
- repo: https://github.com/python/black
rev: 20.8b1
@@ -168,3 +168,9 @@ repos:
exclude: ^LICENSES/|\.(html|csv|txt|svg|py)$
- id: trailing-whitespace
exclude: \.(html|svg)$
+- repo: https://github.com/codespell-project/codespell
+ rev: v2.0.0
+ hooks:
+ - id: codespell
+ types_or: [python, rst, markdown]
+ files: ^pandas/core/
diff --git a/pandas/core/arrays/_mixins.py b/pandas/core/arrays/_mixins.py
index b6938931e86af..2f292c8db025e 100644
--- a/pandas/core/arrays/_mixins.py
+++ b/pandas/core/arrays/_mixins.py
@@ -275,7 +275,7 @@ def fillna(
if method is not None:
func = missing.get_fill_func(method)
new_values = func(self._ndarray.copy(), limit=limit, mask=mask)
- # TODO: PandasArray didnt used to copy, need tests for this
+ # TODO: PandasArray didn't used to copy, need tests for this
new_values = self._from_backing_data(new_values)
else:
# fill with value
diff --git a/pandas/core/arrays/datetimelike.py b/pandas/core/arrays/datetimelike.py
index b2050bf54cad6..088c2fd89c244 100644
--- a/pandas/core/arrays/datetimelike.py
+++ b/pandas/core/arrays/datetimelike.py
@@ -741,7 +741,7 @@ def isin(self, values) -> np.ndarray:
return np.zeros(self.shape, dtype=bool)
if not isinstance(values, type(self)):
- inferrable = [
+ inferable = [
"timedelta",
"timedelta64",
"datetime",
@@ -751,7 +751,7 @@ def isin(self, values) -> np.ndarray:
]
if values.dtype == object:
inferred = lib.infer_dtype(values, skipna=False)
- if inferred not in inferrable:
+ if inferred not in inferable:
if inferred == "string":
pass
diff --git a/pandas/core/arrays/floating.py b/pandas/core/arrays/floating.py
index 1e9024e32c5b7..1ac23d7893fbf 100644
--- a/pandas/core/arrays/floating.py
+++ b/pandas/core/arrays/floating.py
@@ -175,7 +175,7 @@ class FloatingArray(NumericArray):
.. warning::
FloatingArray is currently experimental, and its API or internal
- implementation may change without warning. Expecially the behaviour
+ implementation may change without warning. Especially the behaviour
regarding NaN (distinct from NA missing values) is subject to change.
We represent a FloatingArray with 2 numpy arrays:
diff --git a/pandas/core/arrays/sparse/array.py b/pandas/core/arrays/sparse/array.py
index 575ae7531de2c..fa648157d7678 100644
--- a/pandas/core/arrays/sparse/array.py
+++ b/pandas/core/arrays/sparse/array.py
@@ -975,7 +975,7 @@ def _concat_same_type(
else:
# when concatenating block indices, we don't claim that you'll
- # get an identical index as concating the values and then
+ # get an identical index as concatenating the values and then
# creating a new index. We don't want to spend the time trying
# to merge blocks across arrays in `to_concat`, so the resulting
# BlockIndex may have more blocks.
diff --git a/pandas/core/arrays/sparse/dtype.py b/pandas/core/arrays/sparse/dtype.py
index c0662911d40da..14bdd063fa41a 100644
--- a/pandas/core/arrays/sparse/dtype.py
+++ b/pandas/core/arrays/sparse/dtype.py
@@ -371,7 +371,7 @@ def _get_common_dtype(self, dtypes: List[DtypeObj]) -> Optional[DtypeObj]:
fill_value = fill_values[0]
# np.nan isn't a singleton, so we may end up with multiple
- # NaNs here, so we ignore tha all NA case too.
+ # NaNs here, so we ignore the all NA case too.
if not (len(set(fill_values)) == 1 or isna(fill_values).all()):
warnings.warn(
"Concatenating sparse arrays with multiple fill "
diff --git a/pandas/core/arrays/string_arrow.py b/pandas/core/arrays/string_arrow.py
index 184fbc050036b..3a351bf497662 100644
--- a/pandas/core/arrays/string_arrow.py
+++ b/pandas/core/arrays/string_arrow.py
@@ -467,7 +467,7 @@ def __setitem__(self, key: Union[int, np.ndarray], value: Any) -> None:
elif not isinstance(value, str):
raise ValueError("Scalar must be NA or str")
- # Slice data and insert inbetween
+ # Slice data and insert in-between
new_data = [
*self._data[0:key].chunks,
pa.array([value], type=pa.string()),
@@ -616,7 +616,7 @@ def value_counts(self, dropna: bool = True) -> Series:
# Index cannot hold ExtensionArrays yet
index = Index(type(self)(vc.field(0)).astype(object))
- # No missings, so we can adhere to the interface and return a numpy array.
+ # No missing values so we can adhere to the interface and return a numpy array.
counts = np.array(vc.field(1))
if dropna and self._data.null_count > 0:
diff --git a/pandas/core/computation/parsing.py b/pandas/core/computation/parsing.py
index a1bebc92046ae..ef79c2b77e4e5 100644
--- a/pandas/core/computation/parsing.py
+++ b/pandas/core/computation/parsing.py
@@ -35,7 +35,7 @@ def create_valid_python_identifier(name: str) -> str:
# Create a dict with the special characters and their replacement string.
# EXACT_TOKEN_TYPES contains these special characters
- # toke.tok_name contains a readable description of the replacement string.
+ # token.tok_name contains a readable description of the replacement string.
special_characters_replacements = {
char: f"_{token.tok_name[tokval]}_"
# The ignore here is because of a bug in mypy that is resolved in 0.740
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index b34d23bee8b8a..5d9313148fb3d 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -5559,7 +5559,7 @@ def _is_mixed_type(self) -> bool_t:
return False
if self._mgr.any_extension_types:
- # Even if they have the same dtype, we cant consolidate them,
+ # Even if they have the same dtype, we can't consolidate them,
# so we pretend this is "mixed'"
return True
@@ -10626,7 +10626,7 @@ def _add_numeric_operations(cls):
"""
Add the operations to the cls; evaluate the doc strings again
"""
- axis_descr, name1, name2 = _doc_parms(cls)
+ axis_descr, name1, name2 = _doc_params(cls)
@doc(
_bool_doc,
@@ -11186,8 +11186,8 @@ def last_valid_index(self):
return self._find_valid_index("last")
-def _doc_parms(cls):
- """Return a tuple of the doc parms."""
+def _doc_params(cls):
+ """Return a tuple of the doc params."""
axis_descr = (
f"{{{', '.join(f'{a} ({i})' for i, a in enumerate(cls._AXIS_ORDERS))}}}"
)
diff --git a/pandas/core/groupby/ops.py b/pandas/core/groupby/ops.py
index d1a4fc6fc74e5..06d01d46b64f7 100644
--- a/pandas/core/groupby/ops.py
+++ b/pandas/core/groupby/ops.py
@@ -495,7 +495,7 @@ def _ea_wrap_cython_operation(
If we have an ExtensionArray, unwrap, call _cython_operation, and
re-wrap if appropriate.
"""
- # TODO: general case implementation overrideable by EAs.
+ # TODO: general case implementation overridable by EAs.
orig_values = values
if is_datetime64tz_dtype(values.dtype) or is_period_dtype(values.dtype):
diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index 275c977e9b37b..eaeaf103c17ab 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -4842,7 +4842,7 @@ def argsort(self, *args, **kwargs) -> np.ndarray:
>>> idx[order]
Index(['a', 'b', 'c', 'd'], dtype='object')
"""
- # This works for either ndarray or EA, is overriden
+ # This works for either ndarray or EA, is overridden
# by RangeIndex, MultIIndex
return self._data.argsort(*args, **kwargs)
@@ -4974,7 +4974,7 @@ def get_indexer_non_unique(self, target):
return self._get_indexer_non_comparable(target, method=None, unique=False)
if not is_dtype_equal(self.dtype, target.dtype):
- # TODO: if object, could use infer_dtype to pre-empt costly
+ # TODO: if object, could use infer_dtype to preempt costly
# conversion if still non-comparable?
dtype = find_common_type([self.dtype, target.dtype])
if (
diff --git a/pandas/core/indexes/datetimelike.py b/pandas/core/indexes/datetimelike.py
index d673d1b43f729..249e9707be328 100644
--- a/pandas/core/indexes/datetimelike.py
+++ b/pandas/core/indexes/datetimelike.py
@@ -164,12 +164,12 @@ def equals(self, other: object) -> bool:
return False
elif not isinstance(other, type(self)):
should_try = False
- inferrable = self._data._infer_matches
+ inferable = self._data._infer_matches
if other.dtype == object:
- should_try = other.inferred_type in inferrable
+ should_try = other.inferred_type in inferable
elif is_categorical_dtype(other.dtype):
other = cast("CategoricalIndex", other)
- should_try = other.categories.inferred_type in inferrable
+ should_try = other.categories.inferred_type in inferable
if should_try:
try:
diff --git a/pandas/core/indexes/range.py b/pandas/core/indexes/range.py
index 5e5280934dff4..029c4a30a6b22 100644
--- a/pandas/core/indexes/range.py
+++ b/pandas/core/indexes/range.py
@@ -643,7 +643,7 @@ def difference(self, other, sort=None):
if len(overlap) == len(self):
return self[:0].rename(res_name)
if not isinstance(overlap, RangeIndex):
- # We wont end up with RangeIndex, so fall back
+ # We won't end up with RangeIndex, so fall back
return super().difference(other, sort=sort)
if overlap.step != first.step:
# In some cases we might be able to get a RangeIndex back,
diff --git a/pandas/core/internals/blocks.py b/pandas/core/internals/blocks.py
index ea1b8259eeadd..3ec8355c89aab 100644
--- a/pandas/core/internals/blocks.py
+++ b/pandas/core/internals/blocks.py
@@ -1816,7 +1816,7 @@ def _slice(self, slicer):
# return same dims as we currently have
if not isinstance(slicer, tuple) and self.ndim == 2:
# reached via getitem_block via _slice_take_blocks_ax0
- # TODO(EA2D): wont be necessary with 2D EAs
+ # TODO(EA2D): won't be necessary with 2D EAs
slicer = (slicer, slice(None))
if isinstance(slicer, tuple) and len(slicer) == 2:
@@ -1826,7 +1826,7 @@ def _slice(self, slicer):
"invalid slicing for a 1-ndim ExtensionArray", first
)
# GH#32959 only full-slicers along fake-dim0 are valid
- # TODO(EA2D): wont be necessary with 2D EAs
+ # TODO(EA2D): won't be necessary with 2D EAs
new_locs = self.mgr_locs[first]
if len(new_locs):
# effectively slice(None)
@@ -2289,7 +2289,7 @@ def _check_ndim(self, values, ndim):
"""
ndim inference and validation.
- This is overriden by the DatetimeTZBlock to check the case of 2D
+ This is overridden by the DatetimeTZBlock to check the case of 2D
data (values.ndim == 2), which should only be allowed if ndim is
also 2.
The case of 1D array is still allowed with both ndim of 1 or 2, as
diff --git a/pandas/core/internals/construction.py b/pandas/core/internals/construction.py
index d9db728f66754..d59cfc436f13d 100644
--- a/pandas/core/internals/construction.py
+++ b/pandas/core/internals/construction.py
@@ -746,7 +746,7 @@ def _convert_object_array(
content: List[Scalar], dtype: Optional[DtypeObj] = None
) -> List[Scalar]:
"""
- Internal function ot convert object array.
+ Internal function to convert object array.
Parameters
----------
diff --git a/pandas/core/internals/managers.py b/pandas/core/internals/managers.py
index 3ccdd287dd502..7dde952636a79 100644
--- a/pandas/core/internals/managers.py
+++ b/pandas/core/internals/managers.py
@@ -1355,7 +1355,7 @@ def _slice_take_blocks_ax0(
blk = self.blocks[0]
if sl_type in ("slice", "mask"):
- # GH#32959 EABlock would fail since we cant make 0-width
+ # GH#32959 EABlock would fail since we can't make 0-width
# TODO(EA2D): special casing unnecessary with 2D EAs
if sllen == 0:
return []
diff --git a/pandas/core/nanops.py b/pandas/core/nanops.py
index 88662a4fabed8..9c37a0f2b521f 100644
--- a/pandas/core/nanops.py
+++ b/pandas/core/nanops.py
@@ -1221,33 +1221,33 @@ def nankurt(
with np.errstate(invalid="ignore", divide="ignore"):
adj = 3 * (count - 1) ** 2 / ((count - 2) * (count - 3))
- numer = count * (count + 1) * (count - 1) * m4
- denom = (count - 2) * (count - 3) * m2 ** 2
+ numerator = count * (count + 1) * (count - 1) * m4
+ denominator = (count - 2) * (count - 3) * m2 ** 2
# floating point error
#
# #18044 in _libs/windows.pyx calc_kurt follow this behavior
# to fix the fperr to treat denom <1e-14 as zero
- numer = _zero_out_fperr(numer)
- denom = _zero_out_fperr(denom)
+ numerator = _zero_out_fperr(numerator)
+ denominator = _zero_out_fperr(denominator)
- if not isinstance(denom, np.ndarray):
+ if not isinstance(denominator, np.ndarray):
# if ``denom`` is a scalar, check these corner cases first before
# doing division
if count < 4:
return np.nan
- if denom == 0:
+ if denominator == 0:
return 0
with np.errstate(invalid="ignore", divide="ignore"):
- result = numer / denom - adj
+ result = numerator / denominator - adj
dtype = values.dtype
if is_float_dtype(dtype):
result = result.astype(dtype)
if isinstance(result, np.ndarray):
- result = np.where(denom == 0, 0, result)
+ result = np.where(denominator == 0, 0, result)
result[count < 4] = np.nan
return result
diff --git a/pandas/core/reshape/pivot.py b/pandas/core/reshape/pivot.py
index 40496a5b8671b..8e4b000a56a3d 100644
--- a/pandas/core/reshape/pivot.py
+++ b/pandas/core/reshape/pivot.py
@@ -740,8 +740,8 @@ def _build_names_mapper(
A row or column name is replaced if it is duplicate among the rows of the inputs,
among the columns of the inputs or between the rows and the columns.
- Paramters
- ---------
+ Parameters
+ ----------
rownames: list[str]
colnames: list[str]
diff --git a/pandas/core/window/rolling.py b/pandas/core/window/rolling.py
index 053c960cc5cbd..70a9367dc2150 100644
--- a/pandas/core/window/rolling.py
+++ b/pandas/core/window/rolling.py
@@ -1864,7 +1864,7 @@ def _get_corr(a, b):
window=window, min_periods=self.min_periods, center=self.center
)
# GH 31286: Through using var instead of std we can avoid numerical
- # issues when the result of var is withing floating proint precision
+ # issues when the result of var is within floating proint precision
# while std is not.
return a.cov(b, **kwargs) / (a.var(**kwargs) * b.var(**kwargs)) ** 0.5
diff --git a/setup.cfg b/setup.cfg
index a91cd18694c33..56b2fa190ac99 100644
--- a/setup.cfg
+++ b/setup.cfg
@@ -63,6 +63,9 @@ filterwarnings =
error:The SparseArray:FutureWarning
junit_family=xunit2
+[codespell]
+ignore-words-list=ba,blocs,coo,datas,fo,hist,nd,ser
+
[coverage:run]
branch = False
omit =
| IIRC it was suggested in a comment some time ago to use a spell checker, so I thought I'd make a little PR showing how `codespell` would work (for now just applied to `pandas/core`).
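For intuition, a toy sketch of what a codespell-style check does (the correction table here is illustrative — the real tool ships a large curated dictionary): scan each line for known misspellings and report the line number, offending word, and suggested fix.

```python
# toy codespell: map known misspellings to corrections and scan lines
CORRECTIONS = {"withing": "within", "overriden": "overridden"}

def check(text):
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for word in line.split():
            # normalize lightly before lookup
            fix = CORRECTIONS.get(word.lower().strip(".:;,()"))
            if fix:
                hits.append((lineno, word, fix))
    return hits

sample = "# issues when the result of var is withing floating point precision"
print(check(sample))  # → [(1, 'withing', 'within')]
```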
Is this something we'd want? False positives can be ignored in `setup.cfg` in the `[codespell]` section | https://api.github.com/repos/pandas-dev/pandas/pulls/38776 | 2020-12-29T11:20:53Z | 2020-12-29T17:03:09Z | 2020-12-29T17:03:09Z | 2020-12-29T17:03:13Z |
BUG: Added test cases to check loc on multiindex with NaNs #29751 | diff --git a/pandas/tests/indexing/multiindex/test_getitem.py b/pandas/tests/indexing/multiindex/test_getitem.py
index 144df1e28f8b6..6c0d1c285acf3 100644
--- a/pandas/tests/indexing/multiindex/test_getitem.py
+++ b/pandas/tests/indexing/multiindex/test_getitem.py
@@ -197,6 +197,38 @@ def test_frame_mixed_depth_get():
tm.assert_series_equal(result, expected)
+def test_frame_getitem_nan_multiindex(nulls_fixture):
+ # GH#29751
+ # loc on a multiindex containing nan values
+ n = nulls_fixture # for code readability
+ cols = ["a", "b", "c"]
+ df = DataFrame(
+ [[11, n, 13], [21, n, 23], [31, n, 33], [41, n, 43]],
+ columns=cols,
+ dtype="int64",
+ ).set_index(["a", "b"])
+
+ idx = (21, n)
+ result = df.loc[:idx]
+ expected = DataFrame(
+ [[11, n, 13], [21, n, 23]], columns=cols, dtype="int64"
+ ).set_index(["a", "b"])
+ tm.assert_frame_equal(result, expected)
+
+ result = df.loc[idx:]
+ expected = DataFrame(
+ [[21, n, 23], [31, n, 33], [41, n, 43]], columns=cols, dtype="int64"
+ ).set_index(["a", "b"])
+ tm.assert_frame_equal(result, expected)
+
+ idx1, idx2 = (21, n), (31, n)
+ result = df.loc[idx1:idx2]
+ expected = DataFrame(
+ [[21, n, 23], [31, n, 33]], columns=cols, dtype="int64"
+ ).set_index(["a", "b"])
+ tm.assert_frame_equal(result, expected)
+
+
# ----------------------------------------------------------------------------
# test indexing of DataFrame with multi-level Index with duplicates
# ----------------------------------------------------------------------------
| - [x] closes #29751
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
Added test cases to check loc on multiindex containing NaN values using `np.nan`, `pd.NA`, and `None` | https://api.github.com/repos/pandas-dev/pandas/pulls/38772 | 2020-12-29T05:05:16Z | 2021-01-07T13:58:09Z | 2021-01-07T13:58:08Z | 2021-01-07T13:58:12Z |
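The behavior exercised by the PR above can be sketched as below. This is a minimal, illustrative frame (the concrete values are made up, and the original test also runs with `pd.NA` and `None` via the `nulls_fixture`); it assumes a pandas version in which `.loc` slicing on a `MultiIndex` containing NaN works, i.e. one where GH#29751 is resolved.

```python
import numpy as np
import pandas as pd

# MultiIndex whose second level is all-NaN, as in the test
df = pd.DataFrame(
    {"a": [11, 21, 31, 41], "b": [np.nan] * 4, "c": [13, 23, 33, 43]}
).set_index(["a", "b"])

# slice up to and including the label (21, NaN)
result = df.loc[: (21, np.nan)]
assert len(result) == 2
```

The point of the tests is exactly this: label-based slicing should keep working even when one of the index levels is entirely missing values.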
DOC: suppress debug messages when displaying plots | diff --git a/doc/source/user_guide/10min.rst b/doc/source/user_guide/10min.rst
index 7629870a8de66..2b329ef362354 100644
--- a/doc/source/user_guide/10min.rst
+++ b/doc/source/user_guide/10min.rst
@@ -730,7 +730,7 @@ The :meth:`~plt.close` method is used to `close <https://matplotlib.org/3.1.1/ap
ts = ts.cumsum()
@savefig series_plot_basic.png
- ts.plot()
+ ts.plot();
On a DataFrame, the :meth:`~DataFrame.plot` method is a convenience to plot all
of the columns with labels:
@@ -743,10 +743,10 @@ of the columns with labels:
df = df.cumsum()
- plt.figure()
- df.plot()
+ plt.figure();
+ df.plot();
@savefig frame_plot_basic.png
- plt.legend(loc='best')
+ plt.legend(loc='best');
Getting data in/out
-------------------
| - [ ] closes #xxxx
- [ ] tests added / passed
- [ ] passes `black pandas`
- [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/38770 | 2020-12-29T03:46:04Z | 2020-12-29T14:06:41Z | 2020-12-29T14:06:41Z | 2020-12-29T22:00:30Z |
Backport PR #38759 on branch 1.2.x (BUG: float-like string, trailing 0 truncation) | diff --git a/doc/source/whatsnew/v1.2.1.rst b/doc/source/whatsnew/v1.2.1.rst
index e280b730679f0..37562be17f02e 100644
--- a/doc/source/whatsnew/v1.2.1.rst
+++ b/doc/source/whatsnew/v1.2.1.rst
@@ -16,6 +16,7 @@ Fixed regressions
~~~~~~~~~~~~~~~~~
- The deprecated attributes ``_AXIS_NAMES`` and ``_AXIS_NUMBERS`` of :class:`DataFrame` and :class:`Series` will no longer show up in ``dir`` or ``inspect.getmembers`` calls (:issue:`38740`)
- :meth:`to_csv` created corrupted zip files when there were more rows than ``chunksize`` (issue:`38714`)
+- Bug in repr of float-like strings of an ``object`` dtype having trailing 0's truncated after the decimal (:issue:`38708`)
-
.. ---------------------------------------------------------------------------
diff --git a/pandas/io/formats/format.py b/pandas/io/formats/format.py
index db34b882a3c35..d0b821a3679bb 100644
--- a/pandas/io/formats/format.py
+++ b/pandas/io/formats/format.py
@@ -1305,7 +1305,7 @@ def _format(x):
if not is_float_type[i] and leading_space:
fmt_values.append(f" {_format(v)}")
elif is_float_type[i]:
- fmt_values.append(float_format(v))
+ fmt_values.append(_trim_zeros_single_float(float_format(v)))
else:
if leading_space is False:
# False specifically, so that the default is
@@ -1315,8 +1315,6 @@ def _format(x):
tpl = " {v}"
fmt_values.append(tpl.format(v=_format(v)))
- fmt_values = _trim_zeros_float(str_floats=fmt_values, decimal=".")
-
return fmt_values
@@ -1832,11 +1830,25 @@ def _trim_zeros_complex(str_complexes: np.ndarray, decimal: str = ".") -> List[s
return padded
+def _trim_zeros_single_float(str_float: str) -> str:
+ """
+ Trims trailing zeros after a decimal point,
+ leaving just one if necessary.
+ """
+ str_float = str_float.rstrip("0")
+ if str_float.endswith("."):
+ str_float += "0"
+
+ return str_float
+
+
def _trim_zeros_float(
str_floats: Union[np.ndarray, List[str]], decimal: str = "."
) -> List[str]:
"""
- Trims zeros, leaving just one before the decimal points if need be.
+ Trims the maximum number of trailing zeros equally from
+ all numbers containing decimals, leaving just one if
+ necessary.
"""
trimmed = str_floats
number_regex = re.compile(fr"^\s*[\+-]?[0-9]+\{decimal}[0-9]*$")
diff --git a/pandas/tests/io/formats/test_format.py b/pandas/tests/io/formats/test_format.py
index fe85849c6dcca..b0b07045a9156 100644
--- a/pandas/tests/io/formats/test_format.py
+++ b/pandas/tests/io/formats/test_format.py
@@ -2002,6 +2002,25 @@ def test_float_trim_zeros(self):
assert ("+10" in line) or skip
skip = False
+ @pytest.mark.parametrize(
+ "data, expected",
+ [
+ (["3.50"], "0 3.50\ndtype: object"),
+ ([1.20, "1.00"], "0 1.2\n1 1.00\ndtype: object"),
+ ([np.nan], "0 NaN\ndtype: float64"),
+ ([None], "0 None\ndtype: object"),
+ (["3.50", np.nan], "0 3.50\n1 NaN\ndtype: object"),
+ ([3.50, np.nan], "0 3.5\n1 NaN\ndtype: float64"),
+ ([3.50, np.nan, "3.50"], "0 3.5\n1 NaN\n2 3.50\ndtype: object"),
+ ([3.50, None, "3.50"], "0 3.5\n1 None\n2 3.50\ndtype: object"),
+ ],
+ )
+ def test_repr_str_float_truncation(self, data, expected):
+ # GH#38708
+ series = Series(data)
+ result = repr(series)
+ assert result == expected
+
def test_dict_entries(self):
df = DataFrame({"A": [{"a": 1, "b": 2}]})
| Backport PR #38759: BUG: float-like string, trailing 0 truncation | https://api.github.com/repos/pandas-dev/pandas/pulls/38769 | 2020-12-29T03:19:31Z | 2020-12-29T10:22:15Z | 2020-12-29T10:22:15Z | 2020-12-29T10:22:15Z |
CLN: rolling.py and window/aggregations.pyx | diff --git a/pandas/_libs/window/aggregations.pyx b/pandas/_libs/window/aggregations.pyx
index 54a09a6d2ede7..c21e71c407630 100644
--- a/pandas/_libs/window/aggregations.pyx
+++ b/pandas/_libs/window/aggregations.pyx
@@ -55,39 +55,11 @@ cdef:
float64_t NaN = <float64_t>np.NaN
-cdef inline int int_max(int a, int b): return a if a >= b else b
-cdef inline int int_min(int a, int b): return a if a <= b else b
-
cdef bint is_monotonic_increasing_start_end_bounds(
ndarray[int64_t, ndim=1] start, ndarray[int64_t, ndim=1] end
):
return is_monotonic(start, False)[0] and is_monotonic(end, False)[0]
-# Cython implementations of rolling sum, mean, variance, skewness,
-# other statistical moment functions
-#
-# Misc implementation notes
-# -------------------------
-#
-# - In Cython x * x is faster than x ** 2 for C types, this should be
-# periodically revisited to see if it's still true.
-#
-
-# original C implementation by N. Devillard.
-# This code in public domain.
-# Function : kth_smallest()
-# In : array of elements, # of elements in the array, rank k
-# Out : one element
-# Job : find the kth smallest element in the array
-
-# Reference:
-
-# Author: Wirth, Niklaus
-# Title: Algorithms + data structures = programs
-# Publisher: Englewood Cliffs: Prentice-Hall, 1976
-# Physical description: 366 p.
-# Series: Prentice-Hall Series in Automatic Computation
-
# ----------------------------------------------------------------------
# Rolling sum
@@ -774,7 +746,6 @@ def roll_kurt(ndarray[float64_t] values, ndarray[int64_t] start,
def roll_median_c(const float64_t[:] values, ndarray[int64_t] start,
ndarray[int64_t] end, int64_t minp):
- # GH 32865. win argument kept for compatibility
cdef:
float64_t val, res, prev
bint err = False
@@ -1167,9 +1138,8 @@ def roll_apply(object obj,
arr = np.asarray(obj)
# ndarray input
- if raw:
- if not arr.flags.c_contiguous:
- arr = arr.copy('C')
+ if raw and not arr.flags.c_contiguous:
+ arr = arr.copy('C')
counts = roll_sum(np.isfinite(arr).astype(float), start, end, minp)
@@ -1195,17 +1165,17 @@ def roll_apply(object obj,
# Rolling sum and mean for weighted window
-def roll_weighted_sum(float64_t[:] values, float64_t[:] weights, int minp):
+def roll_weighted_sum(const float64_t[:] values, const float64_t[:] weights, int minp):
return _roll_weighted_sum_mean(values, weights, minp, avg=0)
-def roll_weighted_mean(float64_t[:] values, float64_t[:] weights, int minp):
+def roll_weighted_mean(const float64_t[:] values, const float64_t[:] weights, int minp):
return _roll_weighted_sum_mean(values, weights, minp, avg=1)
-cdef ndarray[float64_t] _roll_weighted_sum_mean(float64_t[:] values,
- float64_t[:] weights,
- int minp, bint avg):
+cdef float64_t[:] _roll_weighted_sum_mean(const float64_t[:] values,
+ const float64_t[:] weights,
+ int minp, bint avg):
"""
Assume len(weights) << len(values)
"""
@@ -1270,7 +1240,7 @@ cdef ndarray[float64_t] _roll_weighted_sum_mean(float64_t[:] values,
if c < minp:
output[in_i] = NaN
- return np.asarray(output)
+ return output
# ----------------------------------------------------------------------
@@ -1424,7 +1394,7 @@ cdef inline void remove_weighted_var(float64_t val,
mean[0] = 0
-def roll_weighted_var(float64_t[:] values, float64_t[:] weights,
+def roll_weighted_var(const float64_t[:] values, const float64_t[:] weights,
int64_t minp, unsigned int ddof):
"""
Calculates weighted rolling variance using West's online algorithm.
diff --git a/pandas/core/window/rolling.py b/pandas/core/window/rolling.py
index 053c960cc5cbd..63e4d92d64efb 100644
--- a/pandas/core/window/rolling.py
+++ b/pandas/core/window/rolling.py
@@ -174,9 +174,8 @@ def _create_data(self, obj: FrameOrSeries) -> FrameOrSeries:
Split data into blocks & return conformed data.
"""
# filter out the on from the object
- if self.on is not None and not isinstance(self.on, Index):
- if obj.ndim == 2:
- obj = obj.reindex(columns=obj.columns.difference([self.on]), copy=False)
+ if self.on is not None and not isinstance(self.on, Index) and obj.ndim == 2:
+ obj = obj.reindex(columns=obj.columns.difference([self.on]), copy=False)
if self.axis == 1:
# GH: 20649 in case of mixed dtype and axis=1 we have to convert everything
# to float to calculate the complete row at once. We exclude all non-numeric
@@ -238,10 +237,6 @@ def _get_cov_corr_window(
"""
return self.window
- @property
- def _window_type(self) -> str:
- return type(self).__name__
-
def __repr__(self) -> str:
"""
Provide a nice str repr of our rolling object.
@@ -252,7 +247,7 @@ def __repr__(self) -> str:
if getattr(self, attr_name, None) is not None
)
attrs = ",".join(attrs_list)
- return f"{self._window_type} [{attrs}]"
+ return f"{type(self).__name__} [{attrs}]"
def __iter__(self):
obj = self._create_data(self._selected_obj)
@@ -278,7 +273,7 @@ def _prep_values(self, values: Optional[np.ndarray] = None) -> np.ndarray:
if needs_i8_conversion(values.dtype):
raise NotImplementedError(
- f"ops for {self._window_type} for this "
+ f"ops for {type(self).__name__} for this "
f"dtype {values.dtype} are not implemented"
)
else:
@@ -464,7 +459,6 @@ def calc(x):
result = np.apply_along_axis(calc, self.axis, values)
else:
result = calc(values)
- result = np.asarray(result)
if numba_cache_key is not None:
NUMBA_FUNC_CACHE[numba_cache_key] = func
@@ -1102,8 +1096,8 @@ def calc(x):
if values.ndim > 1:
result = np.apply_along_axis(calc, self.axis, values)
else:
- result = calc(values)
- result = np.asarray(result)
+ # Our weighted aggregations return memoryviews
+ result = np.asarray(calc(values))
if self.center:
result = self._center_window(result, offset)
@@ -2158,7 +2152,7 @@ def _validate_monotonic(self):
"""
Validate that on is monotonic;
in this case we have to check only for nans, because
- monotonicy was already validated at a higher level.
+ monotonicity was already validated at a higher level.
"""
if self._on.hasnans:
self._raise_monotonic_error()
| * Inlined a helper method
* Added `const`s
* Removed unneeded comments/code
| https://api.github.com/repos/pandas-dev/pandas/pulls/38768 | 2020-12-29T02:31:39Z | 2020-12-29T14:05:56Z | 2020-12-29T14:05:56Z | 2020-12-29T18:43:13Z |
Backport PR #38728 on branch 1.2.x (REGR: to_csv created corrupt ZIP files when chunksize<rows) | diff --git a/doc/source/whatsnew/v1.2.1.rst b/doc/source/whatsnew/v1.2.1.rst
index 769c195229bbd..e280b730679f0 100644
--- a/doc/source/whatsnew/v1.2.1.rst
+++ b/doc/source/whatsnew/v1.2.1.rst
@@ -15,6 +15,7 @@ including other versions of pandas.
Fixed regressions
~~~~~~~~~~~~~~~~~
- The deprecated attributes ``_AXIS_NAMES`` and ``_AXIS_NUMBERS`` of :class:`DataFrame` and :class:`Series` will no longer show up in ``dir`` or ``inspect.getmembers`` calls (:issue:`38740`)
+- :meth:`to_csv` created corrupted zip files when there were more rows than ``chunksize`` (issue:`38714`)
-
.. ---------------------------------------------------------------------------
diff --git a/pandas/io/common.py b/pandas/io/common.py
index 64c5d3173fe0a..c189c3046b4f3 100644
--- a/pandas/io/common.py
+++ b/pandas/io/common.py
@@ -4,10 +4,10 @@
from collections import abc
import dataclasses
import gzip
-from io import BufferedIOBase, BytesIO, RawIOBase, TextIOWrapper
+from io import BufferedIOBase, BytesIO, RawIOBase, StringIO, TextIOWrapper
import mmap
import os
-from typing import IO, Any, AnyStr, Dict, List, Mapping, Optional, Tuple, cast
+from typing import IO, Any, AnyStr, Dict, List, Mapping, Optional, Tuple, Union, cast
from urllib.parse import (
urljoin,
urlparse as parse_url,
@@ -707,17 +707,36 @@ def __init__(
archive_name: Optional[str] = None,
**kwargs,
):
- if mode in ["wb", "rb"]:
- mode = mode.replace("b", "")
+ mode = mode.replace("b", "")
self.archive_name = archive_name
+ self.multiple_write_buffer: Optional[Union[StringIO, BytesIO]] = None
+
kwargs_zip: Dict[str, Any] = {"compression": zipfile.ZIP_DEFLATED}
kwargs_zip.update(kwargs)
+
super().__init__(file, mode, **kwargs_zip) # type: ignore[arg-type]
def write(self, data):
+ # buffer multiple write calls, write on flush
+ if self.multiple_write_buffer is None:
+ self.multiple_write_buffer = (
+ BytesIO() if isinstance(data, bytes) else StringIO()
+ )
+ self.multiple_write_buffer.write(data)
+
+ def flush(self) -> None:
+ # write to actual handle and close write buffer
+ if self.multiple_write_buffer is None or self.multiple_write_buffer.closed:
+ return
+
# ZipFile needs a non-empty string
archive_name = self.archive_name or self.filename or "zip"
- super().writestr(archive_name, data)
+ with self.multiple_write_buffer:
+ super().writestr(archive_name, self.multiple_write_buffer.getvalue())
+
+ def close(self):
+ self.flush()
+ super().close()
@property
def closed(self):
diff --git a/pandas/tests/io/formats/test_to_csv.py b/pandas/tests/io/formats/test_to_csv.py
index a9673ded7c377..6416cb93c7ff5 100644
--- a/pandas/tests/io/formats/test_to_csv.py
+++ b/pandas/tests/io/formats/test_to_csv.py
@@ -640,3 +640,25 @@ def test_to_csv_encoding_binary_handle(self, mode):
handle.seek(0)
assert handle.read().startswith(b'\xef\xbb\xbf""')
+
+
+def test_to_csv_iterative_compression_name(compression):
+ # GH 38714
+ df = tm.makeDataFrame()
+ with tm.ensure_clean() as path:
+ df.to_csv(path, compression=compression, chunksize=1)
+ tm.assert_frame_equal(
+ pd.read_csv(path, compression=compression, index_col=0), df
+ )
+
+
+def test_to_csv_iterative_compression_buffer(compression):
+ # GH 38714
+ df = tm.makeDataFrame()
+ with io.BytesIO() as buffer:
+ df.to_csv(buffer, compression=compression, chunksize=1)
+ buffer.seek(0)
+ tm.assert_frame_equal(
+ pd.read_csv(buffer, compression=compression, index_col=0), df
+ )
+ assert not buffer.closed
| Backport PR #38728: REGR: to_csv created corrupt ZIP files when chunksize<rows | https://api.github.com/repos/pandas-dev/pandas/pulls/38767 | 2020-12-29T00:05:21Z | 2020-12-29T03:16:45Z | 2020-12-29T03:16:45Z | 2020-12-29T03:16:45Z |
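The regression fixed above can be demonstrated with a small round-trip. The frame and the archive name `out.csv` are illustrative; the sketch assumes pandas 1.2.1 or later, where multiple chunked `write` calls are buffered and written to the zip archive once on flush instead of producing one archive member per chunk.

```python
import io

import pandas as pd

df = pd.DataFrame({"a": range(5)})

# chunksize=1 forces multiple write calls; before the fix each call
# created its own zip member, corrupting the archive
buf = io.BytesIO()
df.to_csv(buf, compression={"method": "zip", "archive_name": "out.csv"}, chunksize=1)

buf.seek(0)
result = pd.read_csv(buf, compression="zip", index_col=0)
assert result.equals(df)
```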
BLD: fix build failure py3.9.1 on OSX | diff --git a/doc/source/whatsnew/v1.2.1.rst b/doc/source/whatsnew/v1.2.1.rst
index e280b730679f0..cbbb0dfcf3b43 100644
--- a/doc/source/whatsnew/v1.2.1.rst
+++ b/doc/source/whatsnew/v1.2.1.rst
@@ -40,7 +40,7 @@ I/O
Other
~~~~~
--
+- Fixed build failure on MacOS 11 in Python 3.9.1 (:issue:`38766`)
-
.. ---------------------------------------------------------------------------
diff --git a/setup.py b/setup.py
index a25fe95e025b3..f9c4a1158fee0 100755
--- a/setup.py
+++ b/setup.py
@@ -435,7 +435,7 @@ def run(self):
"MACOSX_DEPLOYMENT_TARGET", current_system
)
if (
- LooseVersion(python_target) < "10.9"
+ LooseVersion(str(python_target)) < "10.9"
and LooseVersion(current_system) >= "10.9"
):
os.environ["MACOSX_DEPLOYMENT_TARGET"] = "10.9"
| Ran `brew upgrade` yesterday and ended up bumping from 3.9.0 to 3.9.1, after which I got build failures
```
LooseVersion(python_target) < "10.9"
File "/usr/local/Cellar/python@3.9/3.9.1_2/Frameworks/Python.framework/Versions/3.9/lib/python3.9/distutils/version.py", line 306, in __init__
self.parse(vstring)
File "/usr/local/Cellar/python@3.9/3.9.1_2/Frameworks/Python.framework/Versions/3.9/lib/python3.9/distutils/version.py", line 314, in parse
components = [x for x in self.component_re.split(vstring)
TypeError: expected string or bytes-like object
```
No idea why this just started breaking now.
| https://api.github.com/repos/pandas-dev/pandas/pulls/38766 | 2020-12-28T23:55:12Z | 2020-12-29T14:05:09Z | 2020-12-29T14:05:09Z | 2020-12-29T15:57:32Z |
TYP: DataFrame.(dot, __matmul__) | diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index c32483aa2a231..1abbe37e67b09 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -48,6 +48,7 @@
from pandas._libs.lib import no_default
from pandas._typing import (
AggFuncType,
+ AnyArrayLike,
ArrayLike,
Axes,
Axis,
@@ -1141,7 +1142,17 @@ def __len__(self) -> int:
"""
return len(self.index)
- def dot(self, other):
+ # pandas/core/frame.py:1146: error: Overloaded function signatures 1 and 2
+ # overlap with incompatible return types [misc]
+ @overload
+ def dot(self, other: Series) -> Series: # type: ignore[misc]
+ ...
+
+ @overload
+ def dot(self, other: Union[DataFrame, Index, ArrayLike]) -> DataFrame:
+ ...
+
+ def dot(self, other: Union[AnyArrayLike, FrameOrSeriesUnion]) -> FrameOrSeriesUnion:
"""
Compute the matrix multiplication between the DataFrame and other.
@@ -1251,7 +1262,19 @@ def dot(self, other):
else: # pragma: no cover
raise TypeError(f"unsupported type: {type(other)}")
- def __matmul__(self, other):
+ @overload
+ def __matmul__(self, other: Series) -> Series:
+ ...
+
+ @overload
+ def __matmul__(
+ self, other: Union[AnyArrayLike, FrameOrSeriesUnion]
+ ) -> FrameOrSeriesUnion:
+ ...
+
+ def __matmul__(
+ self, other: Union[AnyArrayLike, FrameOrSeriesUnion]
+ ) -> FrameOrSeriesUnion:
"""
Matrix multiplication using binary `@` operator in Python>=3.5.
"""
| - [ ] closes #xxxx
- [ ] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
xref https://github.com/pandas-dev/pandas/pull/38416/files#r549438292 | https://api.github.com/repos/pandas-dev/pandas/pulls/38765 | 2020-12-28T23:32:46Z | 2021-01-05T00:37:14Z | 2021-01-05T00:37:13Z | 2021-01-06T19:29:38Z |
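The overloads added above only describe what `DataFrame.dot` / `@` already do at runtime: with a `Series` argument the result is a `Series`, with a `DataFrame` (or array-like) it is a `DataFrame`. A small sketch with made-up values:

```python
import pandas as pd

df = pd.DataFrame([[1, 2], [3, 4]])
s = pd.Series([1, 1])

out1 = df.dot(s)   # DataFrame . Series   -> Series: [3, 7]
out2 = df @ df.T   # DataFrame . DataFrame -> DataFrame

assert isinstance(out1, pd.Series)
assert isinstance(out2, pd.DataFrame)
```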
BUG: Series construction with mismatched dt64 data vs td64 dtype | diff --git a/doc/source/whatsnew/v1.3.0.rst b/doc/source/whatsnew/v1.3.0.rst
index 77bc080892e6c..0077f1061e588 100644
--- a/doc/source/whatsnew/v1.3.0.rst
+++ b/doc/source/whatsnew/v1.3.0.rst
@@ -189,7 +189,7 @@ Datetimelike
^^^^^^^^^^^^
- Bug in :class:`DataFrame` and :class:`Series` constructors sometimes dropping nanoseconds from :class:`Timestamp` (resp. :class:`Timedelta`) ``data``, with ``dtype=datetime64[ns]`` (resp. ``timedelta64[ns]``) (:issue:`38032`)
- Bug in :meth:`DataFrame.first` and :meth:`Series.first` returning two months for offset one month when first day is last calendar day (:issue:`29623`)
-- Bug in constructing a :class:`DataFrame` or :class:`Series` with mismatched ``datetime64`` data and ``timedelta64`` dtype, or vice-versa, failing to raise ``TypeError`` (:issue:`38575`)
+- Bug in constructing a :class:`DataFrame` or :class:`Series` with mismatched ``datetime64`` data and ``timedelta64`` dtype, or vice-versa, failing to raise ``TypeError`` (:issue:`38575`, :issue:`38764`)
- Bug in :meth:`DatetimeIndex.intersection`, :meth:`DatetimeIndex.symmetric_difference`, :meth:`PeriodIndex.intersection`, :meth:`PeriodIndex.symmetric_difference` always returning object-dtype when operating with :class:`CategoricalIndex` (:issue:`38741`)
Timedelta
diff --git a/pandas/core/dtypes/cast.py b/pandas/core/dtypes/cast.py
index b7113669a1905..25259093f9fba 100644
--- a/pandas/core/dtypes/cast.py
+++ b/pandas/core/dtypes/cast.py
@@ -166,14 +166,23 @@ def maybe_unbox_datetimelike(value: Scalar, dtype: DtypeObj) -> Scalar:
elif isinstance(value, Timedelta):
value = value.to_timedelta64()
- if (isinstance(value, np.timedelta64) and dtype.kind == "M") or (
- isinstance(value, np.datetime64) and dtype.kind == "m"
+ _disallow_mismatched_datetimelike(value, dtype)
+ return value
+
+
+def _disallow_mismatched_datetimelike(value: DtypeObj, dtype: DtypeObj):
+ """
+ numpy allows np.array(dt64values, dtype="timedelta64[ns]") and
+ vice-versa, but we do not want to allow this, so we need to
+ check explicitly
+ """
+ vdtype = getattr(value, "dtype", None)
+ if vdtype is None:
+ return
+ elif (vdtype.kind == "m" and dtype.kind == "M") or (
+ vdtype.kind == "M" and dtype.kind == "m"
):
- # numpy allows np.array(dt64values, dtype="timedelta64[ns]") and
- # vice-versa, but we do not want to allow this, so we need to
- # check explicitly
raise TypeError(f"Cannot cast {repr(value)} to {dtype}")
- return value
def maybe_downcast_to_dtype(result, dtype: Union[str, np.dtype]):
@@ -1715,6 +1724,8 @@ def construct_1d_ndarray_preserving_na(
if dtype is not None and dtype.kind == "U":
subarr = lib.ensure_string_array(values, convert_na_value=False, copy=copy)
else:
+ if dtype is not None:
+ _disallow_mismatched_datetimelike(values, dtype)
subarr = np.array(values, dtype=dtype, copy=copy)
return subarr
diff --git a/pandas/tests/frame/test_constructors.py b/pandas/tests/frame/test_constructors.py
index 5ba38016ee552..94b2431650359 100644
--- a/pandas/tests/frame/test_constructors.py
+++ b/pandas/tests/frame/test_constructors.py
@@ -2925,12 +2925,31 @@ def get1(obj):
class TestFromScalar:
- @pytest.fixture
- def constructor(self, frame_or_series):
- if frame_or_series is Series:
- return functools.partial(Series, index=range(2))
+ @pytest.fixture(params=[list, dict, None])
+ def constructor(self, request, frame_or_series):
+ box = request.param
+
+ extra = {"index": range(2)}
+ if frame_or_series is DataFrame:
+ extra["columns"] = ["A"]
+
+ if box is None:
+ return functools.partial(frame_or_series, **extra)
+
+ elif box is dict:
+ if frame_or_series is Series:
+ return lambda x, **kwargs: frame_or_series(
+ {0: x, 1: x}, **extra, **kwargs
+ )
+ else:
+ return lambda x, **kwargs: frame_or_series({"A": x}, **extra, **kwargs)
else:
- return functools.partial(DataFrame, index=range(2), columns=range(2))
+ if frame_or_series is Series:
+ return lambda x, **kwargs: frame_or_series([x, x], **extra, **kwargs)
+ else:
+ return lambda x, **kwargs: frame_or_series(
+ {"A": [x, x]}, **extra, **kwargs
+ )
@pytest.mark.parametrize("dtype", ["M8[ns]", "m8[ns]"])
def test_from_nat_scalar(self, dtype, constructor):
@@ -2951,7 +2970,8 @@ def test_from_timestamp_scalar_preserves_nanos(self, constructor):
assert get1(obj) == ts
def test_from_timedelta64_scalar_object(self, constructor, request):
- if constructor.func is DataFrame and _np_version_under1p20:
+ if getattr(constructor, "func", None) is DataFrame and _np_version_under1p20:
+ # getattr check means we only xfail when box is None
mark = pytest.mark.xfail(
reason="np.array(td64, dtype=object) converts to int"
)
@@ -2964,7 +2984,15 @@ def test_from_timedelta64_scalar_object(self, constructor, request):
assert isinstance(get1(obj), np.timedelta64)
@pytest.mark.parametrize("cls", [np.datetime64, np.timedelta64])
- def test_from_scalar_datetimelike_mismatched(self, constructor, cls):
+ def test_from_scalar_datetimelike_mismatched(self, constructor, cls, request):
+ node = request.node
+ params = node.callspec.params
+ if params["frame_or_series"] is DataFrame and params["constructor"] is not None:
+ mark = pytest.mark.xfail(
+ reason="DataFrame incorrectly allows mismatched datetimelike"
+ )
+ node.add_marker(mark)
+
scalar = cls("NaT", "ns")
dtype = {np.datetime64: "m8[ns]", np.timedelta64: "M8[ns]"}[cls]
| - [ ] closes #xxxx
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
Fixing this for DataFrame is much harder, so punting for now. | https://api.github.com/repos/pandas-dev/pandas/pulls/38764 | 2020-12-28T23:23:35Z | 2020-12-29T19:05:43Z | 2020-12-29T19:05:43Z | 2020-12-29T19:07:22Z |
Backport PR #38761 on branch 1.2.x (CI: print build version) | diff --git a/.github/workflows/database.yml b/.github/workflows/database.yml
index 5fe7fc17a98cb..c2f332cc5454a 100644
--- a/.github/workflows/database.yml
+++ b/.github/workflows/database.yml
@@ -86,6 +86,9 @@ jobs:
run: ci/run_tests.sh
if: always()
+ - name: Build Version
+ run: pushd /tmp && python -c "import pandas; pandas.show_versions();" && popd
+
- name: Publish test results
uses: actions/upload-artifact@master
with:
@@ -169,6 +172,9 @@ jobs:
run: ci/run_tests.sh
if: always()
+ - name: Build Version
+ run: pushd /tmp && python -c "import pandas; pandas.show_versions();" && popd
+
- name: Publish test results
uses: actions/upload-artifact@master
with:
| Backport PR #38761: CI: print build version | https://api.github.com/repos/pandas-dev/pandas/pulls/38763 | 2020-12-28T23:12:44Z | 2020-12-28T23:59:57Z | 2020-12-28T23:59:57Z | 2020-12-28T23:59:57Z |
CI: print build version | diff --git a/.github/workflows/database.yml b/.github/workflows/database.yml
index 5fe7fc17a98cb..c2f332cc5454a 100644
--- a/.github/workflows/database.yml
+++ b/.github/workflows/database.yml
@@ -86,6 +86,9 @@ jobs:
run: ci/run_tests.sh
if: always()
+ - name: Build Version
+ run: pushd /tmp && python -c "import pandas; pandas.show_versions();" && popd
+
- name: Publish test results
uses: actions/upload-artifact@master
with:
@@ -169,6 +172,9 @@ jobs:
run: ci/run_tests.sh
if: always()
+ - name: Build Version
+ run: pushd /tmp && python -c "import pandas; pandas.show_versions();" && popd
+
- name: Publish test results
uses: actions/upload-artifact@master
with:
| xref #38344
followup
| https://api.github.com/repos/pandas-dev/pandas/pulls/38761 | 2020-12-28T21:27:36Z | 2020-12-28T23:12:15Z | 2020-12-28T23:12:15Z | 2020-12-29T01:57:33Z |