| title | diff | body | url | created_at | closed_at | merged_at | updated_at |
|---|---|---|---|---|---|---|---|
(wip)BUG: make sure combine doesn't alter dtype for nullable dtypes | diff --git a/doc/source/whatsnew/v1.1.0.rst b/doc/source/whatsnew/v1.1.0.rst
index 13827e8fc4c33..419d9e7ec01d7 100644
--- a/doc/source/whatsnew/v1.1.0.rst
+++ b/doc/source/whatsnew/v1.1.0.rst
@@ -245,6 +245,7 @@ Other
instead of ``TypeError: Can only append a Series if ignore_index=True or if the Series has a name`` (:issue:`30871`)
- Set operations on an object-dtype :class:`Index` now always return object-dtype results (:issue:`31401`)
- Bug in :meth:`AbstractHolidayCalendar.holidays` when no rules were defined (:issue:`31415`)
+- Bug in :meth:`Series.combine` was changing the result's dtype for nullable integers (:issue:`31899`)
-
.. ---------------------------------------------------------------------------
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 256586f3d36a1..cef8a2113d77d 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -2691,7 +2691,7 @@ def combine(self, other, func, fill_value=None) -> "Series":
elif is_extension_array_dtype(self.values):
# The function can return something of any type, so check
# if the type is compatible with the calling EA.
- new_values = try_cast_to_ea(self._values, new_values)
+ new_values = try_cast_to_ea(self._values, new_values, dtype=self.dtype)
return self._constructor(new_values, index=new_index, name=new_name)
def combine_first(self, other) -> "Series":
diff --git a/pandas/tests/extension/base/methods.py b/pandas/tests/extension/base/methods.py
index 22e53dbc89f01..97257bb03927c 100644
--- a/pandas/tests/extension/base/methods.py
+++ b/pandas/tests/extension/base/methods.py
@@ -193,14 +193,15 @@ def test_combine_add(self, data_repeated):
expected = pd.Series(
orig_data1._from_sequence(
[a + b for (a, b) in zip(list(orig_data1), list(orig_data2))]
- )
+ ),
+ dtype=orig_data1.dtype,
)
self.assert_series_equal(result, expected)
-
val = s1.iloc[0]
result = s1.combine(val, lambda x1, x2: x1 + x2)
expected = pd.Series(
- orig_data1._from_sequence([a + val for a in list(orig_data1)])
+ orig_data1._from_sequence([a + val for a in list(orig_data1)]),
+ dtype=orig_data1.dtype,
)
self.assert_series_equal(result, expected)
| - [ ] closes #31899
- [ ] tests added / passed
- [ ] passes `black pandas`
- [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
Haven't added new tests, but have changed the expected return dtype in existing ones
EDIT
----
This is the wrong fix - for, say
```
s1.combine(s2, lambda x, y: x<y)
```
the dtype _would_ be expected to change (in this case to bool) | https://api.github.com/repos/pandas-dev/pandas/pulls/32012 | 2020-02-15T10:59:40Z | 2020-02-28T10:30:50Z | null | 2020-10-10T14:15:00Z |
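The EDIT above is the crux: `try_cast_to_ea` should restore the original dtype only when the combining function preserved it. A stdlib-only sketch of that decision, with plain lists standing in for extension arrays (names are illustrative, not pandas internals):

```python
def combine_values(left, right, func, dtype):
    # Apply func elementwise, mirroring Series.combine.
    result = [func(a, b) for a, b in zip(left, right)]
    # Cast back to the original dtype only when func preserved it;
    # a comparison like x < y legitimately changes the type to bool.
    if all(type(r) is dtype for r in result):
        return result, dtype
    return result, type(result[0])
```

Here `type(r) is dtype` rather than `isinstance` matters: `bool` subclasses `int`, so `isinstance(True, int)` would wrongly report the type as preserved.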
CI: temporary fix to the CI | diff --git a/pandas/tests/scalar/timedelta/test_arithmetic.py b/pandas/tests/scalar/timedelta/test_arithmetic.py
index 5fc991df49424..60e278f47d0f8 100644
--- a/pandas/tests/scalar/timedelta/test_arithmetic.py
+++ b/pandas/tests/scalar/timedelta/test_arithmetic.py
@@ -384,8 +384,11 @@ def test_td_div_nan(self, nan):
result = td / nan
assert result is NaT
- result = td // nan
- assert result is NaT
+ # TODO: Don't leave commented, this is just a temporary fix for
+ # https://github.com/pandas-dev/pandas/issues/31992
+
+ # result = td // nan
+ # assert result is NaT
# ---------------------------------------------------------------
# Timedelta.__rdiv__
| ref #31992
---
This is just a temporary fix to the CI.
Until someone smarter than me fixes it :)
| https://api.github.com/repos/pandas-dev/pandas/pulls/32011 | 2020-02-15T10:56:36Z | 2020-02-15T13:14:12Z | 2020-02-15T13:14:12Z | 2020-02-29T10:30:46Z |
DOC: Fix errors in pandas.Series.argmax | diff --git a/pandas/core/base.py b/pandas/core/base.py
index 56d3596f71813..69a4ecd03a2a6 100644
--- a/pandas/core/base.py
+++ b/pandas/core/base.py
@@ -929,11 +929,17 @@ def argmax(self, axis=None, skipna=True, *args, **kwargs):
"""
Return an ndarray of the maximum argument indexer.
+ If multiple values equal the maximum, the first row position with that
+ value is returned.
+
Parameters
----------
axis : {None}
Dummy argument for consistency with Series.
skipna : bool, default True
+ Exclude NA/null values when showing the result.
+ *args, **kwargs
+ Additional arguments and keywords for compatibility with NumPy.
Returns
-------
@@ -942,7 +948,22 @@ def argmax(self, axis=None, skipna=True, *args, **kwargs):
See Also
--------
- numpy.ndarray.argmax
+ numpy.ndarray.argmax : Returns the indices of the maximum values along an axis.
+
+ Examples
+ --------
+ >>> s = pd.Series(data=[1, None, 5, 4, 5],
+ ... index=['A', 'B', 'C', 'D', 'E'])
+ >>> s
+ A 1.0
+ B NaN
+ C 5.0
+ D 4.0
+ E 5.0
+ dtype: float64
+
+ >>> s.argmax()
+ 2
"""
nv.validate_minmax_axis(axis)
nv.validate_argmax_with_skipna(skipna, args, kwargs)
| - [x] closes https://github.com/pandanistas/pandanistas_sprint_jakarta2020/issues/18
- [ ] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
Output of `python scripts/validate_docstrings.py pandas.Series.argmax`:
```
################################################################################
################################## Validation ##################################
################################################################################
``` | https://api.github.com/repos/pandas-dev/pandas/pulls/32010 | 2020-02-15T10:55:10Z | 2020-02-15T11:32:58Z | null | 2020-02-15T11:33:09Z |
DOC: Improve documentation for Index.where | diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index c896e68f7a188..6b6753ff31e29 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -3936,18 +3936,35 @@ def memory_usage(self, deep: bool = False) -> int:
def where(self, cond, other=None):
"""
- Return an Index of same shape as self and whose corresponding
- entries are from self where cond is True and otherwise are from
- other.
+ Replace values where the condition is False.
+
+ The replacement is taken from other.
Parameters
----------
cond : bool array-like with the same length as self
- other : scalar, or array-like
+ Condition to select the values on.
+ other : scalar, or array-like, default None
+ Replacement if the condition is False.
Returns
-------
- Index
+ pandas.Index
+ A copy of self with values replaced from other
+ where the condition is False.
+
+ See Also
+ --------
+ Series.where : Same method for Series.
+ DataFrame.where : Same method for DataFrame.
+
+ Examples
+ --------
+ >>> idx = pd.Index(['car', 'bike', 'train', 'tractor'])
+ >>> idx
+ Index(['car', 'bike', 'train', 'tractor'], dtype='object')
+ >>> idx.where(idx.isin(['car', 'train']), 'other')
+ Index(['car', 'other', 'train', 'other'], dtype='object')
"""
if other is None:
other = self._na_value
| - [x] closes https://github.com/pandanistas/pandanistas_sprint_jakarta2020/issues/15
- [ ] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/32009 | 2020-02-15T10:50:46Z | 2020-02-26T20:38:48Z | 2020-02-26T20:38:47Z | 2020-02-26T20:39:02Z |
DOC: Add description for tz parameter in DatetimeIndex | diff --git a/pandas/core/indexes/datetimes.py b/pandas/core/indexes/datetimes.py
index e303e487b1a7d..234dc00210e87 100644
--- a/pandas/core/indexes/datetimes.py
+++ b/pandas/core/indexes/datetimes.py
@@ -93,6 +93,8 @@ class DatetimeIndex(DatetimeTimedeltaMixin):
'infer' can be passed in order to set the frequency of the index as the
inferred frequency upon creation.
tz : pytz.timezone or dateutil.tz.tzfile
+ Time zone information object corresponding to the desired time zone.
+ For example: :func:`dateutil.tz.UTC`.
ambiguous : 'infer', bool-ndarray, 'NaT', default 'raise'
When clocks moved backward due to DST, ambiguous times may arise.
For example in Central European Time (UTC+01), when going from 03:00
| - [ ] closes #xxxx
- [ ] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/32008 | 2020-02-15T10:40:58Z | 2020-03-07T20:37:12Z | null | 2020-03-07T20:37:13Z |
CLN 29574 Replace old string formatting | diff --git a/pandas/tests/frame/test_to_csv.py b/pandas/tests/frame/test_to_csv.py
index aeff92971b42a..86c9a98377f3f 100644
--- a/pandas/tests/frame/test_to_csv.py
+++ b/pandas/tests/frame/test_to_csv.py
@@ -687,7 +687,7 @@ def _make_frame(names=None):
df.to_csv(path)
for i in [6, 7]:
- msg = "len of {i}, but only 5 lines in file".format(i=i)
+ msg = f"len of {i}, but only 5 lines in file"
with pytest.raises(ParserError, match=msg):
read_csv(path, header=list(range(i)), index_col=0)
@@ -744,7 +744,7 @@ def test_to_csv_withcommas(self):
def test_to_csv_mixed(self):
def create_cols(name):
- return ["{name}{i:03d}".format(name=name, i=i) for i in range(5)]
+ return [f"{name}{i:03d}" for i in range(5)]
df_float = DataFrame(
np.random.randn(100, 5), dtype="float64", columns=create_cols("float")
| It's my first contribution; I modified the string formatting to use f-strings.
- [ ] tests added / passed
- [ ] passes `black pandas`
- [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] pandas/tests/frame/test_to_csv.py
| https://api.github.com/repos/pandas-dev/pandas/pulls/32007 | 2020-02-15T10:39:50Z | 2020-02-15T19:05:03Z | 2020-02-15T19:05:03Z | 2020-02-16T06:31:37Z |
DOC: Fix pandas.index.copy summary documentation | diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index 3d549405592d6..e8d65fbf11128 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -825,20 +825,24 @@ def repeat(self, repeats, axis=None):
def copy(self, name=None, deep=False, dtype=None, names=None):
"""
- Make a copy of this object. Name and dtype sets those attributes on
- the new object.
+ Make a copy of this object.
+
+ Name and dtype set those attributes on the new object.
Parameters
----------
- name : Label
+ name : Label, optional
+ Set name for new object.
deep : bool, default False
dtype : numpy dtype or pandas type, optional
+ Set dtype for new object.
names : list-like, optional
Kept for compatibility with MultiIndex. Should not be used.
Returns
-------
Index
+ Index refers to the new object, which is a copy of this object.
Notes
-----
| - [ ] closes https://github.com/pandanistas/pandanistas_sprint_jakarta2020/issues/8
- [ ] tests added / passed
- [ ] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
Output of `python scripts/validate_docstrings.py pandas.Index.copy`
################################################################################
######################## Docstring (pandas.Index.copy) ########################
################################################################################
Make a copy of this object.
Name and dtype set those attributes on the new object.
Parameters
----------
name : Label, optional
Set name for new object.
deep : bool, default False
dtype : numpy dtype or pandas type, optional
Set dtype for new object.
names : list-like, optional
Kept for compatibility with MultiIndex. Should not be used.
Returns
-------
Index
Index refers to the new object, which is a copy of this object.
Notes
-----
In most cases, there should be no functional difference from using
``deep``, but if ``deep`` is passed it will attempt to deepcopy.
################################################################################
################################## Validation ##################################
################################################################################
3 Errors found:
Parameter "deep" has no description
See Also section not found
No examples section found | https://api.github.com/repos/pandas-dev/pandas/pulls/32006 | 2020-02-15T10:35:39Z | 2020-02-26T20:35:42Z | 2020-02-26T20:35:41Z | 2020-02-26T20:36:11Z |
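The `deep` parameter flagged as undocumented behaves like Python's own `copy` module: a shallow copy shares nested data, a deep copy duplicates it. A stdlib sketch, with a dict standing in for the Index's attributes:

```python
import copy

original = {"name": "idx", "values": [1, 2, 3]}

shallow = copy.copy(original)       # deep=False analogue: shares nested data
deep = copy.deepcopy(original)      # deep=True analogue: duplicates it

# Mutating through the shallow copy is visible in the original...
shallow["values"].append(4)
```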
doc: Add period to parameter description | diff --git a/pandas/core/indexes/period.py b/pandas/core/indexes/period.py
index 986f87ffe3734..0b85433b699a8 100644
--- a/pandas/core/indexes/period.py
+++ b/pandas/core/indexes/period.py
@@ -85,7 +85,7 @@ class PeriodIndex(DatetimeIndexOpsMixin, Int64Index):
copy : bool
Make a copy of input ndarray.
freq : str or period object, optional
- One of pandas period strings or corresponding objects
+ One of pandas period strings or corresponding objects.
year : int, array, or Series, default None
month : int, array, or Series, default None
quarter : int, array, or Series, default None
| - [ ] closes #xxxx
- [ ] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
```
there are a few errors left:
################################################################################
################################## Validation ##################################
################################################################################
11 Errors found:
Parameters {'**fields', 'name', 'ordinal'} not documented
Unknown parameters {'second', 'minute', 'year', 'month', 'quarter', 'hour', 'day'}
Parameter "year" has no description
Parameter "month" has no description
Parameter "quarter" has no description
Parameter "day" has no description
Parameter "hour" has no description
Parameter "minute" has no description
Parameter "second" has no description
Parameter "dtype" has no description
Examples do not pass tests
``` | https://api.github.com/repos/pandas-dev/pandas/pulls/32005 | 2020-02-15T10:34:47Z | 2020-02-15T19:06:26Z | 2020-02-15T19:06:26Z | 2020-02-15T19:06:35Z |
DOC: Update pandas.DataFrame.droplevel docstring | diff --git a/ci/code_checks.sh b/ci/code_checks.sh
index bb7d8a388e6e2..e2dc543360a62 100755
--- a/ci/code_checks.sh
+++ b/ci/code_checks.sh
@@ -269,7 +269,7 @@ if [[ -z "$CHECK" || "$CHECK" == "doctests" ]]; then
MSG='Doctests generic.py' ; echo $MSG
pytest -q --doctest-modules pandas/core/generic.py \
- -k"-_set_axis_name -_xs -describe -droplevel -groupby -interpolate -pct_change -pipe -reindex -reindex_axis -to_json -transpose -values -xs -to_clipboard"
+ -k"-_set_axis_name -_xs -describe -groupby -interpolate -pct_change -pipe -reindex -reindex_axis -to_json -transpose -values -xs -to_clipboard"
RET=$(($RET + $?)) ; echo $MSG "DONE"
MSG='Doctests groupby.py' ; echo $MSG
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index a6ab0d4034ddb..ff7c481d550d4 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -602,6 +602,10 @@ def droplevel(self: FrameOrSeries, level, axis=0) -> FrameOrSeries:
of levels.
axis : {0 or 'index', 1 or 'columns'}, default 0
+ Axis along which the level(s) is removed:
+
+ * 0 or 'index': remove level(s) in column.
+ * 1 or 'columns': remove level(s) in row.
Returns
-------
@@ -617,7 +621,7 @@ def droplevel(self: FrameOrSeries, level, axis=0) -> FrameOrSeries:
... ]).set_index([0, 1]).rename_axis(['a', 'b'])
>>> df.columns = pd.MultiIndex.from_tuples([
- ... ('c', 'e'), ('d', 'f')
+ ... ('c', 'e'), ('d', 'f')
... ], names=['level_1', 'level_2'])
>>> df
@@ -636,7 +640,7 @@ def droplevel(self: FrameOrSeries, level, axis=0) -> FrameOrSeries:
6 7 8
10 11 12
- >>> df.droplevel('level2', axis=1)
+ >>> df.droplevel('level_2', axis=1)
level_1 c d
a b
1 2 3 4
| - [ ] closes https://github.com/pandanistas/pandanistas_sprint_jakarta2020/issues/23
- [ ] tests added / passed
- [X] passes `black pandas`
- [X] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
Output of `python scripts/validate_docstrings.py pandas.DataFrame.droplevel`:
```
################################################################################
#################### Docstring (pandas.DataFrame.droplevel) ####################
################################################################################
Return DataFrame with requested index / column level(s) removed.
.. versionadded:: 0.24.0
Parameters
----------
level : int, str, or list-like
If a string is given, must be the name of a level
If list-like, elements must be names or positional indexes
of levels.
axis : {0 or 'index', 1 or 'columns'}, default 0
Axis along which the level(s) is removed:
* 0 or 'index': remove level(s) in column.
* 1 or 'columns': remove level(s) in row.
Returns
-------
DataFrame
DataFrame with requested index / column level(s) removed.
Examples
--------
>>> df = pd.DataFrame([
... [1, 2, 3, 4],
... [5, 6, 7, 8],
... [9, 10, 11, 12]
... ]).set_index([0, 1]).rename_axis(['a', 'b'])
>>> df.columns = pd.MultiIndex.from_tuples([
... ('c', 'e'), ('d', 'f')
... ], names=['level_1', 'level_2'])
>>> df
level_1 c d
level_2 e f
a b
1 2 3 4
5 6 7 8
9 10 11 12
>>> df.droplevel('a')
level_1 c d
level_2 e f
b
2 3 4
6 7 8
10 11 12
>>> df.droplevel('level_2', axis=1)
level_1 c d
a b
1 2 3 4
5 6 7 8
9 10 11 12
################################################################################
################################## Validation ##################################
################################################################################
1 Errors found:
See Also section not found
``` | https://api.github.com/repos/pandas-dev/pandas/pulls/32004 | 2020-02-15T10:29:42Z | 2020-02-25T01:54:46Z | 2020-02-25T01:54:46Z | 2020-02-25T01:54:52Z |
DOC: Fix SA04 errors from DataFrame.melt | diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 9fe1ec7b792c8..0c68b76efc66b 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -6268,9 +6268,10 @@ def unstack(self, level=-1, fill_value=None):
See Also
--------
%(other)s
- pivot_table
- DataFrame.pivot
- Series.explode
+ pivot_table : Create a spreadsheet-style pivot table as a DataFrame.
+ DataFrame.pivot : Return reshaped DataFrame organized by given index / column values.
+ Series.explode : Transform each element of a list-like to a row,
+ replicating the index values.
Examples
--------
@@ -6335,7 +6336,10 @@ def unstack(self, level=-1, fill_value=None):
% dict(
caller="df.melt(",
versionadded="\n .. versionadded:: 0.20.0\n",
- other="melt",
+ other=(
+ "melt: Unpivot a DataFrame from wide to long format,"
+ " optionally leaving identifiers set."
+ ),
)
)
def melt(
| - [ ] closes https://github.com/pandanistas/pandanistas_sprint_jakarta2020/issues/10
- [ ] tests added / passed
- [ ] passes `black pandas`
- [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/32003 | 2020-02-15T10:11:25Z | 2020-02-26T20:48:28Z | null | 2020-03-02T12:52:34Z |
DOC: Update the pandas.DataFrame.plot docstring | diff --git a/pandas/plotting/_core.py b/pandas/plotting/_core.py
index d3db539084609..66941598a498f 100644
--- a/pandas/plotting/_core.py
+++ b/pandas/plotting/_core.py
@@ -608,6 +608,7 @@ class PlotAccessor(PandasObject):
- 'hexbin' : hexbin plot.
figsize : a tuple (width, height) in inches
+ The size of the figure to create in matplotlib.
use_index : bool, default True
Use index as ticks for x axis.
title : str or list
@@ -637,7 +638,9 @@ class PlotAccessor(PandasObject):
yticks : sequence
Values to use for the yticks.
xlim : 2-tuple/list
+ Set or query x-axis limits.
ylim : 2-tuple/list
+ Set or query y-axis limits.
rot : int, default None
Rotation for ticks (xticks for vertical, yticks for horizontal
plots).
| Give description for 'figsize', 'xlim', and 'ylim'.
- [x] closes https://github.com/pandanistas/pandanistas_sprint_jakarta2020/issues/11
- [ ] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
Output of `python scripts/validate_docstrings.py pandas.DataFrame.plot`:
################################################################################
###################### Docstring (pandas.DataFrame.plot) ######################
################################################################################
Make plots of Series or DataFrame.
Uses the backend specified by the
option ``plotting.backend``. By default, matplotlib is used.
Parameters
----------
data : Series or DataFrame
The object for which the method is called.
x : label or position, default None
Only used if data is a DataFrame.
y : label, position or list of label, positions, default None
Allows plotting of one column versus another. Only used if data is a
DataFrame.
kind : str
The kind of plot to produce:
- 'line' : line plot (default)
- 'bar' : vertical bar plot
- 'barh' : horizontal bar plot
- 'hist' : histogram
- 'box' : boxplot
- 'kde' : Kernel Density Estimation plot
- 'density' : same as 'kde'
- 'area' : area plot
- 'pie' : pie plot
- 'scatter' : scatter plot
- 'hexbin' : hexbin plot.
figsize : a tuple (width, height) in inches
The size of the figure to create in matplotlib.
use_index : bool, default True
Use index as ticks for x axis.
title : str or list
Title to use for the plot. If a string is passed, print the string
at the top of the figure. If a list is passed and `subplots` is
True, print each item in the list above the corresponding subplot.
grid : bool, default None (matlab style default)
Axis grid lines.
legend : bool or {'reverse'}
Place legend on axis subplots.
style : list or dict
The matplotlib line style per column.
logx : bool or 'sym', default False
Use log scaling or symlog scaling on x axis.
.. versionchanged:: 0.25.0
logy : bool or 'sym' default False
Use log scaling or symlog scaling on y axis.
.. versionchanged:: 0.25.0
loglog : bool or 'sym', default False
Use log scaling or symlog scaling on both x and y axes.
.. versionchanged:: 0.25.0
xticks : sequence
Values to use for the xticks.
yticks : sequence
Values to use for the yticks.
xlim : 2-tuple/list
Set or query x-axis limits.
ylim : 2-tuple/list
Set or query y-axis limits.
rot : int, default None
Rotation for ticks (xticks for vertical, yticks for horizontal
plots).
fontsize : int, default None
Font size for xticks and yticks.
colormap : str or matplotlib colormap object, default None
Colormap to select colors from. If string, load colormap with that
name from matplotlib.
colorbar : bool, optional
If True, plot colorbar (only relevant for 'scatter' and 'hexbin'
plots).
position : float
Specify relative alignments for bar plot layout.
From 0 (left/bottom-end) to 1 (right/top-end). Default is 0.5
(center).
table : bool, Series or DataFrame, default False
If True, draw a table using the data in the DataFrame and the data
will be transposed to meet matplotlib's default layout.
If a Series or DataFrame is passed, use passed data to draw a
table.
yerr : DataFrame, Series, array-like, dict and str
See :ref:`Plotting with Error Bars <visualization.errorbars>` for
detail.
xerr : DataFrame, Series, array-like, dict and str
Equivalent to yerr.
mark_right : bool, default True
When using a secondary_y axis, automatically mark the column
labels with "(right)" in the legend.
include_bool : bool, default is False
If True, boolean values can be plotted.
backend : str, default None
Backend to use instead of the backend specified in the option
``plotting.backend``. For instance, 'matplotlib'. Alternatively, to
specify the ``plotting.backend`` for the whole session, set
``pd.options.plotting.backend``.
.. versionadded:: 1.0.0
**kwargs
Options to pass to matplotlib plotting method.
Returns
-------
:class:`matplotlib.axes.Axes` or numpy.ndarray of them
If the backend is not the default matplotlib one, the return value
will be the object returned by the backend.
Notes
-----
- See matplotlib documentation online for more on this subject
- If `kind` = 'bar' or 'barh', you can specify relative alignments
for bar plot layout by `position` keyword.
From 0 (left/bottom-end) to 1 (right/top-end). Default is 0.5
(center)
################################################################################
################################## Validation ##################################
################################################################################
3 Errors found:
Unknown parameters {'colorbar', '**kwargs', 'style', 'use_index', 'rot', 'mark_right', 'grid', 'y', 'fontsize', 'position', 'kind', 'include_bool', 'logy', 'x', 'yerr', 'xlim', 'backend', 'table', 'logx', 'ylim', 'yticks', 'xerr', 'loglog', 'legend', 'colormap', 'figsize', 'title', 'xticks'}
See Also section not found
No examples section found | https://api.github.com/repos/pandas-dev/pandas/pulls/32002 | 2020-02-15T09:46:58Z | 2020-02-26T20:47:43Z | null | 2020-02-26T20:47:44Z |
DOC: PR09 Add . in the parameter descriptions | diff --git a/pandas/_testing.py b/pandas/_testing.py
index 01d2bfe0458c8..0b3004ce12013 100644
--- a/pandas/_testing.py
+++ b/pandas/_testing.py
@@ -1014,7 +1014,7 @@ def assert_extension_array_equal(
Parameters
----------
left, right : ExtensionArray
- The two arrays to compare
+ The two arrays to compare.
check_dtype : bool, default True
Whether to check if the ExtensionArray dtypes are identical.
check_less_precise : bool or int, default False
diff --git a/pandas/core/groupby/groupby.py b/pandas/core/groupby/groupby.py
index 1d7527e73079c..6b2880810dcb2 100644
--- a/pandas/core/groupby/groupby.py
+++ b/pandas/core/groupby/groupby.py
@@ -2329,7 +2329,7 @@ def shift(self, periods=1, freq=None, axis=0, fill_value=None):
periods : int, default 1
Number of periods to shift.
freq : str, optional
- Frequency string
+ Frequency string.
axis : axis to shift, default 0
fill_value : optional
diff --git a/pandas/core/indexers.py b/pandas/core/indexers.py
index fe475527f4596..cadae9da6092f 100644
--- a/pandas/core/indexers.py
+++ b/pandas/core/indexers.py
@@ -296,7 +296,7 @@ def check_array_indexer(array: AnyArrayLike, indexer: Any) -> Any:
indexer : array-like or list-like
The array-like that's used to index. List-like input that is not yet
a numpy array or an ExtensionArray is converted to one. Other input
- types are passed through as is
+ types are passed through as is.
Returns
-------
diff --git a/pandas/core/indexes/period.py b/pandas/core/indexes/period.py
index 986f87ffe3734..0b85433b699a8 100644
--- a/pandas/core/indexes/period.py
+++ b/pandas/core/indexes/period.py
@@ -85,7 +85,7 @@ class PeriodIndex(DatetimeIndexOpsMixin, Int64Index):
copy : bool
Make a copy of input ndarray.
freq : str or period object, optional
- One of pandas period strings or corresponding objects
+ One of pandas period strings or corresponding objects.
year : int, array, or Series, default None
month : int, array, or Series, default None
quarter : int, array, or Series, default None
diff --git a/pandas/io/formats/style.py b/pandas/io/formats/style.py
index 9c46a0036ab0d..d0c64d54f30d6 100644
--- a/pandas/io/formats/style.py
+++ b/pandas/io/formats/style.py
@@ -85,7 +85,7 @@ class Styler:
number and ``<num_col>`` is the column number.
na_rep : str, optional
Representation for missing values.
- If ``na_rep`` is None, no special formatting is applied
+ If ``na_rep`` is None, no special formatting is applied.
.. versionadded:: 1.0.0
@@ -446,7 +446,7 @@ def format(self, formatter, subset=None, na_rep: Optional[str] = None) -> "Style
Parameters
----------
formatter : str, callable, dict or None
- If ``formatter`` is None, the default formatter is used
+ If ``formatter`` is None, the default formatter is used.
subset : IndexSlice
An argument to ``DataFrame.loc`` that restricts which elements
``formatter`` is applied to.
diff --git a/pandas/io/sql.py b/pandas/io/sql.py
index 69e5a973ff706..e8666c495d39a 100644
--- a/pandas/io/sql.py
+++ b/pandas/io/sql.py
@@ -356,7 +356,7 @@ def read_sql(
sql : str or SQLAlchemy Selectable (select or text object)
SQL query to be executed or a table name.
con : SQLAlchemy connectable (engine/connection) or database str URI
- or DBAPI2 connection (fallback mode)'
+ or DBAPI2 connection (fallback mode).
Using SQLAlchemy makes it possible to use any DB supported by that
library. If a DBAPI2 object, only sqlite3 is supported. The user is responsible
| - [ ] closes #xxxx
- [ ] tests added / passed
- [ ] passes `black pandas`
- [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/32001 | 2020-02-15T09:31:01Z | 2020-02-15T10:28:55Z | 2020-02-15T10:28:55Z | 2020-02-15T10:29:08Z |
DOC: PR09 Add missing . on Parameter con description | diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 04e8b78fb1b87..1139d22d53e7d 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -2415,7 +2415,7 @@ def to_sql(
library. Legacy support is provided for sqlite3.Connection objects. The user
is responsible for engine disposal and connection closure for the SQLAlchemy
connectable See `here \
- <https://docs.sqlalchemy.org/en/13/core/connections.html>`_
+ <https://docs.sqlalchemy.org/en/13/core/connections.html>`_.
schema : str, optional
Specify the schema (if database flavor supports this). If None, use
| - [ ] closes https://github.com/pandanistas/pandanistas_sprint_jakarta2020/issues/22
- [ ] tests added / passed
- [X] passes `black pandas`
- [X] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
Output of `python scripts/validate_docstrings.py pandas.DataFrame.to_sql`:
```
################################################################################
##################### Docstring (pandas.DataFrame.to_sql) #####################
################################################################################
Write records stored in a DataFrame to a SQL database.
Databases supported by SQLAlchemy [1]_ are supported. Tables can be
newly created, appended to, or overwritten.
Parameters
----------
name : str
Name of SQL table.
con : sqlalchemy.engine.Engine or sqlite3.Connection
Using SQLAlchemy makes it possible to use any DB supported by that
library. Legacy support is provided for sqlite3.Connection objects. The user
is responsible for engine disposal and connection closure for the SQLAlchemy
connectable See `here <https://docs.sqlalchemy.org/en/13/core/connections.html>`_.
schema : str, optional
Specify the schema (if database flavor supports this). If None, use
default schema.
if_exists : {'fail', 'replace', 'append'}, default 'fail'
How to behave if the table already exists.
* fail: Raise a ValueError.
* replace: Drop the table before inserting new values.
* append: Insert new values to the existing table.
index : bool, default True
Write DataFrame index as a column. Uses `index_label` as the column
name in the table.
index_label : str or sequence, default None
Column label for index column(s). If None is given (default) and
`index` is True, then the index names are used.
A sequence should be given if the DataFrame uses MultiIndex.
chunksize : int, optional
Specify the number of rows in each batch to be written at a time.
By default, all rows will be written at once.
dtype : dict or scalar, optional
Specifying the datatype for columns. If a dictionary is used, the
keys should be the column names and the values should be the
SQLAlchemy types or strings for the sqlite3 legacy mode. If a
scalar is provided, it will be applied to all columns.
method : {None, 'multi', callable}, optional
Controls the SQL insertion clause used:
* None : Uses standard SQL ``INSERT`` clause (one per row).
* 'multi': Pass multiple values in a single ``INSERT`` clause.
* callable with signature ``(pd_table, conn, keys, data_iter)``.
Details and a sample callable implementation can be found in the
section :ref:`insert method <io.sql.method>`.
.. versionadded:: 0.24.0
Raises
------
ValueError
When the table already exists and `if_exists` is 'fail' (the
default).
See Also
--------
read_sql : Read a DataFrame from a table.
Notes
-----
Timezone aware datetime columns will be written as
``Timestamp with timezone`` type with SQLAlchemy if supported by the
database. Otherwise, the datetimes will be stored as timezone unaware
timestamps local to the original timezone.
.. versionadded:: 0.24.0
References
----------
.. [1] https://docs.sqlalchemy.org
.. [2] https://www.python.org/dev/peps/pep-0249/
Examples
--------
Create an in-memory SQLite database.
>>> from sqlalchemy import create_engine
>>> engine = create_engine('sqlite://', echo=False)
Create a table from scratch with 3 rows.
>>> df = pd.DataFrame({'name' : ['User 1', 'User 2', 'User 3']})
>>> df
name
0 User 1
1 User 2
2 User 3
>>> df.to_sql('users', con=engine)
>>> engine.execute("SELECT * FROM users").fetchall()
[(0, 'User 1'), (1, 'User 2'), (2, 'User 3')]
>>> df1 = pd.DataFrame({'name' : ['User 4', 'User 5']})
>>> df1.to_sql('users', con=engine, if_exists='append')
>>> engine.execute("SELECT * FROM users").fetchall()
[(0, 'User 1'), (1, 'User 2'), (2, 'User 3'),
(0, 'User 4'), (1, 'User 5')]
Overwrite the table with just ``df1``.
>>> df1.to_sql('users', con=engine, if_exists='replace',
... index_label='id')
>>> engine.execute("SELECT * FROM users").fetchall()
[(0, 'User 4'), (1, 'User 5')]
Specify the dtype (especially useful for integers with missing values).
Notice that while pandas is forced to store the data as floating point,
the database supports nullable integers. When fetching the data with
Python, we get back integer scalars.
>>> df = pd.DataFrame({"A": [1, None, 2]})
>>> df
A
0 1.0
1 NaN
2 2.0
>>> from sqlalchemy.types import Integer
>>> df.to_sql('integers', con=engine, index=False,
... dtype={"A": Integer()})
>>> engine.execute("SELECT * FROM integers").fetchall()
[(1,), (None,), (2,)]
################################################################################
################################## Validation ##################################
################################################################################
``` | https://api.github.com/repos/pandas-dev/pandas/pulls/32000 | 2020-02-15T09:30:45Z | 2020-02-15T10:58:12Z | 2020-02-15T10:58:12Z | 2020-02-15T10:58:37Z |
DOC PR07: Add description to parameter skipna | diff --git a/pandas/core/base.py b/pandas/core/base.py
index 56d3596f71813..69a4ecd03a2a6 100644
--- a/pandas/core/base.py
+++ b/pandas/core/base.py
@@ -929,11 +929,17 @@ def argmax(self, axis=None, skipna=True, *args, **kwargs):
"""
Return an ndarray of the maximum argument indexer.
+ If multiple values equal the maximum, the first row label with that
+ value is returned.
+
Parameters
----------
axis : {None}
Dummy argument for consistency with Series.
skipna : bool, default True
+ Exclude NA/null values when showing the result.
+ *args, **kwargs
+ Additional arguments and keywords for compatibility with NumPy.
Returns
-------
@@ -942,7 +948,22 @@ def argmax(self, axis=None, skipna=True, *args, **kwargs):
See Also
--------
- numpy.ndarray.argmax
+ numpy.ndarray.argmax : Returns the indices of the maximum values along an axis.
+
+ Examples
+ --------
+ >>> s = pd.Series(data=[1, None, 5, 4, 5],
+ ... index=['A', 'B', 'C', 'D', 'E'])
+ >>> s
+ A 1.0
+ B NaN
+ C 5.0
+ D 4.0
+ E 5.0
+ dtype: float64
+
+ >>> s.argmax()
+ 2
"""
nv.validate_minmax_axis(axis)
nv.validate_argmax_with_skipna(skipna, args, kwargs)
| - [ ] closes https://github.com/pandanistas/pandanistas_sprint_jakarta2020/issues/18
- [ ] tests added / passed
- [ ] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
output of `python scripts/validate_docstrings.py pandas.Series.argmax`:
```
################################################################################
################################## Validation ##################################
################################################################################
4 Errors found:
No extended summary found
Parameters {'*args', '**kwargs'} not documented
Missing description for See Also "numpy.ndarray.argmax" reference
No examples section found
```
| https://api.github.com/repos/pandas-dev/pandas/pulls/31999 | 2020-02-15T09:29:50Z | 2020-02-15T11:33:43Z | null | 2020-02-15T11:33:47Z |
DOC PR09 Add missing . on freq parameter on groupby.py | diff --git a/pandas/core/groupby/groupby.py b/pandas/core/groupby/groupby.py
index 1d7527e73079c..6b2880810dcb2 100644
--- a/pandas/core/groupby/groupby.py
+++ b/pandas/core/groupby/groupby.py
@@ -2329,7 +2329,7 @@ def shift(self, periods=1, freq=None, axis=0, fill_value=None):
periods : int, default 1
Number of periods to shift.
freq : str, optional
- Frequency string
+ Frequency string.
axis : axis to shift, default 0
fill_value : optional
| - [ ] closes #xxxx
- [ ] tests added / passed
- [ ] passes `black pandas`
- [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/31998 | 2020-02-15T09:07:44Z | 2020-02-15T09:55:11Z | 2020-02-15T09:55:11Z | 2020-02-15T10:07:58Z |
DOC PR09 on freq parameter | diff --git a/pandas/core/groupby/groupby.py b/pandas/core/groupby/groupby.py
index 1d7527e73079c..6b2880810dcb2 100644
--- a/pandas/core/groupby/groupby.py
+++ b/pandas/core/groupby/groupby.py
@@ -2329,7 +2329,7 @@ def shift(self, periods=1, freq=None, axis=0, fill_value=None):
periods : int, default 1
Number of periods to shift.
freq : str, optional
- Frequency string
+ Frequency string.
axis : axis to shift, default 0
fill_value : optional
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 256586f3d36a1..d7bdc0f34f744 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -1441,6 +1441,10 @@ def to_string(
| 1 | pig |
| 2 | dog |
| 3 | quetzal |
+
+ See Also
+ --------
+ WIP.
"""
)
@Substitution(klass="Series")
diff --git a/pandas/io/sql.py b/pandas/io/sql.py
index 69e5a973ff706..53a0423ad8d2c 100644
--- a/pandas/io/sql.py
+++ b/pandas/io/sql.py
@@ -356,7 +356,7 @@ def read_sql(
sql : str or SQLAlchemy Selectable (select or text object)
SQL query to be executed or a table name.
con : SQLAlchemy connectable (engine/connection) or database str URI
- or DBAPI2 connection (fallback mode)'
+ or DBAPI2 connection (fallback mode)'.
Using SQLAlchemy makes it possible to use any DB supported by that
library. If a DBAPI2 object, only sqlite3 is supported. The user is responsible
| - [ ] closes #xxxx
- [ ] tests added / passed
- [ ] passes `black pandas`
- [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/31997 | 2020-02-15T09:03:43Z | 2020-02-15T09:05:18Z | null | 2020-02-15T09:05:18Z |
DOC PR09 Add missing dots at con parameter on io/sql.py file | diff --git a/pandas/core/series.py b/pandas/core/series.py
index 256586f3d36a1..d7bdc0f34f744 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -1441,6 +1441,10 @@ def to_string(
| 1 | pig |
| 2 | dog |
| 3 | quetzal |
+
+ See Also
+ --------
+ WIP.
"""
)
@Substitution(klass="Series")
diff --git a/pandas/io/sql.py b/pandas/io/sql.py
index 69e5a973ff706..53a0423ad8d2c 100644
--- a/pandas/io/sql.py
+++ b/pandas/io/sql.py
@@ -356,7 +356,7 @@ def read_sql(
sql : str or SQLAlchemy Selectable (select or text object)
SQL query to be executed or a table name.
con : SQLAlchemy connectable (engine/connection) or database str URI
- or DBAPI2 connection (fallback mode)'
+ or DBAPI2 connection (fallback mode)'.
Using SQLAlchemy makes it possible to use any DB supported by that
library. If a DBAPI2 object, only sqlite3 is supported. The user is responsible
| - [ ] closes #xxxx
- [ ] tests added / passed
- [ ] passes `black pandas`
- [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/31996 | 2020-02-15T08:54:18Z | 2020-02-15T09:04:32Z | null | 2020-02-15T09:04:32Z |
DOC: Add See Also section on series.py | diff --git a/pandas/core/series.py b/pandas/core/series.py
index 256586f3d36a1..d7bdc0f34f744 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -1441,6 +1441,10 @@ def to_string(
| 1 | pig |
| 2 | dog |
| 3 | quetzal |
+
+ See Also
+ --------
+ WIP.
"""
)
@Substitution(klass="Series")
| - [ ] closes #xxxx
- [ ] tests added / passed
- [ ] passes `black pandas`
- [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/31995 | 2020-02-15T08:39:06Z | 2020-02-17T08:23:49Z | null | 2020-02-17T08:23:50Z |
Backport PR: CI:Testing whole doctest, and not specific module | diff --git a/ci/code_checks.sh b/ci/code_checks.sh
index 83ceb11dfcbf4..30d3a3ffe5f7b 100755
--- a/ci/code_checks.sh
+++ b/ci/code_checks.sh
@@ -259,8 +259,7 @@ fi
if [[ -z "$CHECK" || "$CHECK" == "doctests" ]]; then
MSG='Doctests frame.py' ; echo $MSG
- pytest -q --doctest-modules pandas/core/frame.py \
- -k" -itertuples -join -reindex -reindex_axis -round"
+ pytest -q --doctest-modules pandas/core/frame.py
RET=$(($RET + $?)) ; echo $MSG "DONE"
MSG='Doctests series.py' ; echo $MSG
@@ -293,9 +292,8 @@ if [[ -z "$CHECK" || "$CHECK" == "doctests" ]]; then
MSG='Doctests interval classes' ; echo $MSG
pytest -q --doctest-modules \
- pandas/core/indexes/interval.py \
- pandas/core/arrays/interval.py \
- -k"-from_arrays -from_breaks -from_intervals -from_tuples -set_closed -to_tuples -interval_range"
+ pandas/core/indexes/interval.py \
+ pandas/core/arrays/interval.py
RET=$(($RET + $?)) ; echo $MSG "DONE"
MSG='Doctests arrays'; echo $MSG
| xref #31975
Doing as mentioned [here](https://github.com/pandas-dev/pandas/pull/31975#issuecomment-586443425) | https://api.github.com/repos/pandas-dev/pandas/pulls/31993 | 2020-02-15T08:21:19Z | 2020-02-15T09:59:33Z | 2020-02-15T09:59:33Z | 2020-02-15T10:15:40Z |
BUG: Fix wrong reading sparse matrix | diff --git a/doc/source/whatsnew/v1.1.0.rst b/doc/source/whatsnew/v1.1.0.rst
index 718de09a0c3e4..0dd24bcc54933 100644
--- a/doc/source/whatsnew/v1.1.0.rst
+++ b/doc/source/whatsnew/v1.1.0.rst
@@ -572,7 +572,7 @@ Reshaping
Sparse
^^^^^^
- Creating a :class:`SparseArray` from timezone-aware dtype will issue a warning before dropping timezone information, instead of doing so silently (:issue:`32501`)
--
+- Bug in :meth:`arrays.SparseArray.from_spmatrix` wrongly read scipy sparse matrix (:issue:`31991`)
-
ExtensionArray
diff --git a/pandas/core/arrays/sparse/array.py b/pandas/core/arrays/sparse/array.py
index a98875ace09aa..620e157ee54ec 100644
--- a/pandas/core/arrays/sparse/array.py
+++ b/pandas/core/arrays/sparse/array.py
@@ -439,11 +439,10 @@ def from_spmatrix(cls, data):
# our sparse index classes require that the positions be strictly
# increasing. So we need to sort loc, and arr accordingly.
+ data = data.tocsc()
+ data.sort_indices()
arr = data.data
- idx, _ = data.nonzero()
- loc = np.argsort(idx)
- arr = arr.take(loc)
- idx.sort()
+ idx = data.indices
zero = np.array(0, dtype=arr.dtype).item()
dtype = SparseDtype(arr.dtype, zero)
diff --git a/pandas/tests/arrays/sparse/test_accessor.py b/pandas/tests/arrays/sparse/test_accessor.py
index d8a1831cd61ec..2a81b94ce779c 100644
--- a/pandas/tests/arrays/sparse/test_accessor.py
+++ b/pandas/tests/arrays/sparse/test_accessor.py
@@ -41,6 +41,18 @@ def test_from_spmatrix(self, format, labels, dtype):
).astype(sp_dtype)
tm.assert_frame_equal(result, expected)
+ @pytest.mark.parametrize("format", ["csc", "csr", "coo"])
+ @td.skip_if_no_scipy
+ def test_from_spmatrix_including_explicit_zero(self, format):
+ import scipy.sparse
+
+ mat = scipy.sparse.random(10, 2, density=0.5, format=format)
+ mat.data[0] = 0
+ result = pd.DataFrame.sparse.from_spmatrix(mat)
+ dtype = SparseDtype("float64", 0.0)
+ expected = pd.DataFrame(mat.todense()).astype(dtype)
+ tm.assert_frame_equal(result, expected)
+
@pytest.mark.parametrize(
"columns",
[["a", "b"], pd.MultiIndex.from_product([["A"], ["a", "b"]]), ["a", "a"]],
diff --git a/pandas/tests/arrays/sparse/test_array.py b/pandas/tests/arrays/sparse/test_array.py
index cb3a70e934dcb..f1e5050fa8a2e 100644
--- a/pandas/tests/arrays/sparse/test_array.py
+++ b/pandas/tests/arrays/sparse/test_array.py
@@ -208,6 +208,19 @@ def test_from_spmatrix(self, size, format):
expected = mat.toarray().ravel()
tm.assert_numpy_array_equal(result, expected)
+ @pytest.mark.parametrize("format", ["coo", "csc", "csr"])
+ @td.skip_if_no_scipy
+ def test_from_spmatrix_including_explicit_zero(self, format):
+ import scipy.sparse
+
+ mat = scipy.sparse.random(10, 1, density=0.5, format=format)
+ mat.data[0] = 0
+ result = SparseArray.from_spmatrix(mat)
+
+ result = np.asarray(result)
+ expected = mat.toarray().ravel()
+ tm.assert_numpy_array_equal(result, expected)
+
@td.skip_if_no_scipy
def test_from_spmatrix_raises(self):
import scipy.sparse
| - [x] closes #29814
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
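
For context, a minimal self-contained sketch of the mismatch this diff fixes, assuming `scipy` is installed (the matrix values are illustrative, not taken from the PR's tests): `spmatrix.nonzero()` masks out explicitly stored zeros, so pairing it with `data.data` misaligns values, while the fixed approach — a sorted CSC matrix — keeps `data` and `indices` aligned.

```python
import numpy as np
import scipy.sparse

# A 3x1 COO matrix that explicitly stores a zero in its data array.
vals = np.array([0.0, 1.0, 2.0])
rows = np.array([0, 1, 2])
cols = np.array([0, 0, 0])
mat = scipy.sparse.coo_matrix((vals, (rows, cols)), shape=(3, 1))

# nonzero() masks explicitly stored zeros, so it reports only 2 positions
# for the 3 stored values -- pairing it with mat.data misaligns them.
idx, _ = mat.nonzero()
assert len(idx) == 2
assert len(mat.data) == 3

# The fix: convert to CSC and sort indices, so data and indices stay
# aligned (explicit zeros included) with strictly increasing positions,
# as required by pandas' sparse index classes.
csc = mat.tocsc()
csc.sort_indices()
assert len(csc.data) == len(csc.indices) == 3
assert list(csc.indices) == [0, 1, 2]
```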
| https://api.github.com/repos/pandas-dev/pandas/pulls/31991 | 2020-02-15T07:29:20Z | 2020-04-16T21:22:53Z | 2020-04-16T21:22:52Z | 2020-04-16T22:16:53Z |
CLN: 29547 replace old string formatting 7 | diff --git a/pandas/tests/scalar/timestamp/test_constructors.py b/pandas/tests/scalar/timestamp/test_constructors.py
index 737a85faa4c9b..b4a7173da84d0 100644
--- a/pandas/tests/scalar/timestamp/test_constructors.py
+++ b/pandas/tests/scalar/timestamp/test_constructors.py
@@ -314,7 +314,7 @@ def test_constructor_nanosecond(self, result):
def test_constructor_invalid_Z0_isostring(self, z):
# GH 8910
with pytest.raises(ValueError):
- Timestamp("2014-11-02 01:00{}".format(z))
+ Timestamp(f"2014-11-02 01:00{z}")
@pytest.mark.parametrize(
"arg",
@@ -455,9 +455,7 @@ def test_disallow_setting_tz(self, tz):
@pytest.mark.parametrize("offset", ["+0300", "+0200"])
def test_construct_timestamp_near_dst(self, offset):
# GH 20854
- expected = Timestamp(
- "2016-10-30 03:00:00{}".format(offset), tz="Europe/Helsinki"
- )
+ expected = Timestamp(f"2016-10-30 03:00:00{offset}", tz="Europe/Helsinki")
result = Timestamp(expected).tz_convert("Europe/Helsinki")
assert result == expected
diff --git a/pandas/tests/scalar/timestamp/test_rendering.py b/pandas/tests/scalar/timestamp/test_rendering.py
index cab6946bb8d02..a27d233d5ab88 100644
--- a/pandas/tests/scalar/timestamp/test_rendering.py
+++ b/pandas/tests/scalar/timestamp/test_rendering.py
@@ -17,7 +17,7 @@ class TestTimestampRendering:
)
def test_repr(self, date, freq, tz):
# avoid to match with timezone name
- freq_repr = "'{0}'".format(freq)
+ freq_repr = f"'{freq}'"
if tz.startswith("dateutil"):
tz_repr = tz.replace("dateutil", "")
else:
diff --git a/pandas/tests/scalar/timestamp/test_unary_ops.py b/pandas/tests/scalar/timestamp/test_unary_ops.py
index f968144286bd4..78e795e71cd07 100644
--- a/pandas/tests/scalar/timestamp/test_unary_ops.py
+++ b/pandas/tests/scalar/timestamp/test_unary_ops.py
@@ -232,17 +232,17 @@ def test_round_int64(self, timestamp, freq):
# test floor
result = dt.floor(freq)
- assert result.value % unit == 0, "floor not a {} multiple".format(freq)
+ assert result.value % unit == 0, f"floor not a {freq} multiple"
assert 0 <= dt.value - result.value < unit, "floor error"
# test ceil
result = dt.ceil(freq)
- assert result.value % unit == 0, "ceil not a {} multiple".format(freq)
+ assert result.value % unit == 0, f"ceil not a {freq} multiple"
assert 0 <= result.value - dt.value < unit, "ceil error"
# test round
result = dt.round(freq)
- assert result.value % unit == 0, "round not a {} multiple".format(freq)
+ assert result.value % unit == 0, f"round not a {freq} multiple"
assert abs(result.value - dt.value) <= unit // 2, "round error"
if unit % 2 == 0 and abs(result.value - dt.value) == unit // 2:
# round half to even
diff --git a/pandas/tests/series/methods/test_nlargest.py b/pandas/tests/series/methods/test_nlargest.py
index a029965c7394f..b1aa09f387a13 100644
--- a/pandas/tests/series/methods/test_nlargest.py
+++ b/pandas/tests/series/methods/test_nlargest.py
@@ -98,7 +98,7 @@ class TestSeriesNLargestNSmallest:
)
def test_nlargest_error(self, r):
dt = r.dtype
- msg = "Cannot use method 'n(larg|small)est' with dtype {dt}".format(dt=dt)
+ msg = f"Cannot use method 'n(larg|small)est' with dtype {dt}"
args = 2, len(r), 0, -1
methods = r.nlargest, r.nsmallest
for method, arg in product(methods, args):
diff --git a/pandas/tests/series/test_analytics.py b/pandas/tests/series/test_analytics.py
index e6e91b5d4f5f4..6f45b72154805 100644
--- a/pandas/tests/series/test_analytics.py
+++ b/pandas/tests/series/test_analytics.py
@@ -169,10 +169,10 @@ def test_validate_any_all_out_keepdims_raises(self, kwargs, func):
name = func.__name__
msg = (
- r"the '{arg}' parameter is not "
- r"supported in the pandas "
- r"implementation of {fname}\(\)"
- ).format(arg=param, fname=name)
+ f"the '{param}' parameter is not "
+ "supported in the pandas "
+ fr"implementation of {name}\(\)"
+ )
with pytest.raises(ValueError, match=msg):
func(s, **kwargs)
diff --git a/pandas/tests/series/test_api.py b/pandas/tests/series/test_api.py
index f96d6ddfc357e..33706c00c53f4 100644
--- a/pandas/tests/series/test_api.py
+++ b/pandas/tests/series/test_api.py
@@ -136,9 +136,7 @@ def test_constructor_subclass_dict(self, dict_subclass):
def test_constructor_ordereddict(self):
# GH3283
- data = OrderedDict(
- ("col{i}".format(i=i), np.random.random()) for i in range(12)
- )
+ data = OrderedDict((f"col{i}", np.random.random()) for i in range(12))
series = Series(data)
expected = Series(list(data.values()), list(data.keys()))
@@ -258,7 +256,7 @@ def get_dir(s):
tm.makeIntIndex(10),
tm.makeFloatIndex(10),
Index([True, False]),
- Index(["a{}".format(i) for i in range(101)]),
+ Index([f"a{i}" for i in range(101)]),
pd.MultiIndex.from_tuples(zip("ABCD", "EFGH")),
pd.MultiIndex.from_tuples(zip([0, 1, 2, 3], "EFGH")),
],
diff --git a/pandas/tests/series/test_dtypes.py b/pandas/tests/series/test_dtypes.py
index 1fc582156a884..80a024eda7848 100644
--- a/pandas/tests/series/test_dtypes.py
+++ b/pandas/tests/series/test_dtypes.py
@@ -261,7 +261,7 @@ def test_astype_categorical_to_other(self):
value = np.random.RandomState(0).randint(0, 10000, 100)
df = DataFrame({"value": value})
- labels = ["{0} - {1}".format(i, i + 499) for i in range(0, 10000, 500)]
+ labels = [f"{i} - {i + 499}" for i in range(0, 10000, 500)]
cat_labels = Categorical(labels, labels)
df = df.sort_values(by=["value"], ascending=True)
@@ -384,9 +384,9 @@ def test_astype_generic_timestamp_no_frequency(self, dtype):
s = Series(data)
msg = (
- r"The '{dtype}' dtype has no unit\. "
- r"Please pass in '{dtype}\[ns\]' instead."
- ).format(dtype=dtype.__name__)
+ fr"The '{dtype.__name__}' dtype has no unit\. "
+ fr"Please pass in '{dtype.__name__}\[ns\]' instead."
+ )
with pytest.raises(ValueError, match=msg):
s.astype(dtype)
diff --git a/pandas/tests/series/test_ufunc.py b/pandas/tests/series/test_ufunc.py
index ece7f1f21ab23..536f15ea75d69 100644
--- a/pandas/tests/series/test_ufunc.py
+++ b/pandas/tests/series/test_ufunc.py
@@ -287,7 +287,7 @@ def __eq__(self, other) -> bool:
return type(other) is Thing and self.value == other.value
def __repr__(self) -> str:
- return "Thing({})".format(self.value)
+ return f"Thing({self.value})"
s = pd.Series([Thing(1), Thing(2)])
result = np.add(s, Thing(1))
| I split PR #31844 into batches; this is the seventh.
For this PR I ran the command `grep -l -R -e '%s' -e '%d' -e '\.format(' --include=*.{py,pyx} pandas/`, checked all the files it returned for `.format(`, and replaced the old string formatting with the corresponding f-strings, working toward a full clean-up of [#29547](https://github.com/pandas-dev/pandas/issues/29547). I may have missed something, so it is a good idea to double-check just in case.
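
The mechanical rewrite in this batch boils down to the following equivalence (a minimal sketch; the strings here are illustrative, not from the diff):

```python
# The three formatting styles involved in the clean-up; the f-string is
# the target style because it inlines the values at the point of use.
name, count = "frame.py", 3

old_percent = "checked %s (%d hits)" % (name, count)     # %-interpolation
old_format = "checked {} ({} hits)".format(name, count)  # str.format
new_fstring = f"checked {name} ({count} hits)"           # f-string

# All three render identical text, so the rewrite is behavior-preserving.
assert old_percent == old_format == new_fstring == "checked frame.py (3 hits)"
```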
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff` | https://api.github.com/repos/pandas-dev/pandas/pulls/31986 | 2020-02-14T19:45:33Z | 2020-02-15T19:04:20Z | 2020-02-15T19:04:20Z | 2020-02-21T01:29:56Z
REF: dont use numexpr in places where it doesnt help. | diff --git a/pandas/core/ops/__init__.py b/pandas/core/ops/__init__.py
index f3c1a609d50a1..410bb3ac9f162 100644
--- a/pandas/core/ops/__init__.py
+++ b/pandas/core/ops/__init__.py
@@ -374,7 +374,6 @@ def dispatch_to_series(left, right, func, str_rep=None, axis=None):
"""
# Note: we use iloc to access columns for compat with cases
# with non-unique columns.
- import pandas.core.computation.expressions as expressions
right = lib.item_from_zerodim(right)
if lib.is_scalar(right) or np.ndim(right) == 0:
@@ -419,7 +418,7 @@ def column_op(a, b):
# Remaining cases have less-obvious dispatch rules
raise NotImplementedError(right)
- new_data = expressions.evaluate(column_op, str_rep, left, right)
+ new_data = column_op(left, right)
return new_data
diff --git a/pandas/tests/test_expressions.py b/pandas/tests/test_expressions.py
index fadab5d821470..d6560c6341021 100644
--- a/pandas/tests/test_expressions.py
+++ b/pandas/tests/test_expressions.py
@@ -95,7 +95,10 @@ def run_binary(self, df, other):
expr.get_test_result()
result = op(df, other)
used_numexpr = expr.get_test_result()
- assert used_numexpr, "Did not use numexpr as expected."
+
+ # We don't currently use numexpr for comparisons in Series,
+ # so dont for DataFrame either.
+ assert not used_numexpr, "unexpectedly used numexpr."
tm.assert_equal(expected, result)
def run_frame(self, df, other, run_binary=True):
| DataFrame ops will still use it when dispatching to series/block ops.
Perf neutral:
```
In [3]: arr = np.arange(10**4).reshape(100, 100)
In [4]: df = pd.DataFrame(arr)
In [5]: df[4] = df[4].astype(float)
In [6]: ser = df[0]
In [8]: %timeit res = df+ser
19.8 ms ± 317 µs per loop (mean ± std. dev. of 7 runs, 10 loops each) # <-- PR
19.8 ms ± 267 µs per loop (mean ± std. dev. of 7 runs, 10 loops each) # <-- master
In [10]: %timeit res2 = df.add(ser, axis=0)
19.2 ms ± 276 µs per loop (mean ± std. dev. of 7 runs, 10 loops each) # <-- PR
19.2 ms ± 301 µs per loop (mean ± std. dev. of 7 runs, 10 loops each) # <-- master
In [13]: %timeit res3 = df + df
23.6 ms ± 382 µs per loop (mean ± std. dev. of 7 runs, 10 loops each) # <-- PR
24 ms ± 551 µs per loop (mean ± std. dev. of 7 runs, 10 loops each) # <-- master
``` | https://api.github.com/repos/pandas-dev/pandas/pulls/31984 | 2020-02-14T19:14:12Z | 2020-02-17T03:31:50Z | null | 2021-03-02T21:10:52Z |
CLN 29547 Replace old string formatting syntax with f-strings | diff --git a/pandas/io/sas/sas_xport.py b/pandas/io/sas/sas_xport.py
index 461d393dc4521..e67d68f7e0975 100644
--- a/pandas/io/sas/sas_xport.py
+++ b/pandas/io/sas/sas_xport.py
@@ -79,12 +79,12 @@
Return XportReader object for reading file incrementally."""
-_read_sas_doc = """Read a SAS file into a DataFrame.
+_read_sas_doc = f"""Read a SAS file into a DataFrame.
-%(_base_params_doc)s
-%(_format_params_doc)s
-%(_params2_doc)s
-%(_iterator_doc)s
+{_base_params_doc}
+{_format_params_doc}
+{_params2_doc}
+{_iterator_doc}
Returns
-------
@@ -102,19 +102,13 @@
>>> for chunk in itr:
>>> do_something(chunk)
-""" % {
- "_base_params_doc": _base_params_doc,
- "_format_params_doc": _format_params_doc,
- "_params2_doc": _params2_doc,
- "_iterator_doc": _iterator_doc,
-}
-
+"""
-_xport_reader_doc = """\
+_xport_reader_doc = f"""\
Class for reading SAS Xport files.
-%(_base_params_doc)s
-%(_params2_doc)s
+{_base_params_doc}
+{_params2_doc}
Attributes
----------
@@ -122,11 +116,7 @@
Contains information about the file
fields : list
Contains information about the variables in the file
-""" % {
- "_base_params_doc": _base_params_doc,
- "_params2_doc": _params2_doc,
-}
-
+"""
_read_method_doc = """\
Read observations from SAS Xport file, returning as data frame.
@@ -185,7 +175,7 @@ def _handle_truncated_float_vec(vec, nbytes):
if nbytes != 8:
vec1 = np.zeros(len(vec), np.dtype("S8"))
- dtype = np.dtype("S%d,S%d" % (nbytes, 8 - nbytes))
+ dtype = np.dtype(f"S{nbytes},S{8 - nbytes}")
vec2 = vec1.view(dtype=dtype)
vec2["f0"] = vec
return vec2
| Addresses #29547 for pandas/io/sas/sas_xport.py. I searched the file for instances of the `%` operator and `.format`.
- [X] passes `black pandas`
- [X] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
I noticed that `_read_sas_doc` doesn't seem to be used anymore. I could delete that as well, or we could save that for a separate PR.
| https://api.github.com/repos/pandas-dev/pandas/pulls/31982 | 2020-02-14T17:36:30Z | 2020-02-15T02:19:38Z | 2020-02-15T02:19:38Z | 2020-02-15T02:19:42Z |
CLN: 29547 replace old string formatting 6 | diff --git a/pandas/tests/indexing/test_floats.py b/pandas/tests/indexing/test_floats.py
index 80a4d81b20a13..dbda3994b1c2a 100644
--- a/pandas/tests/indexing/test_floats.py
+++ b/pandas/tests/indexing/test_floats.py
@@ -53,7 +53,7 @@ def test_scalar_error(self, index_func):
s.iloc[3.0]
msg = (
- fr"cannot do positional indexing on {type(i).__name__} with these "
+ f"cannot do positional indexing on {type(i).__name__} with these "
r"indexers \[3\.0\] of type float"
)
with pytest.raises(TypeError, match=msg):
diff --git a/pandas/tests/io/pytables/test_timezones.py b/pandas/tests/io/pytables/test_timezones.py
index 2bf22d982e5fe..74d5a77f86827 100644
--- a/pandas/tests/io/pytables/test_timezones.py
+++ b/pandas/tests/io/pytables/test_timezones.py
@@ -24,9 +24,7 @@ def _compare_with_tz(a, b):
a_e = a.loc[i, c]
b_e = b.loc[i, c]
if not (a_e == b_e and a_e.tz == b_e.tz):
- raise AssertionError(
- "invalid tz comparison [{a_e}] [{b_e}]".format(a_e=a_e, b_e=b_e)
- )
+ raise AssertionError(f"invalid tz comparison [{a_e}] [{b_e}]")
def test_append_with_timezones_dateutil(setup_path):
diff --git a/pandas/tests/io/test_html.py b/pandas/tests/io/test_html.py
index b649e394c780b..cbaf16d048eda 100644
--- a/pandas/tests/io/test_html.py
+++ b/pandas/tests/io/test_html.py
@@ -40,8 +40,8 @@ def html_encoding_file(request, datapath):
def assert_framelist_equal(list1, list2, *args, **kwargs):
assert len(list1) == len(list2), (
"lists are not of equal size "
- "len(list1) == {0}, "
- "len(list2) == {1}".format(len(list1), len(list2))
+ f"len(list1) == {len(list1)}, "
+ f"len(list2) == {len(list2)}"
)
msg = "not all list elements are DataFrames"
both_frames = all(
diff --git a/pandas/tests/io/test_stata.py b/pandas/tests/io/test_stata.py
index cb2112b481952..b65efac2bd527 100644
--- a/pandas/tests/io/test_stata.py
+++ b/pandas/tests/io/test_stata.py
@@ -1715,7 +1715,7 @@ def test_invalid_file_not_written(self, version):
"'ascii' codec can't decode byte 0xef in position 14: "
r"ordinal not in range\(128\)"
)
- with pytest.raises(UnicodeEncodeError, match=r"{}|{}".format(msg1, msg2)):
+ with pytest.raises(UnicodeEncodeError, match=f"{msg1}|{msg2}"):
with tm.assert_produces_warning(ResourceWarning):
df.to_stata(path)
diff --git a/pandas/tests/resample/test_period_index.py b/pandas/tests/resample/test_period_index.py
index ff303b808f6f5..70b65209db955 100644
--- a/pandas/tests/resample/test_period_index.py
+++ b/pandas/tests/resample/test_period_index.py
@@ -96,9 +96,7 @@ def test_selection(self, index, freq, kind, kwargs):
def test_annual_upsample_cases(
self, targ, conv, meth, month, simple_period_range_series
):
- ts = simple_period_range_series(
- "1/1/1990", "12/31/1991", freq="A-{month}".format(month=month)
- )
+ ts = simple_period_range_series("1/1/1990", "12/31/1991", freq=f"A-{month}")
result = getattr(ts.resample(targ, convention=conv), meth)()
expected = result.to_timestamp(targ, how=conv)
@@ -130,9 +128,9 @@ def test_not_subperiod(self, simple_period_range_series, rule, expected_error_ms
# These are incompatible period rules for resampling
ts = simple_period_range_series("1/1/1990", "6/30/1995", freq="w-wed")
msg = (
- "Frequency <Week: weekday=2> cannot be resampled to {}, as they "
- "are not sub or super periods"
- ).format(expected_error_msg)
+ "Frequency <Week: weekday=2> cannot be resampled to "
+ f"{expected_error_msg}, as they are not sub or super periods"
+ )
with pytest.raises(IncompatibleFrequency, match=msg):
ts.resample(rule).mean()
@@ -176,7 +174,7 @@ def test_annual_upsample(self, simple_period_range_series):
def test_quarterly_upsample(
self, month, target, convention, simple_period_range_series
):
- freq = "Q-{month}".format(month=month)
+ freq = f"Q-{month}"
ts = simple_period_range_series("1/1/1990", "12/31/1995", freq=freq)
result = ts.resample(target, convention=convention).ffill()
expected = result.to_timestamp(target, how=convention)
@@ -351,7 +349,7 @@ def test_fill_method_and_how_upsample(self):
@pytest.mark.parametrize("target", ["D", "B"])
@pytest.mark.parametrize("convention", ["start", "end"])
def test_weekly_upsample(self, day, target, convention, simple_period_range_series):
- freq = "W-{day}".format(day=day)
+ freq = f"W-{day}"
ts = simple_period_range_series("1/1/1990", "12/31/1995", freq=freq)
result = ts.resample(target, convention=convention).ffill()
expected = result.to_timestamp(target, how=convention)
@@ -367,16 +365,14 @@ def test_resample_to_timestamps(self, simple_period_range_series):
def test_resample_to_quarterly(self, simple_period_range_series):
for month in MONTHS:
- ts = simple_period_range_series(
- "1990", "1992", freq="A-{month}".format(month=month)
- )
- quar_ts = ts.resample("Q-{month}".format(month=month)).ffill()
+ ts = simple_period_range_series("1990", "1992", freq=f"A-{month}")
+ quar_ts = ts.resample(f"Q-{month}").ffill()
stamps = ts.to_timestamp("D", how="start")
qdates = period_range(
ts.index[0].asfreq("D", "start"),
ts.index[-1].asfreq("D", "end"),
- freq="Q-{month}".format(month=month),
+ freq=f"Q-{month}",
)
expected = stamps.reindex(qdates.to_timestamp("D", "s"), method="ffill")
diff --git a/pandas/tests/reshape/merge/test_join.py b/pandas/tests/reshape/merge/test_join.py
index 7020d373caf82..685995ee201f8 100644
--- a/pandas/tests/reshape/merge/test_join.py
+++ b/pandas/tests/reshape/merge/test_join.py
@@ -262,8 +262,9 @@ def test_join_on_fails_with_wrong_object_type(self, wrong_type):
# Edited test to remove the Series object from test parameters
df = DataFrame({"a": [1, 1]})
- msg = "Can only merge Series or DataFrame objects, a {} was passed".format(
- str(type(wrong_type))
+ msg = (
+ "Can only merge Series or DataFrame objects, "
+ f"a {type(wrong_type)} was passed"
)
with pytest.raises(TypeError, match=msg):
merge(wrong_type, df, left_on="a", right_on="a")
@@ -812,9 +813,7 @@ def _check_join(left, right, result, join_col, how="left", lsuffix="_x", rsuffix
except KeyError:
if how in ("left", "inner"):
raise AssertionError(
- "key {group_key!s} should not have been in the join".format(
- group_key=group_key
- )
+ f"key {group_key} should not have been in the join"
)
_assert_all_na(l_joined, left.columns, join_col)
@@ -826,9 +825,7 @@ def _check_join(left, right, result, join_col, how="left", lsuffix="_x", rsuffix
except KeyError:
if how in ("right", "inner"):
raise AssertionError(
- "key {group_key!s} should not have been in the join".format(
- group_key=group_key
- )
+ f"key {group_key} should not have been in the join"
)
_assert_all_na(r_joined, right.columns, join_col)
diff --git a/pandas/tests/reshape/merge/test_merge.py b/pandas/tests/reshape/merge/test_merge.py
index fd189c7435b29..4f2cd878df613 100644
--- a/pandas/tests/reshape/merge/test_merge.py
+++ b/pandas/tests/reshape/merge/test_merge.py
@@ -710,7 +710,7 @@ def test_other_timedelta_unit(self, unit):
df1 = pd.DataFrame({"entity_id": [101, 102]})
s = pd.Series([None, None], index=[101, 102], name="days")
- dtype = "m8[{}]".format(unit)
+ dtype = f"m8[{unit}]"
df2 = s.astype(dtype).to_frame("days")
assert df2["days"].dtype == "m8[ns]"
@@ -1012,9 +1012,9 @@ def test_indicator(self):
msg = (
"Cannot use `indicator=True` option when data contains a "
- "column named {}|"
+ f"column named {i}|"
"Cannot use name of an existing column for indicator column"
- ).format(i)
+ )
with pytest.raises(ValueError, match=msg):
merge(df1, df_badcolumn, on="col1", how="outer", indicator=True)
with pytest.raises(ValueError, match=msg):
@@ -1555,11 +1555,9 @@ def test_merge_incompat_dtypes_error(self, df1_vals, df2_vals):
df2 = DataFrame({"A": df2_vals})
msg = (
- "You are trying to merge on {lk_dtype} and "
- "{rk_dtype} columns. If you wish to proceed "
- "you should use pd.concat".format(
- lk_dtype=df1["A"].dtype, rk_dtype=df2["A"].dtype
- )
+ f"You are trying to merge on {df1['A'].dtype} and "
+ f"{df2['A'].dtype} columns. If you wish to proceed "
+ "you should use pd.concat"
)
msg = re.escape(msg)
with pytest.raises(ValueError, match=msg):
@@ -1567,11 +1565,9 @@ def test_merge_incompat_dtypes_error(self, df1_vals, df2_vals):
# Check that error still raised when swapping order of dataframes
msg = (
- "You are trying to merge on {lk_dtype} and "
- "{rk_dtype} columns. If you wish to proceed "
- "you should use pd.concat".format(
- lk_dtype=df2["A"].dtype, rk_dtype=df1["A"].dtype
- )
+ f"You are trying to merge on {df2['A'].dtype} and "
+ f"{df1['A'].dtype} columns. If you wish to proceed "
+ "you should use pd.concat"
)
msg = re.escape(msg)
with pytest.raises(ValueError, match=msg):
diff --git a/pandas/tests/reshape/merge/test_merge_asof.py b/pandas/tests/reshape/merge/test_merge_asof.py
index 9b5dea7663396..9b09f0033715d 100644
--- a/pandas/tests/reshape/merge/test_merge_asof.py
+++ b/pandas/tests/reshape/merge/test_merge_asof.py
@@ -1196,7 +1196,7 @@ def test_merge_groupby_multiple_column_with_categorical_column(self):
@pytest.mark.parametrize("side", ["left", "right"])
def test_merge_on_nans(self, func, side):
# GH 23189
- msg = "Merge keys contain null values on {} side".format(side)
+ msg = f"Merge keys contain null values on {side} side"
nulls = func([1.0, 5.0, np.nan])
non_nulls = func([1.0, 5.0, 10.0])
df_null = pd.DataFrame({"a": nulls, "left_val": ["a", "b", "c"]})
diff --git a/pandas/tests/reshape/test_melt.py b/pandas/tests/reshape/test_melt.py
index 814325844cb4c..6a670e6c729e9 100644
--- a/pandas/tests/reshape/test_melt.py
+++ b/pandas/tests/reshape/test_melt.py
@@ -364,8 +364,8 @@ def test_pairs(self):
df = DataFrame(data)
spec = {
- "visitdt": ["visitdt{i:d}".format(i=i) for i in range(1, 4)],
- "wt": ["wt{i:d}".format(i=i) for i in range(1, 4)],
+ "visitdt": [f"visitdt{i:d}" for i in range(1, 4)],
+ "wt": [f"wt{i:d}" for i in range(1, 4)],
}
result = lreshape(df, spec)
@@ -557,8 +557,8 @@ def test_pairs(self):
result = lreshape(df, spec, dropna=False, label="foo")
spec = {
- "visitdt": ["visitdt{i:d}".format(i=i) for i in range(1, 3)],
- "wt": ["wt{i:d}".format(i=i) for i in range(1, 4)],
+ "visitdt": [f"visitdt{i:d}" for i in range(1, 3)],
+ "wt": [f"wt{i:d}" for i in range(1, 4)],
}
msg = "All column lists must be same length"
with pytest.raises(ValueError, match=msg):
diff --git a/pandas/tests/reshape/test_pivot.py b/pandas/tests/reshape/test_pivot.py
index fe75aef1ca3d7..e09a2a7907177 100644
--- a/pandas/tests/reshape/test_pivot.py
+++ b/pandas/tests/reshape/test_pivot.py
@@ -1161,9 +1161,9 @@ def test_margins_no_values_two_row_two_cols(self):
def test_pivot_table_with_margins_set_margin_name(self, margin_name):
# see gh-3335
msg = (
- r'Conflicting name "{}" in margins|'
+ f'Conflicting name "{margin_name}" in margins|'
"margins_name argument must be a string"
- ).format(margin_name)
+ )
with pytest.raises(ValueError, match=msg):
# multi-index index
pivot_table(
diff --git a/pandas/tests/scalar/timedelta/test_constructors.py b/pandas/tests/scalar/timedelta/test_constructors.py
index 25c9fc19981be..d32d1994cac74 100644
--- a/pandas/tests/scalar/timedelta/test_constructors.py
+++ b/pandas/tests/scalar/timedelta/test_constructors.py
@@ -239,7 +239,7 @@ def test_iso_constructor(fmt, exp):
],
)
def test_iso_constructor_raises(fmt):
- msg = "Invalid ISO 8601 Duration format - {}".format(fmt)
+ msg = f"Invalid ISO 8601 Duration format - {fmt}"
with pytest.raises(ValueError, match=msg):
Timedelta(fmt)
| I split PR #31844 into batches; this is the sixth.
For this PR I ran the command `grep -l -R -e '%s' -e '%d' -e '\.format(' --include=*.{py,pyx} pandas/` and checked every file it returned for `.format(`, changing the old string formatting to the corresponding f-strings as part of the full cleanup tracked in [#29547](https://github.com/pandas-dev/pandas/issues/29547). I may have missed something, so it is a good idea to double check just in case.
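The mechanical rewrite described above can be sketched in one pair of lines (the `unit` value is illustrative, borrowed from one of the touched tests, not part of the diff):

```python
unit = "D"
old_style = "m8[{}]".format(unit)  # old style, flagged by the grep above
new_style = f"m8[{unit}]"          # the f-string replacement

# the rewrite is purely syntactic: both produce the same string
assert old_style == new_style
```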
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff` | https://api.github.com/repos/pandas-dev/pandas/pulls/31980 | 2020-02-14T17:19:28Z | 2020-02-14T19:11:47Z | 2020-02-14T19:11:47Z | 2020-02-21T01:30:05Z
Avoid importing from pandas at _libs files | diff --git a/pandas/_libs/missing.pyx b/pandas/_libs/missing.pyx
index 4d17a6f883c1c..ba8a5dd3dd1d7 100644
--- a/pandas/_libs/missing.pyx
+++ b/pandas/_libs/missing.pyx
@@ -1,6 +1,7 @@
import cython
from cython import Py_ssize_t
+import platform
import numbers
import numpy as np
@@ -16,16 +17,13 @@ from pandas._libs.tslibs.nattype cimport (
checknull_with_nat, c_NaT as NaT, is_null_datetimelike)
from pandas._libs.ops_dispatch import maybe_dispatch_ufunc_to_dunder_op
-from pandas.compat import is_platform_32bit
-
-
cdef:
float64_t INF = <float64_t>np.inf
float64_t NEGINF = -INF
int64_t NPY_NAT = util.get_nat()
- bint is_32bit = is_platform_32bit()
+ bint is_32bit = platform.architecture()[0] == "32-bit"
cpdef bint checknull(object val):
| - [ ] closes #xxxx
- [ ] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
Something that was first brought to my attention [here](https://github.com/pandas-dev/pandas/pull/30395#discussion_r363101229)
---
cc @jbrockmendel | https://api.github.com/repos/pandas-dev/pandas/pulls/31977 | 2020-02-14T15:54:20Z | 2020-02-16T16:50:01Z | null | 2020-02-29T10:23:29Z |
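The point of the patch above is that the stdlib can answer the 32-bit question without importing from `pandas.compat`; a standalone sketch of two equivalent checks (variable names are illustrative):

```python
import struct
import sys

# Two stdlib ways to detect a 32-bit interpreter, with no
# dependency on pandas.compat.is_platform_32bit:
is_32bit_via_struct = struct.calcsize("P") * 8 == 32  # pointer width in bits
is_32bit_via_maxsize = sys.maxsize <= 2**32           # native int range
```

On typical CPython builds both checks agree, so either could stand in for the removed helper.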
CI: Removed pattern check for specific modules | diff --git a/ci/code_checks.sh b/ci/code_checks.sh
index 7eb80077c4fab..bb7d8a388e6e2 100755
--- a/ci/code_checks.sh
+++ b/ci/code_checks.sh
@@ -259,8 +259,7 @@ fi
if [[ -z "$CHECK" || "$CHECK" == "doctests" ]]; then
MSG='Doctests frame.py' ; echo $MSG
- pytest -q --doctest-modules pandas/core/frame.py \
- -k" -itertuples -join -reindex -reindex_axis -round"
+ pytest -q --doctest-modules pandas/core/frame.py
RET=$(($RET + $?)) ; echo $MSG "DONE"
MSG='Doctests series.py' ; echo $MSG
@@ -294,8 +293,7 @@ if [[ -z "$CHECK" || "$CHECK" == "doctests" ]]; then
MSG='Doctests interval classes' ; echo $MSG
pytest -q --doctest-modules \
pandas/core/indexes/interval.py \
- pandas/core/arrays/interval.py \
- -k"-from_arrays -from_breaks -from_intervals -from_tuples -set_closed -to_tuples -interval_range"
+ pandas/core/arrays/interval.py
RET=$(($RET + $?)) ; echo $MSG "DONE"
MSG='Doctests arrays'; echo $MSG
| - [ ] closes #xxxx
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/31975 | 2020-02-14T14:47:35Z | 2020-02-14T19:23:53Z | 2020-02-14T19:23:52Z | 2020-02-15T08:23:06Z |
STY: Fixed wrong placement of whitespace | diff --git a/pandas/tests/arrays/test_integer.py b/pandas/tests/arrays/test_integer.py
index 9186c33c12c06..0a5a2362bd290 100644
--- a/pandas/tests/arrays/test_integer.py
+++ b/pandas/tests/arrays/test_integer.py
@@ -347,10 +347,10 @@ def test_error(self, data, all_arithmetic_operators):
# TODO(extension)
# rpow with a datetimelike coerces the integer array incorrectly
msg = (
- r"(:?can only perform ops with numeric values)"
- r"|(:?cannot perform .* with this index type: DatetimeArray)"
- r"|(:?Addition/subtraction of integers and integer-arrays"
- r" with DatetimeArray is no longer supported. *)"
+ "can only perform ops with numeric values|"
+ "cannot perform .* with this index type: DatetimeArray|"
+ "Addition/subtraction of integers and integer-arrays "
+ "with DatetimeArray is no longer supported. *"
)
with pytest.raises(TypeError, match=msg):
ops(pd.Series(pd.date_range("20180101", periods=len(s))))
| - [x] ref #30755
- [ ] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/31974 | 2020-02-14T13:39:16Z | 2020-02-14T18:45:36Z | 2020-02-14T18:45:36Z | 2020-02-15T08:37:15Z |
BUG: groupby-nunique modifies null values | diff --git a/doc/source/whatsnew/v1.0.2.rst b/doc/source/whatsnew/v1.0.2.rst
index 0216007ea5ba8..77a8e7dc6e39d 100644
--- a/doc/source/whatsnew/v1.0.2.rst
+++ b/doc/source/whatsnew/v1.0.2.rst
@@ -19,7 +19,7 @@ Fixed regressions
- Fixed regression in :meth:`Series.align` when ``other`` is a DataFrame and ``method`` is not None (:issue:`31785`)
- Fixed regression in :meth:`pandas.core.groupby.RollingGroupby.apply` where the ``raw`` parameter was ignored (:issue:`31754`)
- Fixed regression in :meth:`rolling(..).corr() <pandas.core.window.Rolling.corr>` when using a time offset (:issue:`31789`)
--
+- Fixed regression in :meth:`DataFrameGroupBy.nunique` which was modifying the original values if ``NaN`` values were present (:issue:`31950`)
.. ---------------------------------------------------------------------------
diff --git a/pandas/core/groupby/generic.py b/pandas/core/groupby/generic.py
index 37b6429167646..ed3856bd58ed5 100644
--- a/pandas/core/groupby/generic.py
+++ b/pandas/core/groupby/generic.py
@@ -591,20 +591,24 @@ def nunique(self, dropna: bool = True) -> Series:
val = self.obj._internal_get_values()
- # GH 27951
- # temporary fix while we wait for NumPy bug 12629 to be fixed
- val[isna(val)] = np.datetime64("NaT")
-
- try:
- sorter = np.lexsort((val, ids))
- except TypeError: # catches object dtypes
- msg = f"val.dtype must be object, got {val.dtype}"
- assert val.dtype == object, msg
+ def _object_sorter(val, ids):
val, _ = algorithms.factorize(val, sort=False)
sorter = np.lexsort((val, ids))
_isna = lambda a: a == -1
+ return val, sorter, _isna
+
+ if isna(val).any() and val.dtype == object:
+ # Deal with pandas.NaT
+ val, sorter, _isna = _object_sorter(val, ids)
else:
- _isna = isna
+ try:
+ sorter = np.lexsort((val, ids))
+ except TypeError: # catches object dtypes
+ msg = f"val.dtype must be object, got {val.dtype}"
+ assert val.dtype == object, msg
+ val, sorter, _isna = _object_sorter(val, ids)
+ else:
+ _isna = isna
ids, val = ids[sorter], val[sorter]
diff --git a/pandas/tests/groupby/test_function.py b/pandas/tests/groupby/test_function.py
index 73e36cb5e6c84..245ed5bf9900b 100644
--- a/pandas/tests/groupby/test_function.py
+++ b/pandas/tests/groupby/test_function.py
@@ -966,6 +966,7 @@ def test_frame_describe_unstacked_format():
@pytest.mark.parametrize("dropna", [False, True])
def test_series_groupby_nunique(n, m, sort, dropna):
def check_nunique(df, keys, as_index=True):
+ original_df = df.copy()
gr = df.groupby(keys, as_index=as_index, sort=sort)
left = gr["julie"].nunique(dropna=dropna)
@@ -975,6 +976,7 @@ def check_nunique(df, keys, as_index=True):
right = right.reset_index(drop=True)
tm.assert_series_equal(left, right, check_names=False)
+ tm.assert_frame_equal(df, original_df)
days = date_range("2015-08-23", periods=10)
| - [ ] closes #31950
- [ ] tests added / passed
- [ ] passes `black pandas`
- [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/31973 | 2020-02-14T11:02:01Z | 2020-02-14T11:37:53Z | null | 2020-02-14T11:37:53Z |
CLN: @doc - base.py & indexing.py | diff --git a/pandas/core/arrays/categorical.py b/pandas/core/arrays/categorical.py
index ba4c2e168e0c4..24a2bf12fd0b5 100644
--- a/pandas/core/arrays/categorical.py
+++ b/pandas/core/arrays/categorical.py
@@ -15,6 +15,7 @@
Substitution,
cache_readonly,
deprecate_kwarg,
+ doc,
)
from pandas.util._validators import validate_bool_kwarg, validate_fillna_kwargs
@@ -1352,8 +1353,7 @@ def memory_usage(self, deep=False):
"""
return self._codes.nbytes + self.dtype.categories.memory_usage(deep=deep)
- @Substitution(klass="Categorical")
- @Appender(_shared_docs["searchsorted"])
+ @doc(_shared_docs["searchsorted"], klass="Categorical")
def searchsorted(self, value, side="left", sorter=None):
# searchsorted is very performance sensitive. By converting codes
# to same dtype as self.codes, we get much faster performance.
diff --git a/pandas/core/base.py b/pandas/core/base.py
index 478b83f538b7d..56cd49e040ec2 100644
--- a/pandas/core/base.py
+++ b/pandas/core/base.py
@@ -13,7 +13,7 @@
from pandas.compat import PYPY
from pandas.compat.numpy import function as nv
from pandas.errors import AbstractMethodError
-from pandas.util._decorators import Appender, Substitution, cache_readonly, doc
+from pandas.util._decorators import cache_readonly, doc
from pandas.util._validators import validate_bool_kwarg
from pandas.core.dtypes.cast import is_nested_object
@@ -1429,13 +1429,13 @@ def factorize(self, sort=False, na_sentinel=-1):
] = """
Find indices where elements should be inserted to maintain order.
- Find the indices into a sorted %(klass)s `self` such that, if the
+ Find the indices into a sorted {klass} `self` such that, if the
corresponding elements in `value` were inserted before the indices,
the order of `self` would be preserved.
.. note::
- The %(klass)s *must* be monotonically sorted, otherwise
+ The {klass} *must* be monotonically sorted, otherwise
wrong locations will likely be returned. Pandas does *not*
check this for you.
@@ -1443,7 +1443,7 @@ def factorize(self, sort=False, na_sentinel=-1):
----------
value : array_like
Values to insert into `self`.
- side : {'left', 'right'}, optional
+ side : {{'left', 'right'}}, optional
If 'left', the index of the first suitable location found is given.
If 'right', return the last such index. If there is no suitable
index, return either 0 or N (where N is the length of `self`).
@@ -1519,8 +1519,7 @@ def factorize(self, sort=False, na_sentinel=-1):
0 # wrong result, correct would be 1
"""
- @Substitution(klass="Index")
- @Appender(_shared_docs["searchsorted"])
+ @doc(_shared_docs["searchsorted"], klass="Index")
def searchsorted(self, value, side="left", sorter=None) -> np.ndarray:
return algorithms.searchsorted(self._values, value, side=side, sorter=sorter)
diff --git a/pandas/core/indexes/datetimelike.py b/pandas/core/indexes/datetimelike.py
index 894e1d95a17bc..f5efde99d7950 100644
--- a/pandas/core/indexes/datetimelike.py
+++ b/pandas/core/indexes/datetimelike.py
@@ -11,7 +11,7 @@
from pandas._typing import Label
from pandas.compat.numpy import function as nv
from pandas.errors import AbstractMethodError
-from pandas.util._decorators import Appender, cache_readonly
+from pandas.util._decorators import Appender, cache_readonly, doc
from pandas.core.dtypes.common import (
ensure_int64,
@@ -31,7 +31,7 @@
from pandas.core import algorithms
from pandas.core.arrays import DatetimeArray, PeriodArray, TimedeltaArray
from pandas.core.arrays.datetimelike import DatetimeLikeArrayMixin
-from pandas.core.base import _shared_docs
+from pandas.core.base import IndexOpsMixin
import pandas.core.indexes.base as ibase
from pandas.core.indexes.base import Index, _index_shared_docs
from pandas.core.indexes.extension import (
@@ -206,7 +206,7 @@ def take(self, indices, axis=0, allow_fill=True, fill_value=None, **kwargs):
self, indices, axis, allow_fill, fill_value, **kwargs
)
- @Appender(_shared_docs["searchsorted"])
+ @doc(IndexOpsMixin.searchsorted, klass="Datetime-like Index")
def searchsorted(self, value, side="left", sorter=None):
if isinstance(value, str):
raise TypeError(
diff --git a/pandas/core/indexing.py b/pandas/core/indexing.py
index 9a671c7fc170a..3d86162d2a88c 100755
--- a/pandas/core/indexing.py
+++ b/pandas/core/indexing.py
@@ -5,7 +5,7 @@
from pandas._libs.indexing import _NDFrameIndexerBase
from pandas._libs.lib import item_from_zerodim
from pandas.errors import AbstractMethodError
-from pandas.util._decorators import Appender
+from pandas.util._decorators import doc
from pandas.core.dtypes.common import (
is_integer,
@@ -847,7 +847,7 @@ def _getbool_axis(self, key, axis: int):
return self.obj._take_with_is_copy(inds, axis=axis)
-@Appender(IndexingMixin.loc.__doc__)
+@doc(IndexingMixin.loc)
class _LocIndexer(_LocationIndexer):
_takeable: bool = False
_valid_types = (
@@ -859,7 +859,7 @@ class _LocIndexer(_LocationIndexer):
# -------------------------------------------------------------------
# Key Checks
- @Appender(_LocationIndexer._validate_key.__doc__)
+ @doc(_LocationIndexer._validate_key)
def _validate_key(self, key, axis: int):
# valid for a collection of labels (we check their presence later)
@@ -1289,7 +1289,7 @@ def _validate_read_indexer(
)
-@Appender(IndexingMixin.iloc.__doc__)
+@doc(IndexingMixin.iloc)
class _iLocIndexer(_LocationIndexer):
_valid_types = (
"integer, integer slice (START point is INCLUDED, END "
@@ -1998,7 +1998,7 @@ def __setitem__(self, key, value):
self.obj._set_value(*key, value=value, takeable=self._takeable)
-@Appender(IndexingMixin.at.__doc__)
+@doc(IndexingMixin.at)
class _AtIndexer(_ScalarAccessIndexer):
_takeable = False
@@ -2024,7 +2024,7 @@ def __getitem__(self, key):
return obj.index._get_values_for_loc(obj, loc, key)
-@Appender(IndexingMixin.iat.__doc__)
+@doc(IndexingMixin.iat)
class _iAtIndexer(_ScalarAccessIndexer):
_takeable = True
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 568e99622dd29..6c8b1c1085b18 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -2443,8 +2443,7 @@ def __rmatmul__(self, other):
"""
return self.dot(np.transpose(other))
- @Substitution(klass="Series")
- @Appender(base._shared_docs["searchsorted"])
+ @doc(base.IndexOpsMixin.searchsorted, klass="Series")
def searchsorted(self, value, side="left", sorter=None):
return algorithms.searchsorted(self._values, value, side=side, sorter=sorter)
diff --git a/pandas/tests/util/test_doc.py b/pandas/tests/util/test_doc.py
index 7e5e24456b9a7..50859564e654f 100644
--- a/pandas/tests/util/test_doc.py
+++ b/pandas/tests/util/test_doc.py
@@ -14,13 +14,15 @@ def cumsum(whatever):
@doc(
cumsum,
- """
- Examples
- --------
+ dedent(
+ """
+ Examples
+ --------
- >>> cumavg([1, 2, 3])
- 2
- """,
+ >>> cumavg([1, 2, 3])
+ 2
+ """
+ ),
method="cumavg",
operation="average",
)
diff --git a/pandas/util/_decorators.py b/pandas/util/_decorators.py
index d854be062fcbb..7a804792174c7 100644
--- a/pandas/util/_decorators.py
+++ b/pandas/util/_decorators.py
@@ -250,9 +250,11 @@ def doc(*args: Union[str, Callable], **kwargs: str) -> Callable[[F], F]:
A decorator take docstring templates, concatenate them and perform string
substitution on it.
- This decorator is robust even if func.__doc__ is None. This decorator will
- add a variable "_docstr_template" to the wrapped function to save original
- docstring template for potential usage.
+ This decorator will add a variable "_docstring_components" to the wrapped
+ function to keep track the original docstring template for potential usage.
+ If it should be consider as a template, it will be saved as a string.
+ Otherwise, it will be saved as callable, and later user __doc__ and dedent
+ to get docstring.
Parameters
----------
@@ -268,17 +270,28 @@ def decorator(func: F) -> F:
def wrapper(*args, **kwargs) -> Callable:
return func(*args, **kwargs)
- templates = [func.__doc__ if func.__doc__ else ""]
+ # collecting docstring and docstring templates
+ docstring_components: List[Union[str, Callable]] = []
+ if func.__doc__:
+ docstring_components.append(dedent(func.__doc__))
+
for arg in args:
- if isinstance(arg, str):
- templates.append(arg)
- elif hasattr(arg, "_docstr_template"):
- templates.append(arg._docstr_template) # type: ignore
- elif arg.__doc__:
- templates.append(arg.__doc__)
-
- wrapper._docstr_template = "".join(dedent(t) for t in templates) # type: ignore
- wrapper.__doc__ = wrapper._docstr_template.format(**kwargs) # type: ignore
+ if hasattr(arg, "_docstring_components"):
+ docstring_components.extend(arg._docstring_components) # type: ignore
+ elif isinstance(arg, str) or arg.__doc__:
+ docstring_components.append(arg)
+
+ # formatting templates and concatenating docstring
+ wrapper.__doc__ = "".join(
+ [
+ arg.format(**kwargs)
+ if isinstance(arg, str)
+ else dedent(arg.__doc__ or "")
+ for arg in docstring_components
+ ]
+ )
+
+ wrapper._docstring_components = docstring_components # type: ignore
return cast(F, wrapper)
| - [x] working on #31942
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/31970 | 2020-02-14T02:20:29Z | 2020-03-17T00:10:28Z | 2020-03-17T00:10:28Z | 2020-03-23T21:42:50Z |
Update doc decorator for pandas/core/base.py | diff --git a/doc/source/development/contributing_docstring.rst b/doc/source/development/contributing_docstring.rst
index 649dd37b497b2..1c99b341f6c5a 100644
--- a/doc/source/development/contributing_docstring.rst
+++ b/doc/source/development/contributing_docstring.rst
@@ -937,33 +937,31 @@ classes. This helps us keep docstrings consistent, while keeping things clear
for the user reading. It comes at the cost of some complexity when writing.
Each shared docstring will have a base template with variables, like
-``%(klass)s``. The variables filled in later on using the ``Substitution``
-decorator. Finally, docstrings can be appended to with the ``Appender``
-decorator.
+``{klass}``. The variables filled in later on using the ``doc`` decorator.
+Finally, docstrings can also be appended to with the ``doc`` decorator.
In this example, we'll create a parent docstring normally (this is like
``pandas.core.generic.NDFrame``. Then we'll have two children (like
``pandas.core.series.Series`` and ``pandas.core.frame.DataFrame``). We'll
-substitute the children's class names in this docstring.
+substitute the class names in this docstring.
.. code-block:: python
class Parent:
+ @doc(klass="Parent")
def my_function(self):
- """Apply my function to %(klass)s."""
+ """Apply my function to {klass}."""
...
class ChildA(Parent):
- @Substitution(klass="ChildA")
- @Appender(Parent.my_function.__doc__)
+ @doc(Parent.my_function, klass="ChildA")
def my_function(self):
...
class ChildB(Parent):
- @Substitution(klass="ChildB")
- @Appender(Parent.my_function.__doc__)
+ @doc(Parent.my_function, klass="ChildB")
def my_function(self):
...
@@ -972,18 +970,16 @@ The resulting docstrings are
.. code-block:: python
>>> print(Parent.my_function.__doc__)
- Apply my function to %(klass)s.
+ Apply my function to Parent.
>>> print(ChildA.my_function.__doc__)
Apply my function to ChildA.
>>> print(ChildB.my_function.__doc__)
Apply my function to ChildB.
-Notice two things:
+Notice:
1. We "append" the parent docstring to the children docstrings, which are
initially empty.
-2. Python decorators are applied inside out. So the order is Append then
- Substitution, even though Substitution comes first in the file.
Our files will often contain a module-level ``_shared_doc_kwargs`` with some
common substitution values (things like ``klass``, ``axes``, etc).
@@ -992,14 +988,13 @@ You can substitute and append in one shot with something like
.. code-block:: python
- @Appender(template % _shared_doc_kwargs)
+ @doc(template, **_shared_doc_kwargs)
def my_function(self):
...
where ``template`` may come from a module-level ``_shared_docs`` dictionary
mapping function names to docstrings. Wherever possible, we prefer using
-``Appender`` and ``Substitution``, since the docstring-writing processes is
-slightly closer to normal.
+``doc``, since the docstring-writing processes is slightly closer to normal.
See ``pandas.core.generic.NDFrame.fillna`` for an example template, and
``pandas.core.series.Series.fillna`` and ``pandas.core.generic.frame.fillna``
diff --git a/pandas/core/accessor.py b/pandas/core/accessor.py
index a04e9c3e68310..4e3ef0c52bbdd 100644
--- a/pandas/core/accessor.py
+++ b/pandas/core/accessor.py
@@ -7,7 +7,7 @@
from typing import FrozenSet, Set
import warnings
-from pandas.util._decorators import Appender
+from pandas.util._decorators import doc
class DirNamesMixin:
@@ -193,98 +193,97 @@ def __get__(self, obj, cls):
return accessor_obj
+@doc(klass="", others="")
def _register_accessor(name, cls):
- def decorator(accessor):
- if hasattr(cls, name):
- warnings.warn(
- f"registration of accessor {repr(accessor)} under name "
- f"{repr(name)} for type {repr(cls)} is overriding a preexisting"
- f"attribute with the same name.",
- UserWarning,
- stacklevel=2,
- )
- setattr(cls, name, CachedAccessor(name, accessor))
- cls._accessors.add(name)
- return accessor
-
- return decorator
+ """
+ Register a custom accessor on {klass} objects.
+ Parameters
+ ----------
+ name : str
+ Name under which the accessor should be registered. A warning is issued
+ if this name conflicts with a preexisting attribute.
-_doc = """
-Register a custom accessor on %(klass)s objects.
+ Returns
+ -------
+ callable
+ A class decorator.
-Parameters
-----------
-name : str
- Name under which the accessor should be registered. A warning is issued
- if this name conflicts with a preexisting attribute.
+ See Also
+ --------
+ {others}
-Returns
--------
-callable
- A class decorator.
+ Notes
+ -----
+ When accessed, your accessor will be initialized with the pandas object
+ the user is interacting with. So the signature must be
-See Also
---------
-%(others)s
+ .. code-block:: python
-Notes
------
-When accessed, your accessor will be initialized with the pandas object
-the user is interacting with. So the signature must be
+ def __init__(self, pandas_object): # noqa: E999
+ ...
-.. code-block:: python
+ For consistency with pandas methods, you should raise an ``AttributeError``
+ if the data passed to your accessor has an incorrect dtype.
- def __init__(self, pandas_object): # noqa: E999
- ...
+ >>> pd.Series(['a', 'b']).dt
+ Traceback (most recent call last):
+ ...
+ AttributeError: Can only use .dt accessor with datetimelike values
-For consistency with pandas methods, you should raise an ``AttributeError``
-if the data passed to your accessor has an incorrect dtype.
+ Examples
+ --------
->>> pd.Series(['a', 'b']).dt
-Traceback (most recent call last):
-...
-AttributeError: Can only use .dt accessor with datetimelike values
+ In your library code::
-Examples
---------
+ import pandas as pd
-In your library code::
+ @pd.api.extensions.register_dataframe_accessor("geo")
+ class GeoAccessor:
+ def __init__(self, pandas_obj):
+ self._obj = pandas_obj
- import pandas as pd
+ @property
+ def center(self):
+ # return the geographic center point of this DataFrame
+ lat = self._obj.latitude
+ lon = self._obj.longitude
+ return (float(lon.mean()), float(lat.mean()))
- @pd.api.extensions.register_dataframe_accessor("geo")
- class GeoAccessor:
- def __init__(self, pandas_obj):
- self._obj = pandas_obj
+ def plot(self):
+ # plot this array's data on a map, e.g., using Cartopy
+ pass
- @property
- def center(self):
- # return the geographic center point of this DataFrame
- lat = self._obj.latitude
- lon = self._obj.longitude
- return (float(lon.mean()), float(lat.mean()))
+ Back in an interactive IPython session:
- def plot(self):
- # plot this array's data on a map, e.g., using Cartopy
- pass
+ >>> ds = pd.DataFrame({{'longitude': np.linspace(0, 10),
+ ... 'latitude': np.linspace(0, 20)}})
+ >>> ds.geo.center
+ (5.0, 10.0)
+ >>> ds.geo.plot()
+ # plots data on a map
+ """
-Back in an interactive IPython session:
+ def decorator(accessor):
+ if hasattr(cls, name):
+ warnings.warn(
+ f"registration of accessor {repr(accessor)} under name "
+ f"{repr(name)} for type {repr(cls)} is overriding a preexisting"
+ f"attribute with the same name.",
+ UserWarning,
+ stacklevel=2,
+ )
+ setattr(cls, name, CachedAccessor(name, accessor))
+ cls._accessors.add(name)
+ return accessor
- >>> ds = pd.DataFrame({'longitude': np.linspace(0, 10),
- ... 'latitude': np.linspace(0, 20)})
- >>> ds.geo.center
- (5.0, 10.0)
- >>> ds.geo.plot()
- # plots data on a map
-"""
+ return decorator
-@Appender(
- _doc
- % dict(
- klass="DataFrame", others=("register_series_accessor, register_index_accessor")
- )
+@doc(
+ _register_accessor,
+ klass="DataFrame",
+ others="register_series_accessor, register_index_accessor",
)
def register_dataframe_accessor(name):
from pandas import DataFrame
@@ -292,11 +291,10 @@ def register_dataframe_accessor(name):
return _register_accessor(name, DataFrame)
-@Appender(
- _doc
- % dict(
- klass="Series", others=("register_dataframe_accessor, register_index_accessor")
- )
+@doc(
+ _register_accessor,
+ klass="Series",
+ others="register_dataframe_accessor, register_index_accessor",
)
def register_series_accessor(name):
from pandas import Series
@@ -304,11 +302,10 @@ def register_series_accessor(name):
return _register_accessor(name, Series)
-@Appender(
- _doc
- % dict(
- klass="Index", others=("register_dataframe_accessor, register_series_accessor")
- )
+@doc(
+ _register_accessor,
+ klass="Index",
+ others="register_dataframe_accessor, register_series_accessor",
)
def register_index_accessor(name):
from pandas import Index
diff --git a/pandas/core/algorithms.py b/pandas/core/algorithms.py
index 886b0a3c5fec1..c915895a8fc4a 100644
--- a/pandas/core/algorithms.py
+++ b/pandas/core/algorithms.py
@@ -11,7 +11,7 @@
from pandas._libs import Timestamp, algos, hashtable as htable, lib
from pandas._libs.tslib import iNaT
-from pandas.util._decorators import Appender, Substitution
+from pandas.util._decorators import doc
from pandas.core.dtypes.cast import (
construct_1d_object_array_from_listlike,
@@ -487,9 +487,32 @@ def _factorize_array(
return codes, uniques
-_shared_docs[
- "factorize"
-] = """
+@doc(
+ values=dedent(
+ """\
+ values : sequence
+ A 1-D sequence. Sequences that aren't pandas objects are
+ coerced to ndarrays before factorization.
+ """
+ ),
+ sort=dedent(
+ """\
+ sort : bool, default False
+ Sort `uniques` and shuffle `codes` to maintain the
+ relationship.
+ """
+ ),
+ size_hint=dedent(
+ """\
+ size_hint : int, optional
+ Hint to the hashtable sizer.
+ """
+ ),
+)
+def factorize(
+ values, sort: bool = False, na_sentinel: int = -1, size_hint: Optional[int] = None
+) -> Tuple[np.ndarray, Union[np.ndarray, ABCIndex]]:
+ """
Encode the object as an enumerated type or categorical variable.
This method is useful for obtaining a numeric representation of an
@@ -499,10 +522,10 @@ def _factorize_array(
Parameters
----------
- %(values)s%(sort)s
+ {values}{sort}
na_sentinel : int, default -1
Value to mark "not found".
- %(size_hint)s\
+ {size_hint}\
Returns
-------
@@ -580,34 +603,6 @@ def _factorize_array(
>>> uniques
Index(['a', 'c'], dtype='object')
"""
-
-
-@Substitution(
- values=dedent(
- """\
- values : sequence
- A 1-D sequence. Sequences that aren't pandas objects are
- coerced to ndarrays before factorization.
- """
- ),
- sort=dedent(
- """\
- sort : bool, default False
- Sort `uniques` and shuffle `codes` to maintain the
- relationship.
- """
- ),
- size_hint=dedent(
- """\
- size_hint : int, optional
- Hint to the hashtable sizer.
- """
- ),
-)
-@Appender(_shared_docs["factorize"])
-def factorize(
- values, sort: bool = False, na_sentinel: int = -1, size_hint: Optional[int] = None
-) -> Tuple[np.ndarray, Union[np.ndarray, ABCIndex]]:
# Implementation notes: This method is responsible for 3 things
# 1.) coercing data to array-like (ndarray, Index, extension array)
# 2.) factorizing codes and uniques
diff --git a/pandas/core/arrays/categorical.py b/pandas/core/arrays/categorical.py
index d26ff7490e714..fb1bbb66fe2e3 100644
--- a/pandas/core/arrays/categorical.py
+++ b/pandas/core/arrays/categorical.py
@@ -15,6 +15,7 @@
Substitution,
cache_readonly,
deprecate_kwarg,
+ doc,
)
from pandas.util._validators import validate_bool_kwarg, validate_fillna_kwargs
@@ -51,7 +52,7 @@
_extension_array_shared_docs,
try_cast_to_ea,
)
-from pandas.core.base import NoNewAttributesMixin, PandasObject, _shared_docs
+from pandas.core.base import IndexOpsMixin, NoNewAttributesMixin, PandasObject
import pandas.core.common as com
from pandas.core.construction import array, extract_array, sanitize_array
from pandas.core.indexers import check_array_indexer, deprecate_ndim_indexing
@@ -1348,8 +1349,7 @@ def memory_usage(self, deep=False):
"""
return self._codes.nbytes + self.dtype.categories.memory_usage(deep=deep)
- @Substitution(klass="Categorical")
- @Appender(_shared_docs["searchsorted"])
+ @doc(IndexOpsMixin.searchsorted, klass="Categorical")
def searchsorted(self, value, side="left", sorter=None):
# searchsorted is very performance sensitive. By converting codes
# to same dtype as self.codes, we get much faster performance.
diff --git a/pandas/core/base.py b/pandas/core/base.py
index f3c8b50e774af..bd3e98e1e0e20 100644
--- a/pandas/core/base.py
+++ b/pandas/core/base.py
@@ -13,7 +13,7 @@
from pandas.compat import PYPY
from pandas.compat.numpy import function as nv
from pandas.errors import AbstractMethodError
-from pandas.util._decorators import Appender, Substitution, cache_readonly
+from pandas.util._decorators import cache_readonly, doc
from pandas.util._validators import validate_bool_kwarg
from pandas.core.dtypes.cast import is_nested_object
@@ -36,7 +36,6 @@
from pandas.core.construction import create_series_with_explicit_dtype
import pandas.core.nanops as nanops
-_shared_docs: Dict[str, str] = dict()
_indexops_doc_kwargs = dict(
klass="IndexOpsMixin",
inplace="",
@@ -1386,7 +1385,8 @@ def memory_usage(self, deep=False):
v += lib.memory_usage_of_objects(self.array)
return v
- @Substitution(
+ @doc(
+ algorithms.factorize,
values="",
order="",
size_hint="",
@@ -1398,22 +1398,21 @@ def memory_usage(self, deep=False):
"""
),
)
- @Appender(algorithms._shared_docs["factorize"])
def factorize(self, sort=False, na_sentinel=-1):
return algorithms.factorize(self, sort=sort, na_sentinel=na_sentinel)
- _shared_docs[
- "searchsorted"
- ] = """
+ @doc(klass="Index")
+ def searchsorted(self, value, side="left", sorter=None) -> np.ndarray:
+ """
Find indices where elements should be inserted to maintain order.
- Find the indices into a sorted %(klass)s `self` such that, if the
+ Find the indices into a sorted {klass} `self` such that, if the
corresponding elements in `value` were inserted before the indices,
the order of `self` would be preserved.
.. note::
- The %(klass)s *must* be monotonically sorted, otherwise
+ The {klass} *must* be monotonically sorted, otherwise
wrong locations will likely be returned. Pandas does *not*
check this for you.
@@ -1421,7 +1420,7 @@ def factorize(self, sort=False, na_sentinel=-1):
----------
value : array_like
Values to insert into `self`.
- side : {'left', 'right'}, optional
+ side : {{'left', 'right'}}, optional
If 'left', the index of the first suitable location found is given.
If 'right', return the last such index. If there is no suitable
index, return either 0 or N (where N is the length of `self`).
@@ -1488,10 +1487,6 @@ def factorize(self, sort=False, na_sentinel=-1):
>>> x.searchsorted(1)
0 # wrong result, correct would be 1
"""
-
- @Substitution(klass="Index")
- @Appender(_shared_docs["searchsorted"])
- def searchsorted(self, value, side="left", sorter=None) -> np.ndarray:
return algorithms.searchsorted(self._values, value, side=side, sorter=sorter)
def drop_duplicates(self, keep="first", inplace=False):
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index e0efa93379bca..234bf356dc26b 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -49,6 +49,7 @@
Appender,
Substitution,
deprecate_kwarg,
+ doc,
rewrite_axis_style_signature,
)
from pandas.util._validators import (
@@ -4164,8 +4165,7 @@ def rename(
errors=errors,
)
- @Substitution(**_shared_doc_kwargs)
- @Appender(NDFrame.fillna.__doc__)
+ @doc(NDFrame.fillna, **_shared_doc_kwargs)
def fillna(
self,
value=None,
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 313d40b575629..93c0965f5bed9 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -45,7 +45,12 @@
from pandas.compat._optional import import_optional_dependency
from pandas.compat.numpy import function as nv
from pandas.errors import AbstractMethodError
-from pandas.util._decorators import Appender, Substitution, rewrite_axis_style_signature
+from pandas.util._decorators import (
+ Appender,
+ Substitution,
+ doc,
+ rewrite_axis_style_signature,
+)
from pandas.util._validators import (
validate_bool_kwarg,
validate_fillna_kwargs,
@@ -5879,6 +5884,7 @@ def convert_dtypes(
# ----------------------------------------------------------------------
# Filling NA's
+ @doc(**_shared_doc_kwargs)
def fillna(
self: FrameOrSeries,
value=None,
@@ -5899,11 +5905,11 @@ def fillna(
each index (for a Series) or column (for a DataFrame). Values not
in the dict/Series/DataFrame will not be filled. This value cannot
be a list.
- method : {'backfill', 'bfill', 'pad', 'ffill', None}, default None
+ method : {{'backfill', 'bfill', 'pad', 'ffill', None}}, default None
Method to use for filling holes in reindexed Series
pad / ffill: propagate last valid observation forward to next valid
backfill / bfill: use next valid observation to fill gap.
- axis : %(axes_single_arg)s
+ axis : {axes_single_arg}
Axis along which to fill missing values.
inplace : bool, default False
If True, fill in-place. Note: this will modify any
@@ -5923,7 +5929,7 @@ def fillna(
Returns
-------
- %(klass)s or None
+ {klass} or None
Object with missing values filled or None if ``inplace=True``.
See Also
@@ -5967,7 +5973,7 @@ def fillna(
Replace all NaN elements in column 'A', 'B', 'C', and 'D', with 0, 1,
2, and 3 respectively.
- >>> values = {'A': 0, 'B': 1, 'C': 2, 'D': 3}
+ >>> values = {{'A': 0, 'B': 1, 'C': 2, 'D': 3}}
>>> df.fillna(value=values)
A B C D
0 0.0 2.0 2.0 0
diff --git a/pandas/core/indexes/datetimelike.py b/pandas/core/indexes/datetimelike.py
index b143ff0aa9c02..32bdf82c08158 100644
--- a/pandas/core/indexes/datetimelike.py
+++ b/pandas/core/indexes/datetimelike.py
@@ -10,7 +10,7 @@
from pandas._libs.tslibs import timezones
from pandas.compat.numpy import function as nv
from pandas.errors import AbstractMethodError
-from pandas.util._decorators import Appender, cache_readonly
+from pandas.util._decorators import Appender, cache_readonly, doc
from pandas.core.dtypes.common import (
ensure_int64,
@@ -31,7 +31,7 @@
from pandas.core import algorithms
from pandas.core.arrays import DatetimeArray, PeriodArray, TimedeltaArray
from pandas.core.arrays.datetimelike import DatetimeLikeArrayMixin
-from pandas.core.base import _shared_docs
+from pandas.core.base import IndexOpsMixin
import pandas.core.indexes.base as ibase
from pandas.core.indexes.base import Index, _index_shared_docs
from pandas.core.indexes.extension import (
@@ -215,7 +215,7 @@ def take(self, indices, axis=0, allow_fill=True, fill_value=None, **kwargs):
self, indices, axis, allow_fill, fill_value, **kwargs
)
- @Appender(_shared_docs["searchsorted"])
+ @doc(IndexOpsMixin.searchsorted, klass="Datetime Index")
def searchsorted(self, value, side="left", sorter=None):
if isinstance(value, str):
raise TypeError(
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 0786674daf874..cf0f37a48fc80 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -25,7 +25,7 @@
from pandas._libs import lib, properties, reshape, tslibs
from pandas._typing import Label
from pandas.compat.numpy import function as nv
-from pandas.util._decorators import Appender, Substitution
+from pandas.util._decorators import Appender, Substitution, doc
from pandas.util._validators import validate_bool_kwarg, validate_percentile
from pandas.core.dtypes.cast import convert_dtypes, validate_numeric_casting
@@ -73,6 +73,7 @@
is_empty_data,
sanitize_array,
)
+from pandas.core.generic import NDFrame
from pandas.core.indexers import maybe_convert_indices
from pandas.core.indexes.accessors import CombinedDatetimelikeProperties
from pandas.core.indexes.api import (
@@ -2468,8 +2469,7 @@ def __rmatmul__(self, other):
"""
return self.dot(np.transpose(other))
- @Substitution(klass="Series")
- @Appender(base._shared_docs["searchsorted"])
+ @doc(base.IndexOpsMixin.searchsorted, klass="Series")
def searchsorted(self, value, side="left", sorter=None):
return algorithms.searchsorted(self._values, value, side=side, sorter=sorter)
@@ -4142,8 +4142,7 @@ def drop(
errors=errors,
)
- @Substitution(**_shared_doc_kwargs)
- @Appender(generic.NDFrame.fillna.__doc__)
+ @doc(NDFrame.fillna, **_shared_doc_kwargs)
def fillna(
self,
value=None,
diff --git a/pandas/tests/util/test_doc.py b/pandas/tests/util/test_doc.py
new file mode 100644
index 0000000000000..7e5e24456b9a7
--- /dev/null
+++ b/pandas/tests/util/test_doc.py
@@ -0,0 +1,88 @@
+from textwrap import dedent
+
+from pandas.util._decorators import doc
+
+
+@doc(method="cumsum", operation="sum")
+def cumsum(whatever):
+ """
+ This is the {method} method.
+
+ It computes the cumulative {operation}.
+ """
+
+
+@doc(
+ cumsum,
+ """
+ Examples
+ --------
+
+ >>> cumavg([1, 2, 3])
+ 2
+ """,
+ method="cumavg",
+ operation="average",
+)
+def cumavg(whatever):
+ pass
+
+
+@doc(cumsum, method="cummax", operation="maximum")
+def cummax(whatever):
+ pass
+
+
+@doc(cummax, method="cummin", operation="minimum")
+def cummin(whatever):
+ pass
+
+
+def test_docstring_formatting():
+ docstr = dedent(
+ """
+ This is the cumsum method.
+
+ It computes the cumulative sum.
+ """
+ )
+ assert cumsum.__doc__ == docstr
+
+
+def test_docstring_appending():
+ docstr = dedent(
+ """
+ This is the cumavg method.
+
+ It computes the cumulative average.
+
+ Examples
+ --------
+
+ >>> cumavg([1, 2, 3])
+ 2
+ """
+ )
+ assert cumavg.__doc__ == docstr
+
+
+def test_doc_template_from_func():
+ docstr = dedent(
+ """
+ This is the cummax method.
+
+ It computes the cumulative maximum.
+ """
+ )
+ assert cummax.__doc__ == docstr
+
+
+def test_inherit_doc_template():
+ docstr = dedent(
+ """
+ This is the cummin method.
+
+ It computes the cumulative minimum.
+ """
+ )
+ assert cummin.__doc__ == docstr
diff --git a/pandas/util/_decorators.py b/pandas/util/_decorators.py
index 0aab5a9c4113d..05f73a126feca 100644
--- a/pandas/util/_decorators.py
+++ b/pandas/util/_decorators.py
@@ -247,6 +247,46 @@ def wrapper(*args, **kwargs) -> Callable[..., Any]:
return decorate
+def doc(*args: Union[str, Callable], **kwargs: str) -> Callable[[F], F]:
+ """
+ A decorator that takes docstring templates, concatenates them, and performs
+ string substitution on the result.
+
+ This decorator is robust even if func.__doc__ is None. It also attaches a
+ "_docstr_template" attribute to the wrapped function to preserve the original
+ docstring template for later reuse.
+
+ Parameters
+ ----------
+ *args : str or callable
+ The string / docstring / docstring template to be appended in order
+ after default docstring under function.
+ **kwargs : str
+ The string which would be used to format docstring template.
+ """
+
+ def decorator(func: F) -> F:
+ @wraps(func)
+ def wrapper(*args, **kwargs) -> Callable:
+ return func(*args, **kwargs)
+
+ templates = [func.__doc__ if func.__doc__ else ""]
+ for arg in args:
+ if isinstance(arg, str):
+ templates.append(arg)
+ elif hasattr(arg, "_docstr_template"):
+ templates.append(arg._docstr_template) # type: ignore
+ elif arg.__doc__:
+ templates.append(arg.__doc__)
+
+ wrapper._docstr_template = "".join(dedent(t) for t in templates) # type: ignore
+ wrapper.__doc__ = wrapper._docstr_template.format(**kwargs) # type: ignore
+
+ return cast(F, wrapper)
+
+ return decorator
+
+
# Substitution and Appender are derived from matplotlib.docstring (1.1.0)
# module https://matplotlib.org/users/license.html
| - [ ] working on #31942
- [ ] tests added / passed
- [ ] passes `black pandas`
- [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/31969 | 2020-02-14T02:01:22Z | 2020-02-14T02:02:05Z | null | 2020-02-14T02:02:05Z |
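The `@doc` decorator introduced in the diff above can be sketched in isolation. The following is a simplified, self-contained reimplementation for illustration (names mirror the diff, but this is not the exact pandas code):

```python
from functools import wraps
from textwrap import dedent


def doc(*docstrings, **params):
    """Simplified sketch of the template-composing decorator from the diff."""

    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            return func(*args, **kwargs)

        # Collect templates: the function's own docstring first, then each
        # positional argument (a raw string, or a previously decorated
        # callable carrying a saved template, or any other callable's doc).
        templates = [func.__doc__ or ""]
        for item in docstrings:
            if isinstance(item, str):
                templates.append(item)
            elif hasattr(item, "_docstr_template"):
                templates.append(item._docstr_template)
            elif item.__doc__:
                templates.append(item.__doc__)

        # Save the raw template so later decorations can re-substitute it,
        # then render the final docstring with the keyword substitutions.
        wrapper._docstr_template = "".join(dedent(t) for t in templates)
        wrapper.__doc__ = wrapper._docstr_template.format(**params)
        return wrapper

    return decorator


@doc(method="cumsum", operation="sum")
def cumsum(values):
    """
    This is the {method} method.

    It computes the cumulative {operation}.
    """


@doc(cumsum, method="cummax", operation="maximum")
def cummax(values):
    pass
```

After decoration, `cummax.__doc__` reads "This is the cummax method." even though `cummax` itself has no docstring — the template was inherited from `cumsum` and re-rendered, which is the behavior the `test_doc.py` tests in the diff exercise.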
DOC: Use recommended library over deprecated library | diff --git a/pandas/io/json/_normalize.py b/pandas/io/json/_normalize.py
index 714bebc260c06..f158ad6cd89e3 100644
--- a/pandas/io/json/_normalize.py
+++ b/pandas/io/json/_normalize.py
@@ -159,11 +159,10 @@ def _json_normalize(
Examples
--------
- >>> from pandas.io.json import json_normalize
>>> data = [{'id': 1, 'name': {'first': 'Coleen', 'last': 'Volk'}},
... {'name': {'given': 'Mose', 'family': 'Regner'}},
... {'id': 2, 'name': 'Faye Raker'}]
- >>> json_normalize(data)
+ >>> pandas.json_normalize(data)
id name name.family name.first name.given name.last
0 1.0 NaN NaN Coleen NaN Volk
1 NaN NaN Regner NaN Mose NaN
| Importing `pandas.io.json.json_normalize` in pandas 1.0.1 gives the following warning:
```
FutureWarning: pandas.io.json.json_normalize is deprecated, use pandas.json_normalize instead
```
This PR updates the documentation here https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.json_normalize.html to use the recommended approach. | https://api.github.com/repos/pandas-dev/pandas/pulls/31968 | 2020-02-14T01:28:20Z | 2020-02-14T02:35:08Z | 2020-02-14T02:35:08Z | 2020-02-15T01:52:04Z |
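For reference, the recommended top-level call produces the same flattened frame as the deprecated import path. A quick sketch, assuming pandas ≥ 1.0 is installed, using the same sample records as the docstring in the diff:

```python
import pandas as pd

data = [
    {"id": 1, "name": {"first": "Coleen", "last": "Volk"}},
    {"name": {"given": "Mose", "family": "Regner"}},
    {"id": 2, "name": "Faye Raker"},
]

# pd.json_normalize is the supported entry point; importing json_normalize
# from pandas.io.json emits a FutureWarning as of pandas 1.0.
flat = pd.json_normalize(data)
# Columns per the docstring example in the diff:
# id, name, name.family, name.first, name.given, name.last
print(sorted(flat.columns))
```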
CLN: 29547 replace old string formatting 5 | diff --git a/pandas/tests/io/parser/test_c_parser_only.py b/pandas/tests/io/parser/test_c_parser_only.py
index 1737f14e7adf9..5bbabc8e18c47 100644
--- a/pandas/tests/io/parser/test_c_parser_only.py
+++ b/pandas/tests/io/parser/test_c_parser_only.py
@@ -158,7 +158,7 @@ def test_precise_conversion(c_parser_only):
# test numbers between 1 and 2
for num in np.linspace(1.0, 2.0, num=500):
# 25 decimal digits of precision
- text = "a\n{0:.25}".format(num)
+ text = f"a\n{num:.25}"
normal_val = float(parser.read_csv(StringIO(text))["a"][0])
precise_val = float(
@@ -170,7 +170,7 @@ def test_precise_conversion(c_parser_only):
actual_val = Decimal(text[2:])
def error(val):
- return abs(Decimal("{0:.100}".format(val)) - actual_val)
+ return abs(Decimal(f"{val:.100}") - actual_val)
normal_errors.append(error(normal_val))
precise_errors.append(error(precise_val))
@@ -299,9 +299,7 @@ def test_grow_boundary_at_cap(c_parser_only):
def test_empty_header_read(count):
s = StringIO("," * count)
- expected = DataFrame(
- columns=["Unnamed: {i}".format(i=i) for i in range(count + 1)]
- )
+ expected = DataFrame(columns=[f"Unnamed: {i}" for i in range(count + 1)])
df = parser.read_csv(s)
tm.assert_frame_equal(df, expected)
@@ -489,7 +487,7 @@ def test_comment_whitespace_delimited(c_parser_only, capsys):
captured = capsys.readouterr()
# skipped lines 2, 3, 4, 9
for line_num in (2, 3, 4, 9):
- assert "Skipping line {}".format(line_num) in captured.err
+ assert f"Skipping line {line_num}" in captured.err
expected = DataFrame([[1, 2], [5, 2], [6, 2], [7, np.nan], [8, np.nan]])
tm.assert_frame_equal(df, expected)
diff --git a/pandas/tests/io/parser/test_common.py b/pandas/tests/io/parser/test_common.py
index c19056d434ec3..b3aa1aa14a509 100644
--- a/pandas/tests/io/parser/test_common.py
+++ b/pandas/tests/io/parser/test_common.py
@@ -957,7 +957,7 @@ def test_nonexistent_path(all_parsers):
# gh-14086: raise more helpful FileNotFoundError
# GH#29233 "File foo" instead of "File b'foo'"
parser = all_parsers
- path = "{}.csv".format(tm.rands(10))
+ path = f"{tm.rands(10)}.csv"
msg = f"File {path} does not exist" if parser.engine == "c" else r"\[Errno 2\]"
with pytest.raises(FileNotFoundError, match=msg) as e:
@@ -1872,7 +1872,7 @@ def test_internal_eof_byte_to_file(all_parsers):
parser = all_parsers
data = b'c1,c2\r\n"test \x1a test", test\r\n'
expected = DataFrame([["test \x1a test", " test"]], columns=["c1", "c2"])
- path = "__{}__.csv".format(tm.rands(10))
+ path = f"__{tm.rands(10)}__.csv"
with tm.ensure_clean(path) as path:
with open(path, "wb") as f:
diff --git a/pandas/tests/io/parser/test_compression.py b/pandas/tests/io/parser/test_compression.py
index dc03370daa1e2..b773664adda72 100644
--- a/pandas/tests/io/parser/test_compression.py
+++ b/pandas/tests/io/parser/test_compression.py
@@ -145,7 +145,7 @@ def test_invalid_compression(all_parsers, invalid_compression):
parser = all_parsers
compress_kwargs = dict(compression=invalid_compression)
- msg = "Unrecognized compression type: {compression}".format(**compress_kwargs)
+ msg = f"Unrecognized compression type: {invalid_compression}"
with pytest.raises(ValueError, match=msg):
parser.read_csv("test_file.zip", **compress_kwargs)
diff --git a/pandas/tests/io/parser/test_encoding.py b/pandas/tests/io/parser/test_encoding.py
index 13f72a0414bac..3661e4e056db2 100644
--- a/pandas/tests/io/parser/test_encoding.py
+++ b/pandas/tests/io/parser/test_encoding.py
@@ -45,7 +45,7 @@ def test_utf16_bom_skiprows(all_parsers, sep, encoding):
4,5,6""".replace(
",", sep
)
- path = "__{}__.csv".format(tm.rands(10))
+ path = f"__{tm.rands(10)}__.csv"
kwargs = dict(sep=sep, skiprows=2)
utf8 = "utf-8"
diff --git a/pandas/tests/io/parser/test_multi_thread.py b/pandas/tests/io/parser/test_multi_thread.py
index 64ccaf60ec230..458ff4da55ed3 100644
--- a/pandas/tests/io/parser/test_multi_thread.py
+++ b/pandas/tests/io/parser/test_multi_thread.py
@@ -41,9 +41,7 @@ def test_multi_thread_string_io_read_csv(all_parsers):
num_files = 100
bytes_to_df = [
- "\n".join(
- ["{i:d},{i:d},{i:d}".format(i=i) for i in range(max_row_range)]
- ).encode()
+ "\n".join([f"{i:d},{i:d},{i:d}" for i in range(max_row_range)]).encode()
for _ in range(num_files)
]
files = [BytesIO(b) for b in bytes_to_df]
diff --git a/pandas/tests/io/parser/test_na_values.py b/pandas/tests/io/parser/test_na_values.py
index f9a083d7f5d22..9f86bbd65640e 100644
--- a/pandas/tests/io/parser/test_na_values.py
+++ b/pandas/tests/io/parser/test_na_values.py
@@ -111,10 +111,11 @@ def f(i, v):
elif i > 0:
buf = "".join([","] * i)
- buf = "{0}{1}".format(buf, v)
+ buf = f"{buf}{v}"
if i < nv - 1:
- buf = "{0}{1}".format(buf, "".join([","] * (nv - i - 1)))
+ joined = "".join([","] * (nv - i - 1))
+ buf = f"{buf}{joined}"
return buf
diff --git a/pandas/tests/io/parser/test_parse_dates.py b/pandas/tests/io/parser/test_parse_dates.py
index b01b22e811ee3..31573e4e6ecce 100644
--- a/pandas/tests/io/parser/test_parse_dates.py
+++ b/pandas/tests/io/parser/test_parse_dates.py
@@ -1101,7 +1101,7 @@ def test_bad_date_parse(all_parsers, cache_dates, value):
# if we have an invalid date make sure that we handle this with
# and w/o the cache properly
parser = all_parsers
- s = StringIO(("{value},\n".format(value=value)) * 50000)
+ s = StringIO((f"{value},\n") * 50000)
parser.read_csv(
s,
diff --git a/pandas/tests/io/parser/test_read_fwf.py b/pandas/tests/io/parser/test_read_fwf.py
index 27aef2376e87d..e982667f06f31 100644
--- a/pandas/tests/io/parser/test_read_fwf.py
+++ b/pandas/tests/io/parser/test_read_fwf.py
@@ -260,7 +260,7 @@ def test_fwf_regression():
# Turns out "T060" is parsable as a datetime slice!
tz_list = [1, 10, 20, 30, 60, 80, 100]
widths = [16] + [8] * len(tz_list)
- names = ["SST"] + ["T{z:03d}".format(z=z) for z in tz_list[1:]]
+ names = ["SST"] + [f"T{z:03d}" for z in tz_list[1:]]
data = """ 2009164202000 9.5403 9.4105 8.6571 7.8372 6.0612 5.8843 5.5192
2009164203000 9.5435 9.2010 8.6167 7.8176 6.0804 5.8728 5.4869
diff --git a/pandas/tests/io/pytables/conftest.py b/pandas/tests/io/pytables/conftest.py
index 214f95c6fb441..38ffcb3b0e8ec 100644
--- a/pandas/tests/io/pytables/conftest.py
+++ b/pandas/tests/io/pytables/conftest.py
@@ -6,7 +6,7 @@
@pytest.fixture
def setup_path():
"""Fixture for setup path"""
- return "tmp.__{}__.h5".format(tm.rands(10))
+ return f"tmp.__{tm.rands(10)}__.h5"
@pytest.fixture(scope="module", autouse=True)
diff --git a/pandas/tests/io/pytables/test_store.py b/pandas/tests/io/pytables/test_store.py
index 547de39eec5e0..fd585a73f6ce6 100644
--- a/pandas/tests/io/pytables/test_store.py
+++ b/pandas/tests/io/pytables/test_store.py
@@ -653,7 +653,7 @@ def test_getattr(self, setup_path):
# not stores
for x in ["mode", "path", "handle", "complib"]:
- getattr(store, "_{x}".format(x=x))
+ getattr(store, f"_{x}")
def test_put(self, setup_path):
@@ -690,9 +690,7 @@ def test_put_string_index(self, setup_path):
with ensure_clean_store(setup_path) as store:
- index = Index(
- ["I am a very long string index: {i}".format(i=i) for i in range(20)]
- )
+ index = Index([f"I am a very long string index: {i}" for i in range(20)])
s = Series(np.arange(20), index=index)
df = DataFrame({"A": s, "B": s})
@@ -705,7 +703,7 @@ def test_put_string_index(self, setup_path):
# mixed length
index = Index(
["abcdefghijklmnopqrstuvwxyz1234567890"]
- + ["I am a very long string index: {i}".format(i=i) for i in range(20)]
+ + [f"I am a very long string index: {i}" for i in range(20)]
)
s = Series(np.arange(21), index=index)
df = DataFrame({"A": s, "B": s})
@@ -2044,7 +2042,7 @@ def test_unimplemented_dtypes_table_columns(self, setup_path):
df = tm.makeDataFrame()
df[n] = f
with pytest.raises(TypeError):
- store.append("df1_{n}".format(n=n), df)
+ store.append(f"df1_{n}", df)
# frame
df = tm.makeDataFrame()
@@ -2689,16 +2687,12 @@ def test_select_dtypes(self, setup_path):
expected = df[df.boolv == True].reindex(columns=["A", "boolv"]) # noqa
for v in [True, "true", 1]:
- result = store.select(
- "df", "boolv == {v!s}".format(v=v), columns=["A", "boolv"]
- )
+ result = store.select("df", f"boolv == {v}", columns=["A", "boolv"])
tm.assert_frame_equal(expected, result)
expected = df[df.boolv == False].reindex(columns=["A", "boolv"]) # noqa
for v in [False, "false", 0]:
- result = store.select(
- "df", "boolv == {v!s}".format(v=v), columns=["A", "boolv"]
- )
+ result = store.select("df", f"boolv == {v}", columns=["A", "boolv"])
tm.assert_frame_equal(expected, result)
# integer index
@@ -2784,7 +2778,7 @@ def test_select_with_many_inputs(self, setup_path):
users=["a"] * 50
+ ["b"] * 50
+ ["c"] * 100
- + ["a{i:03d}".format(i=i) for i in range(100)],
+ + [f"a{i:03d}" for i in range(100)],
)
)
_maybe_remove(store, "df")
@@ -2805,7 +2799,7 @@ def test_select_with_many_inputs(self, setup_path):
tm.assert_frame_equal(expected, result)
# big selector along the columns
- selector = ["a", "b", "c"] + ["a{i:03d}".format(i=i) for i in range(60)]
+ selector = ["a", "b", "c"] + [f"a{i:03d}" for i in range(60)]
result = store.select(
"df", "ts>=Timestamp('2012-02-01') and users=selector"
)
@@ -2914,21 +2908,19 @@ def test_select_iterator_complete_8014(self, setup_path):
# select w/o iterator and where clause, single term, begin
# of range, works
- where = "index >= '{beg_dt}'".format(beg_dt=beg_dt)
+ where = f"index >= '{beg_dt}'"
result = store.select("df", where=where)
tm.assert_frame_equal(expected, result)
# select w/o iterator and where clause, single term, end
# of range, works
- where = "index <= '{end_dt}'".format(end_dt=end_dt)
+ where = f"index <= '{end_dt}'"
result = store.select("df", where=where)
tm.assert_frame_equal(expected, result)
# select w/o iterator and where clause, inclusive range,
# works
- where = "index >= '{beg_dt}' & index <= '{end_dt}'".format(
- beg_dt=beg_dt, end_dt=end_dt
- )
+ where = f"index >= '{beg_dt}' & index <= '{end_dt}'"
result = store.select("df", where=where)
tm.assert_frame_equal(expected, result)
@@ -2948,21 +2940,19 @@ def test_select_iterator_complete_8014(self, setup_path):
tm.assert_frame_equal(expected, result)
# select w/iterator and where clause, single term, begin of range
- where = "index >= '{beg_dt}'".format(beg_dt=beg_dt)
+ where = f"index >= '{beg_dt}'"
results = list(store.select("df", where=where, chunksize=chunksize))
result = concat(results)
tm.assert_frame_equal(expected, result)
# select w/iterator and where clause, single term, end of range
- where = "index <= '{end_dt}'".format(end_dt=end_dt)
+ where = f"index <= '{end_dt}'"
results = list(store.select("df", where=where, chunksize=chunksize))
result = concat(results)
tm.assert_frame_equal(expected, result)
# select w/iterator and where clause, inclusive range
- where = "index >= '{beg_dt}' & index <= '{end_dt}'".format(
- beg_dt=beg_dt, end_dt=end_dt
- )
+ where = f"index >= '{beg_dt}' & index <= '{end_dt}'"
results = list(store.select("df", where=where, chunksize=chunksize))
result = concat(results)
tm.assert_frame_equal(expected, result)
@@ -2984,23 +2974,21 @@ def test_select_iterator_non_complete_8014(self, setup_path):
end_dt = expected.index[-2]
# select w/iterator and where clause, single term, begin of range
- where = "index >= '{beg_dt}'".format(beg_dt=beg_dt)
+ where = f"index >= '{beg_dt}'"
results = list(store.select("df", where=where, chunksize=chunksize))
result = concat(results)
rexpected = expected[expected.index >= beg_dt]
tm.assert_frame_equal(rexpected, result)
# select w/iterator and where clause, single term, end of range
- where = "index <= '{end_dt}'".format(end_dt=end_dt)
+ where = f"index <= '{end_dt}'"
results = list(store.select("df", where=where, chunksize=chunksize))
result = concat(results)
rexpected = expected[expected.index <= end_dt]
tm.assert_frame_equal(rexpected, result)
# select w/iterator and where clause, inclusive range
- where = "index >= '{beg_dt}' & index <= '{end_dt}'".format(
- beg_dt=beg_dt, end_dt=end_dt
- )
+ where = f"index >= '{beg_dt}' & index <= '{end_dt}'"
results = list(store.select("df", where=where, chunksize=chunksize))
result = concat(results)
rexpected = expected[
@@ -3018,7 +3006,7 @@ def test_select_iterator_non_complete_8014(self, setup_path):
end_dt = expected.index[-1]
# select w/iterator and where clause, single term, begin of range
- where = "index > '{end_dt}'".format(end_dt=end_dt)
+ where = f"index > '{end_dt}'"
results = list(store.select("df", where=where, chunksize=chunksize))
assert 0 == len(results)
@@ -3040,14 +3028,14 @@ def test_select_iterator_many_empty_frames(self, setup_path):
end_dt = expected.index[chunksize - 1]
# select w/iterator and where clause, single term, begin of range
- where = "index >= '{beg_dt}'".format(beg_dt=beg_dt)
+ where = f"index >= '{beg_dt}'"
results = list(store.select("df", where=where, chunksize=chunksize))
result = concat(results)
rexpected = expected[expected.index >= beg_dt]
tm.assert_frame_equal(rexpected, result)
# select w/iterator and where clause, single term, end of range
- where = "index <= '{end_dt}'".format(end_dt=end_dt)
+ where = f"index <= '{end_dt}'"
results = list(store.select("df", where=where, chunksize=chunksize))
assert len(results) == 1
@@ -3056,9 +3044,7 @@ def test_select_iterator_many_empty_frames(self, setup_path):
tm.assert_frame_equal(rexpected, result)
# select w/iterator and where clause, inclusive range
- where = "index >= '{beg_dt}' & index <= '{end_dt}'".format(
- beg_dt=beg_dt, end_dt=end_dt
- )
+ where = f"index >= '{beg_dt}' & index <= '{end_dt}'"
results = list(store.select("df", where=where, chunksize=chunksize))
# should be 1, is 10
@@ -3076,9 +3062,7 @@ def test_select_iterator_many_empty_frames(self, setup_path):
# return [] e.g. `for e in []: print True` never prints
# True.
- where = "index <= '{beg_dt}' & index >= '{end_dt}'".format(
- beg_dt=beg_dt, end_dt=end_dt
- )
+ where = f"index <= '{beg_dt}' & index >= '{end_dt}'"
results = list(store.select("df", where=where, chunksize=chunksize))
# should be []
@@ -3807,8 +3791,8 @@ def test_start_stop_fixed(self, setup_path):
def test_select_filter_corner(self, setup_path):
df = DataFrame(np.random.randn(50, 100))
- df.index = ["{c:3d}".format(c=c) for c in df.index]
- df.columns = ["{c:3d}".format(c=c) for c in df.columns]
+ df.index = [f"{c:3d}" for c in df.index]
+ df.columns = [f"{c:3d}" for c in df.columns]
with ensure_clean_store(setup_path) as store:
store.put("frame", df, format="table")
@@ -4259,7 +4243,7 @@ def test_append_with_diff_col_name_types_raises_value_error(self, setup_path):
df5 = DataFrame({("1", 2, object): np.random.randn(10)})
with ensure_clean_store(setup_path) as store:
- name = "df_{}".format(tm.rands(10))
+ name = f"df_{tm.rands(10)}"
store.append(name, df)
for d in (df2, df3, df4, df5):
@@ -4543,9 +4527,7 @@ def test_to_hdf_with_object_column_names(self, setup_path):
with ensure_clean_path(setup_path) as path:
with catch_warnings(record=True):
df.to_hdf(path, "df", format="table", data_columns=True)
- result = pd.read_hdf(
- path, "df", where="index = [{0}]".format(df.index[0])
- )
+ result = pd.read_hdf(path, "df", where=f"index = [{df.index[0]}]")
assert len(result)
def test_read_hdf_open_store(self, setup_path):
@@ -4678,16 +4660,16 @@ def test_query_long_float_literal(self, setup_path):
store.append("test", df, format="table", data_columns=True)
cutoff = 1000000000.0006
- result = store.select("test", "A < {cutoff:.4f}".format(cutoff=cutoff))
+ result = store.select("test", f"A < {cutoff:.4f}")
assert result.empty
cutoff = 1000000000.0010
- result = store.select("test", "A > {cutoff:.4f}".format(cutoff=cutoff))
+ result = store.select("test", f"A > {cutoff:.4f}")
expected = df.loc[[1, 2], :]
tm.assert_frame_equal(expected, result)
exact = 1000000000.0011
- result = store.select("test", "A == {exact:.4f}".format(exact=exact))
+ result = store.select("test", f"A == {exact:.4f}")
expected = df.loc[[1], :]
tm.assert_frame_equal(expected, result)
@@ -4714,21 +4696,21 @@ def test_query_compare_column_type(self, setup_path):
for op in ["<", ">", "=="]:
# non strings to string column always fail
for v in [2.1, True, pd.Timestamp("2014-01-01"), pd.Timedelta(1, "s")]:
- query = "date {op} v".format(op=op)
+ query = f"date {op} v"
with pytest.raises(TypeError):
store.select("test", where=query)
# strings to other columns must be convertible to type
v = "a"
for col in ["int", "float", "real_date"]:
- query = "{col} {op} v".format(op=op, col=col)
+ query = f"{col} {op} v"
with pytest.raises(ValueError):
store.select("test", where=query)
for v, col in zip(
["1", "1.1", "2014-01-01"], ["int", "float", "real_date"]
):
- query = "{col} {op} v".format(op=op, col=col)
+ query = f"{col} {op} v"
result = store.select("test", where=query)
if op == "==":
| I split PR #31844 into batches; this is the fifth.
For this PR I ran `grep -l -R -e '%s' -e '%d' -e '\.format(' --include=*.{py,pyx} pandas/`, checked every file it returned for `.format(`, and replaced the old string formatting with the corresponding f-strings, working toward a full cleanup of [#29547](https://github.com/pandas-dev/pandas/issues/29547). I may have missed something, so it is a good idea to double-check just in case.
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
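The conversion in this batch is mechanical: for the same value, the old `.format` call and the new f-string render identical strings. A small self-contained check of two patterns touched by the diff (the values here are illustrative, not taken from the tests):

```python
# 25 significant digits of precision: old .format style vs. f-string,
# as in test_precise_conversion.
num = 1.2345678901234567890123456789
old = "a\n{0:.25}".format(num)
new = f"a\n{num:.25}"
assert old == new

# Zero-padded integer formatting used for the T060-style column names
# in test_fwf_regression.
tz_list = [10, 20, 30]
old_names = ["T{z:03d}".format(z=z) for z in tz_list]
new_names = [f"T{z:03d}" for z in tz_list]
assert old_names == new_names == ["T010", "T020", "T030"]
```

Because both forms route through the same format-spec mini-language, replacing one with the other cannot change test output, only readability.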
CLN: 29547 replace old string formatting 4 | diff --git a/pandas/tests/internals/test_internals.py b/pandas/tests/internals/test_internals.py
index fe161a0da791a..0c9ddbf5473b3 100644
--- a/pandas/tests/internals/test_internals.py
+++ b/pandas/tests/internals/test_internals.py
@@ -91,9 +91,7 @@ def create_block(typestr, placement, item_shape=None, num_offset=0):
elif typestr in ("complex", "c16", "c8"):
values = 1.0j * (mat.astype(typestr) + num_offset)
elif typestr in ("object", "string", "O"):
- values = np.reshape(
- ["A{i:d}".format(i=i) for i in mat.ravel() + num_offset], shape
- )
+ values = np.reshape([f"A{i:d}" for i in mat.ravel() + num_offset], shape)
elif typestr in ("b", "bool"):
values = np.ones(shape, dtype=np.bool_)
elif typestr in ("datetime", "dt", "M8[ns]"):
@@ -101,7 +99,7 @@ def create_block(typestr, placement, item_shape=None, num_offset=0):
elif typestr.startswith("M8[ns"):
# datetime with tz
m = re.search(r"M8\[ns,\s*(\w+\/?\w*)\]", typestr)
- assert m is not None, "incompatible typestr -> {0}".format(typestr)
+ assert m is not None, f"incompatible typestr -> {typestr}"
tz = m.groups()[0]
assert num_items == 1, "must have only 1 num items for a tz-aware"
values = DatetimeIndex(np.arange(N) * 1e9, tz=tz)
@@ -607,9 +605,9 @@ def test_interleave(self):
# self
for dtype in ["f8", "i8", "object", "bool", "complex", "M8[ns]", "m8[ns]"]:
- mgr = create_mgr("a: {0}".format(dtype))
+ mgr = create_mgr(f"a: {dtype}")
assert mgr.as_array().dtype == dtype
- mgr = create_mgr("a: {0}; b: {0}".format(dtype))
+ mgr = create_mgr(f"a: {dtype}; b: {dtype}")
assert mgr.as_array().dtype == dtype
# will be converted according the actual dtype of the underlying
@@ -1136,7 +1134,7 @@ def __array__(self):
return np.array(self.value, dtype=self.dtype)
def __str__(self) -> str:
- return "DummyElement({}, {})".format(self.value, self.dtype)
+ return f"DummyElement({self.value}, {self.dtype})"
def __repr__(self) -> str:
return str(self)
diff --git a/pandas/tests/io/excel/test_readers.py b/pandas/tests/io/excel/test_readers.py
index 8d00ef1b7fe3e..a59b409809eed 100644
--- a/pandas/tests/io/excel/test_readers.py
+++ b/pandas/tests/io/excel/test_readers.py
@@ -596,7 +596,8 @@ def test_read_from_file_url(self, read_ext, datapath):
# fails on some systems
import platform
- pytest.skip("failing on {}".format(" ".join(platform.uname()).strip()))
+ platform_info = " ".join(platform.uname()).strip()
+ pytest.skip(f"failing on {platform_info}")
tm.assert_frame_equal(url_table, local_table)
@@ -957,7 +958,7 @@ def test_excel_passes_na_filter(self, read_ext, na_filter):
def test_unexpected_kwargs_raises(self, read_ext, arg):
# gh-17964
kwarg = {arg: "Sheet1"}
- msg = r"unexpected keyword argument `{}`".format(arg)
+ msg = fr"unexpected keyword argument `{arg}`"
with pd.ExcelFile("test1" + read_ext) as excel:
with pytest.raises(TypeError, match=msg):
diff --git a/pandas/tests/io/excel/test_style.py b/pandas/tests/io/excel/test_style.py
index 88f4c3736bc0d..31b033f381f0c 100644
--- a/pandas/tests/io/excel/test_style.py
+++ b/pandas/tests/io/excel/test_style.py
@@ -45,10 +45,7 @@ def style(df):
def assert_equal_style(cell1, cell2, engine):
if engine in ["xlsxwriter", "openpyxl"]:
pytest.xfail(
- reason=(
- "GH25351: failing on some attribute "
- "comparisons in {}".format(engine)
- )
+ reason=(f"GH25351: failing on some attribute comparisons in {engine}")
)
# XXX: should find a better way to check equality
assert cell1.alignment.__dict__ == cell2.alignment.__dict__
@@ -108,7 +105,7 @@ def custom_converter(css):
for col1, col2 in zip(wb["frame"].columns, wb["styled"].columns):
assert len(col1) == len(col2)
for cell1, cell2 in zip(col1, col2):
- ref = "{cell2.column}{cell2.row:d}".format(cell2=cell2)
+ ref = f"{cell2.column}{cell2.row:d}"
# XXX: this isn't as strong a test as ideal; we should
# confirm that differences are exclusive
if ref == "B2":
@@ -156,7 +153,7 @@ def custom_converter(css):
for col1, col2 in zip(wb["frame"].columns, wb["custom"].columns):
assert len(col1) == len(col2)
for cell1, cell2 in zip(col1, col2):
- ref = "{cell2.column}{cell2.row:d}".format(cell2=cell2)
+ ref = f"{cell2.column}{cell2.row:d}"
if ref in ("B2", "C3", "D4", "B5", "C6", "D7", "B8", "B9"):
assert not cell1.font.bold
assert cell2.font.bold
diff --git a/pandas/tests/io/excel/test_writers.py b/pandas/tests/io/excel/test_writers.py
index 91665a24fc4c5..506d223dbedb4 100644
--- a/pandas/tests/io/excel/test_writers.py
+++ b/pandas/tests/io/excel/test_writers.py
@@ -41,7 +41,7 @@ def set_engine(engine, ext):
which engine should be used to write Excel files. After executing
the test it rolls back said change to the global option.
"""
- option_name = "io.excel.{ext}.writer".format(ext=ext.strip("."))
+ option_name = f"io.excel.{ext.strip('.')}.writer"
prev_engine = get_option(option_name)
set_option(option_name, engine)
yield
@@ -1206,7 +1206,7 @@ def test_path_path_lib(self, engine, ext):
writer = partial(df.to_excel, engine=engine)
reader = partial(pd.read_excel, index_col=0)
- result = tm.round_trip_pathlib(writer, reader, path="foo.{ext}".format(ext=ext))
+ result = tm.round_trip_pathlib(writer, reader, path=f"foo.{ext}")
tm.assert_frame_equal(result, df)
def test_path_local_path(self, engine, ext):
@@ -1214,7 +1214,7 @@ def test_path_local_path(self, engine, ext):
writer = partial(df.to_excel, engine=engine)
reader = partial(pd.read_excel, index_col=0)
- result = tm.round_trip_pathlib(writer, reader, path="foo.{ext}".format(ext=ext))
+ result = tm.round_trip_pathlib(writer, reader, path=f"foo.{ext}")
tm.assert_frame_equal(result, df)
def test_merged_cell_custom_objects(self, merge_cells, path):
diff --git a/pandas/tests/io/excel/test_xlrd.py b/pandas/tests/io/excel/test_xlrd.py
index cc7e2311f362a..d456afe4ed351 100644
--- a/pandas/tests/io/excel/test_xlrd.py
+++ b/pandas/tests/io/excel/test_xlrd.py
@@ -37,7 +37,7 @@ def test_read_xlrd_book(read_ext, frame):
# TODO: test for openpyxl as well
def test_excel_table_sheet_by_index(datapath, read_ext):
- path = datapath("io", "data", "excel", "test1{}".format(read_ext))
+ path = datapath("io", "data", "excel", f"test1{read_ext}")
with pd.ExcelFile(path) as excel:
with pytest.raises(xlrd.XLRDError):
pd.read_excel(excel, "asdf")
diff --git a/pandas/tests/io/formats/test_console.py b/pandas/tests/io/formats/test_console.py
index e56d14885f11e..b57a2393461a2 100644
--- a/pandas/tests/io/formats/test_console.py
+++ b/pandas/tests/io/formats/test_console.py
@@ -34,8 +34,8 @@ def test_detect_console_encoding_from_stdout_stdin(monkeypatch, empty, filled):
# they have values filled.
# GH 21552
with monkeypatch.context() as context:
- context.setattr("sys.{}".format(empty), MockEncoding(""))
- context.setattr("sys.{}".format(filled), MockEncoding(filled))
+ context.setattr(f"sys.{empty}", MockEncoding(""))
+ context.setattr(f"sys.{filled}", MockEncoding(filled))
assert detect_console_encoding() == filled
diff --git a/pandas/tests/io/formats/test_to_html.py b/pandas/tests/io/formats/test_to_html.py
index d3f044a42eb28..9a14022d6f776 100644
--- a/pandas/tests/io/formats/test_to_html.py
+++ b/pandas/tests/io/formats/test_to_html.py
@@ -300,7 +300,7 @@ def test_to_html_border(option, result, expected):
else:
with option_context("display.html.border", option):
result = result(df)
- expected = 'border="{}"'.format(expected)
+ expected = f'border="{expected}"'
assert expected in result
@@ -318,7 +318,7 @@ def test_to_html(biggie_df_fixture):
assert isinstance(s, str)
df.to_html(columns=["B", "A"], col_space=17)
- df.to_html(columns=["B", "A"], formatters={"A": lambda x: "{x:.1f}".format(x=x)})
+ df.to_html(columns=["B", "A"], formatters={"A": lambda x: f"{x:.1f}"})
df.to_html(columns=["B", "A"], float_format=str)
df.to_html(columns=["B", "A"], col_space=12, float_format=str)
@@ -745,7 +745,7 @@ def test_to_html_with_col_space_units(unit):
if isinstance(unit, int):
unit = str(unit) + "px"
for h in hdrs:
- expected = '<th style="min-width: {unit};">'.format(unit=unit)
+ expected = f'<th style="min-width: {unit};">'
assert expected in h
diff --git a/pandas/tests/io/formats/test_to_latex.py b/pandas/tests/io/formats/test_to_latex.py
index bd681032f155d..c2fbc59b8f482 100644
--- a/pandas/tests/io/formats/test_to_latex.py
+++ b/pandas/tests/io/formats/test_to_latex.py
@@ -117,10 +117,10 @@ def test_to_latex_with_formatters(self):
formatters = {
"datetime64": lambda x: x.strftime("%Y-%m"),
- "float": lambda x: "[{x: 4.1f}]".format(x=x),
- "int": lambda x: "0x{x:x}".format(x=x),
- "object": lambda x: "-{x!s}-".format(x=x),
- "__index__": lambda x: "index: {x}".format(x=x),
+ "float": lambda x: f"[{x: 4.1f}]",
+ "int": lambda x: f"0x{x:x}",
+ "object": lambda x: f"-{x!s}-",
+ "__index__": lambda x: f"index: {x}",
}
result = df.to_latex(formatters=dict(formatters))
@@ -744,9 +744,7 @@ def test_to_latex_multiindex_names(self, name0, name1, axes):
idx_names = tuple(n or "{}" for n in names)
idx_names_row = (
- "{idx_names[0]} & {idx_names[1]} & & & & \\\\\n".format(
- idx_names=idx_names
- )
+ f"{idx_names[0]} & {idx_names[1]} & & & & \\\\\n"
if (0 in axes and any(names))
else ""
)
diff --git a/pandas/tests/io/generate_legacy_storage_files.py b/pandas/tests/io/generate_legacy_storage_files.py
index f7583c93b9288..ca853ba5f00f5 100755
--- a/pandas/tests/io/generate_legacy_storage_files.py
+++ b/pandas/tests/io/generate_legacy_storage_files.py
@@ -323,17 +323,17 @@ def write_legacy_pickles(output_dir):
"This script generates a storage file for the current arch, system, "
"and python version"
)
- print(" pandas version: {0}".format(version))
- print(" output dir : {0}".format(output_dir))
+ print(f" pandas version: {version}")
+ print(f" output dir : {output_dir}")
print(" storage format: pickle")
- pth = "{0}.pickle".format(platform_name())
+ pth = f"{platform_name()}.pickle"
fh = open(os.path.join(output_dir, pth), "wb")
pickle.dump(create_pickle_data(), fh, pickle.HIGHEST_PROTOCOL)
fh.close()
- print("created pickle file: {pth}".format(pth=pth))
+ print(f"created pickle file: {pth}")
def write_legacy_file():
| I split PR #31844 into batches; this is the fourth.
For this PR I ran the command `grep -l -R -e '%s' -e '%d' -e '\.format(' --include=*.{py,pyx} pandas/`, checked every file it returned for `.format(`, and replaced the old string formatting with the corresponding f-strings, working toward a full cleanup of [#29547](https://github.com/pandas-dev/pandas/issues/29547). I may have missed something, so it is a good idea to double-check just in case.
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff` | https://api.github.com/repos/pandas-dev/pandas/pulls/31963 | 2020-02-13T20:15:24Z | 2020-02-14T01:04:47Z | 2020-02-14T01:04:47Z | 2020-02-21T01:30:00Z |
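One pattern worth noting in the diff above: when the interpolated expression is long, it is first hoisted into a named variable, as with `platform.uname()` in `test_read_from_file_url`, so the f-string itself stays short. A sketch of that refactor:

```python
import platform

# Hoist the long expression into a well-named variable first...
platform_info = " ".join(platform.uname()).strip()

# ...so the f-string stays short and readable
message = f"failing on {platform_info}"
```

This keeps line lengths under control and avoids burying a multi-call expression inside the braces of an f-string.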
REF: remove _convert_scalar_indexer | diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index ceb3f26a0526a..f2d4151edb855 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -3078,48 +3078,6 @@ def _get_partial_string_timestamp_match_key(self, key):
# GH#10331
return key
- def _convert_scalar_indexer(self, key, kind: str_t):
- """
- Convert a scalar indexer.
-
- Parameters
- ----------
- key : label of the slice bound
- kind : {'loc', 'getitem'}
- """
- assert kind in ["loc", "getitem"]
-
- if len(self) and not isinstance(self, ABCMultiIndex):
-
- # we can raise here if we are definitive that this
- # is positional indexing (eg. .loc on with a float)
- # or label indexing if we are using a type able
- # to be represented in the index
-
- if kind == "getitem" and is_float(key):
- if not self.is_floating():
- raise KeyError(key)
-
- elif kind == "loc" and is_float(key):
-
- # we want to raise KeyError on string/mixed here
- # technically we *could* raise a TypeError
- # on anything but mixed though
- if self.inferred_type not in [
- "floating",
- "mixed-integer-float",
- "integer-na",
- "string",
- "mixed",
- ]:
- raise KeyError(key)
-
- elif kind == "loc" and is_integer(key):
- if not (is_integer_dtype(self.dtype) or is_object_dtype(self.dtype)):
- raise KeyError(key)
-
- return key
-
def _validate_positional_slice(self, key: slice):
"""
For positional indexing, a slice must have either int or None
diff --git a/pandas/core/indexes/category.py b/pandas/core/indexes/category.py
index 8c2d7f4aa6c0e..d43ae8eb54818 100644
--- a/pandas/core/indexes/category.py
+++ b/pandas/core/indexes/category.py
@@ -574,16 +574,6 @@ def get_indexer_non_unique(self, target):
indexer, missing = self._engine.get_indexer_non_unique(codes)
return ensure_platform_int(indexer), missing
- @Appender(Index._convert_scalar_indexer.__doc__)
- def _convert_scalar_indexer(self, key, kind: str):
- assert kind in ["loc", "getitem"]
- if kind == "loc":
- try:
- return self.categories._convert_scalar_indexer(key, kind="loc")
- except TypeError:
- raise KeyError(key)
- return super()._convert_scalar_indexer(key, kind=kind)
-
@Appender(Index._convert_list_indexer.__doc__)
def _convert_list_indexer(self, keyarr):
# Return our indexer or raise if all of the values are not included in
diff --git a/pandas/core/indexes/datetimelike.py b/pandas/core/indexes/datetimelike.py
index d1e21a2fe7657..894e1d95a17bc 100644
--- a/pandas/core/indexes/datetimelike.py
+++ b/pandas/core/indexes/datetimelike.py
@@ -18,7 +18,6 @@
is_bool_dtype,
is_categorical_dtype,
is_dtype_equal,
- is_float,
is_integer,
is_list_like,
is_period_dtype,
@@ -377,32 +376,6 @@ def _format_attrs(self):
# --------------------------------------------------------------------
# Indexing Methods
- def _convert_scalar_indexer(self, key, kind: str):
- """
- We don't allow integer or float indexing on datetime-like when using
- loc.
-
- Parameters
- ----------
- key : label of the slice bound
- kind : {'loc', 'getitem'}
- """
- assert kind in ["loc", "getitem"]
-
- if not is_scalar(key):
- raise TypeError(key)
-
- # we don't allow integer/float indexing for loc
- # we don't allow float indexing for getitem
- is_int = is_integer(key)
- is_flt = is_float(key)
- if kind == "loc" and (is_int or is_flt):
- raise KeyError(key)
- elif kind == "getitem" and is_flt:
- raise KeyError(key)
-
- return super()._convert_scalar_indexer(key, kind=kind)
-
def _validate_partial_date_slice(self, reso: str):
raise NotImplementedError
diff --git a/pandas/core/indexes/interval.py b/pandas/core/indexes/interval.py
index d396d1c76f357..6968837fb13e6 100644
--- a/pandas/core/indexes/interval.py
+++ b/pandas/core/indexes/interval.py
@@ -514,12 +514,6 @@ def _should_fallback_to_positional(self):
# positional in this case
return self.dtype.subtype.kind in ["m", "M"]
- @Appender(Index._convert_scalar_indexer.__doc__)
- def _convert_scalar_indexer(self, key, kind: str):
- assert kind in ["getitem", "loc"]
- # never iloc, so no-op
- return key
-
def _maybe_cast_slice_bound(self, label, side, kind):
return getattr(self, side)._maybe_cast_slice_bound(label, side, kind)
diff --git a/pandas/core/indexes/numeric.py b/pandas/core/indexes/numeric.py
index 06a26cc90555e..cb6f68ae0376d 100644
--- a/pandas/core/indexes/numeric.py
+++ b/pandas/core/indexes/numeric.py
@@ -254,14 +254,6 @@ def asi8(self) -> np.ndarray:
# do not cache or you'll create a memory leak
return self.values.view(self._default_dtype)
- @Appender(Index._convert_scalar_indexer.__doc__)
- def _convert_scalar_indexer(self, key, kind: str):
- assert kind in ["loc", "getitem"]
-
- # never iloc, which we don't coerce to integers
- key = self._maybe_cast_indexer(key)
- return super()._convert_scalar_indexer(key, kind=kind)
-
class Int64Index(IntegerIndex):
__doc__ = _num_index_shared_docs["class_descr"] % _int64_descr_args
@@ -391,12 +383,6 @@ def astype(self, dtype, copy=True):
def _should_fallback_to_positional(self):
return False
- @Appender(Index._convert_scalar_indexer.__doc__)
- def _convert_scalar_indexer(self, key, kind: str):
- assert kind in ["loc", "getitem"]
- # no-op for non-iloc
- return key
-
@Appender(Index._convert_slice_indexer.__doc__)
def _convert_slice_indexer(self, key: slice, kind: str):
assert kind in ["loc", "getitem"]
diff --git a/pandas/core/indexing.py b/pandas/core/indexing.py
index 35e61ab6a59c9..9a671c7fc170a 100755
--- a/pandas/core/indexing.py
+++ b/pandas/core/indexing.py
@@ -866,16 +866,7 @@ def _validate_key(self, key, axis: int):
# slice of labels (where start-end in labels)
# slice of integers (only if in the labels)
# boolean
-
- if isinstance(key, slice):
- return
-
- if com.is_bool_indexer(key):
- return
-
- if not is_list_like_indexer(key):
- labels = self.obj._get_axis(axis)
- labels._convert_scalar_indexer(key, kind="loc")
+ pass
def _has_valid_setitem_indexer(self, indexer) -> bool:
return True
@@ -1139,15 +1130,6 @@ def _convert_to_indexer(self, key, axis: int, is_setter: bool = False):
if isinstance(key, slice):
return labels._convert_slice_indexer(key, kind="loc")
- if is_scalar(key):
- # try to find out correct indexer, if not type correct raise
- try:
- key = labels._convert_scalar_indexer(key, kind="loc")
- except KeyError:
- # but we will allow setting
- if not is_setter:
- raise
-
# see if we are positional in nature
is_int_index = labels.is_integer()
is_int_positional = is_integer(key) and not is_int_index
@@ -2029,11 +2011,17 @@ def _convert_key(self, key, is_setter: bool = False):
if is_setter:
return list(key)
- lkey = list(key)
- for n, (ax, i) in enumerate(zip(self.obj.axes, key)):
- lkey[n] = ax._convert_scalar_indexer(i, kind="loc")
+ return key
- return tuple(lkey)
+ def __getitem__(self, key):
+ if self.ndim != 1 or not is_scalar(key):
+ # FIXME: is_scalar check is a kludge
+ return super().__getitem__(key)
+
+ # Like Index.get_value, but we do not allow positional fallback
+ obj = self.obj
+ loc = obj.index.get_loc(key)
+ return obj.index._get_values_for_loc(obj, loc, key)
@Appender(IndexingMixin.iat.__doc__)
diff --git a/pandas/core/series.py b/pandas/core/series.py
index d565cbbdd5344..dde86cf303797 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -852,9 +852,7 @@ def __getitem__(self, key):
return self
key_is_scalar = is_scalar(key)
- if key_is_scalar:
- key = self.index._convert_scalar_indexer(key, kind="getitem")
- elif isinstance(key, (list, tuple)):
+ if isinstance(key, (list, tuple)):
key = unpack_1tuple(key)
if key_is_scalar or isinstance(self.index, MultiIndex):
@@ -974,8 +972,6 @@ def _get_value(self, label, takeable: bool = False):
# Similar to Index.get_value, but we do not fall back to positional
loc = self.index.get_loc(label)
- # We assume that _convert_scalar_indexer has already been called,
- # with kind="loc", if necessary, by the time we get here
return self.index._get_values_for_loc(self, loc, label)
def __setitem__(self, key, value):
| This sits on top of #31867, so it is partly a demonstration of how much complication is caused by our inconsistent error-raising. | https://api.github.com/repos/pandas-dev/pandas/pulls/31962 | 2020-02-13T20:11:32Z | 2020-03-03T15:27:18Z | 2020-03-03T15:27:18Z | 2020-03-03T15:53:13Z |
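For context, the deleted `_convert_scalar_indexer` largely existed to raise `KeyError` for lookup keys that cannot possibly label the index, such as a float key on a non-float index. A rough standalone sketch of that check (heavily simplified; the real method also distinguished `loc` from `getitem` kinds and handled integer keys, and the function name here is illustrative):

```python
def check_scalar_key(key, index_is_floating: bool):
    """Raise KeyError for a float key on a non-float index,
    loosely mirroring the removed pre-conversion validation."""
    if isinstance(key, float) and not index_is_floating:
        raise KeyError(key)
    return key
```

After the refactor, roughly this kind of rejection happens inside `Index.get_loc` itself rather than in a separate conversion step, which is why the call sites in `indexing.py` and `series.py` simply drop out.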
CLN: D414: Section has no content | diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index da152b70abd2e..22b62b56a3c88 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -3821,6 +3821,8 @@ def align(
@Appender(
"""
+ Examples
+ --------
>>> df = pd.DataFrame({"A": [1, 2, 3], "B": [4, 5, 6]})
Change the row labels.
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index adfb553d40ff0..25e98facaf76b 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -579,9 +579,6 @@ def set_axis(self, labels, axis=0, inplace=False):
See Also
--------
%(klass)s.rename_axis : Alter the name of the index%(see_also_sub)s.
-
- Examples
- --------
"""
if inplace:
setattr(self, self._get_axis_name(axis), labels)
diff --git a/pandas/core/indexes/multi.py b/pandas/core/indexes/multi.py
index 6fa42804d2e39..5eef6e9979890 100644
--- a/pandas/core/indexes/multi.py
+++ b/pandas/core/indexes/multi.py
@@ -2140,6 +2140,9 @@ def reorder_levels(self, order):
Parameters
----------
+ order : list of int or list of str
+ List representing new level order. Reference level by number
+ (position) or by key (label).
Returns
-------
diff --git a/pandas/core/internals/managers.py b/pandas/core/internals/managers.py
index 37a4b43648bb1..fb20b5e89ccf3 100644
--- a/pandas/core/internals/managers.py
+++ b/pandas/core/internals/managers.py
@@ -101,7 +101,9 @@ class BlockManager(PandasObject):
Parameters
----------
-
+ blocks: Sequence of Block
+ axes: Sequence of Index
+ do_integrity_check: bool, default True
Notes
-----
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 8577a7fb904dc..48fe86dd5e9c9 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -4006,6 +4006,8 @@ def rename(
@Appender(
"""
+ Examples
+ --------
>>> s = pd.Series([1, 2, 3])
>>> s
0 1
| https://api.github.com/repos/pandas-dev/pandas/pulls/31961 | 2020-02-13T18:54:56Z | 2020-02-13T20:11:14Z | 2020-02-13T20:11:14Z | 2020-02-13T20:31:15Z | |
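pydocstyle's D414 fires when a section header has nothing under it. In the diff above, the shared docstrings passed to `@Appender` gain an `Examples` header so the appended examples form a complete section. A toy version of the decorator pattern (a simplified stand-in for pandas' internal `Appender`; the names are illustrative):

```python
def appender(addendum: str):
    """Append `addendum` to the decorated function's docstring."""
    def decorator(func):
        func.__doc__ = (func.__doc__ or "") + addendum
        return func
    return decorator

@appender(
    """
    Examples
    --------
    >>> demo()
    42
    """
)
def demo():
    """Return a constant."""
    return 42
```

Because the appended text carries its own `Examples` header, the combined docstring has a header followed by content, which is exactly what D414 requires.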
CLN: remove blocking return | diff --git a/pandas/tests/indexes/multi/test_copy.py b/pandas/tests/indexes/multi/test_copy.py
index 1acc65aef8b8a..67b815ecba3b8 100644
--- a/pandas/tests/indexes/multi/test_copy.py
+++ b/pandas/tests/indexes/multi/test_copy.py
@@ -80,7 +80,6 @@ def test_copy_method_kwargs(deep, kwarg, value):
codes=[[0, 0, 0, 1], [0, 0, 1, 1]],
names=["first", "second"],
)
- return
idx_copy = idx.copy(**{kwarg: value, "deep": deep})
if kwarg == "names":
assert getattr(idx_copy, kwarg) == value
| Just noticed a `return` statement in the middle of a test that prevents the rest of the test from executing. This was added in #23752 | https://api.github.com/repos/pandas-dev/pandas/pulls/31960 | 2020-02-13T18:53:29Z | 2020-02-13T19:35:30Z | 2020-02-13T19:35:30Z | 2020-02-19T10:49:57Z |
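The hazard fixed here is easy to reproduce: a stray `return` makes every later assertion dead code, so the test passes without checking anything. A minimal illustration (the flag argument just stands in for the stray statement):

```python
def run_checks(stray_return: bool) -> list:
    """Mimic a test body; with `stray_return` the trailing checks are
    silently skipped, as in test_copy_method_kwargs before this fix."""
    ran = []
    if stray_return:
        return ran  # everything below becomes dead code
    ran.append("copy-kwargs assertions")
    return ran
```

The vacuous test still reports green, which is why this kind of bug tends to survive until someone reads the test body.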
CLN: D411: Missing blank line before section | diff --git a/pandas/_testing.py b/pandas/_testing.py
index 46ed65c87e8dd..4a6973b600409 100644
--- a/pandas/_testing.py
+++ b/pandas/_testing.py
@@ -2590,6 +2590,7 @@ def test_parallel(num_threads=2, kwargs_list=None):
kwargs_list : list of dicts, optional
The list of kwargs to update original
function kwargs on different threads.
+
Notes
-----
This decorator does not pass the return value of the decorated function.
diff --git a/pandas/core/indexes/datetimelike.py b/pandas/core/indexes/datetimelike.py
index d505778d18c52..1b3b6934aa53a 100644
--- a/pandas/core/indexes/datetimelike.py
+++ b/pandas/core/indexes/datetimelike.py
@@ -937,12 +937,14 @@ def _wrap_joined_index(self, joined, other):
def insert(self, loc, item):
"""
Make new Index inserting new item at location
+
Parameters
----------
loc : int
item : object
if not either a Python datetime or a numpy integer-like, returned
Index dtype will be object rather than datetime.
+
Returns
-------
new_index : Index
diff --git a/pandas/io/excel/_openpyxl.py b/pandas/io/excel/_openpyxl.py
index ab2d97e6026d1..d35d466e6c5c9 100644
--- a/pandas/io/excel/_openpyxl.py
+++ b/pandas/io/excel/_openpyxl.py
@@ -366,11 +366,13 @@ def _convert_to_number_format(cls, number_format_dict):
"""
Convert ``number_format_dict`` to an openpyxl v2.1.0 number format
initializer.
+
Parameters
----------
number_format_dict : dict
A dict with zero or more of the following keys.
'format_code' : str
+
Returns
-------
number_format : str
@@ -381,12 +383,14 @@ def _convert_to_number_format(cls, number_format_dict):
def _convert_to_protection(cls, protection_dict):
"""
Convert ``protection_dict`` to an openpyxl v2 Protection object.
+
Parameters
----------
protection_dict : dict
A dict with zero or more of the following keys.
'locked'
'hidden'
+
Returns
-------
"""
diff --git a/pandas/io/excel/_util.py b/pandas/io/excel/_util.py
index 9d284c8031840..a33406b6e80d7 100644
--- a/pandas/io/excel/_util.py
+++ b/pandas/io/excel/_util.py
@@ -174,6 +174,7 @@ def _fill_mi_header(row, control_row):
"""Forward fill blank entries in row but only inside the same parent index.
Used for creating headers in Multiindex.
+
Parameters
----------
row : list
diff --git a/pandas/util/_validators.py b/pandas/util/_validators.py
index a715094e65e98..bfcfd1c5a7101 100644
--- a/pandas/util/_validators.py
+++ b/pandas/util/_validators.py
@@ -91,6 +91,7 @@ def validate_args(fname, args, max_fname_arg_count, compat_args):
arguments **positionally** internally when calling downstream
implementations, a dict ensures that the original
order of the keyword arguments is enforced.
+
Raises
------
TypeError
| https://api.github.com/repos/pandas-dev/pandas/pulls/31959 | 2020-02-13T18:35:14Z | 2020-02-13T20:10:25Z | 2020-02-13T20:10:25Z | 2020-02-13T20:20:48Z | |
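D411 requires a blank line between preceding text and a numpydoc section header, which is exactly what each hunk above inserts. A rough standalone check for the rule (simplified: it only recognizes a few exact header names):

```python
SECTION_HEADERS = {"Parameters", "Returns", "Raises", "Notes", "Examples", "See Also"}

def missing_blank_before_section(docstring: str) -> bool:
    """Return True if any known section header is directly preceded
    by a non-blank line -- the situation D411 flags."""
    lines = [line.strip() for line in docstring.splitlines()]
    for i, line in enumerate(lines):
        if line in SECTION_HEADERS and i > 0 and lines[i - 1] != "":
            return True
    return False
```

The blank line matters to the docs build as well: without it, Sphinx's numpydoc parsing can fold the header into the previous paragraph instead of starting a new section.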
CLN: D409: Section underline should match the length of its name | diff --git a/pandas/core/indexes/multi.py b/pandas/core/indexes/multi.py
index 6fa42804d2e39..3381a2f765223 100644
--- a/pandas/core/indexes/multi.py
+++ b/pandas/core/indexes/multi.py
@@ -1686,7 +1686,7 @@ def _lexsort_depth(self) -> int:
MultiIndex that are sorted lexically
Returns
- ------
+ -------
int
"""
int64_codes = [ensure_int64(level_codes) for level_codes in self.codes]
diff --git a/pandas/tests/arithmetic/common.py b/pandas/tests/arithmetic/common.py
index 83d19b8a20ac3..ccc49adc5da82 100644
--- a/pandas/tests/arithmetic/common.py
+++ b/pandas/tests/arithmetic/common.py
@@ -13,7 +13,7 @@ def assert_invalid_addsub_type(left, right, msg=None):
Helper to assert that left and right can be neither added nor subtracted.
Parameters
- ---------
+ ----------
left : object
right : object
msg : str or None, default None
diff --git a/pandas/util/_test_decorators.py b/pandas/util/_test_decorators.py
index cd7fdd55a4d2c..25394dc6775d8 100644
--- a/pandas/util/_test_decorators.py
+++ b/pandas/util/_test_decorators.py
@@ -40,15 +40,15 @@ def test_foo():
def safe_import(mod_name: str, min_version: Optional[str] = None):
"""
- Parameters:
- -----------
+ Parameters
+ ----------
mod_name : str
Name of the module to be imported
min_version : str, default None
Minimum required version of the specified mod_name
- Returns:
- --------
+ Returns
+ -------
object
The imported module if successful, or False
"""
| https://api.github.com/repos/pandas-dev/pandas/pulls/31958 | 2020-02-13T18:14:27Z | 2020-02-13T20:07:34Z | 2020-02-13T20:07:34Z | 2020-02-13T20:19:08Z | |
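D409 is the simplest of these rules: the dashed underline must be exactly as long as the header above it ("Parameters" takes ten dashes, "Returns" seven). A one-line check captures it:

```python
def underline_matches(header: str, underline: str) -> bool:
    """D409: the underline must be dashes matching the header's length."""
    return underline == "-" * len(header)
```

This is why the hunks above change counts by a single dash: `------` under "Returns" becomes `-------`, and `---------` under "Parameters" becomes `----------`.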
CLN: D412: No blank lines allowed between a section header and its content | diff --git a/pandas/_config/config.py b/pandas/_config/config.py
index c283baeb9d412..2df940817498c 100644
--- a/pandas/_config/config.py
+++ b/pandas/_config/config.py
@@ -395,7 +395,6 @@ class option_context:
Examples
--------
-
>>> with option_context('display.max_rows', 10, 'display.max_columns', 5):
... ...
"""
@@ -716,8 +715,8 @@ def config_prefix(prefix):
Warning: This is not thread - safe, and won't work properly if you import
the API functions into your module using the "from x import y" construct.
- Example:
-
+ Example
+ -------
import pandas._config.config as cf
with cf.config_prefix("display.font"):
cf.register_option("color", "red")
diff --git a/pandas/_testing.py b/pandas/_testing.py
index 46ed65c87e8dd..6029fbe59bbcd 100644
--- a/pandas/_testing.py
+++ b/pandas/_testing.py
@@ -1970,35 +1970,39 @@ def makeCustomDataframe(
r_idx_type=None,
):
"""
- nrows, ncols - number of data rows/cols
- c_idx_names, idx_names - False/True/list of strings, yields No names ,
- default names or uses the provided names for the levels of the
- corresponding index. You can provide a single string when
- c_idx_nlevels ==1.
- c_idx_nlevels - number of levels in columns index. > 1 will yield MultiIndex
- r_idx_nlevels - number of levels in rows index. > 1 will yield MultiIndex
- data_gen_f - a function f(row,col) which return the data value
- at that position, the default generator used yields values of the form
- "RxCy" based on position.
- c_ndupe_l, r_ndupe_l - list of integers, determines the number
- of duplicates for each label at a given level of the corresponding
- index. The default `None` value produces a multiplicity of 1 across
- all levels, i.e. a unique index. Will accept a partial list of length
- N < idx_nlevels, for just the first N levels. If ndupe doesn't divide
- nrows/ncol, the last label might have lower multiplicity.
- dtype - passed to the DataFrame constructor as is, in case you wish to
- have more control in conjunction with a custom `data_gen_f`
- r_idx_type, c_idx_type - "i"/"f"/"s"/"u"/"dt"/"td".
- If idx_type is not None, `idx_nlevels` must be 1.
- "i"/"f" creates an integer/float index,
- "s"/"u" creates a string/unicode index
- "dt" create a datetime index.
- "td" create a timedelta index.
-
- if unspecified, string labels will be generated.
+ Create a DataFrame using supplied parameters.
- Examples:
+ Parameters
+ ----------
+ nrows, ncols - number of data rows/cols
+ c_idx_names, idx_names - False/True/list of strings, yields No names ,
+ default names or uses the provided names for the levels of the
+ corresponding index. You can provide a single string when
+ c_idx_nlevels ==1.
+ c_idx_nlevels - number of levels in columns index. > 1 will yield MultiIndex
+ r_idx_nlevels - number of levels in rows index. > 1 will yield MultiIndex
+ data_gen_f - a function f(row,col) which return the data value
+ at that position, the default generator used yields values of the form
+ "RxCy" based on position.
+ c_ndupe_l, r_ndupe_l - list of integers, determines the number
+ of duplicates for each label at a given level of the corresponding
+ index. The default `None` value produces a multiplicity of 1 across
+ all levels, i.e. a unique index. Will accept a partial list of length
+ N < idx_nlevels, for just the first N levels. If ndupe doesn't divide
+ nrows/ncol, the last label might have lower multiplicity.
+ dtype - passed to the DataFrame constructor as is, in case you wish to
+ have more control in conjunction with a custom `data_gen_f`
+ r_idx_type, c_idx_type - "i"/"f"/"s"/"u"/"dt"/"td".
+ If idx_type is not None, `idx_nlevels` must be 1.
+ "i"/"f" creates an integer/float index,
+ "s"/"u" creates a string/unicode index
+ "dt" create a datetime index.
+ "td" create a timedelta index.
+
+ if unspecified, string labels will be generated.
+ Examples
+ --------
# 5 row, 3 columns, default names on both, single index on both axis
>> makeCustomDataframe(5,3)
@@ -2514,7 +2518,6 @@ class RNGContext:
Examples
--------
-
with RNGContext(42):
np.random.randn()
"""
@@ -2669,7 +2672,6 @@ def set_timezone(tz: str):
Examples
--------
-
>>> from datetime import datetime
>>> from dateutil.tz import tzlocal
>>> tzlocal().tzname(datetime.now())
diff --git a/pandas/core/accessor.py b/pandas/core/accessor.py
index 4e3ef0c52bbdd..fc40f1db1918a 100644
--- a/pandas/core/accessor.py
+++ b/pandas/core/accessor.py
@@ -233,7 +233,6 @@ def __init__(self, pandas_object): # noqa: E999
Examples
--------
-
In your library code::
import pandas as pd
diff --git a/pandas/core/arrays/boolean.py b/pandas/core/arrays/boolean.py
index 590b40b0434e5..d93b5fbc83312 100644
--- a/pandas/core/arrays/boolean.py
+++ b/pandas/core/arrays/boolean.py
@@ -488,7 +488,6 @@ def any(self, skipna: bool = True, **kwargs):
Examples
--------
-
The result indicates whether any element is True (and by default
skips NAs):
@@ -557,7 +556,6 @@ def all(self, skipna: bool = True, **kwargs):
Examples
--------
-
The result indicates whether any element is True (and by default
skips NAs):
diff --git a/pandas/core/arrays/categorical.py b/pandas/core/arrays/categorical.py
index b095288acca90..6c7c35e9b4763 100644
--- a/pandas/core/arrays/categorical.py
+++ b/pandas/core/arrays/categorical.py
@@ -2388,7 +2388,6 @@ def isin(self, values):
Examples
--------
-
>>> s = pd.Categorical(['lama', 'cow', 'lama', 'beetle', 'lama',
... 'hippo'])
>>> s.isin(['cow', 'lama'])
diff --git a/pandas/core/arrays/sparse/dtype.py b/pandas/core/arrays/sparse/dtype.py
index 1ce735421e7d6..86869f50aab8e 100644
--- a/pandas/core/arrays/sparse/dtype.py
+++ b/pandas/core/arrays/sparse/dtype.py
@@ -336,7 +336,6 @@ def _subtype_with_str(self):
Returns
-------
-
>>> SparseDtype(int, 1)._subtype_with_str
dtype('int64')
diff --git a/pandas/core/computation/pytables.py b/pandas/core/computation/pytables.py
index 19f151846a080..097c3c22aa6c3 100644
--- a/pandas/core/computation/pytables.py
+++ b/pandas/core/computation/pytables.py
@@ -501,7 +501,6 @@ class PyTablesExpr(expr.Expr):
Examples
--------
-
'index>=date'
"columns=['A', 'D']"
'columns=A'
diff --git a/pandas/core/dtypes/concat.py b/pandas/core/dtypes/concat.py
index e53eb3b4d8e71..003c3505885bb 100644
--- a/pandas/core/dtypes/concat.py
+++ b/pandas/core/dtypes/concat.py
@@ -215,13 +215,11 @@ def union_categoricals(
Notes
-----
-
To learn more about categories, see `link
<https://pandas.pydata.org/pandas-docs/stable/user_guide/categorical.html#unioning>`__
Examples
--------
-
>>> from pandas.api.types import union_categoricals
If you want to combine categoricals that do not necessarily have
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index da152b70abd2e..2f9c8286d2988 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -927,7 +927,6 @@ def iterrows(self) -> Iterable[Tuple[Optional[Hashable], Series]]:
Notes
-----
-
1. Because ``iterrows`` returns a Series for each row,
it does **not** preserve dtypes across the rows (dtypes are
preserved across columns for DataFrames). For example,
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index adfb553d40ff0..0ea8da0da9c6d 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -899,7 +899,6 @@ def rename(
Examples
--------
-
>>> s = pd.Series([1, 2, 3])
>>> s
0 1
@@ -2208,7 +2207,6 @@ def to_json(
Examples
--------
-
>>> df = pd.DataFrame([['a', 'b'], ['c', 'd']],
... index=['row 1', 'row 2'],
... columns=['col 1', 'col 2'])
@@ -2507,7 +2505,6 @@ def to_sql(
Examples
--------
-
Create an in-memory SQLite database.
>>> from sqlalchemy import create_engine
@@ -4185,7 +4182,6 @@ def reindex(self: FrameOrSeries, *args, **kwargs) -> FrameOrSeries:
Examples
--------
-
``DataFrame.reindex`` supports two calling conventions
* ``(index=index_labels, columns=column_labels, ...)``
@@ -5768,7 +5764,6 @@ def convert_dtypes(
Notes
-----
-
By default, ``convert_dtypes`` will attempt to convert a Series (or each
Series in a DataFrame) to dtypes that support ``pd.NA``. By using the options
``convert_string``, ``convert_integer``, and ``convert_boolean``, it is
@@ -7434,7 +7429,6 @@ def asfreq(
Examples
--------
-
Start by creating a series with 4 one minute timestamps.
>>> index = pd.date_range('1/1/2000', periods=4, freq='T')
@@ -7713,7 +7707,6 @@ def resample(
Examples
--------
-
Start by creating a series with 9 one minute timestamps.
>>> index = pd.date_range('1/1/2000', periods=9, freq='T')
@@ -8100,7 +8093,6 @@ def rank(
Examples
--------
-
>>> df = pd.DataFrame(data={'Animal': ['cat', 'penguin', 'dog',
... 'spider', 'snake'],
... 'Number_legs': [4, 2, 4, 8, np.nan]})
@@ -9235,7 +9227,6 @@ def tz_localize(
Examples
--------
-
Localize local times:
>>> s = pd.Series([1],
diff --git a/pandas/core/groupby/groupby.py b/pandas/core/groupby/groupby.py
index 426b3b47d9530..1d7527e73079c 100644
--- a/pandas/core/groupby/groupby.py
+++ b/pandas/core/groupby/groupby.py
@@ -1995,7 +1995,6 @@ def ngroup(self, ascending: bool = True):
Examples
--------
-
>>> df = pd.DataFrame({"A": list("aaabba")})
>>> df
A
@@ -2062,7 +2061,6 @@ def cumcount(self, ascending: bool = True):
Examples
--------
-
>>> df = pd.DataFrame([['a'], ['a'], ['a'], ['b'], ['b'], ['a']],
... columns=['A'])
>>> df
diff --git a/pandas/core/groupby/grouper.py b/pandas/core/groupby/grouper.py
index 8a42a8fa297cd..21e171f937de8 100644
--- a/pandas/core/groupby/grouper.py
+++ b/pandas/core/groupby/grouper.py
@@ -77,7 +77,6 @@ class Grouper:
Examples
--------
-
Syntactic sugar for ``df.groupby('A')``
>>> df.groupby(Grouper(key='A'))
diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index f3bae63aa7e03..3d549405592d6 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -1452,7 +1452,6 @@ def _get_level_values(self, level):
Examples
--------
-
>>> idx = pd.Index(list('abc'))
>>> idx
Index(['a', 'b', 'c'], dtype='object')
@@ -2501,7 +2500,6 @@ def union(self, other, sort=None):
Examples
--------
-
Union matching dtypes
>>> idx1 = pd.Index([1, 2, 3, 4])
@@ -2632,7 +2630,6 @@ def intersection(self, other, sort=False):
Examples
--------
-
>>> idx1 = pd.Index([1, 2, 3, 4])
>>> idx2 = pd.Index([3, 4, 5, 6])
>>> idx1.intersection(idx2)
@@ -2713,7 +2710,6 @@ def difference(self, other, sort=None):
Examples
--------
-
>>> idx1 = pd.Index([2, 1, 3, 4])
>>> idx2 = pd.Index([3, 4, 5, 6])
>>> idx1.difference(idx2)
diff --git a/pandas/core/indexes/multi.py b/pandas/core/indexes/multi.py
index 6fa42804d2e39..5b357af0d3244 100644
--- a/pandas/core/indexes/multi.py
+++ b/pandas/core/indexes/multi.py
@@ -1549,7 +1549,6 @@ def get_level_values(self, level):
Examples
--------
-
Create a MultiIndex:
>>> mi = pd.MultiIndex.from_arrays((list('abc'), list('def')))
@@ -1713,7 +1712,6 @@ def _sort_levels_monotonic(self):
Examples
--------
-
>>> mi = pd.MultiIndex(levels=[['a', 'b'], ['bb', 'aa']],
... codes=[[0, 0, 1, 1], [0, 1, 0, 1]])
>>> mi
diff --git a/pandas/core/indexes/period.py b/pandas/core/indexes/period.py
index 987725bb4b70b..986f87ffe3734 100644
--- a/pandas/core/indexes/period.py
+++ b/pandas/core/indexes/period.py
@@ -788,7 +788,6 @@ def period_range(
Examples
--------
-
>>> pd.period_range(start='2017-01-01', end='2018-01-01', freq='M')
PeriodIndex(['2017-01', '2017-02', '2017-03', '2017-04', '2017-05',
'2017-06', '2017-06', '2017-07', '2017-08', '2017-09',
diff --git a/pandas/core/indexes/timedeltas.py b/pandas/core/indexes/timedeltas.py
index 4a69570f1844c..b3b2bc46f6659 100644
--- a/pandas/core/indexes/timedeltas.py
+++ b/pandas/core/indexes/timedeltas.py
@@ -321,7 +321,6 @@ def timedelta_range(
Examples
--------
-
>>> pd.timedelta_range(start='1 day', periods=4)
TimedeltaIndex(['1 days', '2 days', '3 days', '4 days'],
dtype='timedelta64[ns]', freq='D')
diff --git a/pandas/core/indexing.py b/pandas/core/indexing.py
index b3777e949a08c..cb8b9cc04fc24 100755
--- a/pandas/core/indexing.py
+++ b/pandas/core/indexing.py
@@ -48,7 +48,6 @@ class _IndexSlice:
Examples
--------
-
>>> midx = pd.MultiIndex.from_product([['A0','A1'], ['B0','B1','B2','B3']])
>>> columns = ['foo', 'bar']
>>> dfmi = pd.DataFrame(np.arange(16).reshape((len(midx), len(columns))),
@@ -124,7 +123,6 @@ def iloc(self) -> "_iLocIndexer":
Examples
--------
-
>>> mydict = [{'a': 1, 'b': 2, 'c': 3, 'd': 4},
... {'a': 100, 'b': 200, 'c': 300, 'd': 400},
... {'a': 1000, 'b': 2000, 'c': 3000, 'd': 4000 }]
diff --git a/pandas/core/resample.py b/pandas/core/resample.py
index 98910a9baf962..f19a82ab6f86a 100644
--- a/pandas/core/resample.py
+++ b/pandas/core/resample.py
@@ -550,7 +550,6 @@ def backfill(self, limit=None):
Examples
--------
-
Resampling a Series:
>>> s = pd.Series([1, 2, 3],
diff --git a/pandas/core/strings.py b/pandas/core/strings.py
index 3a7e3fdab5dca..4b0fc3e47356c 100644
--- a/pandas/core/strings.py
+++ b/pandas/core/strings.py
@@ -349,7 +349,6 @@ def str_contains(arr, pat, case=True, flags=0, na=np.nan, regex=True):
Examples
--------
-
Returning a Series of booleans using only a literal pattern.
>>> s1 = pd.Series(['Mouse', 'dog', 'house and parrot', '23', np.NaN])
@@ -1274,7 +1273,6 @@ def str_findall(arr, pat, flags=0):
Examples
--------
-
>>> s = pd.Series(['Lion', 'Monkey', 'Rabbit'])
The search for the pattern 'Monkey' returns one match:
@@ -1743,7 +1741,6 @@ def str_wrap(arr, width, **kwargs):
Examples
--------
-
>>> s = pd.Series(['line to be wrapped', 'another line to be wrapped'])
>>> s.str.wrap(12)
0 line to be\nwrapped
diff --git a/pandas/core/tools/timedeltas.py b/pandas/core/tools/timedeltas.py
index 1d933cf431b4b..d7529ec799022 100644
--- a/pandas/core/tools/timedeltas.py
+++ b/pandas/core/tools/timedeltas.py
@@ -53,7 +53,6 @@ def to_timedelta(arg, unit="ns", errors="raise"):
Examples
--------
-
Parsing a single string to a Timedelta:
>>> pd.to_timedelta('1 days 06:05:01.00003')
diff --git a/pandas/core/window/ewm.py b/pandas/core/window/ewm.py
index 6da8b0c5ccadd..e045d1c2211d7 100644
--- a/pandas/core/window/ewm.py
+++ b/pandas/core/window/ewm.py
@@ -98,7 +98,6 @@ class EWM(_Rolling):
Examples
--------
-
>>> df = pd.DataFrame({'B': [0, 1, 2, np.nan, 4]})
>>> df
B
diff --git a/pandas/core/window/expanding.py b/pandas/core/window/expanding.py
index a0bf3376d2352..140e0144d0a2d 100644
--- a/pandas/core/window/expanding.py
+++ b/pandas/core/window/expanding.py
@@ -37,7 +37,6 @@ class Expanding(_Rolling_and_Expanding):
Examples
--------
-
>>> df = pd.DataFrame({'B': [0, 1, 2, np.nan, 4]})
B
0 0.0
diff --git a/pandas/core/window/rolling.py b/pandas/core/window/rolling.py
index f29cd428b7bad..65ac064a1322e 100644
--- a/pandas/core/window/rolling.py
+++ b/pandas/core/window/rolling.py
@@ -846,7 +846,6 @@ class Window(_Window):
Examples
--------
-
>>> df = pd.DataFrame({'B': [0, 1, 2, np.nan, 4]})
>>> df
B
diff --git a/pandas/io/formats/style.py b/pandas/io/formats/style.py
index eca5a3fb18e60..9c46a0036ab0d 100644
--- a/pandas/io/formats/style.py
+++ b/pandas/io/formats/style.py
@@ -462,7 +462,6 @@ def format(self, formatter, subset=None, na_rep: Optional[str] = None) -> "Style
Notes
-----
-
``formatter`` is either an ``a`` or a dict ``{column name: a}`` where
``a`` is one of
@@ -474,7 +473,6 @@ def format(self, formatter, subset=None, na_rep: Optional[str] = None) -> "Style
Examples
--------
-
>>> df = pd.DataFrame(np.random.randn(4, 2), columns=['a', 'b'])
>>> df.style.format("{:.2%}")
>>> df['c'] = ['a', 'b', 'c', 'd']
diff --git a/pandas/io/json/_json.py b/pandas/io/json/_json.py
index 39ee097bc743b..77a0c2f99496b 100644
--- a/pandas/io/json/_json.py
+++ b/pandas/io/json/_json.py
@@ -525,7 +525,6 @@ def read_json(
Examples
--------
-
>>> df = pd.DataFrame([['a', 'b'], ['c', 'd']],
... index=['row 1', 'row 2'],
... columns=['col 1', 'col 2'])
diff --git a/pandas/io/json/_normalize.py b/pandas/io/json/_normalize.py
index 08dca6b573a2f..714bebc260c06 100644
--- a/pandas/io/json/_normalize.py
+++ b/pandas/io/json/_normalize.py
@@ -61,7 +61,6 @@ def nested_to_record(
Examples
--------
-
IN[52]: nested_to_record(dict(flat1=1,dict1=dict(c=1,d=2),
nested=dict(e=dict(c=1,d=2),d=2)))
Out[52]:
@@ -160,7 +159,6 @@ def _json_normalize(
Examples
--------
-
>>> from pandas.io.json import json_normalize
>>> data = [{'id': 1, 'name': {'first': 'Coleen', 'last': 'Volk'}},
... {'name': {'given': 'Mose', 'family': 'Regner'}},
diff --git a/pandas/io/pytables.py b/pandas/io/pytables.py
index a75819d33d967..1390d2d514a5e 100644
--- a/pandas/io/pytables.py
+++ b/pandas/io/pytables.py
@@ -1830,7 +1830,6 @@ class IndexCol:
Parameters
----------
-
axis : axis which I reference
values : the ndarray like converted values
kind : a string description of this type
@@ -2142,7 +2141,6 @@ class DataCol(IndexCol):
Parameters
----------
-
data : the actual data
cname : the column name in the table to hold the data (typically
values)
diff --git a/pandas/plotting/_core.py b/pandas/plotting/_core.py
index 1fe383706f74d..d3db539084609 100644
--- a/pandas/plotting/_core.py
+++ b/pandas/plotting/_core.py
@@ -176,13 +176,12 @@ def hist_frame(
Examples
--------
+ This example draws a histogram based on the length and width of
+ some animals, displayed in three bins
.. plot::
:context: close-figs
- This example draws a histogram based on the length and width of
- some animals, displayed in three bins
-
>>> df = pd.DataFrame({
... 'length': [1.5, 0.5, 1.2, 0.9, 3],
... 'width': [0.7, 0.2, 0.15, 0.2, 1.1]
diff --git a/pandas/plotting/_matplotlib/tools.py b/pandas/plotting/_matplotlib/tools.py
index d7732c86911b8..dafdd6eecabc0 100644
--- a/pandas/plotting/_matplotlib/tools.py
+++ b/pandas/plotting/_matplotlib/tools.py
@@ -105,8 +105,8 @@ def _subplots(
This utility wrapper makes it convenient to create common layouts of
subplots, including the enclosing figure object, in a single call.
- Keyword arguments:
-
+ Parameters
+ ----------
naxes : int
Number of required axes. Exceeded axes are set invisible. Default is
nrows * ncols.
@@ -146,16 +146,16 @@ def _subplots(
Note that all keywords not recognized above will be
automatically included here.
- Returns:
-
+ Returns
+ -------
fig, ax : tuple
- fig is the Matplotlib Figure object
- ax can be either a single axis object or an array of axis objects if
more than one subplot was created. The dimensions of the resulting array
can be controlled with the squeeze keyword, see above.
- **Examples:**
-
+ Examples
+ --------
x = np.linspace(0, 2*np.pi, 400)
y = np.sin(x**2)
diff --git a/pandas/plotting/_misc.py b/pandas/plotting/_misc.py
index 1369adcd80269..47a4fd8ff0e95 100644
--- a/pandas/plotting/_misc.py
+++ b/pandas/plotting/_misc.py
@@ -294,6 +294,7 @@ def bootstrap_plot(series, fig=None, size=50, samples=500, **kwds):
Examples
--------
+ This example draws a basic bootstrap plot for a Series.
.. plot::
:context: close-figs
diff --git a/pandas/tests/extension/json/array.py b/pandas/tests/extension/json/array.py
index 1ba1b872fa5e2..e6b147e7a4ce7 100644
--- a/pandas/tests/extension/json/array.py
+++ b/pandas/tests/extension/json/array.py
@@ -3,8 +3,8 @@
The JSONArray stores lists of dictionaries. The storage mechanism is a list,
not an ndarray.
-Note:
-
+Note
+----
We currently store lists of UserDicts. Pandas has a few places
internally that specifically check for dicts, and does non-scalar things
in that case. We *want* the dictionaries to be treated as scalars, so we
| https://api.github.com/repos/pandas-dev/pandas/pulls/31956 | 2020-02-13T17:56:02Z | 2020-02-13T20:06:57Z | 2020-02-13T20:06:56Z | 2020-02-13T20:15:35Z | |
add eval examples | diff --git a/pandas/core/computation/eval.py b/pandas/core/computation/eval.py
index 4cdf4bac61316..f6947d5ec6233 100644
--- a/pandas/core/computation/eval.py
+++ b/pandas/core/computation/eval.py
@@ -276,6 +276,21 @@ def eval(
See the :ref:`enhancing performance <enhancingperf.eval>` documentation for
more details.
+
+ Examples
+ --------
+ >>> df = pd.DataFrame({"animal": ["dog", "pig"], "age": [10, 20]})
+ >>> df
+ animal age
+ 0 dog 10
+ 1 pig 20
+
+ We can add a new column using ``pd.eval``:
+
+ >>> pd.eval("double_age = df.age * 2", target=df)
+ animal age double_age
+ 0 dog 10 20
+ 1 pig 20 40
"""
inplace = validate_bool_kwarg(inplace, "inplace")
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index da152b70abd2e..5f62c1e2e7ade 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -3320,6 +3320,21 @@ def eval(self, expr, inplace=False, **kwargs):
2 3 6 9
3 4 4 8
4 5 2 7
+
+ Multiple columns can be assigned to using multi-line expressions:
+
+ >>> df.eval(
+ ... '''
+ ... C = A + B
+ ... D = A - B
+ ... '''
+ ... )
+ A B C D
+ 0 1 10 11 -9
+ 1 2 8 10 -6
+ 2 3 6 9 -3
+ 3 4 4 8 0
+ 4 5 2 7 3
"""
from pandas.core.computation.eval import eval as _eval
| xref #31952
Screenshot for `pandas.eval`:

Screenshot for `pandas.DataFrame.eval`:

| https://api.github.com/repos/pandas-dev/pandas/pulls/31955 | 2020-02-13T17:11:26Z | 2020-02-14T01:21:42Z | 2020-02-14T01:21:42Z | 2020-02-25T17:11:43Z |
CLN: index related attributes on Series/DataFrame | diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index da152b70abd2e..8cad92ab2bb32 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -8569,6 +8569,14 @@ def isin(self, values) -> "DataFrame":
# ----------------------------------------------------------------------
# Add index and columns
+ _AXIS_ORDERS = ["index", "columns"]
+ _AXIS_NUMBERS = {"index": 0, "columns": 1}
+ _AXIS_NAMES = {0: "index", 1: "columns"}
+ _AXIS_REVERSED = True
+ _AXIS_LEN = len(_AXIS_ORDERS)
+ _info_axis_number = 1
+ _info_axis_name = "columns"
+
index: "Index" = properties.AxisProperty(
axis=1, doc="The index (row labels) of the DataFrame."
)
@@ -8584,13 +8592,6 @@ def isin(self, values) -> "DataFrame":
sparse = CachedAccessor("sparse", SparseFrameAccessor)
-DataFrame._setup_axes(
- ["index", "columns"],
- docs={
- "index": "The index (row labels) of the DataFrame.",
- "columns": "The column labels of the DataFrame.",
- },
-)
DataFrame._add_numeric_operations()
DataFrame._add_series_or_dataframe_operations()
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 934c4c6e92bbe..60d152eba4485 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -315,28 +315,6 @@ def _constructor_expanddim(self):
_info_axis_name: str
_AXIS_LEN: int
- @classmethod
- def _setup_axes(cls, axes: List[str], docs: Dict[str, str]) -> None:
- """
- Provide axes setup for the major PandasObjects.
-
- Parameters
- ----------
- axes : the names of the axes in order (lowest to highest)
- docs : docstrings for the axis properties
- """
- info_axis = len(axes) - 1
- axes_are_reversed = len(axes) > 1
-
- cls._AXIS_ORDERS = axes
- cls._AXIS_NUMBERS = {a: i for i, a in enumerate(axes)}
- cls._AXIS_LEN = len(axes)
- cls._AXIS_NAMES = dict(enumerate(axes))
- cls._AXIS_REVERSED = axes_are_reversed
-
- cls._info_axis_number = info_axis
- cls._info_axis_name = axes[info_axis]
-
def _construct_axes_dict(self, axes=None, **kwargs):
"""Return an axes dictionary for myself."""
d = {a: self._get_axis(a) for a in (axes or self._AXIS_ORDERS)}
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 24e794014a15f..f6536884e007a 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -4562,6 +4562,14 @@ def to_period(self, freq=None, copy=True) -> "Series":
# ----------------------------------------------------------------------
# Add index
+ _AXIS_ORDERS = ["index"]
+ _AXIS_NUMBERS = {"index": 0}
+ _AXIS_NAMES = {0: "index"}
+ _AXIS_REVERSED = False
+ _AXIS_LEN = len(_AXIS_ORDERS)
+ _info_axis_number = 0
+ _info_axis_name = "index"
+
index: "Index" = properties.AxisProperty(
axis=0, doc="The index (axis labels) of the Series."
)
@@ -4580,7 +4588,6 @@ def to_period(self, freq=None, copy=True) -> "Series":
hist = pandas.plotting.hist_series
-Series._setup_axes(["index"], docs={"index": "The index (axis labels) of the Series."})
Series._add_numeric_operations()
Series._add_series_or_dataframe_operations()
| Followup to #31126. IMO the current approach to adding the index-related class attributes is too indirect and therefore unnecessarily difficult to follow. Just adding the class attributes directly on the relevant class makes reading the code easier, IMO.
Notice that the types are already defined in pandas/core/generic.py:304-316. | https://api.github.com/repos/pandas-dev/pandas/pulls/31953 | 2020-02-13T14:47:06Z | 2020-02-14T01:07:53Z | 2020-02-14T01:07:53Z | 2020-02-14T21:50:48Z |
Backport PR #31910 on branch 1.0.x (BUG: Handle NA in assert_numpy_array_equal) | diff --git a/pandas/_libs/lib.pyx b/pandas/_libs/lib.pyx
index 8231471dfbde9..3f98a479bc587 100644
--- a/pandas/_libs/lib.pyx
+++ b/pandas/_libs/lib.pyx
@@ -527,6 +527,8 @@ def array_equivalent_object(left: object[:], right: object[:]) -> bool:
if PyArray_Check(x) and PyArray_Check(y):
if not array_equivalent_object(x, y):
return False
+ elif (x is C_NA) ^ (y is C_NA):
+ return False
elif not (PyObject_RichCompareBool(x, y, Py_EQ) or
(x is None or is_nan(x)) and (y is None or is_nan(y))):
return False
diff --git a/pandas/tests/util/test_assert_numpy_array_equal.py b/pandas/tests/util/test_assert_numpy_array_equal.py
index c8ae9ebdd8651..d29ddedd2fdd6 100644
--- a/pandas/tests/util/test_assert_numpy_array_equal.py
+++ b/pandas/tests/util/test_assert_numpy_array_equal.py
@@ -1,6 +1,7 @@
import numpy as np
import pytest
+import pandas as pd
from pandas import Timestamp
import pandas._testing as tm
@@ -175,3 +176,38 @@ def test_numpy_array_equal_copy_flag(other_type, check_same):
tm.assert_numpy_array_equal(a, other, check_same=check_same)
else:
tm.assert_numpy_array_equal(a, other, check_same=check_same)
+
+
+def test_numpy_array_equal_contains_na():
+ # https://github.com/pandas-dev/pandas/issues/31881
+ a = np.array([True, False])
+ b = np.array([True, pd.NA], dtype=object)
+
+ msg = """numpy array are different
+
+numpy array values are different \\(50.0 %\\)
+\\[left\\]: \\[True, False\\]
+\\[right\\]: \\[True, <NA>\\]"""
+
+ with pytest.raises(AssertionError, match=msg):
+ tm.assert_numpy_array_equal(a, b)
+
+
+def test_numpy_array_equal_identical_na(nulls_fixture):
+ a = np.array([nulls_fixture], dtype=object)
+
+ tm.assert_numpy_array_equal(a, a)
+
+
+def test_numpy_array_equal_different_na():
+ a = np.array([np.nan], dtype=object)
+ b = np.array([pd.NA], dtype=object)
+
+ msg = """numpy array are different
+
+numpy array values are different \\(100.0 %\\)
+\\[left\\]: \\[nan\\]
+\\[right\\]: \\[<NA>\\]"""
+
+ with pytest.raises(AssertionError, match=msg):
+ tm.assert_numpy_array_equal(a, b)
| Backport PR #31910: BUG: Handle NA in assert_numpy_array_equal | https://api.github.com/repos/pandas-dev/pandas/pulls/31947 | 2020-02-13T08:16:45Z | 2020-02-13T09:05:41Z | 2020-02-13T09:05:41Z | 2020-02-13T12:41:05Z |
API: replace() should raise an exception if invalid argument is given | diff --git a/doc/source/whatsnew/v1.1.0.rst b/doc/source/whatsnew/v1.1.0.rst
index 0f18a1fd81815..43c0a3cfa1b94 100644
--- a/doc/source/whatsnew/v1.1.0.rst
+++ b/doc/source/whatsnew/v1.1.0.rst
@@ -322,6 +322,7 @@ Reshaping
- Bug in :func:`crosstab` when inputs are two Series and have tuple names, the output will keep dummy MultiIndex as columns. (:issue:`18321`)
- :meth:`DataFrame.pivot` can now take lists for ``index`` and ``columns`` arguments (:issue:`21425`)
- Bug in :func:`concat` where the resulting indices are not copied when ``copy=True`` (:issue:`29879`)
+- :meth:`DataFrame.replace` and :meth:`Series.replace` will raise a ``TypeError`` if ``to_replace`` is not an expected type. Previously the ``replace`` would fail silently (:issue:`18634`)
Sparse
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index ff7c481d550d4..bcf0d94e6faaf 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -6172,7 +6172,9 @@ def bfill(
AssertionError
* If `regex` is not a ``bool`` and `to_replace` is not
``None``.
+
TypeError
+ * If `to_replace` is not a scalar, array-like, ``dict``, or ``None``
* If `to_replace` is a ``dict`` and `value` is not a ``list``,
``dict``, ``ndarray``, or ``Series``
* If `to_replace` is ``None`` and `regex` is not compilable
@@ -6181,6 +6183,7 @@ def bfill(
* When replacing multiple ``bool`` or ``datetime64`` objects and
the arguments to `to_replace` does not match the type of the
value being replaced
+
ValueError
* If a ``list`` or an ``ndarray`` is passed to `to_replace` and
`value` but they are not the same length.
@@ -6376,6 +6379,18 @@ def replace(
regex=False,
method="pad",
):
+ if not (
+ is_scalar(to_replace)
+ or isinstance(to_replace, pd.Series)
+ or is_re_compilable(to_replace)
+ or is_list_like(to_replace)
+ ):
+ raise TypeError(
+ "Expecting 'to_replace' to be either a scalar, array-like, "
+ "dict or None, got invalid type "
+ f"{repr(type(to_replace).__name__)}"
+ )
+
inplace = validate_bool_kwarg(inplace, "inplace")
if not is_bool(regex) and to_replace is not None:
raise AssertionError("'to_replace' must be 'None' if 'regex' is not a bool")
diff --git a/pandas/tests/frame/methods/test_replace.py b/pandas/tests/frame/methods/test_replace.py
index 92b74c4409d7d..ee89562261b19 100644
--- a/pandas/tests/frame/methods/test_replace.py
+++ b/pandas/tests/frame/methods/test_replace.py
@@ -1363,3 +1363,14 @@ def test_replace_after_convert_dtypes(self):
result = df.replace(1, 10)
expected = pd.DataFrame({"grp": [10, 2, 3, 4, 5]}, dtype="Int64")
tm.assert_frame_equal(result, expected)
+
+ def test_replace_invalid_to_replace(self):
+ # GH 18634
+ # API: replace() should raise an exception if invalid argument is given
+ df = pd.DataFrame({"one": ["a", "b ", "c"], "two": ["d ", "e ", "f "]})
+ msg = (
+ r"Expecting 'to_replace' to be either a scalar, array-like, "
+ r"dict or None, got invalid type.*"
+ )
+ with pytest.raises(TypeError, match=msg):
+ df.replace(lambda x: x.strip())
diff --git a/pandas/tests/series/methods/test_replace.py b/pandas/tests/series/methods/test_replace.py
index 770ad38b0215e..26eaf53616282 100644
--- a/pandas/tests/series/methods/test_replace.py
+++ b/pandas/tests/series/methods/test_replace.py
@@ -362,3 +362,14 @@ def test_replace_no_cast(self, ser, exp):
expected = pd.Series(exp)
tm.assert_series_equal(result, expected)
+
+ def test_replace_invalid_to_replace(self):
+ # GH 18634
+ # API: replace() should raise an exception if invalid argument is given
+ series = pd.Series(["a", "b", "c "])
+ msg = (
+ r"Expecting 'to_replace' to be either a scalar, array-like, "
+ r"dict or None, got invalid type.*"
+ )
+ with pytest.raises(TypeError, match=msg):
+ series.replace(lambda x: x.strip())
| - [x] closes #18634
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
I used `is_scalar()` instead of `is_numeric()` to allow data structures like pd.Timestamp to be used in `replace()`. See tests in pandas/tests/frame/methods/test_replace.py, pandas/tests/series/methods/test_replace.py for examples. | https://api.github.com/repos/pandas-dev/pandas/pulls/31946 | 2020-02-13T06:13:46Z | 2020-03-03T03:24:23Z | 2020-03-03T03:24:23Z | 2020-03-03T04:32:14Z |
CLN: 29547 replace old string formatting 3 | diff --git a/pandas/tests/indexes/interval/test_setops.py b/pandas/tests/indexes/interval/test_setops.py
index 3246ac6bafde9..d9359d717de1d 100644
--- a/pandas/tests/indexes/interval/test_setops.py
+++ b/pandas/tests/indexes/interval/test_setops.py
@@ -180,8 +180,8 @@ def test_set_incompatible_types(self, closed, op_name, sort):
# GH 19016: incompatible dtypes
other = interval_range(Timestamp("20180101"), periods=9, closed=closed)
msg = (
- "can only do {op} between two IntervalIndex objects that have "
+ f"can only do {op_name} between two IntervalIndex objects that have "
"compatible dtypes"
- ).format(op=op_name)
+ )
with pytest.raises(TypeError, match=msg):
set_op(other, sort=sort)
diff --git a/pandas/tests/indexes/multi/test_compat.py b/pandas/tests/indexes/multi/test_compat.py
index 9a76f0623eb31..ef549beccda5d 100644
--- a/pandas/tests/indexes/multi/test_compat.py
+++ b/pandas/tests/indexes/multi/test_compat.py
@@ -29,7 +29,7 @@ def test_numeric_compat(idx):
@pytest.mark.parametrize("method", ["all", "any"])
def test_logical_compat(idx, method):
- msg = "cannot perform {method}".format(method=method)
+ msg = f"cannot perform {method}"
with pytest.raises(TypeError, match=msg):
getattr(idx, method)()
diff --git a/pandas/tests/indexes/period/test_constructors.py b/pandas/tests/indexes/period/test_constructors.py
index fcbadce3d63b1..418f53591b913 100644
--- a/pandas/tests/indexes/period/test_constructors.py
+++ b/pandas/tests/indexes/period/test_constructors.py
@@ -364,7 +364,7 @@ def test_constructor_year_and_quarter(self):
year = pd.Series([2001, 2002, 2003])
quarter = year - 2000
idx = PeriodIndex(year=year, quarter=quarter)
- strs = ["{t[0]:d}Q{t[1]:d}".format(t=t) for t in zip(quarter, year)]
+ strs = [f"{t[0]:d}Q{t[1]:d}" for t in zip(quarter, year)]
lops = list(map(Period, strs))
p = PeriodIndex(lops)
tm.assert_index_equal(p, idx)
diff --git a/pandas/tests/indexes/timedeltas/test_constructors.py b/pandas/tests/indexes/timedeltas/test_constructors.py
index 0de10b5d82171..8e54561df1624 100644
--- a/pandas/tests/indexes/timedeltas/test_constructors.py
+++ b/pandas/tests/indexes/timedeltas/test_constructors.py
@@ -155,7 +155,7 @@ def test_constructor(self):
def test_constructor_iso(self):
# GH #21877
expected = timedelta_range("1s", periods=9, freq="s")
- durations = ["P0DT0H0M{}S".format(i) for i in range(1, 10)]
+ durations = [f"P0DT0H0M{i}S" for i in range(1, 10)]
result = to_timedelta(durations)
tm.assert_index_equal(result, expected)
diff --git a/pandas/tests/indexing/test_floats.py b/pandas/tests/indexing/test_floats.py
index f783c3516e357..80a4d81b20a13 100644
--- a/pandas/tests/indexing/test_floats.py
+++ b/pandas/tests/indexing/test_floats.py
@@ -53,8 +53,8 @@ def test_scalar_error(self, index_func):
s.iloc[3.0]
msg = (
- "cannot do positional indexing on {klass} with these "
- r"indexers \[3\.0\] of type float".format(klass=type(i).__name__)
+ fr"cannot do positional indexing on {type(i).__name__} with these "
+ r"indexers \[3\.0\] of type float"
)
with pytest.raises(TypeError, match=msg):
s.iloc[3.0] = 0
@@ -95,10 +95,10 @@ def test_scalar_non_numeric(self, index_func):
error = TypeError
msg = (
r"cannot do (label|positional) indexing "
- r"on {klass} with these indexers \[3\.0\] of "
+ fr"on {type(i).__name__} with these indexers \[3\.0\] of "
r"type float|"
"Cannot index by location index with a "
- "non-integer key".format(klass=type(i).__name__)
+ "non-integer key"
)
with pytest.raises(error, match=msg):
idxr(s)[3.0]
@@ -116,8 +116,8 @@ def test_scalar_non_numeric(self, index_func):
error = TypeError
msg = (
r"cannot do (label|positional) indexing "
- r"on {klass} with these indexers \[3\.0\] of "
- r"type float".format(klass=type(i).__name__)
+ fr"on {type(i).__name__} with these indexers \[3\.0\] of "
+ "type float"
)
with pytest.raises(error, match=msg):
s.loc[3.0]
@@ -128,8 +128,8 @@ def test_scalar_non_numeric(self, index_func):
# setting with a float fails with iloc
msg = (
r"cannot do (label|positional) indexing "
- r"on {klass} with these indexers \[3\.0\] of "
- r"type float".format(klass=type(i).__name__)
+ fr"on {type(i).__name__} with these indexers \[3\.0\] of "
+ "type float"
)
with pytest.raises(TypeError, match=msg):
s.iloc[3.0] = 0
@@ -165,8 +165,8 @@ def test_scalar_non_numeric(self, index_func):
s[3]
msg = (
r"cannot do (label|positional) indexing "
- r"on {klass} with these indexers \[3\.0\] of "
- r"type float".format(klass=type(i).__name__)
+ fr"on {type(i).__name__} with these indexers \[3\.0\] of "
+ "type float"
)
with pytest.raises(TypeError, match=msg):
s[3.0]
@@ -181,12 +181,10 @@ def test_scalar_with_mixed(self):
for idxr in [lambda x: x, lambda x: x.iloc]:
msg = (
- r"cannot do label indexing "
- r"on {klass} with these indexers \[1\.0\] of "
+ "cannot do label indexing "
+ fr"on {Index.__name__} with these indexers \[1\.0\] of "
r"type float|"
- "Cannot index by location index with a non-integer key".format(
- klass=Index.__name__
- )
+ "Cannot index by location index with a non-integer key"
)
with pytest.raises(TypeError, match=msg):
idxr(s2)[1.0]
@@ -203,9 +201,9 @@ def test_scalar_with_mixed(self):
for idxr in [lambda x: x]:
msg = (
- r"cannot do label indexing "
- r"on {klass} with these indexers \[1\.0\] of "
- r"type float".format(klass=Index.__name__)
+ "cannot do label indexing "
+ fr"on {Index.__name__} with these indexers \[1\.0\] of "
+ "type float"
)
with pytest.raises(TypeError, match=msg):
idxr(s3)[1.0]
@@ -321,9 +319,9 @@ def test_scalar_float(self):
s.iloc[3.0]
msg = (
- r"cannot do positional indexing "
- r"on {klass} with these indexers \[3\.0\] of "
- r"type float".format(klass=Float64Index.__name__)
+ "cannot do positional indexing "
+ fr"on {Float64Index.__name__} with these indexers \[3\.0\] of "
+ "type float"
)
with pytest.raises(TypeError, match=msg):
s2.iloc[3.0] = 0
@@ -355,8 +353,8 @@ def test_slice_non_numeric(self, index_func):
msg = (
"cannot do positional indexing "
- r"on {klass} with these indexers \[(3|4)\.0\] of "
- "type float".format(klass=type(index).__name__)
+ fr"on {type(index).__name__} with these indexers \[(3|4)\.0\] of "
+ "type float"
)
with pytest.raises(TypeError, match=msg):
s.iloc[l]
@@ -365,9 +363,9 @@ def test_slice_non_numeric(self, index_func):
msg = (
"cannot do (slice|positional) indexing "
- r"on {klass} with these indexers "
+ fr"on {type(index).__name__} with these indexers "
r"\[(3|4)(\.0)?\] "
- r"of type (float|int)".format(klass=type(index).__name__)
+ r"of type (float|int)"
)
with pytest.raises(TypeError, match=msg):
idxr(s)[l]
@@ -377,8 +375,8 @@ def test_slice_non_numeric(self, index_func):
msg = (
"cannot do positional indexing "
- r"on {klass} with these indexers \[(3|4)\.0\] of "
- "type float".format(klass=type(index).__name__)
+ fr"on {type(index).__name__} with these indexers \[(3|4)\.0\] of "
+ "type float"
)
with pytest.raises(TypeError, match=msg):
s.iloc[l] = 0
@@ -386,9 +384,9 @@ def test_slice_non_numeric(self, index_func):
for idxr in [lambda x: x.loc, lambda x: x.iloc, lambda x: x]:
msg = (
"cannot do (slice|positional) indexing "
- r"on {klass} with these indexers "
+ fr"on {type(index).__name__} with these indexers "
r"\[(3|4)(\.0)?\] "
- r"of type (float|int)".format(klass=type(index).__name__)
+ r"of type (float|int)"
)
with pytest.raises(TypeError, match=msg):
idxr(s)[l] = 0
@@ -427,8 +425,8 @@ def test_slice_integer(self):
# positional indexing
msg = (
"cannot do slice indexing "
- r"on {klass} with these indexers \[(3|4)\.0\] of "
- "type float".format(klass=type(index).__name__)
+ fr"on {type(index).__name__} with these indexers \[(3|4)\.0\] of "
+ "type float"
)
with pytest.raises(TypeError, match=msg):
s[l]
@@ -451,8 +449,8 @@ def test_slice_integer(self):
# positional indexing
msg = (
"cannot do slice indexing "
- r"on {klass} with these indexers \[-6\.0\] of "
- "type float".format(klass=type(index).__name__)
+ fr"on {type(index).__name__} with these indexers \[-6\.0\] of "
+ "type float"
)
with pytest.raises(TypeError, match=msg):
s[slice(-6.0, 6.0)]
@@ -477,8 +475,8 @@ def test_slice_integer(self):
# positional indexing
msg = (
"cannot do slice indexing "
- r"on {klass} with these indexers \[(2|3)\.5\] of "
- "type float".format(klass=type(index).__name__)
+ fr"on {type(index).__name__} with these indexers \[(2|3)\.5\] of "
+ "type float"
)
with pytest.raises(TypeError, match=msg):
s[l]
@@ -495,8 +493,8 @@ def test_slice_integer(self):
# positional indexing
msg = (
"cannot do slice indexing "
- r"on {klass} with these indexers \[(3|4)\.0\] of "
- "type float".format(klass=type(index).__name__)
+ fr"on {type(index).__name__} with these indexers \[(3|4)\.0\] of "
+ "type float"
)
with pytest.raises(TypeError, match=msg):
s[l] = 0
@@ -518,8 +516,8 @@ def test_integer_positional_indexing(self):
klass = RangeIndex
msg = (
"cannot do (slice|positional) indexing "
- r"on {klass} with these indexers \[(2|4)\.0\] of "
- "type float".format(klass=klass.__name__)
+ fr"on {klass.__name__} with these indexers \[(2|4)\.0\] of "
+ "type float"
)
with pytest.raises(TypeError, match=msg):
idxr(s)[l]
@@ -546,8 +544,8 @@ def f(idxr):
# positional indexing
msg = (
"cannot do slice indexing "
- r"on {klass} with these indexers \[(0|1)\.0\] of "
- "type float".format(klass=type(index).__name__)
+ fr"on {type(index).__name__} with these indexers \[(0|1)\.0\] of "
+ "type float"
)
with pytest.raises(TypeError, match=msg):
s[l]
@@ -561,8 +559,8 @@ def f(idxr):
# positional indexing
msg = (
"cannot do slice indexing "
- r"on {klass} with these indexers \[-10\.0\] of "
- "type float".format(klass=type(index).__name__)
+ fr"on {type(index).__name__} with these indexers \[-10\.0\] of "
+ "type float"
)
with pytest.raises(TypeError, match=msg):
s[slice(-10.0, 10.0)]
@@ -580,8 +578,8 @@ def f(idxr):
# positional indexing
msg = (
"cannot do slice indexing "
- r"on {klass} with these indexers \[0\.5\] of "
- "type float".format(klass=type(index).__name__)
+ fr"on {type(index).__name__} with these indexers \[0\.5\] of "
+ "type float"
)
with pytest.raises(TypeError, match=msg):
s[l]
@@ -597,8 +595,8 @@ def f(idxr):
# positional indexing
msg = (
"cannot do slice indexing "
- r"on {klass} with these indexers \[(3|4)\.0\] of "
- "type float".format(klass=type(index).__name__)
+ fr"on {type(index).__name__} with these indexers \[(3|4)\.0\] of "
+ "type float"
)
with pytest.raises(TypeError, match=msg):
s[l] = 0
 | I split PR #31844 into batches; this is the third.
For this PR I ran the command `grep -l -R -e '%s' -e '%d' -e '\.format(' --include=*.{py,pyx} pandas/`, checked all the files it returned for `.format(`, and replaced the old string formatting with the corresponding f-strings, working toward a full clean-up for [#29547](https://github.com/pandas-dev/pandas/issues/29547). I may have missed something, so it is a good idea to double-check just in case.
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff` | https://api.github.com/repos/pandas-dev/pandas/pulls/31945 | 2020-02-13T03:45:56Z | 2020-02-13T20:34:48Z | 2020-02-13T20:34:48Z | 2020-02-21T01:30:01Z
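As an aside, the mechanical rewrite applied throughout the diff above, replacing `str.format` calls with f-strings, follows a single pattern; the sketch below is a generic illustration (the names `klass`, `old_msg`, `new_msg` are examples, not code from the PR):

```python
# Illustration of the .format() -> f-string rewrite done throughout the PR.
klass = "Float64Index"

# Old style: adjacent string literals are concatenated first, then .format
# fills the {klass} placeholder in the combined message.
old_msg = (
    r"cannot do positional indexing "
    r"on {klass} with these indexers \[3\.0\] of "
    r"type float".format(klass=klass)
)

# New style: an f-string interpolates directly; the fr prefix keeps the
# raw-string behaviour needed for the regex escapes.
new_msg = (
    "cannot do positional indexing "
    fr"on {klass} with these indexers \[3\.0\] of "
    "type float"
)

assert old_msg == new_msg
print(new_msg)
```

Both forms produce the identical regex string, which is what makes the rewrite safe to do mechanically.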
DOC: update ohlc docstring so that it reflects the real use #31919 | diff --git a/pandas/core/groupby/groupby.py b/pandas/core/groupby/groupby.py
index 6b2880810dcb2..cc46485b4a2e8 100644
--- a/pandas/core/groupby/groupby.py
+++ b/pandas/core/groupby/groupby.py
@@ -1447,7 +1447,7 @@ def last(x):
@Appender(_common_see_also)
def ohlc(self) -> DataFrame:
"""
- Compute sum of values, excluding missing values.
+ Compute open, high, low and close values of a group, excluding missing values.
For multiple groupings, the result index will be a MultiIndex
| - [x] closes #31919
- [x] tests added / passed
Running `python scripts/validate_docstrings.py pandas.core.groupby.GroupBy.ohlc` still reports errors, but that is something that happens with many functions at the moment:
```
################################################################################
################# Docstring (pandas.core.groupby.GroupBy.ohlc) #################
################################################################################
Compute open, high, low and close values of a group, excluding missing values.
For multiple groupings, the result index will be a MultiIndex
Returns
-------
DataFrame
Open, high, low and close values within each group.
See Also
--------
Series.groupby
DataFrame.groupby
################################################################################
################################## Validation ##################################
################################################################################
3 Errors found:
Missing description for See Also "Series.groupby" reference
Missing description for See Also "DataFrame.groupby" reference
No examples section found
``` | https://api.github.com/repos/pandas-dev/pandas/pulls/31941 | 2020-02-13T00:30:02Z | 2020-02-15T18:07:33Z | 2020-02-15T18:07:32Z | 2020-02-15T18:07:40Z
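As a plain-Python aside (not pandas code), the open/high/low/close aggregation that the corrected docstring now describes takes, per group, the first, maximum, minimum, and last value:

```python
# Pure-Python sketch of the OHLC aggregation described by the docstring:
# for each group, take the first (open), max (high), min (low) and
# last (close) value. An illustration only, not pandas' implementation.

def ohlc(values):
    """Return (open, high, low, close) for a non-empty sequence."""
    return values[0], max(values), min(values), values[-1]

prices = [102.0, 105.5, 101.0, 103.2]
print(ohlc(prices))  # (102.0, 105.5, 101.0, 103.2)
```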
REF: use iloc instead of _ixs outside of indexing code | diff --git a/pandas/core/apply.py b/pandas/core/apply.py
index 70e0a129c055f..71eeb61318376 100644
--- a/pandas/core/apply.py
+++ b/pandas/core/apply.py
@@ -379,7 +379,7 @@ def apply_broadcast(self, target: "DataFrame") -> "DataFrame":
@property
def series_generator(self):
- return (self.obj._ixs(i, axis=1) for i in range(len(self.columns)))
+ return (self.obj.iloc[:, i] for i in range(len(self.columns)))
@property
def result_index(self) -> "Index":
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index da152b70abd2e..4e190a86336cf 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -900,7 +900,7 @@ def items(self) -> Iterable[Tuple[Optional[Hashable], Series]]:
yield k, self._get_item_cache(k)
else:
for i, k in enumerate(self.columns):
- yield k, self._ixs(i, axis=1)
+ yield k, self.iloc[:, i]
@Appender(_shared_docs["items"])
def iteritems(self) -> Iterable[Tuple[Optional[Hashable], Series]]:
diff --git a/pandas/core/internals/construction.py b/pandas/core/internals/construction.py
index 9dd4312a39525..442172bd76948 100644
--- a/pandas/core/internals/construction.py
+++ b/pandas/core/internals/construction.py
@@ -440,13 +440,13 @@ def to_arrays(data, columns, coerce_float=False, dtype=None):
if isinstance(data, ABCDataFrame):
if columns is not None:
arrays = [
- data._ixs(i, axis=1).values
+ data.iloc[:, i].values
for i, col in enumerate(data.columns)
if col in columns
]
else:
columns = data.columns
- arrays = [data._ixs(i, axis=1).values for i in range(len(columns))]
+ arrays = [data.iloc[:, i].values for i in range(len(columns))]
return arrays, columns
| Readers are much more likely to be familiar with iloc than _ixs.
This leaves us with exactly one usage of _ixs. | https://api.github.com/repos/pandas-dev/pandas/pulls/31940 | 2020-02-12T21:18:37Z | 2020-02-17T16:31:20Z | null | 2020-04-05T17:45:44Z |
BUG: Fix construction of Categorical from pd.NA | diff --git a/doc/source/whatsnew/v1.0.2.rst b/doc/source/whatsnew/v1.0.2.rst
index affe019d0ac86..dc47e010dacdc 100644
--- a/doc/source/whatsnew/v1.0.2.rst
+++ b/doc/source/whatsnew/v1.0.2.rst
@@ -65,6 +65,7 @@ Bug fixes
**Categorical**
- Fixed bug where :meth:`Categorical.from_codes` improperly raised a ``ValueError`` when passed nullable integer codes. (:issue:`31779`)
+- Fixed bug where :meth:`Categorical` constructor would raise a ``TypeError`` when given a numpy array containing ``pd.NA``. (:issue:`31927`)
- Bug in :class:`Categorical` that would ignore or crash when calling :meth:`Series.replace` with a list-like ``to_replace`` (:issue:`31720`)
**I/O**
@@ -85,4 +86,4 @@ Bug fixes
Contributors
~~~~~~~~~~~~
-.. contributors:: v1.0.1..v1.0.2|HEAD
\ No newline at end of file
+.. contributors:: v1.0.1..v1.0.2|HEAD
diff --git a/pandas/_libs/hashtable_class_helper.pxi.in b/pandas/_libs/hashtable_class_helper.pxi.in
index 6671375f628e7..811025a4b5764 100644
--- a/pandas/_libs/hashtable_class_helper.pxi.in
+++ b/pandas/_libs/hashtable_class_helper.pxi.in
@@ -10,6 +10,7 @@ WARNING: DO NOT edit .pxi FILE directly, .pxi is generated from .pxi.in
# ----------------------------------------------------------------------
from pandas._libs.tslibs.util cimport get_c_string
+from pandas._libs.missing cimport C_NA
{{py:
@@ -1032,8 +1033,12 @@ cdef class PyObjectHashTable(HashTable):
val = values[i]
hash(val)
- if ignore_na and ((val != val or val is None)
- or (use_na_value and val == na_value)):
+ if ignore_na and (
+ (val is C_NA)
+ or (val != val)
+ or (val is None)
+ or (use_na_value and val == na_value)
+ ):
# if missing values do not count as unique values (i.e. if
# ignore_na is True), skip the hashtable entry for them, and
# replace the corresponding label with na_sentinel
diff --git a/pandas/tests/arrays/categorical/test_constructors.py b/pandas/tests/arrays/categorical/test_constructors.py
index dbd8fd8df67c1..d5537359d6948 100644
--- a/pandas/tests/arrays/categorical/test_constructors.py
+++ b/pandas/tests/arrays/categorical/test_constructors.py
@@ -458,6 +458,18 @@ def test_constructor_with_categorical_categories(self):
result = Categorical(["a", "b"], categories=CategoricalIndex(["a", "b", "c"]))
tm.assert_categorical_equal(result, expected)
+ @pytest.mark.parametrize("klass", [lambda x: np.array(x, dtype=object), list])
+ def test_construction_with_null(self, klass, nulls_fixture):
+ # https://github.com/pandas-dev/pandas/issues/31927
+ values = klass(["a", nulls_fixture, "b"])
+ result = Categorical(values)
+
+ dtype = CategoricalDtype(["a", "b"])
+ codes = [0, -1, 1]
+ expected = Categorical.from_codes(codes=codes, dtype=dtype)
+
+ tm.assert_categorical_equal(result, expected)
+
def test_from_codes(self):
# too few categories
| - [x] closes #31927
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry | https://api.github.com/repos/pandas-dev/pandas/pulls/31939 | 2020-02-12T21:17:04Z | 2020-02-23T14:57:08Z | 2020-02-23T14:57:08Z | 2020-02-23T15:10:32Z |
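The hashtable fix above makes `pd.NA` behave like the other missing-value markers during factorization. A simplified pure-Python sketch of that logic follows; `NA` here is a stand-in singleton for illustration, not the real `pandas.NA`:

```python
# Simplified sketch of factorization with missing-value handling, mirroring
# the ignore_na branch patched in the hashtable: NA-like values get the
# sentinel code -1 instead of becoming a category of their own.

class _NAType:
    def __repr__(self):
        return "NA"

NA = _NAType()  # stand-in for pandas.NA, which is also an `is`-comparable singleton

def factorize(values, na_sentinel=-1):
    codes, categories, seen = [], [], {}
    for val in values:
        # mirrors the Cython check: `val is C_NA`, `val != val` (NaN), `val is None`
        if val is NA or val is None or val != val:
            codes.append(na_sentinel)
            continue
        if val not in seen:
            seen[val] = len(categories)
            categories.append(val)
        codes.append(seen[val])
    return codes, categories

print(factorize(["a", NA, "b", "a"]))  # ([0, -1, 1, 0], ['a', 'b'])
```

The bug was that the original check `val != val or val is None` never matched `pd.NA` (whose `!=` returns `pd.NA` rather than `True`), so the explicit `val is C_NA` identity test was added.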
CLN: assorted cleanups | diff --git a/pandas/core/arrays/datetimes.py b/pandas/core/arrays/datetimes.py
index 5888600d2fa8e..a75536e46e60d 100644
--- a/pandas/core/arrays/datetimes.py
+++ b/pandas/core/arrays/datetimes.py
@@ -283,7 +283,8 @@ def __init__(self, values, dtype=_NS_DTYPE, freq=None, copy=False):
@classmethod
def _simple_new(cls, values, freq=None, dtype=_NS_DTYPE):
assert isinstance(values, np.ndarray)
- if values.dtype == "i8":
+ if values.dtype != _NS_DTYPE:
+ assert values.dtype == "i8"
values = values.view(_NS_DTYPE)
result = object.__new__(cls)
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 934c4c6e92bbe..adfb553d40ff0 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -3504,6 +3504,7 @@ def _slice(self: FrameOrSeries, slobj: slice, axis=0) -> FrameOrSeries:
Slicing with this method is *always* positional.
"""
+ assert isinstance(slobj, slice), type(slobj)
axis = self._get_block_manager_axis(axis)
result = self._constructor(self._data.get_slice(slobj, axis=axis))
result = result.__finalize__(self)
diff --git a/pandas/core/internals/managers.py b/pandas/core/internals/managers.py
index 3dc7dd7d81530..37a4b43648bb1 100644
--- a/pandas/core/internals/managers.py
+++ b/pandas/core/internals/managers.py
@@ -1316,7 +1316,6 @@ def _slice_take_blocks_ax0(self, slice_or_indexer, fill_tuple=None):
return blocks
def _make_na_block(self, placement, fill_value=None):
- # TODO: infer dtypes other than float64 from fill_value
if fill_value is None:
fill_value = np.nan
diff --git a/pandas/core/ops/array_ops.py b/pandas/core/ops/array_ops.py
index 5d53856729d0c..37a4a6eddaebe 100644
--- a/pandas/core/ops/array_ops.py
+++ b/pandas/core/ops/array_ops.py
@@ -43,9 +43,9 @@ def comp_method_OBJECT_ARRAY(op, x, y):
if isinstance(y, list):
y = construct_1d_object_array_from_listlike(y)
- # TODO: Should the checks below be ABCIndexClass?
if isinstance(y, (np.ndarray, ABCSeries, ABCIndex)):
- # TODO: should this be ABCIndexClass??
+ # Note: these checks can be for ABCIndex and not ABCIndexClass
+ # because that is the only object-dtype class.
if not is_object_dtype(y.dtype):
y = y.astype(np.object_)
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 24e794014a15f..8577a7fb904dc 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -4378,7 +4378,7 @@ def between(self, left, right, inclusive=True) -> "Series":
# Convert to types that support pd.NA
def _convert_dtypes(
- self: ABCSeries,
+ self,
infer_objects: bool = True,
convert_string: bool = True,
convert_integer: bool = True,
diff --git a/pandas/tests/indexing/test_chaining_and_caching.py b/pandas/tests/indexing/test_chaining_and_caching.py
index e845487ffca9a..17722e949df1e 100644
--- a/pandas/tests/indexing/test_chaining_and_caching.py
+++ b/pandas/tests/indexing/test_chaining_and_caching.py
@@ -346,20 +346,17 @@ def test_chained_getitem_with_lists(self):
# GH6394
# Regression in chained getitem indexing with embedded list-like from
# 0.12
- def check(result, expected):
- tm.assert_numpy_array_equal(result, expected)
- assert isinstance(result, np.ndarray)
df = DataFrame({"A": 5 * [np.zeros(3)], "B": 5 * [np.ones(3)]})
expected = df["A"].iloc[2]
result = df.loc[2, "A"]
- check(result, expected)
+ tm.assert_numpy_array_equal(result, expected)
result2 = df.iloc[2]["A"]
- check(result2, expected)
+ tm.assert_numpy_array_equal(result2, expected)
result3 = df["A"].loc[2]
- check(result3, expected)
+ tm.assert_numpy_array_equal(result3, expected)
result4 = df["A"].iloc[2]
- check(result4, expected)
+ tm.assert_numpy_array_equal(result4, expected)
def test_cache_updating(self):
# GH 4939, make sure to update the cache on setitem
| broken off of other local branches | https://api.github.com/repos/pandas-dev/pandas/pulls/31938 | 2020-02-12T21:09:05Z | 2020-02-13T12:42:54Z | 2020-02-13T12:42:54Z | 2020-02-13T17:35:33Z |
dont skip keyerror for IntervalIndex | diff --git a/pandas/core/indexing.py b/pandas/core/indexing.py
index 5ae237eb7dc32..933305d9e3539 100755
--- a/pandas/core/indexing.py
+++ b/pandas/core/indexing.py
@@ -1334,12 +1334,12 @@ def _validate_read_indexer(
not_found = list(set(key) - set(ax))
raise KeyError(f"{not_found} not in index")
- # we skip the warning on Categorical/Interval
+ # we skip the warning on Categorical
# as this check is actually done (check for
# non-missing values), but a bit later in the
# code, so we want to avoid warning & then
# just raising
- if not (ax.is_categorical() or ax.is_interval()):
+ if not ax.is_categorical():
raise KeyError(
"Passing list-likes to .loc or [] with any missing labels "
"is no longer supported, see "
 | cc @jschendel: this is speculative, in that I am not certain this is the desired behavior.
This is our only use of `Index.is_interval()` | https://api.github.com/repos/pandas-dev/pandas/pulls/31936 | 2020-02-12T20:38:17Z | 2020-02-22T17:53:45Z | 2020-02-22T17:53:45Z | 2020-02-22T17:53:56Z |
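The behaviour being changed, raising `KeyError` when a list-like indexer contains missing labels, can be sketched in plain Python. This is a simplified stand-in for pandas' `_validate_read_indexer`, not the real implementation, and intervals are represented as plain tuples:

```python
# Simplified stand-in for the validation in pandas' _validate_read_indexer:
# if any requested label is absent from the axis, raise KeyError. After this
# change the check applies to IntervalIndex too, not only Categorical.

def validate_read_indexer(keys, axis_labels):
    missing = [k for k in keys if k not in axis_labels]
    if missing and len(missing) == len(keys):
        raise KeyError(f"None of {missing} are in the index")
    if missing:
        raise KeyError(f"{missing} not in index")

labels = [(0, 1), (1, 2), (2, 3)]  # intervals represented as plain tuples here
validate_read_indexer([(0, 1)], labels)  # all labels present: no error
try:
    validate_read_indexer([(0, 1), (5, 6)], labels)
except KeyError as err:
    print(err)  # the missing labels are reported
```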
Clean Up C Warnings | diff --git a/pandas/_libs/algos.pyx b/pandas/_libs/algos.pyx
index dd1f38ce3a842..5f3d946a1e024 100644
--- a/pandas/_libs/algos.pyx
+++ b/pandas/_libs/algos.pyx
@@ -1173,12 +1173,12 @@ ctypedef fused out_t:
@cython.boundscheck(False)
@cython.wraparound(False)
-def diff_2d(ndarray[diff_t, ndim=2] arr,
- ndarray[out_t, ndim=2] out,
+def diff_2d(diff_t[:, :] arr,
+ out_t[:, :] out,
Py_ssize_t periods, int axis):
cdef:
Py_ssize_t i, j, sx, sy, start, stop
- bint f_contig = arr.flags.f_contiguous
+ bint f_contig = arr.is_f_contig()
# Disable for unsupported dtype combinations,
# see https://github.com/cython/cython/issues/2646
diff --git a/pandas/_libs/hashing.pyx b/pandas/_libs/hashing.pyx
index 878da670b2f68..2d859db22ea23 100644
--- a/pandas/_libs/hashing.pyx
+++ b/pandas/_libs/hashing.pyx
@@ -5,7 +5,7 @@ import cython
from libc.stdlib cimport malloc, free
import numpy as np
-from numpy cimport uint8_t, uint32_t, uint64_t, import_array
+from numpy cimport ndarray, uint8_t, uint32_t, uint64_t, import_array
import_array()
from pandas._libs.util cimport is_nan
@@ -15,7 +15,7 @@ DEF dROUNDS = 4
@cython.boundscheck(False)
-def hash_object_array(object[:] arr, object key, object encoding='utf8'):
+def hash_object_array(ndarray[object] arr, object key, object encoding='utf8'):
"""
Parameters
----------
diff --git a/pandas/_libs/tslibs/conversion.pyx b/pandas/_libs/tslibs/conversion.pyx
index bf38fcfb6103c..57b4100fbceb0 100644
--- a/pandas/_libs/tslibs/conversion.pyx
+++ b/pandas/_libs/tslibs/conversion.pyx
@@ -152,7 +152,7 @@ def ensure_timedelta64ns(arr: ndarray, copy: bool=True):
@cython.boundscheck(False)
@cython.wraparound(False)
-def datetime_to_datetime64(object[:] values):
+def datetime_to_datetime64(ndarray[object] values):
"""
Convert ndarray of datetime-like objects to int64 array representing
nanosecond timestamps.
diff --git a/pandas/_libs/window/aggregations.pyx b/pandas/_libs/window/aggregations.pyx
index f675818599b2c..80b9144042041 100644
--- a/pandas/_libs/window/aggregations.pyx
+++ b/pandas/_libs/window/aggregations.pyx
@@ -56,7 +56,7 @@ cdef:
cdef inline int int_max(int a, int b): return a if a >= b else b
cdef inline int int_min(int a, int b): return a if a <= b else b
-cdef inline bint is_monotonic_start_end_bounds(
+cdef bint is_monotonic_start_end_bounds(
ndarray[int64_t, ndim=1] start, ndarray[int64_t, ndim=1] end
):
return is_monotonic(start, False)[0] and is_monotonic(end, False)[0]
| Getting closer to no warnings (at least with clang). Changes in this PR get rid of the following warnings:
```sh
warning: pandas/_libs/window/aggregations.pyx:60:4: Buffer unpacking not optimized away.
warning: pandas/_libs/window/aggregations.pyx:60:4: Buffer unpacking not optimized away.
warning: pandas/_libs/window/aggregations.pyx:60:36: Buffer unpacking not optimized away.
warning: pandas/_libs/window/aggregations.pyx:60:36: Buffer unpacking not optimized away.
pandas/_libs/hashing.c:25100:20: warning: unused function '__pyx_memview_get_object' [-Wunused-function]
static PyObject *__pyx_memview_get_object(const char *itemp) {
^
pandas/_libs/hashing.c:25105:12: warning: unused function '__pyx_memview_set_object' [-Wunused-function]
static int __pyx_memview_set_object(const char *itemp, PyObject *obj) {
pandas/_libs/algos.c:81410:3: warning: code will never be executed [-Wunreachable-code]
__Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_arr.rcbuffer->pybuffer);
^~~~~~~~~~~~~~~~~~~~~~~
pandas/_libs/algos.c:83045:3: warning: code will never be executed [-Wunreachable-code]
__Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_arr.rcbuffer->pybuffer);
^~~~~~~~~~~~~~~~~~~~~~~
pandas/_libs/algos.c:83951:3: warning: code will never be executed [-Wunreachable-code]
__Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_arr.rcbuffer->pybuffer);
^~~~~~~~~~~~~~~~~~~~~~~
pandas/_libs/algos.c:84857:3: warning: code will never be executed [-Wunreachable-code]
__Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_arr.rcbuffer->pybuffer);
^~~~~~~~~~~~~~~~~~~~~~~
pandas/_libs/algos.c:85034:3: warning: code will never be executed [-Wunreachable-code]
__Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_arr.rcbuffer->pybuffer);
^~~~~~~~~~~~~~~~~~~~~~~
pandas/_libs/algos.c:85940:3: warning: code will never be executed [-Wunreachable-code]
__Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_arr.rcbuffer->pybuffer);
^~~~~~~~~~~~~~~~~~~~~~~
pandas/_libs/tslibs/conversion.c:33113:20: warning: unused function '__pyx_memview_get_object' [-Wunused-function]
static PyObject *__pyx_memview_get_object(const char *itemp) {
^
pandas/_libs/tslibs/conversion.c:33118:12: warning: unused function '__pyx_memview_set_object' [-Wunused-function]
static int __pyx_memview_set_object(const char *itemp, PyObject *obj) {
```
Only a handful left after this | https://api.github.com/repos/pandas-dev/pandas/pulls/31935 | 2020-02-12T20:10:55Z | 2020-02-17T19:50:33Z | 2020-02-17T19:50:33Z | 2023-04-12T20:17:35Z |
CLN: 29547 replace old string formatting 2 | diff --git a/pandas/tests/frame/test_operators.py b/pandas/tests/frame/test_operators.py
index 162f3c114fa5d..df40c2e7e2a11 100644
--- a/pandas/tests/frame/test_operators.py
+++ b/pandas/tests/frame/test_operators.py
@@ -840,8 +840,8 @@ def test_inplace_ops_identity2(self, op):
df["a"] = [True, False, True]
df_copy = df.copy()
- iop = "__i{}__".format(op)
- op = "__{}__".format(op)
+ iop = f"__i{op}__"
+ op = f"__{op}__"
# no id change and value is correct
getattr(df, iop)(operand)
diff --git a/pandas/tests/frame/test_reshape.py b/pandas/tests/frame/test_reshape.py
index b3af5a7b7317e..46a4a0a2af4ba 100644
--- a/pandas/tests/frame/test_reshape.py
+++ b/pandas/tests/frame/test_reshape.py
@@ -765,7 +765,9 @@ def test_unstack_unused_level(self, cols):
tm.assert_frame_equal(result, expected)
def test_unstack_nan_index(self): # GH7466
- cast = lambda val: "{0:1}".format("" if val != val else val)
+ def cast(val):
+ val_str = "" if val != val else val
+ return f"{val_str:1}"
def verify(df):
mk_list = lambda a: list(a) if isinstance(a, tuple) else [a]
diff --git a/pandas/tests/frame/test_timeseries.py b/pandas/tests/frame/test_timeseries.py
index e89f4ee07ea00..5e06b6402c34f 100644
--- a/pandas/tests/frame/test_timeseries.py
+++ b/pandas/tests/frame/test_timeseries.py
@@ -54,7 +54,7 @@ def test_frame_append_datetime64_col_other_units(self):
ns_dtype = np.dtype("M8[ns]")
for unit in units:
- dtype = np.dtype("M8[{unit}]".format(unit=unit))
+ dtype = np.dtype(f"M8[{unit}]")
vals = np.arange(n, dtype=np.int64).view(dtype)
df = DataFrame({"ints": np.arange(n)}, index=np.arange(n))
@@ -70,7 +70,7 @@ def test_frame_append_datetime64_col_other_units(self):
df["dates"] = np.arange(n, dtype=np.int64).view(ns_dtype)
for unit in units:
- dtype = np.dtype("M8[{unit}]".format(unit=unit))
+ dtype = np.dtype(f"M8[{unit}]")
vals = np.arange(n, dtype=np.int64).view(dtype)
tmp = df.copy()
diff --git a/pandas/tests/indexes/datetimes/test_scalar_compat.py b/pandas/tests/indexes/datetimes/test_scalar_compat.py
index 84eee2419f0b8..21ee8649172da 100644
--- a/pandas/tests/indexes/datetimes/test_scalar_compat.py
+++ b/pandas/tests/indexes/datetimes/test_scalar_compat.py
@@ -248,21 +248,21 @@ def test_round_int64(self, start, index_freq, periods, round_freq):
result = dt.floor(round_freq)
diff = dt.asi8 - result.asi8
mod = result.asi8 % unit
- assert (mod == 0).all(), "floor not a {} multiple".format(round_freq)
+ assert (mod == 0).all(), f"floor not a {round_freq} multiple"
assert (0 <= diff).all() and (diff < unit).all(), "floor error"
# test ceil
result = dt.ceil(round_freq)
diff = result.asi8 - dt.asi8
mod = result.asi8 % unit
- assert (mod == 0).all(), "ceil not a {} multiple".format(round_freq)
+ assert (mod == 0).all(), f"ceil not a {round_freq} multiple"
assert (0 <= diff).all() and (diff < unit).all(), "ceil error"
# test round
result = dt.round(round_freq)
diff = abs(result.asi8 - dt.asi8)
mod = result.asi8 % unit
- assert (mod == 0).all(), "round not a {} multiple".format(round_freq)
+ assert (mod == 0).all(), f"round not a {round_freq} multiple"
assert (diff <= unit // 2).all(), "round error"
if unit % 2 == 0:
assert (
diff --git a/pandas/tests/indexes/datetimes/test_tools.py b/pandas/tests/indexes/datetimes/test_tools.py
index df3a49fb7c292..13723f6455bff 100644
--- a/pandas/tests/indexes/datetimes/test_tools.py
+++ b/pandas/tests/indexes/datetimes/test_tools.py
@@ -199,7 +199,7 @@ def test_to_datetime_format_microsecond(self, cache):
# these are locale dependent
lang, _ = locale.getlocale()
month_abbr = calendar.month_abbr[4]
- val = "01-{}-2011 00:00:01.978".format(month_abbr)
+ val = f"01-{month_abbr}-2011 00:00:01.978"
format = "%d-%b-%Y %H:%M:%S.%f"
result = to_datetime(val, format=format, cache=cache)
@@ -551,7 +551,7 @@ def test_to_datetime_dt64s(self, cache):
)
@pytest.mark.parametrize("cache", [True, False])
def test_to_datetime_dt64s_out_of_bounds(self, cache, dt):
- msg = "Out of bounds nanosecond timestamp: {}".format(dt)
+ msg = f"Out of bounds nanosecond timestamp: {dt}"
with pytest.raises(OutOfBoundsDatetime, match=msg):
pd.to_datetime(dt, errors="raise")
with pytest.raises(OutOfBoundsDatetime, match=msg):
diff --git a/pandas/tests/indexes/interval/test_indexing.py b/pandas/tests/indexes/interval/test_indexing.py
index 87b72f702e2aa..0e5721bfd83fd 100644
--- a/pandas/tests/indexes/interval/test_indexing.py
+++ b/pandas/tests/indexes/interval/test_indexing.py
@@ -24,11 +24,7 @@ def test_get_loc_interval(self, closed, side):
for bound in [[0, 1], [1, 2], [2, 3], [3, 4], [0, 2], [2.5, 3], [-1, 4]]:
# if get_loc is supplied an interval, it should only search
# for exact matches, not overlaps or covers, else KeyError.
- msg = re.escape(
- "Interval({bound[0]}, {bound[1]}, closed='{side}')".format(
- bound=bound, side=side
- )
- )
+ msg = re.escape(f"Interval({bound[0]}, {bound[1]}, closed='{side}')")
if closed == side:
if bound == [0, 1]:
assert idx.get_loc(Interval(0, 1, closed=side)) == 0
@@ -86,11 +82,7 @@ def test_get_loc_length_one_interval(self, left, right, closed, other_closed):
else:
with pytest.raises(
KeyError,
- match=re.escape(
- "Interval({left}, {right}, closed='{other_closed}')".format(
- left=left, right=right, other_closed=other_closed
- )
- ),
+ match=re.escape(f"Interval({left}, {right}, closed='{other_closed}')"),
):
index.get_loc(interval)
diff --git a/pandas/tests/indexes/interval/test_interval.py b/pandas/tests/indexes/interval/test_interval.py
index d010060880703..c2b209c810af9 100644
--- a/pandas/tests/indexes/interval/test_interval.py
+++ b/pandas/tests/indexes/interval/test_interval.py
@@ -845,7 +845,7 @@ def test_set_closed(self, name, closed, new_closed):
def test_set_closed_errors(self, bad_closed):
# GH 21670
index = interval_range(0, 5)
- msg = "invalid option for 'closed': {closed}".format(closed=bad_closed)
+ msg = f"invalid option for 'closed': {bad_closed}"
with pytest.raises(ValueError, match=msg):
index.set_closed(bad_closed)
 | I split PR #31844 into batches; this is the second.
For this PR I ran the command `grep -l -R -e '%s' -e '%d' -e '\.format(' --include=*.{py,pyx} pandas/`, checked all the files it returned for `.format(`, and replaced the old string formatting with the corresponding f-strings, working toward a full clean-up for [#29547](https://github.com/pandas-dev/pandas/issues/29547). I may have missed something, so it is a good idea to double-check just in case.
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff` | https://api.github.com/repos/pandas-dev/pandas/pulls/31933 | 2020-02-12T17:51:56Z | 2020-02-13T05:54:38Z | 2020-02-13T05:54:38Z | 2020-02-21T01:30:01Z
CLN: remove unreachable in Series._reduce | diff --git a/pandas/core/series.py b/pandas/core/series.py
index 24e794014a15f..3d43ed79c2adb 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -34,7 +34,6 @@
ensure_platform_int,
is_bool,
is_categorical_dtype,
- is_datetime64_dtype,
is_dict_like,
is_extension_array_dtype,
is_integer,
@@ -42,7 +41,6 @@
is_list_like,
is_object_dtype,
is_scalar,
- is_timedelta64_dtype,
)
from pandas.core.dtypes.generic import (
ABCDataFrame,
@@ -64,7 +62,7 @@
from pandas.core import algorithms, base, generic, nanops, ops
from pandas.core.accessor import CachedAccessor
from pandas.core.arrays import ExtensionArray, try_cast_to_ea
-from pandas.core.arrays.categorical import Categorical, CategoricalAccessor
+from pandas.core.arrays.categorical import CategoricalAccessor
from pandas.core.arrays.sparse import SparseAccessor
import pandas.core.common as com
from pandas.core.construction import (
@@ -3848,21 +3846,12 @@ def _reduce(
if axis is not None:
self._get_axis_number(axis)
- if isinstance(delegate, Categorical):
- return delegate._reduce(name, skipna=skipna, **kwds)
- elif isinstance(delegate, ExtensionArray):
+ if isinstance(delegate, ExtensionArray):
# dispatch to ExtensionArray interface
return delegate._reduce(name, skipna=skipna, **kwds)
- elif is_datetime64_dtype(delegate):
- # use DatetimeIndex implementation to handle skipna correctly
- delegate = DatetimeIndex(delegate)
- elif is_timedelta64_dtype(delegate) and hasattr(TimedeltaIndex, name):
- # use TimedeltaIndex to handle skipna correctly
- # TODO: remove hasattr check after TimedeltaIndex has `std` method
- delegate = TimedeltaIndex(delegate)
-
- # dispatch to numpy arrays
- elif isinstance(delegate, np.ndarray):
+
+ else:
+ # dispatch to numpy arrays
if numeric_only:
raise NotImplementedError(
f"Series.{name} does not implement numeric_only."
@@ -3870,19 +3859,6 @@ def _reduce(
with np.errstate(all="ignore"):
return op(delegate, skipna=skipna, **kwds)
- # TODO(EA) dispatch to Index
- # remove once all internals extension types are
- # moved to ExtensionArrays
- return delegate._reduce(
- op=op,
- name=name,
- axis=axis,
- skipna=skipna,
- numeric_only=numeric_only,
- filter_type=filter_type,
- **kwds,
- )
-
def _reindex_indexer(self, new_index, indexer, copy):
if indexer is None:
if copy:
 | These branches are made unreachable now that `Series._values` returns `DatetimeArray`/`TimedeltaArray` (DTA/TDA) for datetime64/timedelta64 dtypes.
| https://api.github.com/repos/pandas-dev/pandas/pulls/31932 | 2020-02-12T16:45:49Z | 2020-02-14T01:27:40Z | 2020-02-14T01:27:40Z | 2020-02-14T01:54:05Z |
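The simplified dispatch that remains after the cleanup above can be sketched like this; it is a schematic illustration with a stand-in `ExtensionArrayLike` class, not the actual pandas code:

```python
# Schematic version of the reduction dispatch left after the cleanup:
# extension arrays handle the reduction themselves; everything else is
# treated as a plain numpy-like array and goes through the numpy-style op.

class ExtensionArrayLike:
    """Stand-in for pandas' ExtensionArray interface."""
    def __init__(self, data):
        self.data = data

    def _reduce(self, name, skipna=True):
        values = [v for v in self.data if v is not None] if skipna else self.data
        return {"sum": sum, "max": max, "min": min}[name](values)

def reduce_series(delegate, name, op):
    if isinstance(delegate, ExtensionArrayLike):
        # dispatch to the ExtensionArray interface
        return delegate._reduce(name)
    # dispatch to the numpy-style op for plain arrays
    return op(delegate)

print(reduce_series(ExtensionArrayLike([1, None, 2]), "sum", sum))  # 3
print(reduce_series([1, 2, 3], "sum", sum))  # 6
```

The point of the cleanup is that the separate `Categorical`, datetime64, and timedelta64 branches all collapse into the single extension-array branch once those values arrive as extension arrays.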
Revert 31791 | diff --git a/pandas/tests/io/test_common.py b/pandas/tests/io/test_common.py
index 404f5a477187b..730043e6ec7d7 100644
--- a/pandas/tests/io/test_common.py
+++ b/pandas/tests/io/test_common.py
@@ -141,7 +141,24 @@ def test_read_non_existant(self, reader, module, error_class, fn_ext):
pytest.importorskip(module)
path = os.path.join(HERE, "data", "does_not_exist." + fn_ext)
- with tm.external_error_raised(error_class):
+ msg1 = r"File (b')?.+does_not_exist\.{}'? does not exist".format(fn_ext)
+ msg2 = fr"\[Errno 2\] No such file or directory: '.+does_not_exist\.{fn_ext}'"
+ msg3 = "Expected object or value"
+ msg4 = "path_or_buf needs to be a string file path or file-like"
+ msg5 = (
+ fr"\[Errno 2\] File .+does_not_exist\.{fn_ext} does not exist: "
+ fr"'.+does_not_exist\.{fn_ext}'"
+ )
+ msg6 = fr"\[Errno 2\] 没有那个文件或目录: '.+does_not_exist\.{fn_ext}'"
+ msg7 = (
+ fr"\[Errno 2\] File o directory non esistente: '.+does_not_exist\.{fn_ext}'"
+ )
+ msg8 = fr"Failed to open local file.+does_not_exist\.{fn_ext}"
+
+ with pytest.raises(
+ error_class,
+ match=fr"({msg1}|{msg2}|{msg3}|{msg4}|{msg5}|{msg6}|{msg7}|{msg8})",
+ ):
reader(path)
@pytest.mark.parametrize(
@@ -167,7 +184,24 @@ def test_read_expands_user_home_dir(
path = os.path.join("~", "does_not_exist." + fn_ext)
monkeypatch.setattr(icom, "_expand_user", lambda x: os.path.join("foo", x))
- with tm.external_error_raised(error_class):
+ msg1 = fr"File (b')?.+does_not_exist\.{fn_ext}'? does not exist"
+ msg2 = fr"\[Errno 2\] No such file or directory: '.+does_not_exist\.{fn_ext}'"
+ msg3 = "Unexpected character found when decoding 'false'"
+ msg4 = "path_or_buf needs to be a string file path or file-like"
+ msg5 = (
+ fr"\[Errno 2\] File .+does_not_exist\.{fn_ext} does not exist: "
+ fr"'.+does_not_exist\.{fn_ext}'"
+ )
+ msg6 = fr"\[Errno 2\] 没有那个文件或目录: '.+does_not_exist\.{fn_ext}'"
+ msg7 = (
+ fr"\[Errno 2\] File o directory non esistente: '.+does_not_exist\.{fn_ext}'"
+ )
+ msg8 = fr"Failed to open local file.+does_not_exist\.{fn_ext}"
+
+ with pytest.raises(
+ error_class,
+ match=fr"({msg1}|{msg2}|{msg3}|{msg4}|{msg5}|{msg6}|{msg7}|{msg8})",
+ ):
reader(path)
@pytest.mark.parametrize(
| supersedes #31800
we never backported this, so this just cleans up master (assuming it now works again) | https://api.github.com/repos/pandas-dev/pandas/pulls/31931 | 2020-02-12T16:15:49Z | 2020-02-15T01:44:56Z | 2020-02-15T01:44:56Z | 2023-04-12T20:16:58Z
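The restored tests above join many possible (including localized) error messages into one regex alternation, `(msg1|msg2|...)`, for `pytest.raises(..., match=...)`. A minimal sketch of that technique, using two of the patterns as examples:

```python
import re

# Minimal sketch of the alternation technique used in the restored tests:
# several possible error messages are OR-ed together into one pattern, so a
# match against any one of them passes.

fn_ext = "csv"
msg1 = fr"File (b')?.+does_not_exist\.{fn_ext}'? does not exist"
msg2 = fr"\[Errno 2\] No such file or directory: '.+does_not_exist\.{fn_ext}'"
pattern = fr"({msg1}|{msg2})"

err = "[Errno 2] No such file or directory: '/tmp/does_not_exist.csv'"
assert re.search(pattern, err)
```

`pytest.raises(error_class, match=pattern)` applies the same `re.search` semantics to the raised exception's string representation.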
Backport PR #31918 on branch 1.0.x (BUG: fix parquet roundtrip with unsigned integer dtypes) | diff --git a/doc/source/whatsnew/v1.0.2.rst b/doc/source/whatsnew/v1.0.2.rst
index 6d99668684a3b..44125ee30911f 100644
--- a/doc/source/whatsnew/v1.0.2.rst
+++ b/doc/source/whatsnew/v1.0.2.rst
@@ -33,6 +33,8 @@ Bug fixes
**I/O**
- Using ``pd.NA`` with :meth:`DataFrame.to_json` now correctly outputs a null value instead of an empty object (:issue:`31615`)
+- Fixed bug in parquet roundtrip with nullable unsigned integer dtypes (:issue:`31896`).
+
**Experimental dtypes**
diff --git a/pandas/core/arrays/integer.py b/pandas/core/arrays/integer.py
index 9a0f5794e7607..96fdd8ee3c679 100644
--- a/pandas/core/arrays/integer.py
+++ b/pandas/core/arrays/integer.py
@@ -94,6 +94,10 @@ def __from_arrow__(self, array):
import pyarrow
from pandas.core.arrays._arrow_utils import pyarrow_array_to_numpy_and_mask
+ pyarrow_type = pyarrow.from_numpy_dtype(self.type)
+ if not array.type.equals(pyarrow_type):
+ array = array.cast(pyarrow_type)
+
if isinstance(array, pyarrow.Array):
chunks = [array]
else:
diff --git a/pandas/tests/arrays/test_integer.py b/pandas/tests/arrays/test_integer.py
index 857b793e9e9a8..2a6b6718cc149 100644
--- a/pandas/tests/arrays/test_integer.py
+++ b/pandas/tests/arrays/test_integer.py
@@ -1016,9 +1016,9 @@ def test_arrow_array(data):
assert arr.equals(expected)
-@td.skip_if_no("pyarrow", min_version="0.15.1.dev")
+@td.skip_if_no("pyarrow", min_version="0.16.0")
def test_arrow_roundtrip(data):
- # roundtrip possible from arrow 1.0.0
+ # roundtrip possible from arrow 0.16.0
import pyarrow as pa
df = pd.DataFrame({"a": data})
@@ -1028,6 +1028,19 @@ def test_arrow_roundtrip(data):
tm.assert_frame_equal(result, df)
+@td.skip_if_no("pyarrow", min_version="0.16.0")
+def test_arrow_from_arrow_uint():
+ # https://github.com/pandas-dev/pandas/issues/31896
+ # possible mismatch in types
+ import pyarrow as pa
+
+ dtype = pd.UInt32Dtype()
+ result = dtype.__from_arrow__(pa.array([1, 2, 3, 4, None], type="int64"))
+ expected = pd.array([1, 2, 3, 4, None], dtype="UInt32")
+
+ tm.assert_extension_array_equal(result, expected)
+
+
@pytest.mark.parametrize(
"pandasmethname, kwargs",
[
diff --git a/pandas/tests/io/test_parquet.py b/pandas/tests/io/test_parquet.py
index d51c712ed5abd..7bcc354f53be0 100644
--- a/pandas/tests/io/test_parquet.py
+++ b/pandas/tests/io/test_parquet.py
@@ -533,25 +533,28 @@ def test_additional_extension_arrays(self, pa):
df = pd.DataFrame(
{
"a": pd.Series([1, 2, 3], dtype="Int64"),
- "b": pd.Series(["a", None, "c"], dtype="string"),
+ "b": pd.Series([1, 2, 3], dtype="UInt32"),
+ "c": pd.Series(["a", None, "c"], dtype="string"),
}
)
- if LooseVersion(pyarrow.__version__) >= LooseVersion("0.15.1.dev"):
+ if LooseVersion(pyarrow.__version__) >= LooseVersion("0.16.0"):
expected = df
else:
# de-serialized as plain int / object
- expected = df.assign(a=df.a.astype("int64"), b=df.b.astype("object"))
+ expected = df.assign(
+ a=df.a.astype("int64"), b=df.b.astype("int64"), c=df.c.astype("object")
+ )
check_round_trip(df, pa, expected=expected)
df = pd.DataFrame({"a": pd.Series([1, 2, 3, None], dtype="Int64")})
- if LooseVersion(pyarrow.__version__) >= LooseVersion("0.15.1.dev"):
+ if LooseVersion(pyarrow.__version__) >= LooseVersion("0.16.0"):
expected = df
else:
# if missing values in integer, currently de-serialized as float
expected = df.assign(a=df.a.astype("float64"))
check_round_trip(df, pa, expected=expected)
- @td.skip_if_no("pyarrow", min_version="0.15.1.dev")
+ @td.skip_if_no("pyarrow", min_version="0.16.0")
def test_additional_extension_types(self, pa):
# test additional ExtensionArrays that are supported through the
# __arrow_array__ protocol + by defining a custom ExtensionType
| Backport PR #31918: BUG: fix parquet roundtrip with unsigned integer dtypes | https://api.github.com/repos/pandas-dev/pandas/pulls/31928 | 2020-02-12T15:13:37Z | 2020-02-12T15:57:04Z | 2020-02-12T15:57:04Z | 2020-02-12T15:57:04Z |
Backport PR #31877 on branch 1.0.x (BUG: fix infer_dtype for StringDtype) | diff --git a/doc/source/whatsnew/v1.0.2.rst b/doc/source/whatsnew/v1.0.2.rst
index 6d99668684a3b..8c155306c6fcb 100644
--- a/doc/source/whatsnew/v1.0.2.rst
+++ b/doc/source/whatsnew/v1.0.2.rst
@@ -34,8 +34,10 @@ Bug fixes
- Using ``pd.NA`` with :meth:`DataFrame.to_json` now correctly outputs a null value instead of an empty object (:issue:`31615`)
+
**Experimental dtypes**
+- Fix bug in :meth:`DataFrame.convert_dtypes` for columns that were already using the ``"string"`` dtype (:issue:`31731`).
- Fixed bug in setting values using a slice indexer with string dtype (:issue:`31772`)
.. ---------------------------------------------------------------------------
diff --git a/pandas/_libs/lib.pyx b/pandas/_libs/lib.pyx
index acd74591134bc..8231471dfbde9 100644
--- a/pandas/_libs/lib.pyx
+++ b/pandas/_libs/lib.pyx
@@ -961,7 +961,7 @@ _TYPE_MAP = {
'complex64': 'complex',
'complex128': 'complex',
'c': 'complex',
- 'string': 'bytes',
+ 'string': 'string',
'S': 'bytes',
'U': 'string',
'bool': 'boolean',
diff --git a/pandas/conftest.py b/pandas/conftest.py
index 3eab2186ccb94..9eb2db19064c1 100644
--- a/pandas/conftest.py
+++ b/pandas/conftest.py
@@ -731,6 +731,7 @@ def any_numpy_dtype(request):
# categoricals are handled separately
_any_skipna_inferred_dtype = [
("string", ["a", np.nan, "c"]),
+ ("string", ["a", pd.NA, "c"]),
("bytes", [b"a", np.nan, b"c"]),
("empty", [np.nan, np.nan, np.nan]),
("empty", []),
@@ -741,6 +742,7 @@ def any_numpy_dtype(request):
("mixed-integer-float", [1, np.nan, 2.0]),
("decimal", [Decimal(1), np.nan, Decimal(2)]),
("boolean", [True, np.nan, False]),
+ ("boolean", [True, pd.NA, False]),
("datetime64", [np.datetime64("2013-01-01"), np.nan, np.datetime64("2018-01-01")]),
("datetime", [pd.Timestamp("20130101"), np.nan, pd.Timestamp("20180101")]),
("date", [date(2013, 1, 1), np.nan, date(2018, 1, 1)]),
diff --git a/pandas/tests/dtypes/test_inference.py b/pandas/tests/dtypes/test_inference.py
index 5eb85de2b90f5..63c484f5ea68f 100644
--- a/pandas/tests/dtypes/test_inference.py
+++ b/pandas/tests/dtypes/test_inference.py
@@ -1200,6 +1200,24 @@ def test_interval(self):
inferred = lib.infer_dtype(pd.Series(idx), skipna=False)
assert inferred == "interval"
+ @pytest.mark.parametrize("klass", [pd.array, pd.Series])
+ @pytest.mark.parametrize("skipna", [True, False])
+ @pytest.mark.parametrize("data", [["a", "b", "c"], ["a", "b", pd.NA]])
+ def test_string_dtype(self, data, skipna, klass):
+ # StringArray
+ val = klass(data, dtype="string")
+ inferred = lib.infer_dtype(val, skipna=skipna)
+ assert inferred == "string"
+
+ @pytest.mark.parametrize("klass", [pd.array, pd.Series])
+ @pytest.mark.parametrize("skipna", [True, False])
+ @pytest.mark.parametrize("data", [[True, False, True], [True, False, pd.NA]])
+ def test_boolean_dtype(self, data, skipna, klass):
+ # BooleanArray
+ val = klass(data, dtype="boolean")
+ inferred = lib.infer_dtype(val, skipna=skipna)
+ assert inferred == "boolean"
+
class TestNumberScalar:
def test_is_number(self):
diff --git a/pandas/tests/series/test_convert_dtypes.py b/pandas/tests/series/test_convert_dtypes.py
index 923b5a94c5f41..a6b5fed40a9d7 100644
--- a/pandas/tests/series/test_convert_dtypes.py
+++ b/pandas/tests/series/test_convert_dtypes.py
@@ -246,3 +246,12 @@ def test_convert_dtypes(self, data, maindtype, params, answerdict):
# Make sure original not changed
tm.assert_series_equal(series, copy)
+
+ def test_convert_string_dtype(self):
+ # https://github.com/pandas-dev/pandas/issues/31731 -> converting columns
+ # that are already string dtype
+ df = pd.DataFrame(
+ {"A": ["a", "b", pd.NA], "B": ["ä", "ö", "ü"]}, dtype="string"
+ )
+ result = df.convert_dtypes()
+ tm.assert_frame_equal(df, result)
diff --git a/pandas/tests/test_strings.py b/pandas/tests/test_strings.py
index 568b3917ba4cb..8171072686443 100644
--- a/pandas/tests/test_strings.py
+++ b/pandas/tests/test_strings.py
@@ -7,6 +7,7 @@
from pandas._libs import lib
+import pandas as pd
from pandas import DataFrame, Index, MultiIndex, Series, concat, isna, notna
import pandas._testing as tm
import pandas.core.strings as strings
@@ -207,6 +208,9 @@ def test_api_per_dtype(self, index_or_series, dtype, any_skipna_inferred_dtype):
box = index_or_series
inferred_dtype, values = any_skipna_inferred_dtype
+ if dtype == "category" and len(values) and values[1] is pd.NA:
+ pytest.xfail(reason="Categorical does not yet support pd.NA")
+
t = box(values, dtype=dtype) # explicit dtype to avoid casting
# TODO: get rid of these xfails
| Backport PR #31877: BUG: fix infer_dtype for StringDtype | https://api.github.com/repos/pandas-dev/pandas/pulls/31926 | 2020-02-12T15:05:41Z | 2020-02-12T15:54:22Z | 2020-02-12T15:54:22Z | 2020-02-12T15:54:22Z |
Backport PR #31773 on branch 1.0.x (BUG: fix StringArray/PandasArray setitem with slice) | diff --git a/doc/source/whatsnew/v1.0.2.rst b/doc/source/whatsnew/v1.0.2.rst
index b055b44274bd8..688e073b1e419 100644
--- a/doc/source/whatsnew/v1.0.2.rst
+++ b/doc/source/whatsnew/v1.0.2.rst
@@ -30,6 +30,10 @@ Bug fixes
- Using ``pd.NA`` with :meth:`DataFrame.to_json` now correctly outputs a null value instead of an empty object (:issue:`31615`)
+**Experimental dtypes**
+
+- Fixed bug in setting values using a slice indexer with string dtype (:issue:`31772`)
+
.. ---------------------------------------------------------------------------
.. _whatsnew_102.contributors:
diff --git a/pandas/core/arrays/numpy_.py b/pandas/core/arrays/numpy_.py
index 57cc52ce24f8c..10248074a1d3a 100644
--- a/pandas/core/arrays/numpy_.py
+++ b/pandas/core/arrays/numpy_.py
@@ -244,12 +244,8 @@ def __setitem__(self, key, value):
value = extract_array(value, extract_numpy=True)
key = check_array_indexer(self, key)
- scalar_key = lib.is_scalar(key)
scalar_value = lib.is_scalar(value)
- if not scalar_key and scalar_value:
- key = np.asarray(key)
-
if not scalar_value:
value = np.asarray(value, dtype=self._ndarray.dtype)
diff --git a/pandas/tests/extension/base/setitem.py b/pandas/tests/extension/base/setitem.py
index e0ca603aaa0ed..590bcd586900a 100644
--- a/pandas/tests/extension/base/setitem.py
+++ b/pandas/tests/extension/base/setitem.py
@@ -173,6 +173,29 @@ def test_setitem_tuple_index(self, data):
s[(0, 1)] = data[1]
self.assert_series_equal(s, expected)
+ def test_setitem_slice(self, data, box_in_series):
+ arr = data[:5].copy()
+ expected = data.take([0, 0, 0, 3, 4])
+ if box_in_series:
+ arr = pd.Series(arr)
+ expected = pd.Series(expected)
+
+ arr[:3] = data[0]
+ self.assert_equal(arr, expected)
+
+ def test_setitem_loc_iloc_slice(self, data):
+ arr = data[:5].copy()
+ s = pd.Series(arr, index=["a", "b", "c", "d", "e"])
+ expected = pd.Series(data.take([0, 0, 0, 3, 4]), index=s.index)
+
+ result = s.copy()
+ result.iloc[:3] = data[0]
+ self.assert_equal(result, expected)
+
+ result = s.copy()
+ result.loc[:"c"] = data[0]
+ self.assert_equal(result, expected)
+
def test_setitem_slice_mismatch_length_raises(self, data):
arr = data[:5]
with pytest.raises(ValueError):
diff --git a/pandas/tests/extension/test_numpy.py b/pandas/tests/extension/test_numpy.py
index 8a820c8746857..76573242a2506 100644
--- a/pandas/tests/extension/test_numpy.py
+++ b/pandas/tests/extension/test_numpy.py
@@ -396,6 +396,14 @@ def test_setitem_scalar_key_sequence_raise(self, data):
# Failed: DID NOT RAISE <class 'ValueError'>
super().test_setitem_scalar_key_sequence_raise(data)
+ @skip_nested
+ def test_setitem_slice(self, data, box_in_series):
+ super().test_setitem_slice(data, box_in_series)
+
+ @skip_nested
+ def test_setitem_loc_iloc_slice(self, data):
+ super().test_setitem_loc_iloc_slice(data)
+
@skip_nested
class TestParsing(BaseNumPyTests, base.BaseParsingTests):
| Backport PR #31773: BUG: fix StringArray/PandasArray setitem with slice | https://api.github.com/repos/pandas-dev/pandas/pulls/31923 | 2020-02-12T13:14:06Z | 2020-02-12T13:49:09Z | 2020-02-12T13:49:09Z | 2020-02-12T13:49:09Z |
Backport PR #31794 on branch 1.0.x (BUG: Avoid casting Int to object in Categorical.from_codes) | diff --git a/doc/source/whatsnew/v1.0.2.rst b/doc/source/whatsnew/v1.0.2.rst
index b055b44274bd8..556c0cf1b55af 100644
--- a/doc/source/whatsnew/v1.0.2.rst
+++ b/doc/source/whatsnew/v1.0.2.rst
@@ -26,6 +26,10 @@ Fixed regressions
Bug fixes
~~~~~~~~~
+**Categorical**
+
+- Fixed bug where :meth:`Categorical.from_codes` improperly raised a ``ValueError`` when passed nullable integer codes. (:issue:`31779`)
+
**I/O**
- Using ``pd.NA`` with :meth:`DataFrame.to_json` now correctly outputs a null value instead of an empty object (:issue:`31615`)
diff --git a/pandas/core/arrays/categorical.py b/pandas/core/arrays/categorical.py
index aa84edd413bc9..52d9df0c2d508 100644
--- a/pandas/core/arrays/categorical.py
+++ b/pandas/core/arrays/categorical.py
@@ -644,7 +644,13 @@ def from_codes(cls, codes, categories=None, ordered=None, dtype=None):
)
raise ValueError(msg)
- codes = np.asarray(codes) # #21767
+ if is_extension_array_dtype(codes) and is_integer_dtype(codes):
+ # Avoid the implicit conversion of Int to object
+ if isna(codes).any():
+ raise ValueError("codes cannot contain NA values")
+ codes = codes.to_numpy(dtype=np.int64)
+ else:
+ codes = np.asarray(codes)
if len(codes) and not is_integer_dtype(codes):
raise ValueError("codes need to be array-like integers")
diff --git a/pandas/tests/arrays/categorical/test_constructors.py b/pandas/tests/arrays/categorical/test_constructors.py
index 70e1421c8dcf4..dbd8fd8df67c1 100644
--- a/pandas/tests/arrays/categorical/test_constructors.py
+++ b/pandas/tests/arrays/categorical/test_constructors.py
@@ -560,6 +560,23 @@ def test_from_codes_neither(self):
with pytest.raises(ValueError, match=msg):
Categorical.from_codes([0, 1])
+ def test_from_codes_with_nullable_int(self):
+ codes = pd.array([0, 1], dtype="Int64")
+ categories = ["a", "b"]
+
+ result = Categorical.from_codes(codes, categories=categories)
+ expected = Categorical.from_codes(codes.to_numpy(int), categories=categories)
+
+ tm.assert_categorical_equal(result, expected)
+
+ def test_from_codes_with_nullable_int_na_raises(self):
+ codes = pd.array([0, None], dtype="Int64")
+ categories = ["a", "b"]
+
+ msg = "codes cannot contain NA values"
+ with pytest.raises(ValueError, match=msg):
+ Categorical.from_codes(codes, categories=categories)
+
@pytest.mark.parametrize("dtype", [None, "category"])
def test_from_inferred_categories(self, dtype):
cats = ["a", "b"]
| Backport PR #31794: BUG: Avoid casting Int to object in Categorical.from_codes | https://api.github.com/repos/pandas-dev/pandas/pulls/31921 | 2020-02-12T12:42:34Z | 2020-02-12T13:14:08Z | 2020-02-12T13:14:08Z | 2020-02-12T13:14:08Z |
BUG: fix parquet roundtrip with unsigned integer dtypes | diff --git a/doc/source/whatsnew/v1.0.2.rst b/doc/source/whatsnew/v1.0.2.rst
index f4bb8c580fb08..aa64f4524b877 100644
--- a/doc/source/whatsnew/v1.0.2.rst
+++ b/doc/source/whatsnew/v1.0.2.rst
@@ -31,6 +31,8 @@ Bug fixes
**I/O**
- Using ``pd.NA`` with :meth:`DataFrame.to_json` now correctly outputs a null value instead of an empty object (:issue:`31615`)
+- Fixed bug in parquet roundtrip with nullable unsigned integer dtypes (:issue:`31896`).
+
.. ---------------------------------------------------------------------------
diff --git a/pandas/core/arrays/integer.py b/pandas/core/arrays/integer.py
index 19ab43fc1c248..e0b6947394cc4 100644
--- a/pandas/core/arrays/integer.py
+++ b/pandas/core/arrays/integer.py
@@ -103,6 +103,10 @@ def __from_arrow__(
import pyarrow # noqa: F811
from pandas.core.arrays._arrow_utils import pyarrow_array_to_numpy_and_mask
+ pyarrow_type = pyarrow.from_numpy_dtype(self.type)
+ if not array.type.equals(pyarrow_type):
+ array = array.cast(pyarrow_type)
+
if isinstance(array, pyarrow.Array):
chunks = [array]
else:
diff --git a/pandas/tests/arrays/test_integer.py b/pandas/tests/arrays/test_integer.py
index 9f0e6407c25f0..9186c33c12c06 100644
--- a/pandas/tests/arrays/test_integer.py
+++ b/pandas/tests/arrays/test_integer.py
@@ -1036,9 +1036,9 @@ def test_arrow_array(data):
assert arr.equals(expected)
-@td.skip_if_no("pyarrow", min_version="0.15.1.dev")
+@td.skip_if_no("pyarrow", min_version="0.16.0")
def test_arrow_roundtrip(data):
- # roundtrip possible from arrow 1.0.0
+ # roundtrip possible from arrow 0.16.0
import pyarrow as pa
df = pd.DataFrame({"a": data})
@@ -1048,6 +1048,19 @@ def test_arrow_roundtrip(data):
tm.assert_frame_equal(result, df)
+@td.skip_if_no("pyarrow", min_version="0.16.0")
+def test_arrow_from_arrow_uint():
+ # https://github.com/pandas-dev/pandas/issues/31896
+ # possible mismatch in types
+ import pyarrow as pa
+
+ dtype = pd.UInt32Dtype()
+ result = dtype.__from_arrow__(pa.array([1, 2, 3, 4, None], type="int64"))
+ expected = pd.array([1, 2, 3, 4, None], dtype="UInt32")
+
+ tm.assert_extension_array_equal(result, expected)
+
+
@pytest.mark.parametrize(
"pandasmethname, kwargs",
[
diff --git a/pandas/tests/io/test_parquet.py b/pandas/tests/io/test_parquet.py
index 7ed8d8f22764c..f76db9939641c 100644
--- a/pandas/tests/io/test_parquet.py
+++ b/pandas/tests/io/test_parquet.py
@@ -533,25 +533,28 @@ def test_additional_extension_arrays(self, pa):
df = pd.DataFrame(
{
"a": pd.Series([1, 2, 3], dtype="Int64"),
- "b": pd.Series(["a", None, "c"], dtype="string"),
+ "b": pd.Series([1, 2, 3], dtype="UInt32"),
+ "c": pd.Series(["a", None, "c"], dtype="string"),
}
)
- if LooseVersion(pyarrow.__version__) >= LooseVersion("0.15.1.dev"):
+ if LooseVersion(pyarrow.__version__) >= LooseVersion("0.16.0"):
expected = df
else:
# de-serialized as plain int / object
- expected = df.assign(a=df.a.astype("int64"), b=df.b.astype("object"))
+ expected = df.assign(
+ a=df.a.astype("int64"), b=df.b.astype("int64"), c=df.c.astype("object")
+ )
check_round_trip(df, pa, expected=expected)
df = pd.DataFrame({"a": pd.Series([1, 2, 3, None], dtype="Int64")})
- if LooseVersion(pyarrow.__version__) >= LooseVersion("0.15.1.dev"):
+ if LooseVersion(pyarrow.__version__) >= LooseVersion("0.16.0"):
expected = df
else:
# if missing values in integer, currently de-serialized as float
expected = df.assign(a=df.a.astype("float64"))
check_round_trip(df, pa, expected=expected)
- @td.skip_if_no("pyarrow", min_version="0.15.1.dev")
+ @td.skip_if_no("pyarrow", min_version="0.16.0")
def test_additional_extension_types(self, pa):
# test additional ExtensionArrays that are supported through the
# __arrow_array__ protocol + by defining a custom ExtensionType
| Closes #31896 | https://api.github.com/repos/pandas-dev/pandas/pulls/31918 | 2020-02-12T10:30:30Z | 2020-02-12T15:04:32Z | 2020-02-12T15:04:32Z | 2021-12-15T08:05:53Z |
EHN: fix unsigned int type problem of the result diff() | diff --git a/pandas/core/algorithms.py b/pandas/core/algorithms.py
index 886b0a3c5fec1..189b2ca8cc8e1 100644
--- a/pandas/core/algorithms.py
+++ b/pandas/core/algorithms.py
@@ -1869,6 +1869,7 @@ def diff(arr, n: int, axis: int = 0, stacklevel=3):
is_timedelta = False
is_bool = False
+ original_dtype = np.dtype(dtype)
if needs_i8_conversion(arr):
dtype = np.float64
arr = arr.view("i8")
@@ -1928,6 +1929,7 @@ def diff(arr, n: int, axis: int = 0, stacklevel=3):
if is_timedelta:
out_arr = out_arr.astype("int64").view("timedelta64[ns]")
+ out_arr = out_arr.astype(original_dtype)
return out_arr
| diff() produces an unexpected output dtype (float64) when given unsigned integer
input. Fix this by storing the original dtype and restoring it on the result.
Fixes #28909
- [ ] closes #xxxx
- [ ] tests added / passed
- [ ] passes `black pandas`
- [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
What do you think about this problem with the code? @jreback | https://api.github.com/repos/pandas-dev/pandas/pulls/31916 | 2020-02-12T08:45:37Z | 2020-02-13T01:58:27Z | null | 2020-02-19T07:05:17Z |
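As background for why the dtype handling in this patch is delicate: NumPy's own `diff` keeps an unsigned dtype and silently wraps around on negative differences, which is one reason a library may prefer to promote the result type instead. A minimal sketch of the wraparound (pure NumPy, independent of the pandas change above):

```python
import numpy as np

# Unsigned subtraction wraps around rather than going negative
a = np.array([1, 255, 3], dtype=np.uint8)
d = np.diff(a)

print(d, d.dtype)  # 3 - 255 wraps modulo 256 to 4, and the dtype stays uint8
```

Restoring the original unsigned dtype on a promoted result, as this PR proposes, therefore trades float precision for reintroducing exactly this wraparound behavior.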
CLN: Some groupby internals | diff --git a/pandas/core/groupby/ops.py b/pandas/core/groupby/ops.py
index 63087672d1365..7259268ac3f2b 100644
--- a/pandas/core/groupby/ops.py
+++ b/pandas/core/groupby/ops.py
@@ -169,7 +169,7 @@ def apply(self, f, data: FrameOrSeries, axis: int = 0):
and not sdata.index._has_complex_internals
):
try:
- result_values, mutated = splitter.fast_apply(f, group_keys)
+ result_values, mutated = splitter.fast_apply(f, sdata, group_keys)
except libreduction.InvalidApply as err:
# This Exception is raised if `f` triggers an exception
@@ -925,11 +925,9 @@ def _chop(self, sdata: Series, slice_obj: slice) -> Series:
class FrameSplitter(DataSplitter):
- def fast_apply(self, f, names):
+ def fast_apply(self, f, sdata: FrameOrSeries, names):
# must return keys::list, values::list, mutated::bool
starts, ends = lib.generate_slices(self.slabels, self.ngroups)
-
- sdata = self._get_sorted_data()
return libreduction.apply_frame_axis0(sdata, f, names, starts, ends)
def _chop(self, sdata: DataFrame, slice_obj: slice) -> DataFrame:
@@ -939,11 +937,13 @@ def _chop(self, sdata: DataFrame, slice_obj: slice) -> DataFrame:
return sdata.iloc[:, slice_obj]
-def get_splitter(data: FrameOrSeries, *args, **kwargs) -> DataSplitter:
+def get_splitter(
+ data: FrameOrSeries, labels: np.ndarray, ngroups: int, axis: int = 0
+) -> DataSplitter:
if isinstance(data, Series):
klass: Type[DataSplitter] = SeriesSplitter
else:
# i.e. DataFrame
klass = FrameSplitter
- return klass(data, *args, **kwargs)
+ return klass(data, labels, ngroups, axis)
diff --git a/pandas/tests/groupby/test_apply.py b/pandas/tests/groupby/test_apply.py
index 41ec70468aaeb..18ad5d90b3f60 100644
--- a/pandas/tests/groupby/test_apply.py
+++ b/pandas/tests/groupby/test_apply.py
@@ -108,8 +108,9 @@ def f(g):
splitter = grouper._get_splitter(g._selected_obj, axis=g.axis)
group_keys = grouper._get_group_keys()
+ sdata = splitter._get_sorted_data()
- values, mutated = splitter.fast_apply(f, group_keys)
+ values, mutated = splitter.fast_apply(f, sdata, group_keys)
assert not mutated
| - [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
1. Avoids calling `_get_sorted_data()` twice
2. Eliminates `*args, **kwargs` from an internal function | https://api.github.com/repos/pandas-dev/pandas/pulls/31915 | 2020-02-12T07:03:11Z | 2020-02-17T00:23:55Z | 2020-02-17T00:23:55Z | 2020-02-17T00:23:58Z |
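The second item — replacing `*args, **kwargs` with explicit parameters — is illustrated below with simplified stand-ins (not the real pandas functions): with an explicit signature, a misspelled keyword fails loudly at the call site instead of being silently swallowed.

```python
def get_splitter_loose(data, *args, **kwargs):
    # A misspelled keyword is silently absorbed into kwargs here
    return kwargs.get("axis", 0)

def get_splitter_strict(data, labels, ngroups, axis=0):
    # The same misspelling now raises TypeError immediately
    return axis

# Typo "axsi" goes unnoticed with the loose signature
assert get_splitter_loose([], axsi=1) == 0

# The strict signature rejects it up front
try:
    get_splitter_strict([], None, 1, axsi=1)
except TypeError:
    print("typo caught at call site")
```

Explicit signatures also make the function self-documenting and type-annotatable, which is the point of the cleanup.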
CLN: 29547 replace old string formatting 1 | diff --git a/pandas/tests/extension/decimal/test_decimal.py b/pandas/tests/extension/decimal/test_decimal.py
index bd9b77a2bc419..a78e4bb34e42a 100644
--- a/pandas/tests/extension/decimal/test_decimal.py
+++ b/pandas/tests/extension/decimal/test_decimal.py
@@ -99,7 +99,7 @@ def assert_frame_equal(cls, left, right, *args, **kwargs):
check_names=kwargs.get("check_names", True),
check_exact=kwargs.get("check_exact", False),
check_categorical=kwargs.get("check_categorical", True),
- obj="{obj}.columns".format(obj=kwargs.get("obj", "DataFrame")),
+ obj=f"{kwargs.get('obj', 'DataFrame')}.columns",
)
decimals = (left.dtypes == "decimal").index
diff --git a/pandas/tests/frame/indexing/test_categorical.py b/pandas/tests/frame/indexing/test_categorical.py
index a29c193676db2..3a472a8b58b6c 100644
--- a/pandas/tests/frame/indexing/test_categorical.py
+++ b/pandas/tests/frame/indexing/test_categorical.py
@@ -14,9 +14,7 @@ def test_assignment(self):
df = DataFrame(
{"value": np.array(np.random.randint(0, 10000, 100), dtype="int32")}
)
- labels = Categorical(
- ["{0} - {1}".format(i, i + 499) for i in range(0, 10000, 500)]
- )
+ labels = Categorical([f"{i} - {i + 499}" for i in range(0, 10000, 500)])
df = df.sort_values(by=["value"], ascending=True)
s = pd.cut(df.value, range(0, 10500, 500), right=False, labels=labels)
@@ -348,7 +346,7 @@ def test_assigning_ops(self):
def test_functions_no_warnings(self):
df = DataFrame({"value": np.random.randint(0, 100, 20)})
- labels = ["{0} - {1}".format(i, i + 9) for i in range(0, 100, 10)]
+ labels = [f"{i} - {i + 9}" for i in range(0, 100, 10)]
with tm.assert_produces_warning(False):
df["group"] = pd.cut(
df.value, range(0, 105, 10), right=False, labels=labels
diff --git a/pandas/tests/frame/methods/test_describe.py b/pandas/tests/frame/methods/test_describe.py
index 127233ed2713e..8a75e80a12f52 100644
--- a/pandas/tests/frame/methods/test_describe.py
+++ b/pandas/tests/frame/methods/test_describe.py
@@ -86,7 +86,7 @@ def test_describe_bool_frame(self):
def test_describe_categorical(self):
df = DataFrame({"value": np.random.randint(0, 10000, 100)})
- labels = ["{0} - {1}".format(i, i + 499) for i in range(0, 10000, 500)]
+ labels = [f"{i} - {i + 499}" for i in range(0, 10000, 500)]
cat_labels = Categorical(labels, labels)
df = df.sort_values(by=["value"], ascending=True)
diff --git a/pandas/tests/frame/methods/test_duplicated.py b/pandas/tests/frame/methods/test_duplicated.py
index 72eec8753315c..38b9d7fd049ab 100644
--- a/pandas/tests/frame/methods/test_duplicated.py
+++ b/pandas/tests/frame/methods/test_duplicated.py
@@ -22,9 +22,7 @@ def test_duplicated_do_not_fail_on_wide_dataframes():
# gh-21524
# Given the wide dataframe with a lot of columns
# with different (important!) values
- data = {
- "col_{0:02d}".format(i): np.random.randint(0, 1000, 30000) for i in range(100)
- }
+ data = {f"col_{i:02d}": np.random.randint(0, 1000, 30000) for i in range(100)}
df = DataFrame(data).T
result = df.duplicated()
diff --git a/pandas/tests/frame/methods/test_to_dict.py b/pandas/tests/frame/methods/test_to_dict.py
index 7b0adceb57668..40393721c4ac6 100644
--- a/pandas/tests/frame/methods/test_to_dict.py
+++ b/pandas/tests/frame/methods/test_to_dict.py
@@ -236,9 +236,9 @@ def test_to_dict_numeric_names(self):
def test_to_dict_wide(self):
# GH#24939
- df = DataFrame({("A_{:d}".format(i)): [i] for i in range(256)})
+ df = DataFrame({(f"A_{i:d}"): [i] for i in range(256)})
result = df.to_dict("records")[0]
- expected = {"A_{:d}".format(i): i for i in range(256)}
+ expected = {f"A_{i:d}": i for i in range(256)}
assert result == expected
def test_to_dict_orient_dtype(self):
diff --git a/pandas/tests/frame/test_alter_axes.py b/pandas/tests/frame/test_alter_axes.py
index 602ea9ca0471a..0c19a38bb5fa2 100644
--- a/pandas/tests/frame/test_alter_axes.py
+++ b/pandas/tests/frame/test_alter_axes.py
@@ -382,8 +382,9 @@ class Thing(frozenset):
# need to stabilize repr for KeyError (due to random order in sets)
def __repr__(self) -> str:
tmp = sorted(self)
+ joined_reprs = ", ".join(map(repr, tmp))
# double curly brace prints one brace in format string
- return "frozenset({{{}}})".format(", ".join(map(repr, tmp)))
+ return f"frozenset({{{joined_reprs}}})"
thing1 = Thing(["One", "red"])
thing2 = Thing(["Two", "blue"])
diff --git a/pandas/tests/frame/test_api.py b/pandas/tests/frame/test_api.py
index 17cc50661e3cb..a021dd91a7d26 100644
--- a/pandas/tests/frame/test_api.py
+++ b/pandas/tests/frame/test_api.py
@@ -46,19 +46,19 @@ def test_get_value(self, float_frame):
def test_add_prefix_suffix(self, float_frame):
with_prefix = float_frame.add_prefix("foo#")
- expected = pd.Index(["foo#{c}".format(c=c) for c in float_frame.columns])
+ expected = pd.Index([f"foo#{c}" for c in float_frame.columns])
tm.assert_index_equal(with_prefix.columns, expected)
with_suffix = float_frame.add_suffix("#foo")
- expected = pd.Index(["{c}#foo".format(c=c) for c in float_frame.columns])
+ expected = pd.Index([f"{c}#foo" for c in float_frame.columns])
tm.assert_index_equal(with_suffix.columns, expected)
with_pct_prefix = float_frame.add_prefix("%")
- expected = pd.Index(["%{c}".format(c=c) for c in float_frame.columns])
+ expected = pd.Index([f"%{c}" for c in float_frame.columns])
tm.assert_index_equal(with_pct_prefix.columns, expected)
with_pct_suffix = float_frame.add_suffix("%")
- expected = pd.Index(["{c}%".format(c=c) for c in float_frame.columns])
+ expected = pd.Index([f"{c}%" for c in float_frame.columns])
tm.assert_index_equal(with_pct_suffix.columns, expected)
def test_get_axis(self, float_frame):
diff --git a/pandas/tests/frame/test_constructors.py b/pandas/tests/frame/test_constructors.py
index 5f4c78449f71d..8c9b7cd060059 100644
--- a/pandas/tests/frame/test_constructors.py
+++ b/pandas/tests/frame/test_constructors.py
@@ -278,7 +278,7 @@ def test_constructor_ordereddict(self):
nitems = 100
nums = list(range(nitems))
random.shuffle(nums)
- expected = ["A{i:d}".format(i=i) for i in nums]
+ expected = [f"A{i:d}" for i in nums]
df = DataFrame(OrderedDict(zip(expected, [[0]] * nitems)))
assert expected == list(df.columns)
diff --git a/pandas/tests/frame/test_dtypes.py b/pandas/tests/frame/test_dtypes.py
index 966f0d416676c..8b63f0614eebf 100644
--- a/pandas/tests/frame/test_dtypes.py
+++ b/pandas/tests/frame/test_dtypes.py
@@ -702,7 +702,7 @@ def test_astype_categorical(self, dtype):
@pytest.mark.parametrize("cls", [CategoricalDtype, DatetimeTZDtype, IntervalDtype])
def test_astype_categoricaldtype_class_raises(self, cls):
df = DataFrame({"A": ["a", "a", "b", "c"]})
- xpr = "Expected an instance of {}".format(cls.__name__)
+ xpr = f"Expected an instance of {cls.__name__}"
with pytest.raises(TypeError, match=xpr):
df.astype({"A": cls})
@@ -827,7 +827,7 @@ def test_df_where_change_dtype(self):
def test_astype_from_datetimelike_to_objectt(self, dtype, unit):
# tests astype to object dtype
# gh-19223 / gh-12425
- dtype = "{}[{}]".format(dtype, unit)
+ dtype = f"{dtype}[{unit}]"
arr = np.array([[1, 2, 3]], dtype=dtype)
df = DataFrame(arr)
result = df.astype(object)
@@ -844,7 +844,7 @@ def test_astype_from_datetimelike_to_objectt(self, dtype, unit):
def test_astype_to_datetimelike_unit(self, arr_dtype, dtype, unit):
# tests all units from numeric origination
# gh-19223 / gh-12425
- dtype = "{}[{}]".format(dtype, unit)
+ dtype = f"{dtype}[{unit}]"
arr = np.array([[1, 2, 3]], dtype=arr_dtype)
df = DataFrame(arr)
result = df.astype(dtype)
@@ -856,7 +856,7 @@ def test_astype_to_datetimelike_unit(self, arr_dtype, dtype, unit):
def test_astype_to_datetime_unit(self, unit):
# tests all units from datetime origination
# gh-19223
- dtype = "M8[{}]".format(unit)
+ dtype = f"M8[{unit}]"
arr = np.array([[1, 2, 3]], dtype=dtype)
df = DataFrame(arr)
result = df.astype(dtype)
@@ -868,7 +868,7 @@ def test_astype_to_datetime_unit(self, unit):
def test_astype_to_timedelta_unit_ns(self, unit):
# preserver the timedelta conversion
# gh-19223
- dtype = "m8[{}]".format(unit)
+ dtype = f"m8[{unit}]"
arr = np.array([[1, 2, 3]], dtype=dtype)
df = DataFrame(arr)
result = df.astype(dtype)
@@ -880,7 +880,7 @@ def test_astype_to_timedelta_unit_ns(self, unit):
def test_astype_to_timedelta_unit(self, unit):
# coerce to float
# gh-19223
- dtype = "m8[{}]".format(unit)
+ dtype = f"m8[{unit}]"
arr = np.array([[1, 2, 3]], dtype=dtype)
df = DataFrame(arr)
result = df.astype(dtype)
@@ -892,21 +892,21 @@ def test_astype_to_timedelta_unit(self, unit):
def test_astype_to_incorrect_datetimelike(self, unit):
# trying to astype a m to a M, or vice-versa
# gh-19224
- dtype = "M8[{}]".format(unit)
- other = "m8[{}]".format(unit)
+ dtype = f"M8[{unit}]"
+ other = f"m8[{unit}]"
df = DataFrame(np.array([[1, 2, 3]], dtype=dtype))
msg = (
- r"cannot astype a datetimelike from \[datetime64\[ns\]\] to "
- r"\[timedelta64\[{}\]\]"
- ).format(unit)
+ fr"cannot astype a datetimelike from \[datetime64\[ns\]\] to "
+ fr"\[timedelta64\[{unit}\]\]"
+ )
with pytest.raises(TypeError, match=msg):
df.astype(other)
msg = (
- r"cannot astype a timedelta from \[timedelta64\[ns\]\] to "
- r"\[datetime64\[{}\]\]"
- ).format(unit)
+ fr"cannot astype a timedelta from \[timedelta64\[ns\]\] to "
+ fr"\[datetime64\[{unit}\]\]"
+ )
df = DataFrame(np.array([[1, 2, 3]], dtype=other))
with pytest.raises(TypeError, match=msg):
df.astype(dtype)
diff --git a/pandas/tests/frame/test_join.py b/pandas/tests/frame/test_join.py
index c6e28f3c64f12..8c388a887158f 100644
--- a/pandas/tests/frame/test_join.py
+++ b/pandas/tests/frame/test_join.py
@@ -161,7 +161,7 @@ def test_join_overlap(float_frame):
def test_join_period_index(frame_with_period_index):
- other = frame_with_period_index.rename(columns=lambda x: "{key}{key}".format(key=x))
+ other = frame_with_period_index.rename(columns=lambda key: f"{key}{key}")
joined_values = np.concatenate([frame_with_period_index.values] * 2, axis=1)
| I split PR #31844 into batches; this is the first one
For this PR I ran the command `grep -l -R -e '%s' -e '%d' -e '\.format(' --include=*.{py,pyx} pandas/`, checked every file it returned for `.format(`, and replaced the old string formatting with the corresponding f-strings, working toward a full cleanup of [#29547](https://github.com/pandas-dev/pandas/issues/29547). I may have missed something, so it is a good idea to double-check just in case
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
| https://api.github.com/repos/pandas-dev/pandas/pulls/31914 | 2020-02-12T04:56:23Z | 2020-02-12T17:18:50Z | 2020-02-12T17:18:50Z | 2020-02-21T01:30:03Z |
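The `.format()` → f-string conversion above is purely mechanical; a minimal standalone illustration of the equivalence (not pandas code):

```python
unit = "ns"

# old style, as removed by the diff
old = "m8[{}]".format(unit)

# new style, as introduced by the diff
new = f"m8[{unit}]"

assert old == new == "m8[ns]"

# raw f-strings (``fr"..."``) combine both prefixes, which is why the
# regex match patterns in the diff keep their backslashes intact
pattern = fr"\[timedelta64\[{unit}\]\]"
assert pattern == r"\[timedelta64\[ns\]\]"
```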
CLN: implement _getitem_tuple_same_dim | diff --git a/pandas/core/indexing.py b/pandas/core/indexing.py
index f498e1696ea5b..b3777e949a08c 100755
--- a/pandas/core/indexing.py
+++ b/pandas/core/indexing.py
@@ -718,6 +718,25 @@ def _handle_lowerdim_multi_index_axis0(self, tup: Tuple):
return None
+ def _getitem_tuple_same_dim(self, tup: Tuple):
+ """
+ Index with indexers that should return an object of the same dimension
+ as self.obj.
+
+ This is only called after a failed call to _getitem_lowerdim.
+ """
+ retval = self.obj
+ for i, key in enumerate(tup):
+ if com.is_null_slice(key):
+ continue
+
+ retval = getattr(retval, self.name)._getitem_axis(key, axis=i)
+ # We should never have retval.ndim < self.ndim, as that should
+ # be handled by the _getitem_lowerdim call above.
+ assert retval.ndim == self.ndim
+
+ return retval
+
def _getitem_lowerdim(self, tup: Tuple):
# we can directly get the axis result since the axis is specified
@@ -1049,15 +1068,7 @@ def _getitem_tuple(self, tup: Tuple):
if self._multi_take_opportunity(tup):
return self._multi_take(tup)
- # no shortcut needed
- retval = self.obj
- for i, key in enumerate(tup):
- if com.is_null_slice(key):
- continue
-
- retval = getattr(retval, self.name)._getitem_axis(key, axis=i)
-
- return retval
+ return self._getitem_tuple_same_dim(tup)
def _getitem_axis(self, key, axis: int):
key = item_from_zerodim(key)
@@ -1468,25 +1479,7 @@ def _getitem_tuple(self, tup: Tuple):
except IndexingError:
pass
- retval = self.obj
- axis = 0
- for i, key in enumerate(tup):
- if com.is_null_slice(key):
- axis += 1
- continue
-
- retval = getattr(retval, self.name)._getitem_axis(key, axis=axis)
-
- # if the dim was reduced, then pass a lower-dim the next time
- if retval.ndim < self.ndim:
- # TODO: this is never reached in tests; can we confirm that
- # it is impossible?
- axis -= 1
-
- # try to get for the next axis
- axis += 1
-
- return retval
+ return self._getitem_tuple_same_dim(tup)
def _get_list_axis(self, key, axis: int):
"""
| - [ ] closes #xxxx
- [ ] tests added / passed
- [ ] passes `black pandas`
- [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/31911 | 2020-02-12T03:42:23Z | 2020-02-12T17:42:37Z | 2020-02-12T17:42:37Z | 2020-02-12T17:49:41Z |
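The refactor above keys off `com.is_null_slice` to skip axes indexed with a full slice. A sketch of that predicate and of the loop's skip logic (an approximation for illustration, not the actual pandas source):

```python
def is_null_slice(obj) -> bool:
    # True for a bare ``:`` -- i.e. slice(None, None, None) -- which selects
    # everything along an axis and can therefore be skipped entirely.
    return (
        isinstance(obj, slice)
        and obj.start is None
        and obj.stop is None
        and obj.step is None
    )

# mirror the loop in _getitem_tuple_same_dim: only non-null keys
# actually trigger an _getitem_axis call for their axis
tup = (slice(None), [0, 2])
indexed_axes = [i for i, key in enumerate(tup) if not is_null_slice(key)]
assert indexed_axes == [1]  # axis 0 is skipped, axis 1 is indexed
```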
BUG: Handle NA in assert_numpy_array_equal | diff --git a/pandas/_libs/lib.pyx b/pandas/_libs/lib.pyx
index 9702eb4615909..eaa73c00705ea 100644
--- a/pandas/_libs/lib.pyx
+++ b/pandas/_libs/lib.pyx
@@ -571,6 +571,8 @@ def array_equivalent_object(left: object[:], right: object[:]) -> bool:
if PyArray_Check(x) and PyArray_Check(y):
if not array_equivalent_object(x, y):
return False
+ elif (x is C_NA) ^ (y is C_NA):
+ return False
elif not (PyObject_RichCompareBool(x, y, Py_EQ) or
(x is None or is_nan(x)) and (y is None or is_nan(y))):
return False
diff --git a/pandas/tests/util/test_assert_numpy_array_equal.py b/pandas/tests/util/test_assert_numpy_array_equal.py
index c8ae9ebdd8651..d29ddedd2fdd6 100644
--- a/pandas/tests/util/test_assert_numpy_array_equal.py
+++ b/pandas/tests/util/test_assert_numpy_array_equal.py
@@ -1,6 +1,7 @@
import numpy as np
import pytest
+import pandas as pd
from pandas import Timestamp
import pandas._testing as tm
@@ -175,3 +176,38 @@ def test_numpy_array_equal_copy_flag(other_type, check_same):
tm.assert_numpy_array_equal(a, other, check_same=check_same)
else:
tm.assert_numpy_array_equal(a, other, check_same=check_same)
+
+
+def test_numpy_array_equal_contains_na():
+ # https://github.com/pandas-dev/pandas/issues/31881
+ a = np.array([True, False])
+ b = np.array([True, pd.NA], dtype=object)
+
+ msg = """numpy array are different
+
+numpy array values are different \\(50.0 %\\)
+\\[left\\]: \\[True, False\\]
+\\[right\\]: \\[True, <NA>\\]"""
+
+ with pytest.raises(AssertionError, match=msg):
+ tm.assert_numpy_array_equal(a, b)
+
+
+def test_numpy_array_equal_identical_na(nulls_fixture):
+ a = np.array([nulls_fixture], dtype=object)
+
+ tm.assert_numpy_array_equal(a, a)
+
+
+def test_numpy_array_equal_different_na():
+ a = np.array([np.nan], dtype=object)
+ b = np.array([pd.NA], dtype=object)
+
+ msg = """numpy array are different
+
+numpy array values are different \\(100.0 %\\)
+\\[left\\]: \\[nan\\]
+\\[right\\]: \\[<NA>\\]"""
+
+ with pytest.raises(AssertionError, match=msg):
+ tm.assert_numpy_array_equal(a, b)
| - [x] closes #31881
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
I think in the case where we have NA we want to use identity rather than equality for checking when two arrays are equal. | https://api.github.com/repos/pandas-dev/pandas/pulls/31910 | 2020-02-12T03:29:38Z | 2020-02-13T08:16:34Z | 2020-02-13T08:16:33Z | 2020-02-13T14:14:33Z
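The identity-vs-equality point is the crux: a missing-value sentinel whose `__eq__` propagates instead of returning a plain bool defeats an equality check, which is why the diff adds the `(x is C_NA) ^ (y is C_NA)` identity guard. A standalone sketch with a stand-in sentinel (`NA` here is illustrative, not `pandas.NA` itself):

```python
class _NAType:
    """Stand-in for a propagating missing-value singleton."""

    def __eq__(self, other):
        return self  # comparisons propagate NA rather than returning a bool

    def __bool__(self):
        raise TypeError("boolean value of NA is ambiguous")

    def __repr__(self):
        return "<NA>"

NA = _NAType()

def scalars_equivalent(x, y) -> bool:
    # identity check first, mirroring the ``(x is C_NA) ^ (y is C_NA)`` guard
    if (x is NA) ^ (y is NA):
        return False
    if x is NA and y is NA:
        return True
    return x == y

assert scalars_equivalent(NA, NA)
assert not scalars_equivalent(True, NA)  # equality alone could not decide this
assert scalars_equivalent(1, 1)
```

Without the guard, evaluating `True == NA` in a boolean context would raise rather than report the arrays as different.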
CLN: remove odious kludge | diff --git a/pandas/core/indexing.py b/pandas/core/indexing.py
index 44b3c318366d2..f498e1696ea5b 100755
--- a/pandas/core/indexing.py
+++ b/pandas/core/indexing.py
@@ -755,18 +755,11 @@ def _getitem_lowerdim(self, tup: Tuple):
new_key = tup[:i] + (_NS,) + tup[i + 1 :]
else:
+ # Note: the section.ndim == self.ndim check above
+ # rules out having DataFrame here, so we dont need to worry
+ # about transposing.
new_key = tup[:i] + tup[i + 1 :]
- # unfortunately need an odious kludge here because of
- # DataFrame transposing convention
- if (
- isinstance(section, ABCDataFrame)
- and i > 0
- and len(new_key) == 2
- ):
- a, b = new_key
- new_key = b, a
-
if len(new_key) == 1:
new_key = new_key[0]
| This is a legacy of Panel/ix AFAICT, was introduced in 2011. | https://api.github.com/repos/pandas-dev/pandas/pulls/31907 | 2020-02-12T01:57:37Z | 2020-02-12T04:17:27Z | 2020-02-12T04:17:27Z | 2020-02-12T15:57:58Z |
REF: implement unpack_1tuple to clean up Series.__getitem__ | diff --git a/pandas/core/indexers.py b/pandas/core/indexers.py
index cadae9da6092f..4fb42fce29e1a 100644
--- a/pandas/core/indexers.py
+++ b/pandas/core/indexers.py
@@ -270,6 +270,33 @@ def deprecate_ndim_indexing(result):
)
+def unpack_1tuple(tup):
+ """
+ If we have a length-1 tuple/list that contains a slice, unpack to just
+ the slice.
+
+ Notes
+ -----
+ The list case is deprecated.
+ """
+ if len(tup) == 1 and isinstance(tup[0], slice):
+ # if we don't have a MultiIndex, we may still be able to handle
+ # a 1-tuple. see test_1tuple_without_multiindex
+
+ if isinstance(tup, list):
+ # GH#31299
+ warnings.warn(
+ "Indexing with a single-item list containing a "
+ "slice is deprecated and will raise in a future "
+ "version. Pass a tuple instead.",
+ FutureWarning,
+ stacklevel=3,
+ )
+
+ return tup[0]
+ return tup
+
+
# -----------------------------------------------------------
# Public indexer validation
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 256586f3d36a1..15fe0bb98b536 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -72,7 +72,7 @@
sanitize_array,
)
from pandas.core.generic import NDFrame
-from pandas.core.indexers import maybe_convert_indices
+from pandas.core.indexers import maybe_convert_indices, unpack_1tuple
from pandas.core.indexes.accessors import CombinedDatetimelikeProperties
from pandas.core.indexes.api import (
Float64Index,
@@ -851,6 +851,8 @@ def __getitem__(self, key):
key_is_scalar = is_scalar(key)
if key_is_scalar:
key = self.index._convert_scalar_indexer(key, kind="getitem")
+ elif isinstance(key, (list, tuple)):
+ key = unpack_1tuple(key)
if key_is_scalar or isinstance(self.index, MultiIndex):
# Otherwise index.get_value will raise InvalidIndexError
@@ -893,16 +895,7 @@ def _get_with(self, key):
"supported, use the appropriate DataFrame column"
)
elif isinstance(key, tuple):
- try:
- return self._get_values_tuple(key)
- except ValueError:
- # if we don't have a MultiIndex, we may still be able to handle
- # a 1-tuple. see test_1tuple_without_multiindex
- if len(key) == 1:
- key = key[0]
- if isinstance(key, slice):
- return self._get_values(key)
- raise
+ return self._get_values_tuple(key)
if not isinstance(key, (list, np.ndarray, ExtensionArray, Series, Index)):
key = list(key)
@@ -924,26 +917,8 @@ def _get_with(self, key):
else:
return self.iloc[key]
- if isinstance(key, (list, tuple)):
- # TODO: de-dup with tuple case handled above?
+ if isinstance(key, list):
# handle the dup indexing case GH#4246
- if len(key) == 1 and isinstance(key[0], slice):
- # [slice(0, 5, None)] will break if you convert to ndarray,
- # e.g. as requested by np.median
- # FIXME: hack
- if isinstance(key, list):
- # GH#31299
- warnings.warn(
- "Indexing with a single-item list containing a "
- "slice is deprecated and will raise in a future "
- "version. Pass a tuple instead.",
- FutureWarning,
- stacklevel=3,
- )
- # TODO: use a message more like numpy's?
- key = tuple(key)
- return self._get_values(key)
-
return self.loc[key]
return self.reindex(key)
| Doing this up-front instead of in two places makes for a nice cleanup. There will be a small perf penalty for the cases that wouldn't otherwise need to do these checks.
| https://api.github.com/repos/pandas-dev/pandas/pulls/31906 | 2020-02-12T01:55:21Z | 2020-02-17T19:53:58Z | 2020-02-17T19:53:58Z | 2020-02-17T22:16:06Z |
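The behavior of the new helper can be reproduced standalone (body adapted from the diff above):

```python
import warnings

def unpack_1tuple(tup):
    """If a length-1 tuple/list contains a slice, unpack to just the slice."""
    if len(tup) == 1 and isinstance(tup[0], slice):
        if isinstance(tup, list):
            # GH#31299: the single-item-list form is deprecated
            warnings.warn(
                "Indexing with a single-item list containing a slice "
                "is deprecated and will raise in a future version. "
                "Pass a tuple instead.",
                FutureWarning,
                stacklevel=3,
            )
        return tup[0]
    return tup

assert unpack_1tuple((slice(0, 5),)) == slice(0, 5)   # tuple: unpacked
assert unpack_1tuple((1, 2)) == (1, 2)                # len != 1: unchanged

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    assert unpack_1tuple([slice(0, 5)]) == slice(0, 5)  # list: unpacked + warns
assert issubclass(caught[-1].category, FutureWarning)
```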
BUG: using loc[int] with object index | diff --git a/doc/source/whatsnew/v1.1.0.rst b/doc/source/whatsnew/v1.1.0.rst
index 34a67836f9675..7449c62a5ad31 100644
--- a/doc/source/whatsnew/v1.1.0.rst
+++ b/doc/source/whatsnew/v1.1.0.rst
@@ -56,6 +56,8 @@ Other API changes
- :meth:`Series.describe` will now show distribution percentiles for ``datetime`` dtypes, statistics ``first`` and ``last``
will now be ``min`` and ``max`` to match with numeric dtypes in :meth:`DataFrame.describe` (:issue:`30164`)
- :meth:`Groupby.groups` now returns an abbreviated representation when called on large dataframes (:issue:`1135`)
+- ``loc`` lookups with an object-dtype :class:`Index` and an integer key will now raise ``KeyError`` instead of ``TypeError`` when key is missing (:issue:`31905`)
+-
Backwards incompatible API changes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -160,6 +162,8 @@ Indexing
- Bug in :meth:`DatetimeIndex.get_loc` raising ``KeyError`` with converted-integer key instead of the user-passed key (:issue:`31425`)
- Bug in :meth:`Series.xs` incorrectly returning ``Timestamp`` instead of ``datetime64`` in some object-dtype cases (:issue:`31630`)
- Bug in :meth:`DataFrame.iat` incorrectly returning ``Timestamp`` instead of ``datetime`` in some object-dtype cases (:issue:`32809`)
+- Bug in :meth:`Series.loc` and :meth:`DataFrame.loc` when indexing with an integer key on a object-dtype :class:`Index` that is not all-integers (:issue:`31905`)
+-
Missing
^^^^^^^
diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index 14ee21ea5614c..32ebecf1f223d 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -3128,7 +3128,7 @@ def _convert_scalar_indexer(self, key, kind: str_t):
self._invalid_indexer("label", key)
elif kind == "loc" and is_integer(key):
- if not self.holds_integer():
+ if not (is_integer_dtype(self.dtype) or is_object_dtype(self.dtype)):
self._invalid_indexer("label", key)
return key
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 9c0ff9780da3e..eac92695ed075 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -969,9 +969,11 @@ def _get_value(self, label, takeable: bool = False):
if takeable:
return self._values[label]
+ # Similar to Index.get_value, but we do not fall back to positional
+ loc = self.index.get_loc(label)
# We assume that _convert_scalar_indexer has already been called,
# with kind="loc", if necessary, by the time we get here
- return self.index.get_value(self, label)
+ return self.index._get_values_for_loc(self, loc, label)
def __setitem__(self, key, value):
key = com.apply_if_callable(key, self)
diff --git a/pandas/tests/indexing/test_categorical.py b/pandas/tests/indexing/test_categorical.py
index da935b1c911d0..8a8ac584c16c2 100644
--- a/pandas/tests/indexing/test_categorical.py
+++ b/pandas/tests/indexing/test_categorical.py
@@ -82,11 +82,7 @@ def test_loc_scalar(self):
with pytest.raises(TypeError, match=msg):
df.loc["d", "C"] = 10
- msg = (
- "cannot do label indexing on CategoricalIndex with these "
- r"indexers \[1\] of type int"
- )
- with pytest.raises(TypeError, match=msg):
+ with pytest.raises(KeyError, match="^1$"):
df.loc[1]
def test_getitem_scalar(self):
diff --git a/pandas/tests/indexing/test_loc.py b/pandas/tests/indexing/test_loc.py
index 71d85ed8bda9b..276d11a67ad18 100644
--- a/pandas/tests/indexing/test_loc.py
+++ b/pandas/tests/indexing/test_loc.py
@@ -16,7 +16,7 @@ class TestLoc(Base):
def test_loc_getitem_int(self):
# int label
- self.check_result("loc", 2, typs=["labels"], fails=TypeError)
+ self.check_result("loc", 2, typs=["labels"], fails=KeyError)
def test_loc_getitem_label(self):
@@ -34,7 +34,7 @@ def test_loc_getitem_label_out_of_range(self):
self.check_result(
"loc", 20, typs=["ints", "uints", "mixed"], fails=KeyError,
)
- self.check_result("loc", 20, typs=["labels"], fails=TypeError)
+ self.check_result("loc", 20, typs=["labels"], fails=KeyError)
self.check_result("loc", 20, typs=["ts"], axes=0, fails=TypeError)
self.check_result("loc", 20, typs=["floats"], axes=0, fails=KeyError)
@@ -967,3 +967,11 @@ def test_loc_set_dataframe_multiindex():
result = expected.copy()
result.loc[0, [(0, 1)]] = result.loc[0, [(0, 1)]]
tm.assert_frame_equal(result, expected)
+
+
+def test_loc_mixed_int_float():
+ # GH#19456
+ ser = pd.Series(range(2), pd.Index([1, 2.0], dtype=object))
+
+ result = ser.loc[1]
+ assert result == 0
diff --git a/pandas/tests/indexing/test_scalar.py b/pandas/tests/indexing/test_scalar.py
index c4750778e2eb8..25939e63c256b 100644
--- a/pandas/tests/indexing/test_scalar.py
+++ b/pandas/tests/indexing/test_scalar.py
@@ -138,16 +138,12 @@ def test_series_at_raises_type_error(self):
result = ser.loc["a"]
assert result == 1
- msg = (
- "cannot do label indexing on Index "
- r"with these indexers \[0\] of type int"
- )
- with pytest.raises(TypeError, match=msg):
+ with pytest.raises(KeyError, match="^0$"):
ser.at[0]
- with pytest.raises(TypeError, match=msg):
+ with pytest.raises(KeyError, match="^0$"):
ser.loc[0]
- def test_frame_raises_type_error(self):
+ def test_frame_raises_key_error(self):
# GH#31724 .at should match .loc
df = DataFrame({"A": [1, 2, 3]}, index=list("abc"))
result = df.at["a", "A"]
@@ -155,13 +151,9 @@ def test_frame_raises_type_error(self):
result = df.loc["a", "A"]
assert result == 1
- msg = (
- "cannot do label indexing on Index "
- r"with these indexers \[0\] of type int"
- )
- with pytest.raises(TypeError, match=msg):
+ with pytest.raises(KeyError, match="^0$"):
df.at["a", 0]
- with pytest.raises(TypeError, match=msg):
+ with pytest.raises(KeyError, match="^0$"):
df.loc["a", 0]
def test_series_at_raises_key_error(self):
| - [x] closes #19456
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
This causes us to raise KeyError instead of TypeError in a couple of places, consistent with #31867.
Also note this leaves us with only one non-plotting usage of `holds_integer`, and it wouldn't surprise me if that one is subtly causing problems too. | https://api.github.com/repos/pandas-dev/pandas/pulls/31905 | 2020-02-12T01:32:32Z | 2020-02-22T17:53:09Z | 2020-02-22T17:53:08Z | 2020-02-22T17:53:21Z
TST: parametrize eval tests | diff --git a/pandas/tests/frame/test_query_eval.py b/pandas/tests/frame/test_query_eval.py
index 703e05998e93c..bf9eeb532b43b 100644
--- a/pandas/tests/frame/test_query_eval.py
+++ b/pandas/tests/frame/test_query_eval.py
@@ -78,45 +78,48 @@ def test_query_numexpr(self):
class TestDataFrameEval:
- def test_ops(self):
+
+ # smaller hits python, larger hits numexpr
+ @pytest.mark.parametrize("n", [4, 4000])
+ @pytest.mark.parametrize(
+ "op_str,op,rop",
+ [
+ ("+", "__add__", "__radd__"),
+ ("-", "__sub__", "__rsub__"),
+ ("*", "__mul__", "__rmul__"),
+ ("/", "__truediv__", "__rtruediv__"),
+ ],
+ )
+ def test_ops(self, op_str, op, rop, n):
# tst ops and reversed ops in evaluation
# GH7198
- # smaller hits python, larger hits numexpr
- for n in [4, 4000]:
-
- df = DataFrame(1, index=range(n), columns=list("abcd"))
- df.iloc[0] = 2
- m = df.mean()
+ df = DataFrame(1, index=range(n), columns=list("abcd"))
+ df.iloc[0] = 2
+ m = df.mean()
- for op_str, op, rop in [
- ("+", "__add__", "__radd__"),
- ("-", "__sub__", "__rsub__"),
- ("*", "__mul__", "__rmul__"),
- ("/", "__truediv__", "__rtruediv__"),
- ]:
-
- base = DataFrame( # noqa
- np.tile(m.values, n).reshape(n, -1), columns=list("abcd")
- )
+ base = DataFrame( # noqa
+ np.tile(m.values, n).reshape(n, -1), columns=list("abcd")
+ )
- expected = eval("base{op}df".format(op=op_str))
+ expected = eval(f"base {op_str} df")
- # ops as strings
- result = eval("m{op}df".format(op=op_str))
- tm.assert_frame_equal(result, expected)
+ # ops as strings
+ result = eval(f"m {op_str} df")
+ tm.assert_frame_equal(result, expected)
- # these are commutative
- if op in ["+", "*"]:
- result = getattr(df, op)(m)
- tm.assert_frame_equal(result, expected)
+ # these are commutative
+ if op in ["+", "*"]:
+ result = getattr(df, op)(m)
+ tm.assert_frame_equal(result, expected)
- # these are not
- elif op in ["-", "/"]:
- result = getattr(df, rop)(m)
- tm.assert_frame_equal(result, expected)
+ # these are not
+ elif op in ["-", "/"]:
+ result = getattr(df, rop)(m)
+ tm.assert_frame_equal(result, expected)
+ def test_dataframe_sub_numexpr_path(self):
# GH7192: Note we need a large number of rows to ensure this
# goes through the numexpr path
df = DataFrame(dict(A=np.random.randn(25000)))
@@ -451,9 +454,7 @@ def test_date_query_with_non_date(self):
for op in ["<", ">", "<=", ">="]:
with pytest.raises(TypeError):
- df.query(
- "dates {op} nondate".format(op=op), parser=parser, engine=engine
- )
+ df.query(f"dates {op} nondate", parser=parser, engine=engine)
def test_query_syntax_error(self):
engine, parser = self.engine, self.parser
@@ -687,10 +688,9 @@ def test_inf(self):
n = 10
df = DataFrame({"a": np.random.rand(n), "b": np.random.rand(n)})
df.loc[::2, 0] = np.inf
- ops = "==", "!="
- d = dict(zip(ops, (operator.eq, operator.ne)))
+ d = {"==": operator.eq, "!=": operator.ne}
for op, f in d.items():
- q = "a {op} inf".format(op=op)
+ q = f"a {op} inf"
expected = df[f(df.a, np.inf)]
result = df.query(q, engine=self.engine, parser=self.parser)
tm.assert_frame_equal(result, expected)
@@ -854,7 +854,7 @@ def test_str_query_method(self, parser, engine):
ops = 2 * ([eq] + [ne])
for lhs, op, rhs in zip(lhs, ops, rhs):
- ex = "{lhs} {op} {rhs}".format(lhs=lhs, op=op, rhs=rhs)
+ ex = f"{lhs} {op} {rhs}"
msg = r"'(Not)?In' nodes are not implemented"
with pytest.raises(NotImplementedError, match=msg):
df.query(
@@ -895,7 +895,7 @@ def test_str_list_query_method(self, parser, engine):
ops = 2 * ([eq] + [ne])
for lhs, op, rhs in zip(lhs, ops, rhs):
- ex = "{lhs} {op} {rhs}".format(lhs=lhs, op=op, rhs=rhs)
+ ex = f"{lhs} {op} {rhs}"
with pytest.raises(NotImplementedError):
df.query(ex, engine=engine, parser=parser)
else:
@@ -1042,7 +1042,7 @@ def test_invalid_type_for_operator_raises(self, parser, engine, op):
msg = r"unsupported operand type\(s\) for .+: '.+' and '.+'"
with pytest.raises(TypeError, match=msg):
- df.eval("a {0} b".format(op), engine=engine, parser=parser)
+ df.eval(f"a {op} b", engine=engine, parser=parser)
class TestDataFrameQueryBacktickQuoting:
| https://api.github.com/repos/pandas-dev/pandas/pulls/31901 | 2020-02-11T22:49:55Z | 2020-02-12T01:11:13Z | 2020-02-12T01:11:12Z | 2020-02-12T01:15:54Z | |
TST: parametrize generic/internals tests | diff --git a/pandas/tests/generic/test_frame.py b/pandas/tests/generic/test_frame.py
index d8f4257566f84..dca65152e82db 100644
--- a/pandas/tests/generic/test_frame.py
+++ b/pandas/tests/generic/test_frame.py
@@ -32,19 +32,20 @@ def test_rename_mi(self):
)
df.rename(str.lower)
- def test_set_axis_name(self):
+ @pytest.mark.parametrize("func", ["_set_axis_name", "rename_axis"])
+ def test_set_axis_name(self, func):
df = pd.DataFrame([[1, 2], [3, 4]])
- funcs = ["_set_axis_name", "rename_axis"]
- for func in funcs:
- result = methodcaller(func, "foo")(df)
- assert df.index.name is None
- assert result.index.name == "foo"
- result = methodcaller(func, "cols", axis=1)(df)
- assert df.columns.name is None
- assert result.columns.name == "cols"
+ result = methodcaller(func, "foo")(df)
+ assert df.index.name is None
+ assert result.index.name == "foo"
- def test_set_axis_name_mi(self):
+ result = methodcaller(func, "cols", axis=1)(df)
+ assert df.columns.name is None
+ assert result.columns.name == "cols"
+
+ @pytest.mark.parametrize("func", ["_set_axis_name", "rename_axis"])
+ def test_set_axis_name_mi(self, func):
df = DataFrame(
np.empty((3, 3)),
index=MultiIndex.from_tuples([("A", x) for x in list("aBc")]),
@@ -52,15 +53,14 @@ def test_set_axis_name_mi(self):
)
level_names = ["L1", "L2"]
- funcs = ["_set_axis_name", "rename_axis"]
- for func in funcs:
- result = methodcaller(func, level_names)(df)
- assert result.index.names == level_names
- assert result.columns.names == [None, None]
- result = methodcaller(func, level_names, axis=1)(df)
- assert result.columns.names == ["L1", "L2"]
- assert result.index.names == [None, None]
+ result = methodcaller(func, level_names)(df)
+ assert result.index.names == level_names
+ assert result.columns.names == [None, None]
+
+ result = methodcaller(func, level_names, axis=1)(df)
+ assert result.columns.names == ["L1", "L2"]
+ assert result.index.names == [None, None]
def test_nonzero_single_element(self):
@@ -185,36 +185,35 @@ def test_deepcopy_empty(self):
# formerly in Generic but only test DataFrame
class TestDataFrame2:
- def test_validate_bool_args(self):
+ @pytest.mark.parametrize("value", [1, "True", [1, 2, 3], 5.0])
+ def test_validate_bool_args(self, value):
df = DataFrame({"a": [1, 2, 3], "b": [4, 5, 6]})
- invalid_values = [1, "True", [1, 2, 3], 5.0]
- for value in invalid_values:
- with pytest.raises(ValueError):
- super(DataFrame, df).rename_axis(
- mapper={"a": "x", "b": "y"}, axis=1, inplace=value
- )
+ with pytest.raises(ValueError):
+ super(DataFrame, df).rename_axis(
+ mapper={"a": "x", "b": "y"}, axis=1, inplace=value
+ )
- with pytest.raises(ValueError):
- super(DataFrame, df).drop("a", axis=1, inplace=value)
+ with pytest.raises(ValueError):
+ super(DataFrame, df).drop("a", axis=1, inplace=value)
- with pytest.raises(ValueError):
- super(DataFrame, df)._consolidate(inplace=value)
+ with pytest.raises(ValueError):
+ super(DataFrame, df)._consolidate(inplace=value)
- with pytest.raises(ValueError):
- super(DataFrame, df).fillna(value=0, inplace=value)
+ with pytest.raises(ValueError):
+ super(DataFrame, df).fillna(value=0, inplace=value)
- with pytest.raises(ValueError):
- super(DataFrame, df).replace(to_replace=1, value=7, inplace=value)
+ with pytest.raises(ValueError):
+ super(DataFrame, df).replace(to_replace=1, value=7, inplace=value)
- with pytest.raises(ValueError):
- super(DataFrame, df).interpolate(inplace=value)
+ with pytest.raises(ValueError):
+ super(DataFrame, df).interpolate(inplace=value)
- with pytest.raises(ValueError):
- super(DataFrame, df)._where(cond=df.a > 2, inplace=value)
+ with pytest.raises(ValueError):
+ super(DataFrame, df)._where(cond=df.a > 2, inplace=value)
- with pytest.raises(ValueError):
- super(DataFrame, df).mask(cond=df.a > 2, inplace=value)
+ with pytest.raises(ValueError):
+ super(DataFrame, df).mask(cond=df.a > 2, inplace=value)
def test_unexpected_keyword(self):
# GH8597
@@ -243,23 +242,10 @@ class TestToXArray:
and LooseVersion(xarray.__version__) < LooseVersion("0.10.0"),
reason="xarray >= 0.10.0 required",
)
- @pytest.mark.parametrize(
- "index",
- [
- "FloatIndex",
- "IntIndex",
- "StringIndex",
- "UnicodeIndex",
- "DateIndex",
- "PeriodIndex",
- "CategoricalIndex",
- "TimedeltaIndex",
- ],
- )
+ @pytest.mark.parametrize("index", tm.all_index_generator(3))
def test_to_xarray_index_types(self, index):
from xarray import Dataset
- index = getattr(tm, f"make{index}")
df = DataFrame(
{
"a": list("abc"),
@@ -273,7 +259,7 @@ def test_to_xarray_index_types(self, index):
}
)
- df.index = index(3)
+ df.index = index
df.index.name = "foo"
df.columns.name = "bar"
result = df.to_xarray()
diff --git a/pandas/tests/generic/test_generic.py b/pandas/tests/generic/test_generic.py
index 1f4fd90d9b059..121d395730b67 100644
--- a/pandas/tests/generic/test_generic.py
+++ b/pandas/tests/generic/test_generic.py
@@ -258,39 +258,31 @@ def test_metadata_propagation(self):
self.check_metadata(v1 & v2)
self.check_metadata(v1 | v2)
- def test_head_tail(self):
+ @pytest.mark.parametrize("index", tm.all_index_generator(10))
+ def test_head_tail(self, index):
# GH5370
o = self._construct(shape=10)
- # check all index types
- for index in [
- tm.makeFloatIndex,
- tm.makeIntIndex,
- tm.makeStringIndex,
- tm.makeUnicodeIndex,
- tm.makeDateIndex,
- tm.makePeriodIndex,
- ]:
- axis = o._get_axis_name(0)
- setattr(o, axis, index(len(getattr(o, axis))))
+ axis = o._get_axis_name(0)
+ setattr(o, axis, index)
- o.head()
+ o.head()
- self._compare(o.head(), o.iloc[:5])
- self._compare(o.tail(), o.iloc[-5:])
+ self._compare(o.head(), o.iloc[:5])
+ self._compare(o.tail(), o.iloc[-5:])
- # 0-len
- self._compare(o.head(0), o.iloc[0:0])
- self._compare(o.tail(0), o.iloc[0:0])
+ # 0-len
+ self._compare(o.head(0), o.iloc[0:0])
+ self._compare(o.tail(0), o.iloc[0:0])
- # bounded
- self._compare(o.head(len(o) + 1), o)
- self._compare(o.tail(len(o) + 1), o)
+ # bounded
+ self._compare(o.head(len(o) + 1), o)
+ self._compare(o.tail(len(o) + 1), o)
- # neg index
- self._compare(o.head(-3), o.head(7))
- self._compare(o.tail(-3), o.tail(7))
+ # neg index
+ self._compare(o.head(-3), o.head(7))
+ self._compare(o.tail(-3), o.tail(7))
def test_sample(self):
# Fixes issue: 2419
@@ -469,16 +461,16 @@ def test_stat_unexpected_keyword(self):
with pytest.raises(TypeError, match=errmsg):
obj.any(epic=starwars) # logical_function
- def test_api_compat(self):
+ @pytest.mark.parametrize("func", ["sum", "cumsum", "any", "var"])
+ def test_api_compat(self, func):
# GH 12021
# compat for __name__, __qualname__
obj = self._construct(5)
- for func in ["sum", "cumsum", "any", "var"]:
- f = getattr(obj, func)
- assert f.__name__ == func
- assert f.__qualname__.endswith(func)
+ f = getattr(obj, func)
+ assert f.__name__ == func
+ assert f.__qualname__.endswith(func)
def test_stat_non_defaults_args(self):
obj = self._construct(5)
@@ -511,19 +503,17 @@ def test_truncate_out_of_bounds(self):
self._compare(big.truncate(before=0, after=3e6), big)
self._compare(big.truncate(before=-1, after=2e6), big)
- def test_copy_and_deepcopy(self):
+ @pytest.mark.parametrize(
+ "func",
+ [copy, deepcopy, lambda x: x.copy(deep=False), lambda x: x.copy(deep=True)],
+ )
+ @pytest.mark.parametrize("shape", [0, 1, 2])
+ def test_copy_and_deepcopy(self, shape, func):
# GH 15444
- for shape in [0, 1, 2]:
- obj = self._construct(shape)
- for func in [
- copy,
- deepcopy,
- lambda x: x.copy(deep=False),
- lambda x: x.copy(deep=True),
- ]:
- obj_copy = func(obj)
- assert obj_copy is not obj
- self._compare(obj_copy, obj)
+ obj = self._construct(shape)
+ obj_copy = func(obj)
+ assert obj_copy is not obj
+ self._compare(obj_copy, obj)
@pytest.mark.parametrize(
"periods,fill_method,limit,exp",
diff --git a/pandas/tests/generic/test_series.py b/pandas/tests/generic/test_series.py
index ce0daf8522687..5aafd83da78fd 100644
--- a/pandas/tests/generic/test_series.py
+++ b/pandas/tests/generic/test_series.py
@@ -38,29 +38,29 @@ def test_rename_mi(self):
)
s.rename(str.lower)
- def test_set_axis_name(self):
+ @pytest.mark.parametrize("func", ["rename_axis", "_set_axis_name"])
+ def test_set_axis_name(self, func):
s = Series([1, 2, 3], index=["a", "b", "c"])
- funcs = ["rename_axis", "_set_axis_name"]
name = "foo"
- for func in funcs:
- result = methodcaller(func, name)(s)
- assert s.index.name is None
- assert result.index.name == name
- def test_set_axis_name_mi(self):
+ result = methodcaller(func, name)(s)
+ assert s.index.name is None
+ assert result.index.name == name
+
+ @pytest.mark.parametrize("func", ["rename_axis", "_set_axis_name"])
+ def test_set_axis_name_mi(self, func):
s = Series(
[11, 21, 31],
index=MultiIndex.from_tuples(
[("A", x) for x in ["a", "B", "c"]], names=["l1", "l2"]
),
)
- funcs = ["rename_axis", "_set_axis_name"]
- for func in funcs:
- result = methodcaller(func, ["L1", "L2"])(s)
- assert s.index.name is None
- assert s.index.names == ["l1", "l2"]
- assert result.index.name is None
- assert result.index.names, ["L1", "L2"]
+
+ result = methodcaller(func, ["L1", "L2"])(s)
+ assert s.index.name is None
+ assert s.index.names == ["l1", "l2"]
+ assert result.index.name is None
+ assert result.index.names, ["L1", "L2"]
def test_set_axis_name_raises(self):
s = pd.Series([1])
@@ -230,24 +230,11 @@ class TestToXArray:
and LooseVersion(xarray.__version__) < LooseVersion("0.10.0"),
reason="xarray >= 0.10.0 required",
)
- @pytest.mark.parametrize(
- "index",
- [
- "FloatIndex",
- "IntIndex",
- "StringIndex",
- "UnicodeIndex",
- "DateIndex",
- "PeriodIndex",
- "TimedeltaIndex",
- "CategoricalIndex",
- ],
- )
+ @pytest.mark.parametrize("index", tm.all_index_generator(6))
def test_to_xarray_index_types(self, index):
from xarray import DataArray
- index = getattr(tm, f"make{index}")
- s = Series(range(6), index=index(6))
+ s = Series(range(6), index=index)
s.index.name = "foo"
result = s.to_xarray()
repr(result)
diff --git a/pandas/tests/internals/test_internals.py b/pandas/tests/internals/test_internals.py
index aa966caa63238..fe161a0da791a 100644
--- a/pandas/tests/internals/test_internals.py
+++ b/pandas/tests/internals/test_internals.py
@@ -376,9 +376,6 @@ def test_pickle(self, mgr):
mgr2 = tm.round_trip_pickle(mgr)
tm.assert_frame_equal(DataFrame(mgr), DataFrame(mgr2))
- # share ref_items
- # assert mgr2.blocks[0].ref_items is mgr2.blocks[1].ref_items
-
# GH2431
assert hasattr(mgr2, "_is_consolidated")
assert hasattr(mgr2, "_known_consolidated")
@@ -789,40 +786,39 @@ def test_equals(self):
bm2 = BlockManager(bm1.blocks[::-1], bm1.axes)
assert bm1.equals(bm2)
- def test_equals_block_order_different_dtypes(self):
- # GH 9330
-
- mgr_strings = [
+ @pytest.mark.parametrize(
+ "mgr_string",
+ [
"a:i8;b:f8", # basic case
"a:i8;b:f8;c:c8;d:b", # many types
"a:i8;e:dt;f:td;g:string", # more types
"a:i8;b:category;c:category2;d:category2", # categories
"c:sparse;d:sparse_na;b:f8", # sparse
- ]
-
- for mgr_string in mgr_strings:
- bm = create_mgr(mgr_string)
- block_perms = itertools.permutations(bm.blocks)
- for bm_perm in block_perms:
- bm_this = BlockManager(bm_perm, bm.axes)
- assert bm.equals(bm_this)
- assert bm_this.equals(bm)
+ ],
+ )
+ def test_equals_block_order_different_dtypes(self, mgr_string):
+ # GH 9330
+ bm = create_mgr(mgr_string)
+ block_perms = itertools.permutations(bm.blocks)
+ for bm_perm in block_perms:
+ bm_this = BlockManager(bm_perm, bm.axes)
+ assert bm.equals(bm_this)
+ assert bm_this.equals(bm)
def test_single_mgr_ctor(self):
mgr = create_single_mgr("f8", num_rows=5)
assert mgr.as_array().tolist() == [0.0, 1.0, 2.0, 3.0, 4.0]
- def test_validate_bool_args(self):
- invalid_values = [1, "True", [1, 2, 3], 5.0]
+ @pytest.mark.parametrize("value", [1, "True", [1, 2, 3], 5.0])
+ def test_validate_bool_args(self, value):
bm1 = create_mgr("a,b,c: i8-1; d,e,f: i8-2")
- for value in invalid_values:
- msg = (
- 'For argument "inplace" expected type bool, '
- f"received type {type(value).__name__}."
- )
- with pytest.raises(ValueError, match=msg):
- bm1.replace_list([1], [2], inplace=value)
+ msg = (
+ 'For argument "inplace" expected type bool, '
+ f"received type {type(value).__name__}."
+ )
+ with pytest.raises(ValueError, match=msg):
+ bm1.replace_list([1], [2], inplace=value)
class TestIndexing:
@@ -851,7 +847,8 @@ class TestIndexing:
# MANAGERS = [MANAGERS[6]]
- def test_get_slice(self):
+ @pytest.mark.parametrize("mgr", MANAGERS)
+ def test_get_slice(self, mgr):
def assert_slice_ok(mgr, axis, slobj):
mat = mgr.as_array()
@@ -870,35 +867,33 @@ def assert_slice_ok(mgr, axis, slobj):
)
tm.assert_index_equal(mgr.axes[axis][slobj], sliced.axes[axis])
- for mgr in self.MANAGERS:
- for ax in range(mgr.ndim):
- # slice
- assert_slice_ok(mgr, ax, slice(None))
- assert_slice_ok(mgr, ax, slice(3))
- assert_slice_ok(mgr, ax, slice(100))
- assert_slice_ok(mgr, ax, slice(1, 4))
- assert_slice_ok(mgr, ax, slice(3, 0, -2))
-
- # boolean mask
- assert_slice_ok(mgr, ax, np.array([], dtype=np.bool_))
- assert_slice_ok(mgr, ax, np.ones(mgr.shape[ax], dtype=np.bool_))
- assert_slice_ok(mgr, ax, np.zeros(mgr.shape[ax], dtype=np.bool_))
-
- if mgr.shape[ax] >= 3:
- assert_slice_ok(mgr, ax, np.arange(mgr.shape[ax]) % 3 == 0)
- assert_slice_ok(
- mgr, ax, np.array([True, True, False], dtype=np.bool_)
- )
-
- # fancy indexer
- assert_slice_ok(mgr, ax, [])
- assert_slice_ok(mgr, ax, list(range(mgr.shape[ax])))
-
- if mgr.shape[ax] >= 3:
- assert_slice_ok(mgr, ax, [0, 1, 2])
- assert_slice_ok(mgr, ax, [-1, -2, -3])
-
- def test_take(self):
+ for ax in range(mgr.ndim):
+ # slice
+ assert_slice_ok(mgr, ax, slice(None))
+ assert_slice_ok(mgr, ax, slice(3))
+ assert_slice_ok(mgr, ax, slice(100))
+ assert_slice_ok(mgr, ax, slice(1, 4))
+ assert_slice_ok(mgr, ax, slice(3, 0, -2))
+
+ # boolean mask
+ assert_slice_ok(mgr, ax, np.array([], dtype=np.bool_))
+ assert_slice_ok(mgr, ax, np.ones(mgr.shape[ax], dtype=np.bool_))
+ assert_slice_ok(mgr, ax, np.zeros(mgr.shape[ax], dtype=np.bool_))
+
+ if mgr.shape[ax] >= 3:
+ assert_slice_ok(mgr, ax, np.arange(mgr.shape[ax]) % 3 == 0)
+ assert_slice_ok(mgr, ax, np.array([True, True, False], dtype=np.bool_))
+
+ # fancy indexer
+ assert_slice_ok(mgr, ax, [])
+ assert_slice_ok(mgr, ax, list(range(mgr.shape[ax])))
+
+ if mgr.shape[ax] >= 3:
+ assert_slice_ok(mgr, ax, [0, 1, 2])
+ assert_slice_ok(mgr, ax, [-1, -2, -3])
+
+ @pytest.mark.parametrize("mgr", MANAGERS)
+ def test_take(self, mgr):
def assert_take_ok(mgr, axis, indexer):
mat = mgr.as_array()
taken = mgr.take(indexer, axis)
@@ -907,18 +902,19 @@ def assert_take_ok(mgr, axis, indexer):
)
tm.assert_index_equal(mgr.axes[axis].take(indexer), taken.axes[axis])
- for mgr in self.MANAGERS:
- for ax in range(mgr.ndim):
- # take/fancy indexer
- assert_take_ok(mgr, ax, indexer=[])
- assert_take_ok(mgr, ax, indexer=[0, 0, 0])
- assert_take_ok(mgr, ax, indexer=list(range(mgr.shape[ax])))
+ for ax in range(mgr.ndim):
+ # take/fancy indexer
+ assert_take_ok(mgr, ax, indexer=[])
+ assert_take_ok(mgr, ax, indexer=[0, 0, 0])
+ assert_take_ok(mgr, ax, indexer=list(range(mgr.shape[ax])))
- if mgr.shape[ax] >= 3:
- assert_take_ok(mgr, ax, indexer=[0, 1, 2])
- assert_take_ok(mgr, ax, indexer=[-1, -2, -3])
+ if mgr.shape[ax] >= 3:
+ assert_take_ok(mgr, ax, indexer=[0, 1, 2])
+ assert_take_ok(mgr, ax, indexer=[-1, -2, -3])
- def test_reindex_axis(self):
+ @pytest.mark.parametrize("mgr", MANAGERS)
+ @pytest.mark.parametrize("fill_value", [None, np.nan, 100.0])
+ def test_reindex_axis(self, fill_value, mgr):
def assert_reindex_axis_is_ok(mgr, axis, new_labels, fill_value):
mat = mgr.as_array()
indexer = mgr.axes[axis].get_indexer_for(new_labels)
@@ -931,33 +927,27 @@ def assert_reindex_axis_is_ok(mgr, axis, new_labels, fill_value):
)
tm.assert_index_equal(reindexed.axes[axis], new_labels)
- for mgr in self.MANAGERS:
- for ax in range(mgr.ndim):
- for fill_value in (None, np.nan, 100.0):
- assert_reindex_axis_is_ok(mgr, ax, pd.Index([]), fill_value)
- assert_reindex_axis_is_ok(mgr, ax, mgr.axes[ax], fill_value)
- assert_reindex_axis_is_ok(
- mgr, ax, mgr.axes[ax][[0, 0, 0]], fill_value
- )
- assert_reindex_axis_is_ok(
- mgr, ax, pd.Index(["foo", "bar", "baz"]), fill_value
- )
- assert_reindex_axis_is_ok(
- mgr, ax, pd.Index(["foo", mgr.axes[ax][0], "baz"]), fill_value
- )
+ for ax in range(mgr.ndim):
+ assert_reindex_axis_is_ok(mgr, ax, pd.Index([]), fill_value)
+ assert_reindex_axis_is_ok(mgr, ax, mgr.axes[ax], fill_value)
+ assert_reindex_axis_is_ok(mgr, ax, mgr.axes[ax][[0, 0, 0]], fill_value)
+ assert_reindex_axis_is_ok(
+ mgr, ax, pd.Index(["foo", "bar", "baz"]), fill_value
+ )
+ assert_reindex_axis_is_ok(
+ mgr, ax, pd.Index(["foo", mgr.axes[ax][0], "baz"]), fill_value
+ )
+
+ if mgr.shape[ax] >= 3:
+ assert_reindex_axis_is_ok(mgr, ax, mgr.axes[ax][:-3], fill_value)
+ assert_reindex_axis_is_ok(mgr, ax, mgr.axes[ax][-3::-1], fill_value)
+ assert_reindex_axis_is_ok(
+ mgr, ax, mgr.axes[ax][[0, 1, 2, 0, 1, 2]], fill_value
+ )
- if mgr.shape[ax] >= 3:
- assert_reindex_axis_is_ok(
- mgr, ax, mgr.axes[ax][:-3], fill_value
- )
- assert_reindex_axis_is_ok(
- mgr, ax, mgr.axes[ax][-3::-1], fill_value
- )
- assert_reindex_axis_is_ok(
- mgr, ax, mgr.axes[ax][[0, 1, 2, 0, 1, 2]], fill_value
- )
-
- def test_reindex_indexer(self):
+ @pytest.mark.parametrize("mgr", MANAGERS)
+ @pytest.mark.parametrize("fill_value", [None, np.nan, 100.0])
+ def test_reindex_indexer(self, fill_value, mgr):
def assert_reindex_indexer_is_ok(mgr, axis, new_labels, indexer, fill_value):
mat = mgr.as_array()
reindexed_mat = algos.take_nd(mat, indexer, axis, fill_value=fill_value)
@@ -969,60 +959,42 @@ def assert_reindex_indexer_is_ok(mgr, axis, new_labels, indexer, fill_value):
)
tm.assert_index_equal(reindexed.axes[axis], new_labels)
- for mgr in self.MANAGERS:
- for ax in range(mgr.ndim):
- for fill_value in (None, np.nan, 100.0):
- assert_reindex_indexer_is_ok(mgr, ax, pd.Index([]), [], fill_value)
- assert_reindex_indexer_is_ok(
- mgr, ax, mgr.axes[ax], np.arange(mgr.shape[ax]), fill_value
- )
- assert_reindex_indexer_is_ok(
- mgr,
- ax,
- pd.Index(["foo"] * mgr.shape[ax]),
- np.arange(mgr.shape[ax]),
- fill_value,
- )
- assert_reindex_indexer_is_ok(
- mgr,
- ax,
- mgr.axes[ax][::-1],
- np.arange(mgr.shape[ax]),
- fill_value,
- )
- assert_reindex_indexer_is_ok(
- mgr,
- ax,
- mgr.axes[ax],
- np.arange(mgr.shape[ax])[::-1],
- fill_value,
- )
- assert_reindex_indexer_is_ok(
- mgr, ax, pd.Index(["foo", "bar", "baz"]), [0, 0, 0], fill_value
- )
- assert_reindex_indexer_is_ok(
- mgr,
- ax,
- pd.Index(["foo", "bar", "baz"]),
- [-1, 0, -1],
- fill_value,
- )
- assert_reindex_indexer_is_ok(
- mgr,
- ax,
- pd.Index(["foo", mgr.axes[ax][0], "baz"]),
- [-1, -1, -1],
- fill_value,
- )
+ for ax in range(mgr.ndim):
+ assert_reindex_indexer_is_ok(mgr, ax, pd.Index([]), [], fill_value)
+ assert_reindex_indexer_is_ok(
+ mgr, ax, mgr.axes[ax], np.arange(mgr.shape[ax]), fill_value
+ )
+ assert_reindex_indexer_is_ok(
+ mgr,
+ ax,
+ pd.Index(["foo"] * mgr.shape[ax]),
+ np.arange(mgr.shape[ax]),
+ fill_value,
+ )
+ assert_reindex_indexer_is_ok(
+ mgr, ax, mgr.axes[ax][::-1], np.arange(mgr.shape[ax]), fill_value,
+ )
+ assert_reindex_indexer_is_ok(
+ mgr, ax, mgr.axes[ax], np.arange(mgr.shape[ax])[::-1], fill_value,
+ )
+ assert_reindex_indexer_is_ok(
+ mgr, ax, pd.Index(["foo", "bar", "baz"]), [0, 0, 0], fill_value
+ )
+ assert_reindex_indexer_is_ok(
+ mgr, ax, pd.Index(["foo", "bar", "baz"]), [-1, 0, -1], fill_value,
+ )
+ assert_reindex_indexer_is_ok(
+ mgr,
+ ax,
+ pd.Index(["foo", mgr.axes[ax][0], "baz"]),
+ [-1, -1, -1],
+ fill_value,
+ )
- if mgr.shape[ax] >= 3:
- assert_reindex_indexer_is_ok(
- mgr,
- ax,
- pd.Index(["foo", "bar", "baz"]),
- [0, 1, 2],
- fill_value,
- )
+ if mgr.shape[ax] >= 3:
+ assert_reindex_indexer_is_ok(
+ mgr, ax, pd.Index(["foo", "bar", "baz"]), [0, 1, 2], fill_value,
+ )
# test_get_slice(slice_like, axis)
# take(indexer, axis)
| https://api.github.com/repos/pandas-dev/pandas/pulls/31900 | 2020-02-11T22:21:50Z | 2020-02-12T21:47:27Z | 2020-02-12T21:47:27Z | 2020-02-12T21:53:46Z | |
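The diff in the row above repeatedly applies one refactoring pattern: a `for` loop over test inputs inside a single test is hoisted into `@pytest.mark.parametrize`, so each input becomes an independently reported test case. A minimal sketch of that pattern, using a hypothetical `square` check rather than the pandas internals:

```python
import pytest

# Loop style: one test function; the first failing input aborts the loop,
# hiding any later failures behind it.
def test_square_loop():
    for value, expected in [(2, 4), (3, 9), (-1, 1)]:
        assert value ** 2 == expected

# Parametrized style: pytest generates one test per input tuple, so every
# failure is reported separately and individual cases can be re-run by id.
@pytest.mark.parametrize("value,expected", [(2, 4), (3, 9), (-1, 1)])
def test_square_param(value, expected):
    assert value ** 2 == expected
```

The parametrized form is also what lets the diff delete the manual `for mgr in self.MANAGERS:` and `for value in invalid_values:` loops without changing what is tested.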
DOC: Update sort_index docs | diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 1101374f94b8c..46ef05ff0df41 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -4989,8 +4989,9 @@ def sort_index(
and 1 identifies the columns.
level : int or level name or list of ints or list of level names
If not None, sort on values in specified index level(s).
- ascending : bool, default True
- Sort ascending vs. descending.
+ ascending : bool or list of bools, default True
+ Sort ascending vs. descending. When the index is a MultiIndex the
+ sort direction can be controlled for each level individually.
inplace : bool, default False
If True, perform operation in-place.
kind : {'quicksort', 'mergesort', 'heapsort'}, default 'quicksort'
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 7d74d32bf5e14..eab2bfca635f2 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -2981,11 +2981,11 @@ def sort_index(
self,
axis=0,
level=None,
- ascending=True,
- inplace=False,
- kind="quicksort",
- na_position="last",
- sort_remaining=True,
+ ascending: bool = True,
+ inplace: bool = False,
+ kind: str = "quicksort",
+ na_position: str = "last",
+ sort_remaining: bool = True,
ignore_index: bool = False,
):
"""
@@ -3000,8 +3000,9 @@ def sort_index(
Axis to direct sorting. This can only be 0 for Series.
level : int, optional
If not None, sort on values in specified index level(s).
- ascending : bool, default true
- Sort ascending vs. descending.
+ ascending : bool or list of bools, default True
+ Sort ascending vs. descending. When the index is a MultiIndex the
+ sort direction can be controlled for each level individually.
inplace : bool, default False
If True, perform operation in-place.
kind : {'quicksort', 'mergesort', 'heapsort'}, default 'quicksort'
| - [x] closes #31880 | https://api.github.com/repos/pandas-dev/pandas/pulls/31898 | 2020-02-11T21:44:43Z | 2020-02-19T01:30:09Z | 2020-02-19T01:30:09Z | 2020-02-19T01:36:21Z |
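The doc change above records that `ascending` may be a list of booleans when sorting a `MultiIndex`, one flag per level. A short sketch of the documented behavior (small illustrative data, not from the PR):

```python
import pandas as pd

# A two-level index: level "num" = [1, 2], level "let" = ["b", "a"]
idx = pd.MultiIndex.from_product([[1, 2], ["b", "a"]], names=["num", "let"])
s = pd.Series(range(4), index=idx)

# One flag per level: "num" ascending, "let" descending.
result = s.sort_index(ascending=[True, False])
```

Here `result` keeps `num` in increasing order while reversing the alphabetical order of `let` within each `num` group.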
BUG: fix length_of_indexer with boolean mask | diff --git a/pandas/core/indexers.py b/pandas/core/indexers.py
index fe475527f4596..90a58a6308f40 100644
--- a/pandas/core/indexers.py
+++ b/pandas/core/indexers.py
@@ -219,7 +219,7 @@ def maybe_convert_indices(indices, n: int):
def length_of_indexer(indexer, target=None) -> int:
"""
- Return the length of a single non-tuple indexer which could be a slice.
+ Return the expected length of target[indexer]
Returns
-------
@@ -245,6 +245,12 @@ def length_of_indexer(indexer, target=None) -> int:
step = -step
return (stop - start + step - 1) // step
elif isinstance(indexer, (ABCSeries, ABCIndexClass, np.ndarray, list)):
+ if isinstance(indexer, list):
+ indexer = np.array(indexer)
+
+ if indexer.dtype == bool:
+ # GH#25774
+ return indexer.sum()
return len(indexer)
elif not is_list_like_indexer(indexer):
return 1
diff --git a/pandas/core/indexing.py b/pandas/core/indexing.py
index b3777e949a08c..2b97d265350c7 100755
--- a/pandas/core/indexing.py
+++ b/pandas/core/indexing.py
@@ -1688,31 +1688,17 @@ def _setitem_with_indexer(self, indexer, value):
plane_indexer = tuple([idx]) + indexer[info_axis + 1 :]
lplane_indexer = length_of_indexer(plane_indexer[0], index)
+ # lplane_indexer gives the expected length of obj[idx]
# require that we are setting the right number of values that
# we are indexing
- if (
- is_list_like_indexer(value)
- and np.iterable(value)
- and lplane_indexer != len(value)
- ):
-
- if len(obj[idx]) != len(value):
- raise ValueError(
- "cannot set using a multi-index "
- "selection indexer with a different "
- "length than the value"
- )
-
- # make sure we have an ndarray
- value = getattr(value, "values", value).ravel()
+ if is_list_like_indexer(value) and lplane_indexer != len(value):
- # we can directly set the series here
- obj._consolidate_inplace()
- obj = obj.copy()
- obj._data = obj._data.setitem(indexer=tuple([idx]), value=value)
- self.obj[item] = obj
- return
+ raise ValueError(
+ "cannot set using a multi-index "
+ "selection indexer with a different "
+ "length than the value"
+ )
# non-mi
else:
diff --git a/pandas/tests/indexing/test_indexers.py b/pandas/tests/indexing/test_indexers.py
new file mode 100644
index 0000000000000..173f33b19f8d5
--- /dev/null
+++ b/pandas/tests/indexing/test_indexers.py
@@ -0,0 +1,11 @@
+# Tests aimed at pandas.core.indexers
+import numpy as np
+
+from pandas.core.indexers import length_of_indexer
+
+
+def test_length_of_indexer():
+ arr = np.zeros(4, dtype=bool)
+ arr[0] = 1
+ result = length_of_indexer(arr)
+ assert result == 1
| - [x] closes #25774
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
Allows for a nice cleanup in _setitem_with_indexer. Note this will conflict with #31887; the order doesn't matter. | https://api.github.com/repos/pandas-dev/pandas/pulls/31897 | 2020-02-11T21:01:49Z | 2020-02-18T00:15:46Z | 2020-02-18T00:15:46Z | 2020-02-18T00:46:10Z |
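The fix in the row above makes `length_of_indexer` report the number of selected positions for a boolean mask, rather than the mask's own length. A standalone sketch of the fixed logic (mirroring the diff, not the actual pandas module):

```python
import numpy as np

def length_of_indexer(indexer):
    """Sketch of the fixed behavior: the expected length of target[indexer]."""
    indexer = np.asarray(indexer)
    if indexer.dtype == bool:
        # GH#25774: a boolean mask selects only its True positions,
        # so len(target[mask]) == mask.sum(), not len(mask).
        return int(indexer.sum())
    return len(indexer)

mask = np.zeros(4, dtype=bool)
mask[0] = True
print(length_of_indexer(mask))       # 1, not 4
print(length_of_indexer([0, 2, 3]))  # 3
```

With the old behavior (`len(indexer)` for every array-like), the mask above would have reported 4, which is what let the length check in `_setitem_with_indexer` be simplified.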
CLN: D202 No blank lines allowed after function docstring | diff --git a/pandas/_config/config.py b/pandas/_config/config.py
index 8b6116d3abd60..c283baeb9d412 100644
--- a/pandas/_config/config.py
+++ b/pandas/_config/config.py
@@ -550,7 +550,6 @@ def _select_options(pat: str) -> List[str]:
if pat=="all", returns all registered options
"""
-
# short-circuit for exact key
if pat in _registered_options:
return [pat]
@@ -573,7 +572,6 @@ def _get_root(key: str) -> Tuple[Dict[str, Any], str]:
def _is_deprecated(key: str) -> bool:
""" Returns True if the given option has been deprecated """
-
key = key.lower()
return key in _deprecated_options
@@ -586,7 +584,6 @@ def _get_deprecated_option(key: str):
-------
DeprecatedOption (namedtuple) if key is deprecated, None otherwise
"""
-
try:
d = _deprecated_options[key]
except KeyError:
@@ -611,7 +608,6 @@ def _translate_key(key: str) -> str:
if key id deprecated and a replacement key defined, will return the
replacement key, otherwise returns `key` as - is
"""
-
d = _get_deprecated_option(key)
if d:
return d.rkey or key
@@ -627,7 +623,6 @@ def _warn_if_deprecated(key: str) -> bool:
-------
bool - True if `key` is deprecated, False otherwise.
"""
-
d = _get_deprecated_option(key)
if d:
if d.msg:
@@ -649,7 +644,6 @@ def _warn_if_deprecated(key: str) -> bool:
def _build_option_description(k: str) -> str:
""" Builds a formatted description of a registered option and prints it """
-
o = _get_registered_option(k)
d = _get_deprecated_option(k)
@@ -674,7 +668,6 @@ def _build_option_description(k: str) -> str:
def pp_options_list(keys: Iterable[str], width=80, _print: bool = False):
""" Builds a concise listing of available options, grouped by prefix """
-
from textwrap import wrap
from itertools import groupby
@@ -738,7 +731,6 @@ def config_prefix(prefix):
will register options "display.font.color", "display.font.size", set the
value of "display.font.size"... and so on.
"""
-
# Note: reset_option relies on set_option, and on key directly
# it does not fit in to this monkey-patching scheme
@@ -801,7 +793,6 @@ def is_instance_factory(_type) -> Callable[[Any], None]:
ValueError if x is not an instance of `_type`
"""
-
if isinstance(_type, (tuple, list)):
_type = tuple(_type)
type_repr = "|".join(map(str, _type))
@@ -848,7 +839,6 @@ def is_nonnegative_int(value: Optional[int]) -> None:
ValueError
When the value is not None or is a negative integer
"""
-
if value is None:
return
diff --git a/pandas/_config/localization.py b/pandas/_config/localization.py
index 0d68e78372d8a..66865e1afb952 100644
--- a/pandas/_config/localization.py
+++ b/pandas/_config/localization.py
@@ -61,7 +61,6 @@ def can_set_locale(lc: str, lc_var: int = locale.LC_ALL) -> bool:
bool
Whether the passed locale can be set
"""
-
try:
with set_locale(lc, lc_var=lc_var):
pass
diff --git a/pandas/_testing.py b/pandas/_testing.py
index 9e71524263a18..46ed65c87e8dd 100644
--- a/pandas/_testing.py
+++ b/pandas/_testing.py
@@ -1508,7 +1508,6 @@ def assert_sp_array_equal(
create a new BlockIndex for that array, with consolidated
block indices.
"""
-
_check_isinstance(left, right, pd.arrays.SparseArray)
assert_numpy_array_equal(left.sp_values, right.sp_values, check_dtype=check_dtype)
@@ -1876,7 +1875,6 @@ def makeCustomIndex(
if unspecified, string labels will be generated.
"""
-
if ndupe_l is None:
ndupe_l = [1] * nlevels
assert is_sequence(ndupe_l) and len(ndupe_l) <= nlevels
@@ -2025,7 +2023,6 @@ def makeCustomDataframe(
>> a=mkdf(5,3,r_idx_nlevels=2,c_idx_nlevels=4)
"""
-
assert c_idx_nlevels > 0
assert r_idx_nlevels > 0
assert r_idx_type is None or (
@@ -2229,7 +2226,6 @@ def can_connect(url, error_classes=None):
Return True if no IOError (unable to connect) or URLError (bad url) was
raised
"""
-
if error_classes is None:
error_classes = _get_default_network_errors()
@@ -2603,7 +2599,6 @@ def test_parallel(num_threads=2, kwargs_list=None):
https://github.com/scikit-image/scikit-image/pull/1519
"""
-
assert num_threads > 0
has_kwargs_list = kwargs_list is not None
if has_kwargs_list:
@@ -2685,7 +2680,6 @@ def set_timezone(tz: str):
...
'EDT'
"""
-
import os
import time
diff --git a/pandas/compat/numpy/function.py b/pandas/compat/numpy/function.py
index 05ecccc67daef..ccc970fb453c2 100644
--- a/pandas/compat/numpy/function.py
+++ b/pandas/compat/numpy/function.py
@@ -99,7 +99,6 @@ def validate_argmin_with_skipna(skipna, args, kwargs):
'skipna' parameter is either an instance of ndarray or
is None, since 'skipna' itself should be a boolean
"""
-
skipna, args = process_skipna(skipna, args)
validate_argmin(args, kwargs)
return skipna
@@ -113,7 +112,6 @@ def validate_argmax_with_skipna(skipna, args, kwargs):
'skipna' parameter is either an instance of ndarray or
is None, since 'skipna' itself should be a boolean
"""
-
skipna, args = process_skipna(skipna, args)
validate_argmax(args, kwargs)
return skipna
@@ -151,7 +149,6 @@ def validate_argsort_with_ascending(ascending, args, kwargs):
either integer type or is None, since 'ascending' itself should
be a boolean
"""
-
if is_integer(ascending) or ascending is None:
args = (ascending,) + args
ascending = True
@@ -173,7 +170,6 @@ def validate_clip_with_axis(axis, args, kwargs):
so check if the 'axis' parameter is an instance of ndarray, since
'axis' itself should either be an integer or None
"""
-
if isinstance(axis, ndarray):
args = (axis,) + args
axis = None
@@ -298,7 +294,6 @@ def validate_take_with_convert(convert, args, kwargs):
ndarray or 'None', so check if the 'convert' parameter is either
an instance of ndarray or is None
"""
-
if isinstance(convert, ndarray) or convert is None:
args = (convert,) + args
convert = True
diff --git a/pandas/compat/pickle_compat.py b/pandas/compat/pickle_compat.py
index 0a1a1376bfc8d..3f4acca8bce18 100644
--- a/pandas/compat/pickle_compat.py
+++ b/pandas/compat/pickle_compat.py
@@ -229,7 +229,6 @@ def load(fh, encoding: Optional[str] = None, is_verbose: bool = False):
encoding : an optional encoding
is_verbose : show exception output
"""
-
try:
fh.seek(0)
if encoding is not None:
diff --git a/pandas/conftest.py b/pandas/conftest.py
index 7463b2b579c0c..dd329c1b00dbb 100644
--- a/pandas/conftest.py
+++ b/pandas/conftest.py
@@ -118,7 +118,6 @@ def ip():
Will raise a skip if IPython is not installed.
"""
-
pytest.importorskip("IPython", minversion="6.0.0")
from IPython.core.interactiveshell import InteractiveShell
@@ -679,7 +678,6 @@ def any_nullable_int_dtype(request):
* 'UInt64'
* 'Int64'
"""
-
return request.param
diff --git a/pandas/core/algorithms.py b/pandas/core/algorithms.py
index 886b0a3c5fec1..af06a559ded69 100644
--- a/pandas/core/algorithms.py
+++ b/pandas/core/algorithms.py
@@ -85,7 +85,6 @@ def _ensure_data(values, dtype=None):
values : ndarray
pandas_dtype : str or dtype
"""
-
# we check some simple dtypes first
if is_object_dtype(dtype):
return ensure_object(np.asarray(values)), "object"
@@ -182,7 +181,6 @@ def _reconstruct_data(values, dtype, original):
-------
Index for extension types, otherwise ndarray casted to dtype
"""
-
if is_extension_array_dtype(dtype):
values = dtype.construct_array_type()._from_sequence(values)
elif is_bool_dtype(dtype):
@@ -368,7 +366,6 @@ def unique(values):
>>> pd.unique([('a', 'b'), ('b', 'a'), ('a', 'c'), ('b', 'a')])
array([('a', 'b'), ('b', 'a'), ('a', 'c')], dtype=object)
"""
-
values = _ensure_arraylike(values)
if is_extension_array_dtype(values):
@@ -796,7 +793,6 @@ def duplicated(values, keep="first") -> np.ndarray:
-------
duplicated : ndarray
"""
-
values, _ = _ensure_data(values)
ndtype = values.dtype.name
f = getattr(htable, f"duplicated_{ndtype}")
diff --git a/pandas/core/apply.py b/pandas/core/apply.py
index 81e1d84880f60..70e0a129c055f 100644
--- a/pandas/core/apply.py
+++ b/pandas/core/apply.py
@@ -35,7 +35,6 @@ def frame_apply(
kwds=None,
):
""" construct and return a row or column based frame apply object """
-
axis = obj._get_axis_number(axis)
klass: Type[FrameApply]
if axis == 0:
@@ -144,7 +143,6 @@ def agg_axis(self) -> "Index":
def get_result(self):
""" compute the results """
-
# dispatch to agg
if is_list_like(self.f) or is_dict_like(self.f):
return self.obj.aggregate(self.f, axis=self.axis, *self.args, **self.kwds)
@@ -193,7 +191,6 @@ def apply_empty_result(self):
we will try to apply the function to an empty
series in order to see if this is a reduction function
"""
-
# we are not asked to reduce or infer reduction
# so just return a copy of the existing object
if self.result_type not in ["reduce", None]:
@@ -396,7 +393,6 @@ def wrap_results_for_axis(
self, results: ResType, res_index: "Index"
) -> "DataFrame":
""" return the results for the rows """
-
result = self.obj._constructor(data=results)
if not isinstance(results[0], ABCSeries):
@@ -457,7 +453,6 @@ def wrap_results_for_axis(
def infer_to_same_shape(self, results: ResType, res_index: "Index") -> "DataFrame":
""" infer the results to the same shape as the input object """
-
result = self.obj._constructor(data=results)
result = result.T
diff --git a/pandas/core/arrays/categorical.py b/pandas/core/arrays/categorical.py
index d26ff7490e714..a0b9402fd97cc 100644
--- a/pandas/core/arrays/categorical.py
+++ b/pandas/core/arrays/categorical.py
@@ -695,7 +695,6 @@ def _set_categories(self, categories, fastpath=False):
[a, c]
Categories (2, object): [a, c]
"""
-
if fastpath:
new_dtype = CategoricalDtype._from_fastpath(categories, self.ordered)
else:
@@ -1221,7 +1220,6 @@ def shape(self):
-------
shape : tuple
"""
-
return tuple([len(self._codes)])
def shift(self, periods, fill_value=None):
@@ -1378,7 +1376,6 @@ def isna(self):
Categorical.notna : Boolean inverse of Categorical.isna.
"""
-
ret = self._codes == -1
return ret
@@ -1928,7 +1925,6 @@ def _repr_categories_info(self) -> str:
"""
Returns a string representation of the footer.
"""
-
category_strs = self._repr_categories()
dtype = str(self.categories.dtype)
levheader = f"Categories ({len(self.categories)}, {dtype}): "
@@ -2254,7 +2250,6 @@ def unique(self):
Series.unique
"""
-
# unlike np.unique, unique1d does not sort
unique_codes = unique1d(self.codes)
cat = self.copy()
@@ -2314,7 +2309,6 @@ def is_dtype_equal(self, other):
-------
bool
"""
-
try:
return hash(self.dtype) == hash(other.dtype)
except (AttributeError, TypeError):
diff --git a/pandas/core/arrays/datetimelike.py b/pandas/core/arrays/datetimelike.py
index 03c8e48c6e699..07aa8d49338c8 100644
--- a/pandas/core/arrays/datetimelike.py
+++ b/pandas/core/arrays/datetimelike.py
@@ -500,7 +500,6 @@ def __getitem__(self, key):
This getitem defers to the underlying array, which by-definition can
only handle list-likes, slices, and integer scalars
"""
-
is_int = lib.is_integer(key)
if lib.is_scalar(key) and not is_int:
raise IndexError(
@@ -892,7 +891,6 @@ def _maybe_mask_results(self, result, fill_value=iNaT, convert=None):
This is an internal routine.
"""
-
if self._hasnans:
if convert:
result = result.astype(convert)
diff --git a/pandas/core/arrays/integer.py b/pandas/core/arrays/integer.py
index 4bfd5f5770b69..6cd3a41dd957a 100644
--- a/pandas/core/arrays/integer.py
+++ b/pandas/core/arrays/integer.py
@@ -147,7 +147,6 @@ def safe_cast(values, dtype, copy: bool):
ints.
"""
-
try:
return values.astype(dtype, casting="safe", copy=copy)
except TypeError:
@@ -601,7 +600,6 @@ def _maybe_mask_result(self, result, mask, other, op_name):
other : scalar or array-like
op_name : str
"""
-
# if we have a float operand we are by-definition
# a float result
# or our op is a divide
diff --git a/pandas/core/arrays/period.py b/pandas/core/arrays/period.py
index 8383b783d90e7..8141e2c78a7e2 100644
--- a/pandas/core/arrays/period.py
+++ b/pandas/core/arrays/period.py
@@ -624,7 +624,6 @@ def _addsub_int_array(
-------
result : PeriodArray
"""
-
assert op in [operator.add, operator.sub]
if op is operator.sub:
other = -other
diff --git a/pandas/core/arrays/sparse/array.py b/pandas/core/arrays/sparse/array.py
index 8008805ddcf87..b17a4647ffc9f 100644
--- a/pandas/core/arrays/sparse/array.py
+++ b/pandas/core/arrays/sparse/array.py
@@ -1503,7 +1503,6 @@ def make_sparse(arr, kind="block", fill_value=None, dtype=None, copy=False):
-------
(sparse_values, index, fill_value) : (ndarray, SparseIndex, Scalar)
"""
-
arr = com.values_from_object(arr)
if arr.ndim > 1:
diff --git a/pandas/core/arrays/sparse/scipy_sparse.py b/pandas/core/arrays/sparse/scipy_sparse.py
index b67f2c9f52c76..eff9c03386a38 100644
--- a/pandas/core/arrays/sparse/scipy_sparse.py
+++ b/pandas/core/arrays/sparse/scipy_sparse.py
@@ -31,7 +31,6 @@ def _to_ijv(ss, row_levels=(0,), column_levels=(1,), sort_labels=False):
def get_indexers(levels):
""" Return sparse coords and dense labels for subset levels """
-
# TODO: how to do this better? cleanly slice nonnull_labels given the
# coord
values_ilabels = [tuple(x[i] for i in levels) for x in nonnull_labels.index]
@@ -90,7 +89,6 @@ def _sparse_series_to_coo(ss, row_levels=(0,), column_levels=(1,), sort_labels=F
levels row_levels, column_levels as the row and column
labels respectively. Returns the sparse_matrix, row and column labels.
"""
-
import scipy.sparse
if ss.index.nlevels < 2:
diff --git a/pandas/core/common.py b/pandas/core/common.py
index 00c7a41477017..550ce74de5357 100644
--- a/pandas/core/common.py
+++ b/pandas/core/common.py
@@ -337,7 +337,6 @@ def apply_if_callable(maybe_callable, obj, **kwargs):
obj : NDFrame
**kwargs
"""
-
if callable(maybe_callable):
return maybe_callable(obj, **kwargs)
@@ -412,7 +411,6 @@ def random_state(state=None):
-------
np.random.RandomState
"""
-
if is_integer(state):
return np.random.RandomState(state)
elif isinstance(state, np.random.RandomState):
diff --git a/pandas/core/computation/expr.py b/pandas/core/computation/expr.py
index c26208d3b4465..c59952bea8dc0 100644
--- a/pandas/core/computation/expr.py
+++ b/pandas/core/computation/expr.py
@@ -599,7 +599,6 @@ def visit_Assign(self, node, **kwargs):
might or might not exist in the resolvers
"""
-
if len(node.targets) != 1:
raise SyntaxError("can only assign a single expression")
if not isinstance(node.targets[0], ast.Name):
diff --git a/pandas/core/computation/pytables.py b/pandas/core/computation/pytables.py
index 9f209cccd5be6..19f151846a080 100644
--- a/pandas/core/computation/pytables.py
+++ b/pandas/core/computation/pytables.py
@@ -95,7 +95,6 @@ def _disallow_scalar_only_bool_ops(self):
def prune(self, klass):
def pr(left, right):
""" create and return a new specialized BinOp from myself """
-
if left is None:
return right
elif right is None:
@@ -476,7 +475,6 @@ def _validate_where(w):
------
TypeError : An invalid data type was passed in for w (e.g. dict).
"""
-
if not (isinstance(w, (PyTablesExpr, str)) or is_list_like(w)):
raise TypeError(
"where must be passed as a string, PyTablesExpr, "
@@ -574,7 +572,6 @@ def __repr__(self) -> str:
def evaluate(self):
""" create and return the numexpr condition and filter """
-
try:
self.condition = self.terms.prune(ConditionBinOp)
except AttributeError:
diff --git a/pandas/core/dtypes/cast.py b/pandas/core/dtypes/cast.py
index 6120bc92adbfc..011c09c9ca1ef 100644
--- a/pandas/core/dtypes/cast.py
+++ b/pandas/core/dtypes/cast.py
@@ -78,7 +78,6 @@
def maybe_convert_platform(values):
""" try to do platform conversion, allow ndarray or list here """
-
if isinstance(values, (list, tuple, range)):
values = construct_1d_object_array_from_listlike(values)
if getattr(values, "dtype", None) == np.object_:
@@ -97,7 +96,6 @@ def is_nested_object(obj) -> bool:
This may not be necessarily be performant.
"""
-
if isinstance(obj, ABCSeries) and is_object_dtype(obj):
if any(isinstance(v, ABCSeries) for v in obj.values):
@@ -525,7 +523,6 @@ def _ensure_dtype_type(value, dtype):
-------
object
"""
-
# Start with exceptions in which we do _not_ cast to numpy types
if is_extension_array_dtype(dtype):
return value
@@ -566,7 +563,6 @@ def infer_dtype_from_scalar(val, pandas_dtype: bool = False):
If False, scalar belongs to pandas extension types is inferred as
object
"""
-
dtype = np.object_
# a 1-element ndarray
@@ -823,7 +819,6 @@ def astype_nansafe(arr, dtype, copy: bool = True, skipna: bool = False):
ValueError
The dtype was a datetime64/timedelta64 dtype, but it had no unit.
"""
-
# dispatch on extension dtype if needed
if is_extension_array_dtype(dtype):
return dtype.construct_array_type()._from_sequence(arr, dtype=dtype, copy=copy)
@@ -965,7 +960,6 @@ def soft_convert_objects(
copy: bool = True,
):
""" if we have an object dtype, try to coerce dates and/or numbers """
-
validate_bool_kwarg(datetime, "datetime")
validate_bool_kwarg(numeric, "numeric")
validate_bool_kwarg(timedelta, "timedelta")
@@ -1053,7 +1047,6 @@ def convert_dtypes(
dtype
new dtype
"""
-
if convert_string or convert_integer or convert_boolean:
try:
inferred_dtype = lib.infer_dtype(input_array)
@@ -1133,7 +1126,6 @@ def maybe_infer_to_datetimelike(value, convert_dates: bool = False):
leave inferred dtype 'date' alone
"""
-
# TODO: why not timedelta?
if isinstance(
value, (ABCDatetimeIndex, ABCPeriodIndex, ABCDatetimeArray, ABCPeriodArray)
@@ -1373,7 +1365,6 @@ def find_common_type(types):
numpy.find_common_type
"""
-
if len(types) == 0:
raise ValueError("no types given")
@@ -1420,7 +1411,6 @@ def cast_scalar_to_array(shape, value, dtype=None):
ndarray of shape, filled with value, of specified / inferred dtype
"""
-
if dtype is None:
dtype, fill_value = infer_dtype_from_scalar(value)
else:
diff --git a/pandas/core/dtypes/common.py b/pandas/core/dtypes/common.py
index f8e14d1cbc9e9..c0420244f671e 100644
--- a/pandas/core/dtypes/common.py
+++ b/pandas/core/dtypes/common.py
@@ -92,7 +92,6 @@ def ensure_float(arr):
float_arr : The original array cast to the float dtype if
possible. Otherwise, the original array is returned.
"""
-
if issubclass(arr.dtype.type, (np.integer, np.bool_)):
arr = arr.astype(float)
return arr
@@ -132,7 +131,6 @@ def ensure_categorical(arr):
cat_arr : The original array cast as a Categorical. If it already
is a Categorical, we return as is.
"""
-
if not is_categorical(arr):
from pandas import Categorical
@@ -325,7 +323,6 @@ def is_scipy_sparse(arr) -> bool:
>>> is_scipy_sparse(pd.arrays.SparseArray([1, 2, 3]))
False
"""
-
global _is_scipy_sparse
if _is_scipy_sparse is None:
@@ -367,7 +364,6 @@ def is_categorical(arr) -> bool:
>>> is_categorical(pd.CategoricalIndex([1, 2, 3]))
True
"""
-
return isinstance(arr, ABCCategorical) or is_categorical_dtype(arr)
@@ -398,7 +394,6 @@ def is_datetime64_dtype(arr_or_dtype) -> bool:
>>> is_datetime64_dtype([1, 2, 3])
False
"""
-
return _is_dtype_type(arr_or_dtype, classes(np.datetime64))
@@ -434,7 +429,6 @@ def is_datetime64tz_dtype(arr_or_dtype) -> bool:
>>> is_datetime64tz_dtype(s)
True
"""
-
if arr_or_dtype is None:
return False
return DatetimeTZDtype.is_dtype(arr_or_dtype)
@@ -467,7 +461,6 @@ def is_timedelta64_dtype(arr_or_dtype) -> bool:
>>> is_timedelta64_dtype('0 days')
False
"""
-
return _is_dtype_type(arr_or_dtype, classes(np.timedelta64))
@@ -498,7 +491,6 @@ def is_period_dtype(arr_or_dtype) -> bool:
>>> is_period_dtype(pd.PeriodIndex([], freq="A"))
True
"""
-
# TODO: Consider making Period an instance of PeriodDtype
if arr_or_dtype is None:
return False
@@ -534,7 +526,6 @@ def is_interval_dtype(arr_or_dtype) -> bool:
>>> is_interval_dtype(pd.IntervalIndex([interval]))
True
"""
-
# TODO: Consider making Interval an instance of IntervalDtype
if arr_or_dtype is None:
return False
@@ -568,7 +559,6 @@ def is_categorical_dtype(arr_or_dtype) -> bool:
>>> is_categorical_dtype(pd.CategoricalIndex([1, 2, 3]))
True
"""
-
if arr_or_dtype is None:
return False
return CategoricalDtype.is_dtype(arr_or_dtype)
@@ -602,7 +592,6 @@ def is_string_dtype(arr_or_dtype) -> bool:
>>> is_string_dtype(pd.Series([1, 2]))
False
"""
-
# TODO: gh-15585: consider making the checks stricter.
def condition(dtype) -> bool:
return dtype.kind in ("O", "S", "U") and not is_excluded_dtype(dtype)
@@ -641,7 +630,6 @@ def is_period_arraylike(arr) -> bool:
>>> is_period_arraylike(pd.PeriodIndex(["2017-01-01"], freq="D"))
True
"""
-
if isinstance(arr, (ABCPeriodIndex, ABCPeriodArray)):
return True
elif isinstance(arr, (np.ndarray, ABCSeries)):
@@ -673,7 +661,6 @@ def is_datetime_arraylike(arr) -> bool:
>>> is_datetime_arraylike(pd.DatetimeIndex([1, 2, 3]))
True
"""
-
if isinstance(arr, ABCDatetimeIndex):
return True
elif isinstance(arr, (np.ndarray, ABCSeries)):
@@ -711,7 +698,6 @@ def is_dtype_equal(source, target) -> bool:
>>> is_dtype_equal(DatetimeTZDtype(tz="UTC"), "datetime64")
False
"""
-
try:
source = _get_dtype(source)
target = _get_dtype(target)
@@ -770,7 +756,6 @@ def is_any_int_dtype(arr_or_dtype) -> bool:
>>> is_any_int_dtype(pd.Index([1, 2.])) # float
False
"""
-
return _is_dtype_type(arr_or_dtype, classes(np.integer, np.timedelta64))
@@ -825,7 +810,6 @@ def is_integer_dtype(arr_or_dtype) -> bool:
>>> is_integer_dtype(pd.Index([1, 2.])) # float
False
"""
-
return _is_dtype_type(arr_or_dtype, classes_and_not_datetimelike(np.integer))
@@ -882,7 +866,6 @@ def is_signed_integer_dtype(arr_or_dtype) -> bool:
>>> is_signed_integer_dtype(np.array([1, 2], dtype=np.uint32)) # unsigned
False
"""
-
return _is_dtype_type(arr_or_dtype, classes_and_not_datetimelike(np.signedinteger))
@@ -982,7 +965,6 @@ def is_int64_dtype(arr_or_dtype) -> bool:
>>> is_int64_dtype(np.array([1, 2], dtype=np.uint32)) # unsigned
False
"""
-
return _is_dtype_type(arr_or_dtype, classes(np.int64))
@@ -1137,7 +1119,6 @@ def is_datetime_or_timedelta_dtype(arr_or_dtype) -> bool:
>>> is_datetime_or_timedelta_dtype(np.array([], dtype=np.datetime64))
True
"""
-
return _is_dtype_type(arr_or_dtype, classes(np.datetime64, np.timedelta64))
@@ -1198,7 +1179,6 @@ def is_numeric_v_string_like(a, b):
>>> is_numeric_v_string_like(np.array(["foo"]), np.array(["foo"]))
False
"""
-
is_a_array = isinstance(a, np.ndarray)
is_b_array = isinstance(b, np.ndarray)
@@ -1260,7 +1240,6 @@ def is_datetimelike_v_numeric(a, b):
>>> is_datetimelike_v_numeric(np.array([dt]), np.array([dt]))
False
"""
-
if not hasattr(a, "dtype"):
a = np.asarray(a)
if not hasattr(b, "dtype"):
@@ -1311,7 +1290,6 @@ def needs_i8_conversion(arr_or_dtype) -> bool:
>>> needs_i8_conversion(pd.DatetimeIndex([1, 2, 3], tz="US/Eastern"))
True
"""
-
if arr_or_dtype is None:
return False
return (
@@ -1358,7 +1336,6 @@ def is_numeric_dtype(arr_or_dtype) -> bool:
>>> is_numeric_dtype(np.array([], dtype=np.timedelta64))
False
"""
-
return _is_dtype_type(
arr_or_dtype, classes_and_not_datetimelike(np.number, np.bool_)
)
@@ -1392,7 +1369,6 @@ def is_string_like_dtype(arr_or_dtype) -> bool:
>>> is_string_like_dtype(pd.Series([1, 2]))
False
"""
-
return _is_dtype(arr_or_dtype, lambda dtype: dtype.kind in ("S", "U"))
@@ -1638,7 +1614,6 @@ def is_complex_dtype(arr_or_dtype) -> bool:
>>> is_complex_dtype(np.array([1 + 1j, 5]))
True
"""
-
return _is_dtype_type(arr_or_dtype, classes(np.complexfloating))
@@ -1657,7 +1632,6 @@ def _is_dtype(arr_or_dtype, condition) -> bool:
bool
"""
-
if arr_or_dtype is None:
return False
try:
@@ -1686,7 +1660,6 @@ def _get_dtype(arr_or_dtype) -> DtypeObj:
------
TypeError : The passed in object is None.
"""
-
if arr_or_dtype is None:
raise TypeError("Cannot deduce dtype from null object")
@@ -1717,7 +1690,6 @@ def _is_dtype_type(arr_or_dtype, condition) -> bool:
-------
bool : if the condition is satisfied for the arr_or_dtype
"""
-
if arr_or_dtype is None:
return condition(type(None))
@@ -1767,7 +1739,6 @@ def infer_dtype_from_object(dtype):
-------
dtype_object : The extracted numpy dtype.type-style object.
"""
-
if isinstance(dtype, type) and issubclass(dtype, np.generic):
# Type object from a dtype
return dtype
@@ -1827,7 +1798,6 @@ def _validate_date_like_dtype(dtype) -> None:
ValueError : The dtype is an illegal date-like dtype (e.g. the
frequency provided is too specific)
"""
-
try:
typ = np.datetime_data(dtype)[0]
except ValueError as e:
diff --git a/pandas/core/dtypes/concat.py b/pandas/core/dtypes/concat.py
index fdc2eeb34b4ed..e53eb3b4d8e71 100644
--- a/pandas/core/dtypes/concat.py
+++ b/pandas/core/dtypes/concat.py
@@ -38,7 +38,6 @@ def get_dtype_kinds(l):
-------
a set of kinds that exist in this list of arrays
"""
-
typs = set()
for arr in l:
@@ -85,7 +84,6 @@ def concat_compat(to_concat, axis: int = 0):
-------
a single array, preserving the combined dtypes
"""
-
# filter empty arrays
# 1-d dtypes always are included here
def is_nonempty(x) -> bool:
@@ -153,7 +151,6 @@ def concat_categorical(to_concat, axis: int = 0):
Categorical
A single array, preserving the combined dtypes
"""
-
# we could have object blocks and categoricals here
# if we only have a single categoricals then combine everything
# else its a non-compat categorical
@@ -381,7 +378,6 @@ def concat_datetime(to_concat, axis=0, typs=None):
-------
a single array, preserving the combined dtypes
"""
-
if typs is None:
typs = get_dtype_kinds(to_concat)
@@ -466,7 +462,6 @@ def _concat_sparse(to_concat, axis=0, typs=None):
-------
a single array, preserving the combined dtypes
"""
-
from pandas.core.arrays import SparseArray
fill_values = [x.fill_value for x in to_concat if isinstance(x, SparseArray)]
diff --git a/pandas/core/dtypes/dtypes.py b/pandas/core/dtypes/dtypes.py
index 8aaebe89871b6..d93ad973ff02d 100644
--- a/pandas/core/dtypes/dtypes.py
+++ b/pandas/core/dtypes/dtypes.py
@@ -831,7 +831,6 @@ def __new__(cls, freq=None):
----------
freq : frequency
"""
-
if isinstance(freq, PeriodDtype):
return freq
@@ -930,7 +929,6 @@ def is_dtype(cls, dtype) -> bool:
Return a boolean if the passed type is an actual dtype that we
can match (via string or type)
"""
-
if isinstance(dtype, str):
# PeriodDtype can be instantiated from freq string like "U",
# but doesn't regard freq str like "U" as dtype.
@@ -1139,7 +1137,6 @@ def is_dtype(cls, dtype) -> bool:
Return a boolean if the passed type is an actual dtype that we
can match (via string or type)
"""
-
if isinstance(dtype, str):
if dtype.lower().startswith("interval"):
try:
diff --git a/pandas/core/dtypes/inference.py b/pandas/core/dtypes/inference.py
index a9cd696633273..56b880dca1241 100644
--- a/pandas/core/dtypes/inference.py
+++ b/pandas/core/dtypes/inference.py
@@ -65,7 +65,6 @@ def is_number(obj) -> bool:
>>> pd.api.types.is_number("5")
False
"""
-
return isinstance(obj, (Number, np.number))
@@ -91,7 +90,6 @@ def _iterable_not_string(obj) -> bool:
>>> _iterable_not_string(1)
False
"""
-
return isinstance(obj, abc.Iterable) and not isinstance(obj, str)
@@ -124,7 +122,6 @@ def is_file_like(obj) -> bool:
>>> is_file_like([1, 2, 3])
False
"""
-
if not (hasattr(obj, "read") or hasattr(obj, "write")):
return False
@@ -177,7 +174,6 @@ def is_re_compilable(obj) -> bool:
>>> is_re_compilable(1)
False
"""
-
try:
re.compile(obj)
except TypeError:
@@ -215,7 +211,6 @@ def is_array_like(obj) -> bool:
>>> is_array_like(("a", "b"))
False
"""
-
return is_list_like(obj) and hasattr(obj, "dtype")
@@ -321,7 +316,6 @@ def is_named_tuple(obj) -> bool:
>>> is_named_tuple((1, 2))
False
"""
-
return isinstance(obj, tuple) and hasattr(obj, "_fields")
@@ -386,7 +380,6 @@ def is_sequence(obj) -> bool:
>>> is_sequence(iter(l))
False
"""
-
try:
iter(obj) # Can iterate over it.
len(obj) # Has a length associated with it.
diff --git a/pandas/core/dtypes/missing.py b/pandas/core/dtypes/missing.py
index 0bc754b3e8fb3..ee74b02af9516 100644
--- a/pandas/core/dtypes/missing.py
+++ b/pandas/core/dtypes/missing.py
@@ -430,7 +430,6 @@ def array_equivalent(left, right, strict_nan: bool = False) -> bool:
... np.array([1, 2, np.nan]))
False
"""
-
left, right = np.asarray(left), np.asarray(right)
# shape compat
@@ -504,7 +503,6 @@ def _infer_fill_value(val):
scalar/ndarray/list-like if we are a NaT, return the correct dtyped
element to provide proper block construction
"""
-
if not is_list_like(val):
val = [val]
val = np.array(val, copy=False)
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 0fca02f110031..99568d47b777a 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -5013,7 +5013,6 @@ def sort_index(
sorted_obj : DataFrame or None
DataFrame with sorted index if inplace=False, None otherwise.
"""
-
# TODO: this can be combined with Series.sort_index impl as
# almost identical
@@ -7040,7 +7039,6 @@ def applymap(self, func) -> "DataFrame":
0 1.000000 4.494400
1 11.262736 20.857489
"""
-
# if we have a dtype == 'M8[ns]', provide boxed values
def infer(x):
if x.empty:
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index dfafb1057a543..480b03a956356 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -259,7 +259,6 @@ def attrs(self, value: Mapping[Optional[Hashable], Any]) -> None:
def _validate_dtype(self, dtype):
""" validate the passed dtype """
-
if dtype is not None:
dtype = pandas_dtype(dtype)
@@ -351,7 +350,6 @@ def _construct_axes_from_arguments(
supplied; useful to distinguish when a user explicitly passes None
in scenarios where None has special meaning.
"""
-
# construct the args
args = list(args)
for a in self._AXIS_ORDERS:
@@ -2246,7 +2244,6 @@ def to_json(
"data": [{"index": "row 1", "col 1": "a", "col 2": "b"},
{"index": "row 2", "col 1": "c", "col 2": "d"}]}'
"""
-
from pandas.io import json
if date_format is None and orient == "table":
@@ -3082,7 +3079,6 @@ def to_csv(
>>> df.to_csv('out.zip', index=False,
... compression=compression_opts) # doctest: +SKIP
"""
-
df = self if isinstance(self, ABCDataFrame) else self.to_frame()
from pandas.io.formats.csvs import CSVFormatter
@@ -3161,7 +3157,6 @@ def _maybe_update_cacher(
verify_is_copy : bool, default True
Provide is_copy checks.
"""
-
cacher = getattr(self, "_cacher", None)
if cacher is not None:
ref = cacher[1]()
@@ -3575,7 +3570,6 @@ def _check_setitem_copy(self, stacklevel=4, t="setting", force=False):
df.iloc[0:5]['group'] = 'a'
"""
-
# return early if the check is not needed
if not (force or self._is_copy):
return
@@ -4417,7 +4411,6 @@ def _reindex_with_indexers(
allow_dups: bool_t = False,
) -> FrameOrSeries:
"""allow_dups indicates an internal call here """
-
# reindex doing multiple operations on different axes if indicated
new_data = self._data
for axis in sorted(reindexers.keys()):
@@ -4613,7 +4606,6 @@ def head(self: FrameOrSeries, n: int = 5) -> FrameOrSeries:
4 monkey
5 parrot
"""
-
return self.iloc[:n]
def tail(self: FrameOrSeries, n: int = 5) -> FrameOrSeries:
@@ -4686,7 +4678,6 @@ def tail(self: FrameOrSeries, n: int = 5) -> FrameOrSeries:
7 whale
8 zebra
"""
-
if n == 0:
return self.iloc[0:0]
return self.iloc[-n:]
@@ -4801,7 +4792,6 @@ def sample(
falcon 2 2 10
fish 0 0 8
"""
-
if axis is None:
axis = self._stat_axis_number
@@ -5087,7 +5077,6 @@ def __getattr__(self, name: str):
"""After regular attribute access, try looking up the name.
This allows simpler access to columns for interactive use.
"""
-
# Note: obj.x will always call obj.__getattribute__('x') prior to
# calling obj.__getattr__('x').
@@ -5106,7 +5095,6 @@ def __setattr__(self, name: str, value) -> None:
"""After regular attribute access, try setting the name.
This allows simpler access to columns for interactive use.
"""
-
# first try regular attribute access via __getattribute__, so that
# e.g. ``obj.x`` and ``obj.x = 4`` will always reference/modify
# the same attribute.
@@ -5209,7 +5197,6 @@ def _is_numeric_mixed_type(self):
def _check_inplace_setting(self, value) -> bool_t:
""" check whether we allow in-place setting with this type of value """
-
if self._is_mixed_type:
if not self._is_numeric_mixed_type:
@@ -7916,7 +7903,6 @@ def resample(
2000-01-03 32 150
2000-01-04 36 90
"""
-
from pandas.core.resample import get_resampler
axis = self._get_axis_number(axis)
@@ -8930,7 +8916,6 @@ def tshift(
attributes of the index. If neither of those attributes exist, a
ValueError is thrown
"""
-
index = self._get_axis(axis)
if freq is None:
freq = getattr(index, "freq", None)
@@ -9919,7 +9904,6 @@ def _add_numeric_operations(cls):
"""
Add the operations to the cls; evaluate the doc strings again
"""
-
axis_descr, name, name2 = _doc_parms(cls)
cls.any = _make_logical_function(
@@ -10157,7 +10141,6 @@ def _add_series_or_dataframe_operations(cls):
Add the series or dataframe only operations to the cls; evaluate
the doc strings again.
"""
-
from pandas.core.window import EWM, Expanding, Rolling, Window
@Appender(Rolling.__doc__)
@@ -10271,7 +10254,6 @@ def _find_valid_index(self, how: str):
-------
idx_first_valid : type of index
"""
-
idxpos = find_valid_index(self._values, how)
if idxpos is None:
return None
diff --git a/pandas/core/groupby/categorical.py b/pandas/core/groupby/categorical.py
index 399ed9ddc9ba1..c71ebee397bbd 100644
--- a/pandas/core/groupby/categorical.py
+++ b/pandas/core/groupby/categorical.py
@@ -41,7 +41,6 @@ def recode_for_groupby(c: Categorical, sort: bool, observed: bool):
Categorical or None
If we are observed, return the original categorical, otherwise None
"""
-
# we only care about observed values
if observed:
unique_codes = unique1d(c.codes)
@@ -90,7 +89,6 @@ def recode_from_groupby(c: Categorical, sort: bool, ci):
-------
CategoricalIndex
"""
-
# we re-order to the original category orderings
if sort:
return ci.set_categories(c.categories)
diff --git a/pandas/core/groupby/generic.py b/pandas/core/groupby/generic.py
index f194c774cf329..37b6429167646 100644
--- a/pandas/core/groupby/generic.py
+++ b/pandas/core/groupby/generic.py
@@ -1571,7 +1571,6 @@ def filter(self, func, dropna=True, *args, **kwargs):
3 bar 4 1.0
5 bar 6 9.0
"""
-
indices = []
obj = self._selected_obj
@@ -1626,7 +1625,6 @@ def _gotitem(self, key, ndim: int, subset=None):
subset : object, default None
subset to act on
"""
-
if ndim == 2:
if subset is None:
subset = self.obj
@@ -1844,7 +1842,6 @@ def nunique(self, dropna: bool = True):
4 ham 5 x
5 ham 5 y
"""
-
obj = self._selected_obj
def groupby_series(obj, col=None):
diff --git a/pandas/core/groupby/groupby.py b/pandas/core/groupby/groupby.py
index 153bf386d4f33..426b3b47d9530 100644
--- a/pandas/core/groupby/groupby.py
+++ b/pandas/core/groupby/groupby.py
@@ -1174,7 +1174,6 @@ def count(self):
Series or DataFrame
Count of values within each group.
"""
-
# defined here for API doc
raise NotImplementedError
@@ -1277,7 +1276,6 @@ def std(self, ddof: int = 1):
Series or DataFrame
Standard deviation of values within each group.
"""
-
# TODO: implement at Cython level?
return np.sqrt(self.var(ddof=ddof))
@@ -1458,7 +1456,6 @@ def ohlc(self) -> DataFrame:
DataFrame
Open, high, low and close values within each group.
"""
-
return self._apply_to_column_groupbys(lambda x: x._cython_agg_general("ohlc"))
@Appender(DataFrame.describe.__doc__)
@@ -1764,7 +1761,6 @@ def nth(self, n: Union[int, List[int]], dropna: Optional[str] = None) -> DataFra
1 1 2.0
4 2 5.0
"""
-
valid_containers = (set, list, tuple)
if not isinstance(n, (valid_containers, int)):
raise TypeError("n needs to be an int or a list/set/tuple of ints")
@@ -2034,7 +2030,6 @@ def ngroup(self, ascending: bool = True):
5 0
dtype: int64
"""
-
with _group_selection_context(self):
index = self._selected_obj.index
result = Series(self.grouper.group_info[0], index)
@@ -2095,7 +2090,6 @@ def cumcount(self, ascending: bool = True):
5 0
dtype: int64
"""
-
with _group_selection_context(self):
index = self._selected_obj.index
cumcounts = self._cumcount_array(ascending=ascending)
@@ -2348,7 +2342,6 @@ def shift(self, periods=1, freq=None, axis=0, fill_value=None):
Series or DataFrame
Object shifted within each group.
"""
-
if freq is not None or axis != 0 or not isna(fill_value):
return self.apply(lambda x: x.shift(periods, freq, axis, fill_value))
diff --git a/pandas/core/groupby/grouper.py b/pandas/core/groupby/grouper.py
index f0c6eedf5cee4..8a42a8fa297cd 100644
--- a/pandas/core/groupby/grouper.py
+++ b/pandas/core/groupby/grouper.py
@@ -130,7 +130,6 @@ def _get_grouper(self, obj, validate: bool = True):
-------
a tuple of binner, grouper, obj (possibly sorted)
"""
-
self._set_grouper(obj)
self.grouper, _, self.obj = get_grouper(
self.obj,
diff --git a/pandas/core/groupby/ops.py b/pandas/core/groupby/ops.py
index 4e593ce543ea6..63087672d1365 100644
--- a/pandas/core/groupby/ops.py
+++ b/pandas/core/groupby/ops.py
@@ -433,7 +433,6 @@ def _cython_operation(
Names are only useful when dealing with 2D results, like ohlc
(see self._name_functions).
"""
-
assert kind in ["transform", "aggregate"]
orig_values = values
@@ -748,7 +747,6 @@ def __init__(
@cache_readonly
def groups(self):
""" dict {group name -> group labels} """
-
# this is mainly for compat
# GH 3881
result = {
diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index 719bf13cbd313..f3bae63aa7e03 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -912,7 +912,6 @@ def _format_data(self, name=None) -> str_t:
"""
Return the formatted data as a unicode string.
"""
-
# do we want to justify (only do so for non-objects)
is_justify = True
@@ -1003,7 +1002,6 @@ def to_native_types(self, slicer=None, **kwargs):
numpy.ndarray
Formatted values.
"""
-
values = self
if slicer is not None:
values = values[slicer]
@@ -1092,7 +1090,6 @@ def to_series(self, index=None, name=None):
Series
The dtype will be based on the type of the Index values.
"""
-
from pandas import Series
if index is None:
@@ -1153,7 +1150,6 @@ def to_frame(self, index: bool = True, name=None):
1 Bear
2 Cow
"""
-
from pandas import DataFrame
if name is None:
@@ -1294,7 +1290,6 @@ def set_names(self, names, level=None, inplace: bool = False):
( 'cobra', 2019)],
names=['species', 'year'])
"""
-
if level is not None and not isinstance(self, ABCMultiIndex):
raise ValueError("Level must be None for non-MultiIndex")
@@ -2548,7 +2543,6 @@ def _union(self, other, sort):
-------
Index
"""
-
if not len(other) or self.equals(other):
return self._get_reconciled_name_object(other)
@@ -3306,7 +3300,6 @@ def _can_reindex(self, indexer):
------
ValueError if it's a duplicate axis
"""
-
# trying to reindex on an axis with duplicates
if not self.is_unique and len(indexer):
raise ValueError("cannot reindex from a duplicate axis")
@@ -3391,7 +3384,6 @@ def _reindex_non_unique(self, target):
Indices of output values in original index.
"""
-
target = ensure_index(target)
indexer, missing = self.get_indexer_non_unique(target)
check = indexer != -1
@@ -4182,7 +4174,6 @@ def append(self, other):
-------
appended : Index
"""
-
to_concat = [self]
if isinstance(other, (list, tuple)):
@@ -4725,7 +4716,6 @@ def groupby(self, values) -> PrettyDict[Hashable, np.ndarray]:
dict
{group name -> group labels}
"""
-
# TODO: if we are a MultiIndex, we can do better
# that converting to tuples
if isinstance(values, ABCMultiIndex):
@@ -4757,7 +4747,6 @@ def map(self, mapper, na_action=None):
If the function returns a tuple with more than one element
a MultiIndex will be returned.
"""
-
from pandas.core.indexes.multi import MultiIndex
new_values = super()._map_values(mapper, na_action=na_action)
@@ -4923,7 +4912,6 @@ def _maybe_cast_indexer(self, key):
If we have a float key and are not a floating index, then try to cast
to an int if equivalent.
"""
-
if not self.is_floating():
return com.cast_scalar_indexer(key)
return key
@@ -5740,7 +5728,6 @@ def _try_convert_to_int_array(
------
ValueError if the conversion was not successful.
"""
-
if not is_unsigned_integer_dtype(dtype):
# skip int64 conversion attempt if uint-like dtype is passed, as
# this could return Int64Index when UInt64Index is what's desired
diff --git a/pandas/core/indexes/category.py b/pandas/core/indexes/category.py
index 7373f41daefa4..bb62d500311df 100644
--- a/pandas/core/indexes/category.py
+++ b/pandas/core/indexes/category.py
@@ -215,7 +215,6 @@ def _create_from_codes(self, codes, dtype=None, name=None):
-------
CategoricalIndex
"""
-
if dtype is None:
dtype = self.dtype
if name is None:
diff --git a/pandas/core/indexes/datetimelike.py b/pandas/core/indexes/datetimelike.py
index 941b6c876bb36..d505778d18c52 100644
--- a/pandas/core/indexes/datetimelike.py
+++ b/pandas/core/indexes/datetimelike.py
@@ -386,7 +386,6 @@ def _convert_scalar_indexer(self, key, kind: str):
key : label of the slice bound
kind : {'loc', 'getitem'}
"""
-
assert kind in ["loc", "getitem"]
if not is_scalar(key):
@@ -556,7 +555,6 @@ def _concat_same_dtype(self, to_concat, name):
"""
Concatenate to_concat which has the same class.
"""
-
new_data = type(self._data)._concat_same_type(to_concat)
return self._simple_new(new_data, name=name)
diff --git a/pandas/core/indexes/datetimes.py b/pandas/core/indexes/datetimes.py
index b67d0dcea0ac6..e303e487b1a7d 100644
--- a/pandas/core/indexes/datetimes.py
+++ b/pandas/core/indexes/datetimes.py
@@ -939,7 +939,6 @@ def date_range(
DatetimeIndex(['2017-01-02', '2017-01-03', '2017-01-04'],
dtype='datetime64[ns]', freq='D')
"""
-
if freq is None and com.any_none(periods, start, end):
freq = "D"
diff --git a/pandas/core/indexes/extension.py b/pandas/core/indexes/extension.py
index 04b4b275bf90a..daccb35864e98 100644
--- a/pandas/core/indexes/extension.py
+++ b/pandas/core/indexes/extension.py
@@ -39,7 +39,6 @@ def inherit_from_data(name: str, delegate, cache: bool = False, wrap: bool = Fal
-------
attribute, method, property, or cache_readonly
"""
-
attr = getattr(delegate, name)
if isinstance(attr, property):
diff --git a/pandas/core/indexes/interval.py b/pandas/core/indexes/interval.py
index 9c4cd6cf72d35..6ea4250e4acf4 100644
--- a/pandas/core/indexes/interval.py
+++ b/pandas/core/indexes/interval.py
@@ -550,7 +550,6 @@ def _can_reindex(self, indexer: np.ndarray) -> None:
------
ValueError if it's a duplicate axis
"""
-
# trying to reindex on an axis with duplicates
if self.is_overlapping and len(indexer):
raise ValueError("cannot reindex from an overlapping axis")
diff --git a/pandas/core/indexes/multi.py b/pandas/core/indexes/multi.py
index ac151daac951a..e560cdb150a1b 100644
--- a/pandas/core/indexes/multi.py
+++ b/pandas/core/indexes/multi.py
@@ -139,7 +139,6 @@ def _codes_to_ints(self, codes):
int, or 1-dimensional array of dtype object
Integer(s) representing one combination (each).
"""
-
# Shift the representation of each level by the pre-calculated number
# of bits. Since this can overflow uint64, first make sure we are
# working with Python integers:
@@ -1115,7 +1114,6 @@ def _nbytes(self, deep: bool = False) -> int:
*this is an internal routine*
"""
-
# for implementations with no useful getsizeof (PyPy)
objsize = 24
@@ -1405,7 +1403,6 @@ def is_monotonic_increasing(self) -> bool:
return if the index is monotonic increasing (only equal or
increasing) values.
"""
-
if all(x.is_monotonic for x in self.levels):
# If each level is sorted, we can operate on the codes directly. GH27495
return libalgos.is_lexsorted(
@@ -1466,7 +1463,6 @@ def _hashed_indexing_key(self, key):
-----
we need to stringify if we have mixed levels
"""
-
if not isinstance(key, tuple):
return hash_tuples(key)
@@ -1526,7 +1522,6 @@ def _get_level_values(self, level, unique=False):
-------
values : ndarray
"""
-
lev = self.levels[level]
level_codes = self.codes[level]
name = self._names[level]
@@ -1609,7 +1604,6 @@ def to_frame(self, index=True, name=None):
--------
DataFrame
"""
-
from pandas import DataFrame
if name is not None:
@@ -1736,7 +1730,6 @@ def _sort_levels_monotonic(self):
('b', 'bb')],
)
"""
-
if self.is_lexsorted() and self.is_monotonic:
return self
@@ -1805,7 +1798,6 @@ def remove_unused_levels(self):
>>> mi2.levels
FrozenList([[1], ['a', 'b']])
"""
-
new_levels = []
new_codes = []
@@ -1870,7 +1862,6 @@ def __reduce__(self):
def __setstate__(self, state):
"""Necessary for making this object picklable"""
-
if isinstance(state, dict):
levels = state.get("levels")
codes = state.get("codes")
@@ -2486,7 +2477,6 @@ def get_slice_bound(
MultiIndex.get_locs : Get location for a label/slice/list/mask or a
sequence of such.
"""
-
if not isinstance(label, tuple):
label = (label,)
return self._partial_tup_index(label, side=side)
@@ -2596,7 +2586,6 @@ def _get_loc_single_level_index(self, level_index: Index, key: Hashable) -> int:
--------
Index.get_loc : The get_loc method for (single-level) index.
"""
-
if is_scalar(key) and isna(key):
return -1
else:
@@ -2751,7 +2740,6 @@ def get_loc_level(self, key, level=0, drop_level: bool = True):
>>> mi.get_loc_level(['b', 'e'])
(1, None)
"""
-
# different name to distinguish from maybe_droplevels
def maybe_mi_droplevels(indexer, levels, drop_level: bool):
if not drop_level:
diff --git a/pandas/core/internals/blocks.py b/pandas/core/internals/blocks.py
index 536aa53c95fba..8e0f96a1dac7b 100644
--- a/pandas/core/internals/blocks.py
+++ b/pandas/core/internals/blocks.py
@@ -462,7 +462,6 @@ def split_and_operate(self, mask, f, inplace: bool):
-------
list of blocks
"""
-
if mask is None:
mask = np.broadcast_to(True, shape=self.shape)
@@ -519,7 +518,6 @@ def _maybe_downcast(self, blocks: List["Block"], downcast=None) -> List["Block"]
def downcast(self, dtypes=None):
""" try to downcast each item to the dict of dtypes if present """
-
# turn it off completely
if dtypes is False:
return self
@@ -663,7 +661,6 @@ def convert(
of the block (if copy = True) by definition we are not an ObjectBlock
here!
"""
-
return self.copy() if copy else self
def _can_hold_element(self, element: Any) -> bool:
@@ -709,7 +706,6 @@ def replace(
blocks here this is just a call to putmask. regex is not used here.
It is used in ObjectBlocks. It is here for API compatibility.
"""
-
inplace = validate_bool_kwarg(inplace, "inplace")
original_to_replace = to_replace
@@ -945,7 +941,6 @@ def putmask(self, mask, new, align=True, inplace=False, axis=0, transpose=False)
-------
a list of new blocks, the result of the putmask
"""
-
new_values = self.values if inplace else self.values.copy()
new = getattr(new, "values", new)
@@ -1055,7 +1050,6 @@ def coerce_to_target_dtype(self, other):
we can also safely try to coerce to the same dtype
and will receive the same block
"""
-
# if we cannot then coerce to object
dtype, _ = infer_dtype_from(other, pandas_dtype=True)
@@ -1188,7 +1182,6 @@ def _interpolate_with_fill(
downcast=None,
):
""" fillna but using the interpolate machinery """
-
inplace = validate_bool_kwarg(inplace, "inplace")
# if we are coercing, then don't force the conversion
@@ -1232,7 +1225,6 @@ def _interpolate(
**kwargs,
):
""" interpolate using scipy wrappers """
-
inplace = validate_bool_kwarg(inplace, "inplace")
data = self.values if inplace else self.values.copy()
@@ -1280,7 +1272,6 @@ def take_nd(self, indexer, axis, new_mgr_locs=None, fill_tuple=None):
Take values according to indexer and return them as a block.
"""
-
# algos.take_nd dispatches for DatetimeTZBlock, CategoricalBlock
# so need to preserve types
# sparse is treated like an ndarray, but needs .get_values() shaping
@@ -1319,7 +1310,6 @@ def diff(self, n: int, axis: int = 1) -> List["Block"]:
def shift(self, periods, axis=0, fill_value=None):
""" shift the block by periods, possibly upcast """
-
# convert integer to float if necessary. need to do a lot more than
# that, handle boolean etc also
new_values, fill_value = maybe_upcast(self.values, fill_value)
@@ -1579,7 +1569,6 @@ def _replace_coerce(
-------
A new block if there is anything to replace or the original block.
"""
-
if mask.any():
if not regex:
self = self.coerce_to_target_dtype(value)
@@ -1861,7 +1850,6 @@ def _can_hold_element(self, element: Any) -> bool:
def _slice(self, slicer):
""" return a slice of my values """
-
# slice the category
# return same dims as we currently have
@@ -2064,7 +2052,6 @@ def to_native_types(
**kwargs,
):
""" convert to our native types format, slicing if desired """
-
values = self.values
if slicer is not None:
values = values[:, slicer]
@@ -2251,7 +2238,6 @@ def to_native_types(
self, slicer=None, na_rep=None, date_format=None, quoting=None, **kwargs
):
""" convert to our native types format, slicing if desired """
-
values = self.values
i8values = self.values.view("i8")
@@ -2529,7 +2515,6 @@ def should_store(self, value):
def to_native_types(self, slicer=None, na_rep=None, quoting=None, **kwargs):
""" convert to our native types format, slicing if desired """
-
values = self.values
if slicer is not None:
values = values[:, slicer]
@@ -2622,7 +2607,6 @@ def convert(
can return multiple blocks!
"""
-
# operate column-by-column
def f(mask, val, idx):
shape = val.shape
@@ -2924,7 +2908,6 @@ def to_dense(self):
def to_native_types(self, slicer=None, na_rep="", quoting=None, **kwargs):
""" convert to our native types format, slicing if desired """
-
values = self.values
if slicer is not None:
# Categorical is always one dimension
@@ -3060,7 +3043,6 @@ def make_block(values, placement, klass=None, ndim=None, dtype=None):
def _extend_blocks(result, blocks=None):
""" return a new extended blocks, given the result """
-
if blocks is None:
blocks = []
if isinstance(result, list):
@@ -3156,7 +3138,6 @@ def _putmask_smart(v, mask, n):
--------
ndarray.putmask
"""
-
# we cannot use np.asarray() here as we cannot have conversions
# that numpy does when numeric are mixed with strings
diff --git a/pandas/core/internals/concat.py b/pandas/core/internals/concat.py
index c75373b82305c..fdb57562e46ad 100644
--- a/pandas/core/internals/concat.py
+++ b/pandas/core/internals/concat.py
@@ -409,7 +409,6 @@ def _trim_join_unit(join_unit, length):
Extra items that didn't fit are returned as a separate block.
"""
-
if 0 not in join_unit.indexers:
extra_indexers = join_unit.indexers
diff --git a/pandas/core/internals/construction.py b/pandas/core/internals/construction.py
index 798386825d802..9dd4312a39525 100644
--- a/pandas/core/internals/construction.py
+++ b/pandas/core/internals/construction.py
@@ -78,7 +78,6 @@ def masked_rec_array_to_mgr(data, index, columns, dtype, copy: bool):
"""
Extract from a masked rec array and create the manager.
"""
-
# essentially process a record array then fill it
fill_value = data.fill_value
fdata = ma.getdata(data)
@@ -555,7 +554,6 @@ def _list_of_dict_to_arrays(data, columns, coerce_float=False, dtype=None):
tuple
arrays, columns
"""
-
if columns is None:
gen = (list(x.keys()) for x in data)
sort = not any(isinstance(d, dict) for d in data)
@@ -603,7 +601,6 @@ def sanitize_index(data, index: Index):
Sanitize an index type to return an ndarray of the underlying, pass
through a non-Index.
"""
-
if len(data) != len(index):
raise ValueError("Length of values does not match length of index")
diff --git a/pandas/core/internals/managers.py b/pandas/core/internals/managers.py
index 08ae0b02169d4..7f8fdc886313e 100644
--- a/pandas/core/internals/managers.py
+++ b/pandas/core/internals/managers.py
@@ -357,7 +357,6 @@ def apply(self, f, filter=None, **kwargs):
-------
BlockManager
"""
-
result_blocks = []
# filter kwarg is used in replace-* family of methods
@@ -453,7 +452,6 @@ def quantile(
-------
Block Manager (new object)
"""
-
# Series dispatches to DataFrame for quantile, which allows us to
# simplify some of the code here and in the blocks
assert self.ndim >= 2
@@ -569,7 +567,6 @@ def replace(self, value, **kwargs):
def replace_list(self, src_list, dest_list, inplace=False, regex=False):
""" do a list replace """
-
inplace = validate_bool_kwarg(inplace, "inplace")
# figure out our mask a-priori to avoid repeated replacements
@@ -1246,7 +1243,6 @@ def _slice_take_blocks_ax0(self, slice_or_indexer, fill_tuple=None):
-------
new_blocks : list of Block
"""
-
allow_fill = fill_tuple is not None
sl_type, slobj, sllen = _preprocess_slice_or_indexer(
@@ -1777,7 +1773,6 @@ def _simple_blockify(tuples, dtype):
def _multi_blockify(tuples, dtype=None):
""" return an array of blocks that potentially have different dtypes """
-
# group by dtype
grouper = itertools.groupby(tuples, lambda x: x[2].dtype)
@@ -1843,7 +1838,6 @@ def _consolidate(blocks):
"""
Merge blocks having same dtype, exclude non-consolidating blocks
"""
-
# sort by _can_consolidate, dtype
gkey = lambda x: x._consolidate_key
grouper = itertools.groupby(sorted(blocks, key=gkey), gkey)
diff --git a/pandas/core/nanops.py b/pandas/core/nanops.py
index 2bf2be082f639..422afd061762b 100644
--- a/pandas/core/nanops.py
+++ b/pandas/core/nanops.py
@@ -224,7 +224,6 @@ def _maybe_get_mask(
-------
Optional[np.ndarray]
"""
-
if mask is None:
if is_bool_dtype(values.dtype) or is_integer_dtype(values.dtype):
# Boolean data cannot contain nulls, so signal via mask being None
@@ -279,7 +278,6 @@ def _get_values(
fill_value : Any
fill value used
"""
-
# In _get_values is only called from within nanops, and in all cases
# with scalar fill_value. This guarantee is important for the
# maybe_upcast_putmask call below
@@ -338,7 +336,6 @@ def _na_ok_dtype(dtype) -> bool:
def _wrap_results(result, dtype: Dtype, fill_value=None):
""" wrap our results if needed """
-
if is_datetime64_dtype(dtype) or is_datetime64tz_dtype(dtype):
if fill_value is None:
# GH#24293
@@ -833,7 +830,6 @@ def nansem(
>>> nanops.nansem(s)
0.5773502691896258
"""
-
# This checks if non-numeric-like data is passed with numeric_only=False
# and raises a TypeError otherwise
nanvar(values, axis, skipna, ddof=ddof, mask=mask)
diff --git a/pandas/core/ops/__init__.py b/pandas/core/ops/__init__.py
index 0312c11a6d590..f3c1a609d50a1 100644
--- a/pandas/core/ops/__init__.py
+++ b/pandas/core/ops/__init__.py
@@ -254,7 +254,6 @@ def _get_opstr(op):
-------
op_str : string or None
"""
-
return {
operator.add: "+",
radd: "+",
@@ -430,7 +429,6 @@ def column_op(a, b):
def _align_method_SERIES(left, right, align_asobject=False):
""" align lhs and rhs Series """
-
# ToDo: Different from _align_method_FRAME, list, tuple and ndarray
# are not coerced here
# because Series has inconsistencies described in #13637
diff --git a/pandas/core/ops/array_ops.py b/pandas/core/ops/array_ops.py
index 3302ed9c219e6..5d53856729d0c 100644
--- a/pandas/core/ops/array_ops.py
+++ b/pandas/core/ops/array_ops.py
@@ -175,7 +175,6 @@ def arithmetic_op(
ndarrray or ExtensionArray
Or a 2-tuple of these in the case of divmod or rdivmod.
"""
-
from pandas.core.ops import maybe_upcast_for_op
# NB: We assume that extract_array has already been called
@@ -218,7 +217,6 @@ def comparison_op(
-------
ndarrray or ExtensionArray
"""
-
# NB: We assume extract_array has already been called on left and right
lvalues = left
rvalues = right
@@ -322,7 +320,6 @@ def logical_op(
-------
ndarrray or ExtensionArray
"""
-
fill_int = lambda x: x
def fill_bool(x, left=None):
diff --git a/pandas/core/ops/common.py b/pandas/core/ops/common.py
index f4b16cf4a0cf2..5c83591b0e71e 100644
--- a/pandas/core/ops/common.py
+++ b/pandas/core/ops/common.py
@@ -43,7 +43,6 @@ def _unpack_zerodim_and_defer(method, name: str):
-------
method
"""
-
is_cmp = name.strip("__") in {"eq", "ne", "lt", "le", "gt", "ge"}
@wraps(method)
diff --git a/pandas/core/resample.py b/pandas/core/resample.py
index 94ff1f0056663..98910a9baf962 100644
--- a/pandas/core/resample.py
+++ b/pandas/core/resample.py
@@ -183,7 +183,6 @@ def _get_binner(self):
Create the BinGrouper, assume that self.set_grouper(obj)
has already been called.
"""
-
binner, bins, binlabels = self._get_binner_for_time()
assert len(bins) == len(binlabels)
bin_grouper = BinGrouper(bins, binlabels, indexer=self.groupby.indexer)
@@ -345,7 +344,6 @@ def _groupby_and_aggregate(self, how, grouper=None, *args, **kwargs):
"""
Re-evaluate the obj with a groupby aggregation.
"""
-
if grouper is None:
self._set_binner()
grouper = self.grouper
@@ -397,7 +395,6 @@ def _apply_loffset(self, result):
result : Series or DataFrame
the result of resample
"""
-
needs_offset = (
isinstance(self.loffset, (DateOffset, timedelta, np.timedelta64))
and isinstance(result.index, DatetimeIndex)
@@ -1158,7 +1155,6 @@ def _downsample(self, how, **kwargs):
how : string / cython mapped function
**kwargs : kw args passed to how function
"""
-
# we may need to actually resample as if we are timestamps
if self.kind == "timestamp":
return super()._downsample(how, **kwargs)
@@ -1202,7 +1198,6 @@ def _upsample(self, method, limit=None, fill_value=None):
.fillna
"""
-
# we may need to actually resample as if we are timestamps
if self.kind == "timestamp":
return super()._upsample(method, limit=limit, fill_value=fill_value)
@@ -1277,7 +1272,6 @@ def get_resampler_for_grouping(
"""
Return our appropriate resampler when grouping as well.
"""
-
# .resample uses 'on' similar to how .groupby uses 'key'
kwargs["key"] = kwargs.pop("on", None)
diff --git a/pandas/core/reshape/merge.py b/pandas/core/reshape/merge.py
index 480c5279ad3f6..49ac1b6cfa52b 100644
--- a/pandas/core/reshape/merge.py
+++ b/pandas/core/reshape/merge.py
@@ -108,7 +108,6 @@ def _groupby_and_merge(
check_duplicates: bool, default True
should we check & clean duplicates
"""
-
pieces = []
if not isinstance(by, (list, tuple)):
by = [by]
diff --git a/pandas/core/reshape/pivot.py b/pandas/core/reshape/pivot.py
index b047e163c5565..b04e4e1ac4d48 100644
--- a/pandas/core/reshape/pivot.py
+++ b/pandas/core/reshape/pivot.py
@@ -567,7 +567,6 @@ def crosstab(
b 0 1 0
c 0 0 0
"""
-
index = com.maybe_make_list(index)
columns = com.maybe_make_list(columns)
diff --git a/pandas/core/reshape/tile.py b/pandas/core/reshape/tile.py
index a18b45a077be0..e499158a13b0c 100644
--- a/pandas/core/reshape/tile.py
+++ b/pandas/core/reshape/tile.py
@@ -513,7 +513,6 @@ def _format_labels(
bins, precision: int, right: bool = True, include_lowest: bool = False, dtype=None
):
""" based on the dtype, return our labels """
-
closed = "right" if right else "left"
if is_datetime64tz_dtype(dtype):
@@ -544,7 +543,6 @@ def _preprocess_for_cut(x):
input to array, strip the index information and store it
separately
"""
-
# Check that the passed array is a Pandas or Numpy object
# We don't want to strip away a Pandas data-type here (e.g. datetimetz)
ndim = getattr(x, "ndim", None)
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 7d74d32bf5e14..1f2ea9990c90f 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -396,7 +396,6 @@ def _set_axis(self, axis, labels, fastpath: bool = False) -> None:
"""
Override generic, we want to set the _typ here.
"""
-
if not fastpath:
labels = ensure_index(labels)
@@ -540,7 +539,6 @@ def _internal_get_values(self):
numpy.ndarray
Data of the Series.
"""
-
return self._data.get_values()
# ops
@@ -1402,7 +1400,6 @@ def to_string(
str or None
String representation of Series if ``buf=None``, otherwise None.
"""
-
formatter = fmt.SeriesFormatter(
self,
name=name,
@@ -2171,7 +2168,6 @@ def quantile(self, q=0.5, interpolation="linear"):
0.75 3.25
dtype: float64
"""
-
validate_percentile(q)
# We dispatch to DataFrame so that core.internals only has to worry
@@ -2583,7 +2579,6 @@ def _binop(self, other, func, level=None, fill_value=None):
-------
Series
"""
-
if not isinstance(other, Series):
raise AssertionError("Other operand must be Series")
diff --git a/pandas/core/sorting.py b/pandas/core/sorting.py
index 51c154aa47518..5496eca46b992 100644
--- a/pandas/core/sorting.py
+++ b/pandas/core/sorting.py
@@ -376,7 +376,6 @@ def compress_group_index(group_index, sort: bool = True):
space can be huge, so this function compresses it, by computing offsets
(comp_ids) into the list of unique labels (obs_group_ids).
"""
-
size_hint = min(len(group_index), hashtable._SIZE_HINT_LIMIT)
table = hashtable.Int64HashTable(size_hint)
diff --git a/pandas/core/strings.py b/pandas/core/strings.py
index a4648186477d6..3a7e3fdab5dca 100644
--- a/pandas/core/strings.py
+++ b/pandas/core/strings.py
@@ -687,7 +687,6 @@ def str_replace(arr, pat, repl, n=-1, case=None, flags=0, regex=True):
2 NaN
dtype: object
"""
-
# Check whether repl is valid (GH 13438, GH 15055)
if not (isinstance(repl, str) or callable(repl)):
raise TypeError("repl must be a string or callable")
@@ -1085,7 +1084,6 @@ def str_extractall(arr, pat, flags=0):
B 0 b 1
C 0 NaN 1
"""
-
regex = re.compile(pat, flags=flags)
# the regex must contain capture groups.
if regex.groups == 0:
@@ -1358,7 +1356,6 @@ def str_find(arr, sub, start=0, end=None, side="left"):
Series or Index
Indexes where substring is found.
"""
-
if not isinstance(sub, str):
msg = f"expected a string object, not {type(sub).__name__}"
raise TypeError(msg)
@@ -1930,7 +1927,6 @@ def forbid_nonstring_types(forbidden, name=None):
TypeError
If the inferred type of the underlying data is in `forbidden`.
"""
-
# deal with None
forbidden = [] if forbidden is None else forbidden
diff --git a/pandas/core/tools/timedeltas.py b/pandas/core/tools/timedeltas.py
index 3f0cfce39f6f9..1d933cf431b4b 100644
--- a/pandas/core/tools/timedeltas.py
+++ b/pandas/core/tools/timedeltas.py
@@ -111,7 +111,6 @@ def to_timedelta(arg, unit="ns", errors="raise"):
def _coerce_scalar_to_timedelta_type(r, unit="ns", errors="raise"):
"""Convert string 'r' to a timedelta object."""
-
try:
result = Timedelta(r, unit)
except ValueError:
@@ -128,7 +127,6 @@ def _coerce_scalar_to_timedelta_type(r, unit="ns", errors="raise"):
def _convert_listlike(arg, unit="ns", errors="raise", name=None):
"""Convert a list of objects to a timedelta index object."""
-
if isinstance(arg, (list, tuple)) or not hasattr(arg, "dtype"):
# This is needed only to ensure that in the case where we end up
# returning arg (errors == "ignore"), and where the input is a
diff --git a/pandas/core/util/hashing.py b/pandas/core/util/hashing.py
index 3366f10b92604..160d328ec16ec 100644
--- a/pandas/core/util/hashing.py
+++ b/pandas/core/util/hashing.py
@@ -269,7 +269,6 @@ def hash_array(
-------
1d uint64 numpy array of hash values, same length as the vals
"""
-
if not hasattr(vals, "dtype"):
raise TypeError("must pass a ndarray-like")
dtype = vals.dtype
@@ -340,7 +339,6 @@ def _hash_scalar(
-------
1d uint64 numpy array of hash value, of length 1
"""
-
if isna(val):
# this is to be consistent with the _hash_categorical implementation
return np.array([np.iinfo(np.uint64).max], dtype="u8")
diff --git a/pandas/core/window/numba_.py b/pandas/core/window/numba_.py
index 127957943d2ff..d6e8194c861fa 100644
--- a/pandas/core/window/numba_.py
+++ b/pandas/core/window/numba_.py
@@ -110,7 +110,6 @@ def generate_numba_apply_func(
-------
Numba function
"""
-
if engine_kwargs is None:
engine_kwargs = {}
diff --git a/pandas/core/window/rolling.py b/pandas/core/window/rolling.py
index 5c18796deb07a..f29cd428b7bad 100644
--- a/pandas/core/window/rolling.py
+++ b/pandas/core/window/rolling.py
@@ -149,7 +149,6 @@ def _create_blocks(self):
"""
Split data into blocks & return conformed data.
"""
-
obj = self._selected_obj
# filter out the on from the object
@@ -172,7 +171,6 @@ def _gotitem(self, key, ndim, subset=None):
subset : object, default None
subset to act on
"""
-
# create a new object to prevent aliasing
if subset is None:
subset = self.obj
@@ -238,7 +236,6 @@ def __repr__(self) -> str:
"""
Provide a nice str repr of our rolling object.
"""
-
attrs_list = (
f"{attr_name}={getattr(self, attr_name)}"
for attr_name in self._attributes
@@ -284,7 +281,6 @@ def _wrap_result(self, result, block=None, obj=None):
"""
Wrap a single result.
"""
-
if obj is None:
obj = self._selected_obj
index = obj.index
@@ -310,7 +306,6 @@ def _wrap_results(self, results, blocks, obj, exclude=None) -> FrameOrSeries:
obj : conformed data (may be resampled)
exclude: list of columns to exclude, default to None
"""
-
from pandas import Series, concat
final = []
@@ -1021,7 +1016,6 @@ def _get_window(
window : ndarray
the window, weights
"""
-
window = self.window
if isinstance(window, (list, tuple, np.ndarray)):
return com.asarray_tuplesafe(window).astype(float)
diff --git a/pandas/io/clipboard/__init__.py b/pandas/io/clipboard/__init__.py
index 6d76d7de407b1..f4bd14ad5c679 100644
--- a/pandas/io/clipboard/__init__.py
+++ b/pandas/io/clipboard/__init__.py
@@ -500,7 +500,6 @@ def determine_clipboard():
Determine the OS/platform and set the copy() and paste() functions
accordingly.
"""
-
global Foundation, AppKit, qtpy, PyQt4, PyQt5
# Setup for the CYGWIN platform:
diff --git a/pandas/io/common.py b/pandas/io/common.py
index c4772895afd1e..beb6c9d97aff3 100644
--- a/pandas/io/common.py
+++ b/pandas/io/common.py
@@ -296,7 +296,6 @@ def infer_compression(
------
ValueError on invalid compression specified.
"""
-
# No compression has been explicitly specified
if compression is None:
return None
diff --git a/pandas/io/excel/_openpyxl.py b/pandas/io/excel/_openpyxl.py
index be52523e486af..ab2d97e6026d1 100644
--- a/pandas/io/excel/_openpyxl.py
+++ b/pandas/io/excel/_openpyxl.py
@@ -51,7 +51,6 @@ def _convert_to_style(cls, style_dict):
----------
style_dict : style dictionary to convert
"""
-
from openpyxl.style import Style
xls_style = Style()
@@ -92,7 +91,6 @@ def _convert_to_style_kwargs(cls, style_dict):
value has been replaced with a native openpyxl style object of the
appropriate class.
"""
-
_style_key_map = {"borders": "border"}
style_kwargs = {}
@@ -128,7 +126,6 @@ def _convert_to_color(cls, color_spec):
-------
color : openpyxl.styles.Color
"""
-
from openpyxl.styles import Color
if isinstance(color_spec, str):
@@ -164,7 +161,6 @@ def _convert_to_font(cls, font_dict):
-------
font : openpyxl.styles.Font
"""
-
from openpyxl.styles import Font
_font_key_map = {
@@ -202,7 +198,6 @@ def _convert_to_stop(cls, stop_seq):
-------
stop : list of openpyxl.styles.Color
"""
-
return map(cls._convert_to_color, stop_seq)
@classmethod
@@ -230,7 +225,6 @@ def _convert_to_fill(cls, fill_dict):
-------
fill : openpyxl.styles.Fill
"""
-
from openpyxl.styles import PatternFill, GradientFill
_pattern_fill_key_map = {
@@ -286,7 +280,6 @@ def _convert_to_side(cls, side_spec):
-------
side : openpyxl.styles.Side
"""
-
from openpyxl.styles import Side
_side_key_map = {"border_style": "style"}
@@ -329,7 +322,6 @@ def _convert_to_border(cls, border_dict):
-------
border : openpyxl.styles.Border
"""
-
from openpyxl.styles import Border
_border_key_map = {"diagonalup": "diagonalUp", "diagonaldown": "diagonalDown"}
@@ -365,7 +357,6 @@ def _convert_to_alignment(cls, alignment_dict):
-------
alignment : openpyxl.styles.Alignment
"""
-
from openpyxl.styles import Alignment
return Alignment(**alignment_dict)
@@ -399,7 +390,6 @@ def _convert_to_protection(cls, protection_dict):
Returns
-------
"""
-
from openpyxl.styles import Protection
return Protection(**protection_dict)
diff --git a/pandas/io/excel/_xlrd.py b/pandas/io/excel/_xlrd.py
index e7a132b73e076..16f800a6de2c9 100644
--- a/pandas/io/excel/_xlrd.py
+++ b/pandas/io/excel/_xlrd.py
@@ -60,7 +60,6 @@ def _parse_cell(cell_contents, cell_typ):
"""
converts the contents of the cell into a pandas appropriate object
"""
-
if cell_typ == XL_CELL_DATE:
# Use the newer xlrd datetime handling.
diff --git a/pandas/io/excel/_xlsxwriter.py b/pandas/io/excel/_xlsxwriter.py
index 6d9ff9be5249a..85a1bb031f457 100644
--- a/pandas/io/excel/_xlsxwriter.py
+++ b/pandas/io/excel/_xlsxwriter.py
@@ -85,7 +85,6 @@ def convert(cls, style_dict, num_format_str=None):
style_dict : style dictionary to convert
num_format_str : optional number format string
"""
-
# Create a XlsxWriter format object.
props = {}
@@ -191,7 +190,6 @@ def save(self):
"""
Save workbook to disk.
"""
-
return self.book.close()
def write_cells(
diff --git a/pandas/io/formats/format.py b/pandas/io/formats/format.py
index 35a6870c1194b..55d534f975b68 100644
--- a/pandas/io/formats/format.py
+++ b/pandas/io/formats/format.py
@@ -932,7 +932,6 @@ def to_latex(
"""
Render a DataFrame to a LaTeX tabular/longtable environment output.
"""
-
from pandas.io.formats.latex import LatexFormatter
return LatexFormatter(
@@ -1135,7 +1134,6 @@ def format_array(
-------
List[str]
"""
-
fmt_klass: Type[GenericArrayFormatter]
if is_datetime64_dtype(values.dtype):
fmt_klass = Datetime64Formatter
@@ -1296,9 +1294,7 @@ def _value_formatter(
float_format: Optional[float_format_type] = None,
threshold: Optional[Union[float, int]] = None,
) -> Callable:
- """Returns a function to be applied on each value to format it
- """
-
+ """Returns a function to be applied on each value to format it"""
# the float_format parameter supersedes self.float_format
if float_format is None:
float_format = self.float_format
@@ -1346,7 +1342,6 @@ def get_result_as_array(self) -> np.ndarray:
Returns the float values converted into strings using
the parameters given at initialisation, as a numpy array
"""
-
if self.formatter is not None:
return np.array([self.formatter(x) for x in self.values])
@@ -1461,7 +1456,6 @@ def __init__(
def _format_strings(self) -> List[str]:
""" we by definition have DO NOT have a TZ """
-
values = self.values
if not isinstance(values, DatetimeIndex):
@@ -1541,7 +1535,6 @@ def format_percentiles(
>>> format_percentiles([0, 0.5, 0.02001, 0.5, 0.666666, 0.9999])
['0%', '50%', '2.0%', '50%', '66.67%', '99.99%']
"""
-
percentiles = np.asarray(percentiles)
# It checks for np.NaN as well
@@ -1642,7 +1635,6 @@ def _get_format_datetime64_from_values(
values: Union[np.ndarray, DatetimeArray, DatetimeIndex], date_format: Optional[str]
) -> Optional[str]:
""" given values and a date_format, return a string format """
-
if isinstance(values, np.ndarray) and values.ndim > 1:
# We don't actually care about the order of values, and DatetimeIndex
# only accepts 1D values
@@ -1657,7 +1649,6 @@ def _get_format_datetime64_from_values(
class Datetime64TZFormatter(Datetime64Formatter):
def _format_strings(self) -> List[str]:
""" we by definition have a TZ """
-
values = self.values.astype(object)
is_dates_only = _is_dates_only(values)
formatter = self.formatter or _get_format_datetime64(
@@ -1698,7 +1689,6 @@ def _get_format_timedelta64(
If box, then show the return in quotes
"""
-
values_int = values.astype(np.int64)
consider_values = values_int != iNaT
@@ -1913,7 +1903,6 @@ def set_eng_float_format(accuracy: int = 3, use_eng_prefix: bool = False) -> Non
See also EngFormatter.
"""
-
set_option("display.float_format", EngFormatter(accuracy, use_eng_prefix))
set_option("display.column_space", max(12, accuracy + 9))
diff --git a/pandas/io/formats/latex.py b/pandas/io/formats/latex.py
index 8ab56437d5c05..935762598f78a 100644
--- a/pandas/io/formats/latex.py
+++ b/pandas/io/formats/latex.py
@@ -56,7 +56,6 @@ def write_result(self, buf: IO[str]) -> None:
Render a DataFrame to a LaTeX tabular, longtable, or table/tabular
environment output.
"""
-
# string representation of the columns
if len(self.frame.columns) == 0 or len(self.frame.index) == 0:
info_line = "Empty {name}\nColumns: {col}\nIndex: {idx}".format(
diff --git a/pandas/io/formats/style.py b/pandas/io/formats/style.py
index 565752e269d79..eca5a3fb18e60 100644
--- a/pandas/io/formats/style.py
+++ b/pandas/io/formats/style.py
@@ -802,7 +802,6 @@ def where(
--------
Styler.applymap
"""
-
if other is None:
other = ""
diff --git a/pandas/io/html.py b/pandas/io/html.py
index c676bfb1f0c74..ee8e96b4b3344 100644
--- a/pandas/io/html.py
+++ b/pandas/io/html.py
@@ -395,7 +395,6 @@ def _parse_thead_tbody_tfoot(self, table_html):
- Move rows from bottom of body to footer only if
all elements inside row are <th>
"""
-
header_rows = self._parse_thead_tr(table_html)
body_rows = self._parse_tbody_tr(table_html)
footer_rows = self._parse_tfoot_tr(table_html)
@@ -435,7 +434,6 @@ def _expand_colspan_rowspan(self, rows):
Any cell with ``rowspan`` or ``colspan`` will have its contents copied
to subsequent cells.
"""
-
all_texts = [] # list of rows, each a list of str
remainder = [] # list of (index, text, nrows)
diff --git a/pandas/io/json/_json.py b/pandas/io/json/_json.py
index 04fd17a00041b..39ee097bc743b 100644
--- a/pandas/io/json/_json.py
+++ b/pandas/io/json/_json.py
@@ -266,7 +266,6 @@ def __init__(
to know what the index is, forces orient to records, and forces
date_format to 'iso'.
"""
-
super().__init__(
obj,
orient,
@@ -572,7 +571,6 @@ def read_json(
"data": [{"index": "row 1", "col 1": "a", "col 2": "b"},
{"index": "row 2", "col 1": "c", "col 2": "d"}]}'
"""
-
if orient == "table" and dtype:
raise ValueError("cannot pass both dtype and orient='table'")
if orient == "table" and convert_axes:
@@ -886,7 +884,6 @@ def _try_convert_data(self, name, data, use_dtypes=True, convert_dates=True):
"""
Try to parse a ndarray like into a column by inferring dtype.
"""
-
# don't try to coerce, unless a force conversion
if use_dtypes:
if not self.dtype:
@@ -963,7 +960,6 @@ def _try_convert_to_date(self, data):
Try to coerce object in epoch/iso formats and integer/float in epoch
formats. Return a boolean if parsing was successful.
"""
-
# no conversion on empty
if not len(data):
return data, False
@@ -1117,7 +1113,6 @@ def _process_converter(self, f, filt=None):
"""
Take a conversion function and possibly recreate the frame.
"""
-
if filt is None:
filt = lambda col, c: True
diff --git a/pandas/io/json/_normalize.py b/pandas/io/json/_normalize.py
index b638bdc0bc1eb..08dca6b573a2f 100644
--- a/pandas/io/json/_normalize.py
+++ b/pandas/io/json/_normalize.py
@@ -18,7 +18,6 @@ def convert_to_line_delimits(s):
"""
Helper function that converts JSON lists to line delimited JSON.
"""
-
# Determine we have a JSON list to turn to lines otherwise just return the
# json object, only lists can
if not s[0] == "[" and s[-1] == "]":
diff --git a/pandas/io/orc.py b/pandas/io/orc.py
index bbefe447cb7fe..ea79efd0579e5 100644
--- a/pandas/io/orc.py
+++ b/pandas/io/orc.py
@@ -42,7 +42,6 @@ def read_orc(
-------
DataFrame
"""
-
# we require a newer version of pyarrow than we support for parquet
import pyarrow
diff --git a/pandas/io/parquet.py b/pandas/io/parquet.py
index 926635062d853..9ae9729fc05ee 100644
--- a/pandas/io/parquet.py
+++ b/pandas/io/parquet.py
@@ -13,7 +13,6 @@
def get_engine(engine: str) -> "BaseImpl":
""" return our implementation """
-
if engine == "auto":
engine = get_option("io.parquet.engine")
@@ -297,6 +296,5 @@ def read_parquet(path, engine: str = "auto", columns=None, **kwargs):
-------
DataFrame
"""
-
impl = get_engine(engine)
return impl.read(path, columns=columns, **kwargs)
diff --git a/pandas/io/parsers.py b/pandas/io/parsers.py
index 5f754aa07a5e1..1272914bd53eb 100755
--- a/pandas/io/parsers.py
+++ b/pandas/io/parsers.py
@@ -407,7 +407,6 @@ def _validate_names(names):
ValueError
If names are not unique.
"""
-
if names is not None:
if len(names) != len(set(names)):
raise ValueError("Duplicate names are not allowed.")
@@ -760,7 +759,6 @@ def read_fwf(
--------
>>> pd.read_fwf('data.csv') # doctest: +SKIP
"""
-
# Check input arguments.
if colspecs is None and widths is None:
raise ValueError("Must specify either colspecs or widths")
@@ -1252,7 +1250,6 @@ def _validate_skipfooter_arg(skipfooter):
------
ValueError : 'skipfooter' was not a non-negative integer.
"""
-
if not is_integer(skipfooter):
raise ValueError("skipfooter must be an integer")
@@ -1796,7 +1793,6 @@ def _cast_types(self, values, cast_type, column):
-------
converted : ndarray
"""
-
if is_categorical_dtype(cast_type):
known_cats = (
isinstance(cast_type, CategoricalDtype)
@@ -2864,7 +2860,6 @@ def _alert_malformed(self, msg, row_num):
Because this row number is displayed, we 1-index,
even though we 0-index internally.
"""
-
if self.error_bad_lines:
raise ParserError(msg)
elif self.warn_bad_lines:
@@ -2883,7 +2878,6 @@ def _next_iter_line(self, row_num):
----------
row_num : The row number of the line being parsed.
"""
-
try:
return next(self.data)
except csv.Error as e:
@@ -2943,7 +2937,6 @@ def _remove_empty_lines(self, lines):
filtered_lines : array-like
The same array of lines with the "empty" ones removed.
"""
-
ret = []
for l in lines:
# Remove empty lines and lines with only one whitespace value
@@ -3515,7 +3508,6 @@ def _get_na_values(col, na_values, na_fvalues, keep_default_na):
1) na_values : the string NaN values for that column.
2) na_fvalues : the float NaN values for that column.
"""
-
if isinstance(na_values, dict):
if col in na_values:
return na_values[col], na_fvalues[col]
diff --git a/pandas/io/pytables.py b/pandas/io/pytables.py
index 570d1f9a89159..d21dc58d15f40 100644
--- a/pandas/io/pytables.py
+++ b/pandas/io/pytables.py
@@ -114,7 +114,6 @@ def _ensure_term(where, scope_level: int):
that are passed
create the terms here with a frame_level=2 (we are 2 levels down)
"""
-
# only consider list/tuple here as an ndarray is automatically a coordinate
# list
level = scope_level + 1
@@ -246,7 +245,6 @@ def to_hdf(
encoding: str = "UTF-8",
):
""" store this object, close it if we opened it """
-
if append:
f = lambda store: store.append(
key,
@@ -362,7 +360,6 @@ def read_hdf(
>>> df.to_hdf('./store.h5', 'data')
>>> reread = pd.read_hdf('./store.h5')
"""
-
if mode not in ["r", "r+", "a"]:
raise ValueError(
f"mode {mode} is not allowed while performing a read. "
@@ -903,7 +900,6 @@ def select_as_multiple(
raises TypeError if keys is not a list or tuple
raises ValueError if the tables are not ALL THE SAME DIMENSIONS
"""
-
# default to single select
where = _ensure_term(where, scope_level=1)
if isinstance(keys, (list, tuple)) and len(keys) == 1:
@@ -1303,7 +1299,6 @@ def create_table_index(
------
TypeError: raises if the node is not a table
"""
-
# version requirements
_tables()
s = self.get_storer(key)
@@ -1523,7 +1518,6 @@ def _check_if_open(self):
def _validate_format(self, format: str) -> str:
""" validate / deprecate formats """
-
# validate
try:
format = _FORMAT_MAP[format.lower()]
@@ -1541,7 +1535,6 @@ def _create_storer(
errors: str = "strict",
) -> Union["GenericFixed", "Table"]:
""" return a suitable class to operate """
-
cls: Union[Type["GenericFixed"], Type["Table"]]
if value is not None and not isinstance(value, (Series, DataFrame)):
@@ -2027,7 +2020,6 @@ def validate_and_set(self, handler: "AppendableTable", append: bool):
def validate_col(self, itemsize=None):
""" validate this column: return the compared against itemsize """
-
# validate this column for string truncation (or reset to the max size)
if _ensure_decoded(self.kind) == "string":
c = self.col
@@ -2059,7 +2051,6 @@ def update_info(self, info):
set/update the info for this indexable with the key/value
if there is a conflict raise/warn as needed
"""
-
for key in self._info_fields:
value = getattr(self, key, None)
@@ -2242,7 +2233,6 @@ def _get_atom(cls, values: Union[np.ndarray, ABCExtensionArray]) -> "Col":
"""
Get an appropriately typed and shaped pytables.Col object for values.
"""
-
dtype = values.dtype
itemsize = dtype.itemsize
@@ -2608,7 +2598,6 @@ def infer_axes(self):
infer the axes of my storer
return a boolean indicating if we have a valid storer or not
"""
-
s = self.storable
if s is None:
return False
@@ -2894,7 +2883,6 @@ def read_index_node(
def write_array_empty(self, key: str, value: ArrayLike):
""" write a 0-len array """
-
# ugly hack for length 0 axes
arr = np.empty((1,) * value.ndim)
self._handle.create_array(self.group, key, arr)
@@ -3304,7 +3292,6 @@ def data_orientation(self):
def queryables(self) -> Dict[str, Any]:
""" return a dict of the kinds allowable columns for this object """
-
# mypy doesn't recognize DataFrame._AXIS_NAMES, so we re-write it here
axis_names = {0: "index", 1: "columns"}
@@ -3513,7 +3500,6 @@ def create_index(self, columns=None, optlevel=None, kind: Optional[str] = None):
Cannot index Time64Col or ComplexCol.
Pytables must be >= 3.0.
"""
-
if not self.infer_axes():
return
if columns is False:
@@ -3580,7 +3566,6 @@ def _read_axes(
-------
List[Tuple[index_values, column_values]]
"""
-
# create the selection
selection = Selection(self, where=where, start=start, stop=stop)
values = selection.select()
@@ -3608,7 +3593,6 @@ def validate_data_columns(self, data_columns, min_itemsize, non_index_axes):
"""take the input data_columns and min_itemize and create a data
columns spec
"""
-
if not len(non_index_axes):
return []
@@ -3675,7 +3659,6 @@ def _create_axes(
min_itemsize: Dict[str, int] or None, default None
The min itemsize for a column in bytes.
"""
-
if not isinstance(obj, DataFrame):
group = self.group._v_name
raise TypeError(
@@ -3929,7 +3912,6 @@ def get_blk_items(mgr, blocks):
def process_axes(self, obj, selection: "Selection", columns=None):
""" process axes filters """
-
# make a copy to avoid side effects
if columns is not None:
columns = list(columns)
@@ -3994,7 +3976,6 @@ def create_description(
expectedrows: Optional[int],
) -> Dict[str, Any]:
""" create the description of the table from the axes & values """
-
# provided expected rows if its passed
if expectedrows is None:
expectedrows = max(self.nrows_expected, 10000)
@@ -4024,7 +4005,6 @@ def read_coordinates(
"""select coordinates (row numbers) from a table; return the
coordinates object
"""
-
# validate the version
self.validate_version(where)
@@ -4054,7 +4034,6 @@ def read_column(
"""return a single column from the table, generally only indexables
are interesting
"""
-
# validate the version
self.validate_version()
@@ -4186,7 +4165,6 @@ def write_data(self, chunksize: Optional[int], dropna: bool = False):
"""
we form the data into a 2-d including indexes,values,mask write chunk-by-chunk
"""
-
names = self.dtype.names
nrows = self.nrows_expected
@@ -4259,7 +4237,6 @@ def write_data_chunk(
mask : an array of the masks
values : an array of the values
"""
-
# 0 len
for v in values:
if not np.prod(v.shape):
@@ -4854,7 +4831,6 @@ def _convert_string_array(data: np.ndarray, encoding: str, errors: str) -> np.nd
-------
np.ndarray[fixed-length-string]
"""
-
# encode if needed
if len(data):
data = (
diff --git a/pandas/io/sas/sas_xport.py b/pandas/io/sas/sas_xport.py
index 3cf7fd885e564..461d393dc4521 100644
--- a/pandas/io/sas/sas_xport.py
+++ b/pandas/io/sas/sas_xport.py
@@ -198,7 +198,6 @@ def _parse_float_vec(vec):
Parse a vector of float values representing IBM 8 byte floats into
native 8 byte floats.
"""
-
dtype = np.dtype(">u4,>u4")
vec1 = vec.view(dtype=dtype)
xport1 = vec1["f0"]
@@ -411,7 +410,6 @@ def _record_count(self) -> int:
Side effect: returns file position to record_start.
"""
-
self.filepath_or_buffer.seek(0, 2)
total_records_length = self.filepath_or_buffer.tell() - self.record_start
diff --git a/pandas/io/sql.py b/pandas/io/sql.py
index 36291b2faeed0..69e5a973ff706 100644
--- a/pandas/io/sql.py
+++ b/pandas/io/sql.py
@@ -120,7 +120,6 @@ def _parse_date_columns(data_frame, parse_dates):
def _wrap_result(data, columns, index_col=None, coerce_float=True, parse_dates=None):
"""Wrap result set of query in a DataFrame."""
-
frame = DataFrame.from_records(data, columns=columns, coerce_float=coerce_float)
frame = _parse_date_columns(frame, parse_dates)
@@ -228,7 +227,6 @@ def read_sql_table(
--------
>>> pd.read_sql_table('table_name', 'postgres:///db_name') # doctest:+SKIP
"""
-
con = _engine_builder(con)
if not _is_sqlalchemy_connectable(con):
raise NotImplementedError(
@@ -758,7 +756,6 @@ def _query_iterator(
self, result, chunksize, columns, coerce_float=True, parse_dates=None
):
"""Return generator through chunked result set."""
-
while True:
data = result.fetchmany(chunksize)
if not data:
@@ -1149,7 +1146,6 @@ def _query_iterator(
result, chunksize, columns, index_col=None, coerce_float=True, parse_dates=None
):
"""Return generator through chunked result set"""
-
while True:
data = result.fetchmany(chunksize)
if not data:
@@ -1606,7 +1602,6 @@ def _query_iterator(
cursor, chunksize, columns, index_col=None, coerce_float=True, parse_dates=None
):
"""Return generator through chunked result set"""
-
while True:
data = cursor.fetchmany(chunksize)
if type(data) == tuple:
@@ -1781,6 +1776,5 @@ def get_schema(frame, name, keys=None, con=None, dtype=None):
be a SQLAlchemy type, or a string for sqlite3 fallback connection.
"""
-
pandas_sql = pandasSQL_builder(con=con)
return pandas_sql._create_sql_schema(frame, name, keys=keys, dtype=dtype)
diff --git a/pandas/io/stata.py b/pandas/io/stata.py
index 4e1fcb97e7891..cf3251faae979 100644
--- a/pandas/io/stata.py
+++ b/pandas/io/stata.py
@@ -2131,7 +2131,6 @@ def _prepare_categoricals(self, data: DataFrame) -> DataFrame:
"""Check for categorical columns, retain categorical information for
Stata file and convert categorical data to int
"""
-
is_cat = [is_categorical_dtype(data[col]) for col in data]
self._is_col_cat = is_cat
self._value_labels: List[StataValueLabel] = []
@@ -2771,7 +2770,6 @@ def generate_table(self) -> Tuple[Dict[str, Tuple[int, int]], DataFrame]:
* 118: 6
* 119: 5
"""
-
gso_table = self._gso_table
gso_df = self.df
columns = list(gso_df.columns)
diff --git a/pandas/plotting/_matplotlib/converter.py b/pandas/plotting/_matplotlib/converter.py
index 770f89324badb..c399e5b9b7017 100644
--- a/pandas/plotting/_matplotlib/converter.py
+++ b/pandas/plotting/_matplotlib/converter.py
@@ -982,7 +982,6 @@ def __init__(
def _get_default_locs(self, vmin, vmax):
"""Returns the default locations of ticks."""
-
if self.plot_obj.date_axis_info is None:
self.plot_obj.date_axis_info = self.finder(vmin, vmax, self.freq)
@@ -1063,7 +1062,6 @@ def __init__(self, freq, minor_locator=False, dynamic_mode=True, plot_obj=None):
def _set_default_format(self, vmin, vmax):
"""Returns the default ticks spacing."""
-
if self.plot_obj.date_axis_info is None:
self.plot_obj.date_axis_info = self.finder(vmin, vmax, self.freq)
info = self.plot_obj.date_axis_info
diff --git a/pandas/plotting/_matplotlib/core.py b/pandas/plotting/_matplotlib/core.py
index 3f47d325d86ef..63d0b8abe59d9 100644
--- a/pandas/plotting/_matplotlib/core.py
+++ b/pandas/plotting/_matplotlib/core.py
@@ -432,7 +432,6 @@ def _add_table(self):
def _post_plot_logic_common(self, ax, data):
"""Common post process for each axes"""
-
if self.orientation == "vertical" or self.orientation is None:
self._apply_axis_properties(ax.xaxis, rot=self.rot, fontsize=self.fontsize)
self._apply_axis_properties(ax.yaxis, fontsize=self.fontsize)
@@ -515,7 +514,6 @@ def _apply_axis_properties(self, axis, rot=None, fontsize=None):
multiple times per draw. It's therefore beneficial for us to avoid
accessing unless we will act on the Tick.
"""
-
if rot is not None or fontsize is not None:
# rot=0 is a valid setting, hence the explicit None check
labels = axis.get_majorticklabels() + axis.get_minorticklabels()
@@ -756,7 +754,6 @@ def _parse_errorbars(self, label, err):
key in the plotted DataFrame
str: the name of the column within the plotted DataFrame
"""
-
if err is None:
return None
diff --git a/pandas/tests/dtypes/test_common.py b/pandas/tests/dtypes/test_common.py
index 4c917b9bb42d2..8da2797835080 100644
--- a/pandas/tests/dtypes/test_common.py
+++ b/pandas/tests/dtypes/test_common.py
@@ -156,7 +156,6 @@ def get_is_dtype_funcs():
begin with 'is_' and end with 'dtype'
"""
-
fnames = [f for f in dir(com) if (f.startswith("is_") and f.endswith("dtype"))]
return [getattr(com, fname) for fname in fnames]
diff --git a/pandas/tests/frame/test_analytics.py b/pandas/tests/frame/test_analytics.py
index 25b2997eb088f..54662c94baa05 100644
--- a/pandas/tests/frame/test_analytics.py
+++ b/pandas/tests/frame/test_analytics.py
@@ -60,7 +60,6 @@ def assert_stat_op_calc(
skipna_alternative : function, default None
NaN-safe version of alternative
"""
-
f = getattr(frame, opname)
if check_dates:
@@ -150,7 +149,6 @@ def assert_stat_op_api(opname, float_frame, float_string_frame, has_numeric_only
has_numeric_only : bool, default False
Whether the method "opname" has the kwarg "numeric_only"
"""
-
# make sure works on mixed-type frame
getattr(float_string_frame, opname)(axis=0)
getattr(float_string_frame, opname)(axis=1)
@@ -178,7 +176,6 @@ def assert_bool_op_calc(opname, alternative, frame, has_skipna=True):
has_skipna : bool, default True
Whether the method "opname" has the kwarg "skip_na"
"""
-
f = getattr(frame, opname)
if has_skipna:
diff --git a/pandas/tests/generic/test_generic.py b/pandas/tests/generic/test_generic.py
index 1f4fd90d9b059..02d803795e79c 100644
--- a/pandas/tests/generic/test_generic.py
+++ b/pandas/tests/generic/test_generic.py
@@ -28,7 +28,6 @@ def _construct(self, shape, value=None, dtype=None, **kwargs):
if value is specified use that if its a scalar
if value is an array, repeat it as needed
"""
-
if isinstance(shape, int):
shape = tuple([shape] * self._ndim)
if value is not None:
diff --git a/pandas/tests/groupby/test_transform.py b/pandas/tests/groupby/test_transform.py
index 8967ef06f50fb..0ad829dd4de7a 100644
--- a/pandas/tests/groupby/test_transform.py
+++ b/pandas/tests/groupby/test_transform.py
@@ -520,7 +520,6 @@ def _check_cython_group_transform_cumulative(pd_op, np_op, dtype):
dtype : type
The specified dtype of the data.
"""
-
is_datetimelike = False
data = np.array([[1], [2], [3], [4]], dtype=dtype)
diff --git a/pandas/tests/indexes/datetimes/test_timezones.py b/pandas/tests/indexes/datetimes/test_timezones.py
index cd8e8c3542cce..7574e4501f5aa 100644
--- a/pandas/tests/indexes/datetimes/test_timezones.py
+++ b/pandas/tests/indexes/datetimes/test_timezones.py
@@ -791,7 +791,6 @@ def test_dti_tz_constructors(self, tzstr):
""" Test different DatetimeIndex constructions with timezone
Follow-up of GH#4229
"""
-
arr = ["11/10/2005 08:00:00", "11/10/2005 09:00:00"]
idx1 = to_datetime(arr).tz_localize(tzstr)
diff --git a/pandas/tests/indexes/ranges/test_range.py b/pandas/tests/indexes/ranges/test_range.py
index 0e5d1d45ad6db..24616f05c19ce 100644
--- a/pandas/tests/indexes/ranges/test_range.py
+++ b/pandas/tests/indexes/ranges/test_range.py
@@ -704,7 +704,6 @@ def test_len_specialised(self, step):
)
def appends(self, request):
"""Inputs and expected outputs for RangeIndex.append test"""
-
return request.param
def test_append(self, appends):
diff --git a/pandas/tests/indexes/ranges/test_setops.py b/pandas/tests/indexes/ranges/test_setops.py
index 5bedc4089feba..8e749e0752087 100644
--- a/pandas/tests/indexes/ranges/test_setops.py
+++ b/pandas/tests/indexes/ranges/test_setops.py
@@ -225,7 +225,6 @@ def test_union_noncomparable(self, sort):
)
def unions(self, request):
"""Inputs and expected outputs for RangeIndex.union tests"""
-
return request.param
def test_union_sorted(self, unions):
diff --git a/pandas/tests/indexing/common.py b/pandas/tests/indexing/common.py
index 6f6981a30d7e4..9d55609d5db00 100644
--- a/pandas/tests/indexing/common.py
+++ b/pandas/tests/indexing/common.py
@@ -105,7 +105,6 @@ def generate_indices(self, f, values=False):
if values is True , use the axis values
is False, use the range
"""
-
axes = f.axes
if values:
axes = (list(range(len(ax))) for ax in axes)
@@ -114,7 +113,6 @@ def generate_indices(self, f, values=False):
def get_value(self, name, f, i, values=False):
""" return the value for the location i """
-
# check against values
if values:
return f.values[i]
@@ -150,7 +148,6 @@ def check_result(
):
def _eq(axis, obj, key):
""" compare equal for these 2 keys """
-
axified = _axify(obj, key, axis)
try:
getattr(obj, method).__getitem__(axified)
diff --git a/pandas/tests/indexing/test_floats.py b/pandas/tests/indexing/test_floats.py
index b3f6d65da5db5..f783c3516e357 100644
--- a/pandas/tests/indexing/test_floats.py
+++ b/pandas/tests/indexing/test_floats.py
@@ -505,7 +505,6 @@ def test_integer_positional_indexing(self):
""" make sure that we are raising on positional indexing
w.r.t. an integer index
"""
-
s = Series(range(2, 6), index=range(2, 6))
result = s[2:4]
diff --git a/pandas/tests/io/generate_legacy_storage_files.py b/pandas/tests/io/generate_legacy_storage_files.py
index 67b767a337a89..f7583c93b9288 100755
--- a/pandas/tests/io/generate_legacy_storage_files.py
+++ b/pandas/tests/io/generate_legacy_storage_files.py
@@ -136,7 +136,6 @@ def _create_sp_frame():
def create_data():
""" create the pickle data """
-
data = {
"A": [0.0, 1.0, 2.0, 3.0, np.nan],
"B": [0, 1, 0, 1, 0],
diff --git a/pandas/tests/io/json/test_ujson.py b/pandas/tests/io/json/test_ujson.py
index bedd60084124c..e966db7a1cc71 100644
--- a/pandas/tests/io/json/test_ujson.py
+++ b/pandas/tests/io/json/test_ujson.py
@@ -33,7 +33,6 @@ def _clean_dict(d):
-------
cleaned_dict : dict
"""
-
return {str(k): v for k, v in d.items()}
diff --git a/pandas/tests/io/test_clipboard.py b/pandas/tests/io/test_clipboard.py
index 652cacaf14ffb..3458cfb6ad254 100644
--- a/pandas/tests/io/test_clipboard.py
+++ b/pandas/tests/io/test_clipboard.py
@@ -114,7 +114,6 @@ def mock_clipboard(monkeypatch, request):
This returns the local dictionary, for direct manipulation by
tests.
"""
-
# our local clipboard for tests
_mock_data = {}
diff --git a/pandas/tests/io/test_parquet.py b/pandas/tests/io/test_parquet.py
index 7ed8d8f22764c..3e0d0448579db 100644
--- a/pandas/tests/io/test_parquet.py
+++ b/pandas/tests/io/test_parquet.py
@@ -151,7 +151,6 @@ def check_round_trip(
repeat: int, optional
How many times to repeat the test
"""
-
write_kwargs = write_kwargs or {"compression": None}
read_kwargs = read_kwargs or {}
diff --git a/pandas/tests/plotting/common.py b/pandas/tests/plotting/common.py
index a604d90acc854..ea0ec8ad98ffe 100644
--- a/pandas/tests/plotting/common.py
+++ b/pandas/tests/plotting/common.py
@@ -92,7 +92,6 @@ def _check_legend_labels(self, axes, labels=None, visible=True):
expected legend visibility. labels are checked only when visible is
True
"""
-
if visible and (labels is None):
raise ValueError("labels must be specified when visible is True")
axes = self._flatten_visible(axes)
@@ -190,7 +189,6 @@ def _check_colors(
Series used for color grouping key
used for andrew_curves, parallel_coordinates, radviz test
"""
-
from matplotlib.lines import Line2D
from matplotlib.collections import Collection, PolyCollection, LineCollection
diff --git a/pandas/tests/resample/test_time_grouper.py b/pandas/tests/resample/test_time_grouper.py
index 3aa7765954634..bf998a6e83909 100644
--- a/pandas/tests/resample/test_time_grouper.py
+++ b/pandas/tests/resample/test_time_grouper.py
@@ -119,7 +119,6 @@ def test_aaa_group_order():
def test_aggregate_normal(resample_method):
"""Check TimeGrouper's aggregation is identical as normal groupby."""
-
if resample_method == "ohlc":
pytest.xfail(reason="DataError: No numeric types to aggregate")
diff --git a/pandas/tests/reshape/merge/test_merge_asof.py b/pandas/tests/reshape/merge/test_merge_asof.py
index 8037095aff0b9..9b5dea7663396 100644
--- a/pandas/tests/reshape/merge/test_merge_asof.py
+++ b/pandas/tests/reshape/merge/test_merge_asof.py
@@ -35,7 +35,6 @@ def setup_method(self, datapath):
def test_examples1(self):
""" doc-string examples """
-
left = pd.DataFrame({"a": [1, 5, 10], "left_val": ["a", "b", "c"]})
right = pd.DataFrame({"a": [1, 2, 3, 6, 7], "right_val": [1, 2, 3, 6, 7]})
@@ -48,7 +47,6 @@ def test_examples1(self):
def test_examples2(self):
""" doc-string examples """
-
trades = pd.DataFrame(
{
"time": pd.to_datetime(
diff --git a/pandas/tests/reshape/merge/test_merge_index_as_string.py b/pandas/tests/reshape/merge/test_merge_index_as_string.py
index 9075a4e791583..08614d04caf4b 100644
--- a/pandas/tests/reshape/merge/test_merge_index_as_string.py
+++ b/pandas/tests/reshape/merge/test_merge_index_as_string.py
@@ -82,7 +82,6 @@ def compute_expected(df_left, df_right, on=None, left_on=None, right_on=None, ho
DataFrame
The expected merge result
"""
-
# Handle on param if specified
if on is not None:
left_on, right_on = on, on
diff --git a/pandas/tests/window/moments/test_moments_rolling.py b/pandas/tests/window/moments/test_moments_rolling.py
index 83e4ee25558b5..fd18d37ab13b6 100644
--- a/pandas/tests/window/moments/test_moments_rolling.py
+++ b/pandas/tests/window/moments/test_moments_rolling.py
@@ -1335,7 +1335,6 @@ def test_rolling_kurt_eq_value_fperr(self):
def test_rolling_max_gh6297(self):
"""Replicate result expected in GH #6297"""
-
indices = [datetime(1975, 1, i) for i in range(1, 6)]
# So that we can have 2 datapoints on one of the days
indices.append(datetime(1975, 1, 3, 6, 0))
diff --git a/pandas/tseries/offsets.py b/pandas/tseries/offsets.py
index e05cce9c49f4b..df1e750b32138 100644
--- a/pandas/tseries/offsets.py
+++ b/pandas/tseries/offsets.py
@@ -308,7 +308,6 @@ def apply_index(self, i):
-------
y : DatetimeIndex
"""
-
if type(self) is not DateOffset:
raise NotImplementedError(
f"DateOffset subclass {type(self).__name__} "
diff --git a/pandas/util/_decorators.py b/pandas/util/_decorators.py
index 0aab5a9c4113d..0508f0d522b9f 100644
--- a/pandas/util/_decorators.py
+++ b/pandas/util/_decorators.py
@@ -55,7 +55,6 @@ def deprecate(
The message to display in the warning.
Default is '{name} is deprecated. Use {alt_name} instead.'
"""
-
alt_name = alt_name or alternative.__name__
klass = klass or FutureWarning
warning_msg = msg or f"{name} is deprecated, use {alt_name} instead"
@@ -163,7 +162,6 @@ def deprecate_kwarg(
future version please takes steps to stop use of 'cols'
should raise warning
"""
-
if mapping is not None and not hasattr(mapping, "get") and not callable(mapping):
raise TypeError(
"mapping from old to new argument values must be dict or callable!"
| This one could conflict with the unchecked rule "blank line before comment" | https://api.github.com/repos/pandas-dev/pandas/pulls/31895 | 2020-02-11T19:26:39Z | 2020-02-12T16:10:25Z | 2020-02-12T16:10:25Z | 2020-02-12T20:59:38Z |
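The hunks in the record above all apply one pydocstyle convention: no blank line between a function's docstring and the code that follows it (rule D202). A minimal sketch of the resulting shape, using a hypothetical function rather than one from the diff:

```python
def read_column(where=None):
    """Return a single column from the table; generally only
    indexables are interesting.
    """
    # Code follows the docstring directly -- the diff removes the
    # blank line that used to sit here (pydocstyle D202).
    return ["a", "b", "c"]


result = read_column()
```

Only whitespace changes hands in each hunk; the docstring text itself is untouched, which is why the diff is safe to apply mechanically across so many files.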
DOC: Fix divmod return values | diff --git a/pandas/core/ops/docstrings.py b/pandas/core/ops/docstrings.py
index e3db65f11a332..203ea3946d1b2 100644
--- a/pandas/core/ops/docstrings.py
+++ b/pandas/core/ops/docstrings.py
@@ -34,6 +34,7 @@ def _make_flex_doc(op_name, typ):
op_name=op_name,
equiv=equiv,
reverse=op_desc["reverse"],
+ series_returns=op_desc["series_returns"],
)
if op_desc["series_examples"]:
doc = doc_no_examples + op_desc["series_examples"]
@@ -233,6 +234,10 @@ def _make_flex_doc(op_name, typ):
dtype: float64
"""
+_returns_series = """Series\n The result of the operation."""
+
+_returns_tuple = """2-Tuple of Series\n The result of the operation."""
+
_op_descriptions: Dict[str, Dict[str, Optional[str]]] = {
# Arithmetic Operators
"add": {
@@ -240,18 +245,21 @@ def _make_flex_doc(op_name, typ):
"desc": "Addition",
"reverse": "radd",
"series_examples": _add_example_SERIES,
+ "series_returns": _returns_series,
},
"sub": {
"op": "-",
"desc": "Subtraction",
"reverse": "rsub",
"series_examples": _sub_example_SERIES,
+ "series_returns": _returns_series,
},
"mul": {
"op": "*",
"desc": "Multiplication",
"reverse": "rmul",
"series_examples": _mul_example_SERIES,
+ "series_returns": _returns_series,
"df_examples": None,
},
"mod": {
@@ -259,12 +267,14 @@ def _make_flex_doc(op_name, typ):
"desc": "Modulo",
"reverse": "rmod",
"series_examples": _mod_example_SERIES,
+ "series_returns": _returns_series,
},
"pow": {
"op": "**",
"desc": "Exponential power",
"reverse": "rpow",
"series_examples": _pow_example_SERIES,
+ "series_returns": _returns_series,
"df_examples": None,
},
"truediv": {
@@ -272,6 +282,7 @@ def _make_flex_doc(op_name, typ):
"desc": "Floating division",
"reverse": "rtruediv",
"series_examples": _div_example_SERIES,
+ "series_returns": _returns_series,
"df_examples": None,
},
"floordiv": {
@@ -279,6 +290,7 @@ def _make_flex_doc(op_name, typ):
"desc": "Integer division",
"reverse": "rfloordiv",
"series_examples": _floordiv_example_SERIES,
+ "series_returns": _returns_series,
"df_examples": None,
},
"divmod": {
@@ -286,29 +298,51 @@ def _make_flex_doc(op_name, typ):
"desc": "Integer division and modulo",
"reverse": "rdivmod",
"series_examples": None,
+ "series_returns": _returns_tuple,
"df_examples": None,
},
# Comparison Operators
- "eq": {"op": "==", "desc": "Equal to", "reverse": None, "series_examples": None},
+ "eq": {
+ "op": "==",
+ "desc": "Equal to",
+ "reverse": None,
+ "series_examples": None,
+ "series_returns": _returns_series,
+ },
"ne": {
"op": "!=",
"desc": "Not equal to",
"reverse": None,
"series_examples": None,
+ "series_returns": _returns_series,
+ },
+ "lt": {
+ "op": "<",
+ "desc": "Less than",
+ "reverse": None,
+ "series_examples": None,
+ "series_returns": _returns_series,
},
- "lt": {"op": "<", "desc": "Less than", "reverse": None, "series_examples": None},
"le": {
"op": "<=",
"desc": "Less than or equal to",
"reverse": None,
"series_examples": None,
+ "series_returns": _returns_series,
+ },
+ "gt": {
+ "op": ">",
+ "desc": "Greater than",
+ "reverse": None,
+ "series_examples": None,
+ "series_returns": _returns_series,
},
- "gt": {"op": ">", "desc": "Greater than", "reverse": None, "series_examples": None},
"ge": {
"op": ">=",
"desc": "Greater than or equal to",
"reverse": None,
"series_examples": None,
+ "series_returns": _returns_series,
},
}
@@ -339,8 +373,7 @@ def _make_flex_doc(op_name, typ):
Returns
-------
-Series
- The result of the operation.
+{series_returns}
See Also
--------
| - [x] closes #31663
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
| https://api.github.com/repos/pandas-dev/pandas/pulls/31894 | 2020-02-11T19:25:09Z | 2020-02-12T01:39:07Z | 2020-02-12T01:39:07Z | 2020-02-12T01:39:14Z |
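The docstring fix in the record above exists because `divmod` differs from every other flex operator: it returns a pair of Series (floor-divide result and modulo result), not a single Series. A minimal sketch, assuming a pandas installation and illustrative values:

```python
import pandas as pd

s = pd.Series([10, 7, 3])

# Series.divmod returns a 2-tuple of Series, which is what the
# "_returns_tuple" docstring template in the diff now documents.
quotient, remainder = s.divmod(2)
```

Here `quotient` holds [5, 3, 1] and `remainder` holds [0, 1, 1], matching elementwise `divmod(x, 2)`.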
CLN: D213: Multi-line docstring summary should start at the second line | diff --git a/pandas/_config/config.py b/pandas/_config/config.py
index 2df940817498c..f1959cd70ed3a 100644
--- a/pandas/_config/config.py
+++ b/pandas/_config/config.py
@@ -82,7 +82,8 @@
class OptionError(AttributeError, KeyError):
- """Exception for pandas.options, backwards compatible with KeyError
+ """
+ Exception for pandas.options, backwards compatible with KeyError
checks
"""
@@ -545,7 +546,8 @@ def deprecate_option(
def _select_options(pat: str) -> List[str]:
- """returns a list of keys matching `pat`
+ """
+ returns a list of keys matching `pat`
if pat=="all", returns all registered options
"""
@@ -708,7 +710,8 @@ def pp(name: str, ks: Iterable[str]) -> List[str]:
@contextmanager
def config_prefix(prefix):
- """contextmanager for multiple invocations of API with a common prefix
+ """
+ contextmanager for multiple invocations of API with a common prefix
supported API functions: (register / get / set )__option
diff --git a/pandas/_testing.py b/pandas/_testing.py
index 01d2bfe0458c8..b6ce06554cd59 100644
--- a/pandas/_testing.py
+++ b/pandas/_testing.py
@@ -743,7 +743,8 @@ def repr_class(x):
def assert_attr_equal(attr, left, right, obj="Attributes"):
- """checks attributes are equal. Both objects must have attribute.
+ """
+ checks attributes are equal. Both objects must have attribute.
Parameters
----------
@@ -820,7 +821,8 @@ def assert_is_sorted(seq):
def assert_categorical_equal(
left, right, check_dtype=True, check_category_order=True, obj="Categorical"
):
- """Test that Categoricals are equivalent.
+ """
+ Test that Categoricals are equivalent.
Parameters
----------
@@ -860,7 +862,8 @@ def assert_categorical_equal(
def assert_interval_array_equal(left, right, exact="equiv", obj="IntervalArray"):
- """Test that two IntervalArrays are equivalent.
+ """
+ Test that two IntervalArrays are equivalent.
Parameters
----------
@@ -1009,7 +1012,8 @@ def _raise(left, right, err_msg):
def assert_extension_array_equal(
left, right, check_dtype=True, check_less_precise=False, check_exact=False
):
- """Check that left and right ExtensionArrays are equal.
+ """
+ Check that left and right ExtensionArrays are equal.
Parameters
----------
@@ -1489,7 +1493,8 @@ def assert_sp_array_equal(
check_fill_value=True,
consolidate_block_indices=False,
):
- """Check that the left and right SparseArray are equal.
+ """
+ Check that the left and right SparseArray are equal.
Parameters
----------
@@ -1724,7 +1729,8 @@ def _make_timeseries(start="2000-01-01", end="2000-12-31", freq="1D", seed=None)
def all_index_generator(k=10):
- """Generator which can be iterated over to get instances of all the various
+ """
+ Generator which can be iterated over to get instances of all the various
index classes.
Parameters
@@ -1763,7 +1769,8 @@ def index_subclass_makers_generator():
def all_timeseries_index_generator(k=10):
- """Generator which can be iterated over to get instances of all the classes
+ """
+ Generator which can be iterated over to get instances of all the classes
which represent time-series.
Parameters
@@ -1854,7 +1861,8 @@ def makePeriodFrame(nper=None):
def makeCustomIndex(
nentries, nlevels, prefix="#", names=False, ndupe_l=None, idx_type=None
):
- """Create an index/multindex with given dimensions, levels, names, etc'
+ """
+ Create an index/multindex with given dimensions, levels, names, etc'
nentries - number of entries in index
nlevels - number of levels (> 1 produces multindex)
@@ -2144,7 +2152,8 @@ def makeMissingDataframe(density=0.9, random_state=None):
def optional_args(decorator):
- """allows a decorator to take optional positional and keyword arguments.
+ """
+ allows a decorator to take optional positional and keyword arguments.
Assumes that taking a single, callable, positional argument means that
it is decorating a function, i.e. something like this::
@@ -2216,7 +2225,8 @@ def _get_default_network_errors():
def can_connect(url, error_classes=None):
- """Try to connect to the given url. True if succeeds, False if IOError
+ """
+ Try to connect to the given url. True if succeeds, False if IOError
raised
Parameters
@@ -2584,7 +2594,8 @@ def use_numexpr(use, min_elements=None):
def test_parallel(num_threads=2, kwargs_list=None):
- """Decorator to run the same function multiple times in parallel.
+ """
+ Decorator to run the same function multiple times in parallel.
Parameters
----------
diff --git a/pandas/compat/chainmap.py b/pandas/compat/chainmap.py
index 588bd24ddf797..a84dbb4a661e4 100644
--- a/pandas/compat/chainmap.py
+++ b/pandas/compat/chainmap.py
@@ -5,7 +5,8 @@
class DeepChainMap(ChainMap[_KT, _VT]):
- """Variant of ChainMap that allows direct updates to inner scopes.
+ """
+ Variant of ChainMap that allows direct updates to inner scopes.
Only works when all passed mapping are mutable.
"""
diff --git a/pandas/core/aggregation.py b/pandas/core/aggregation.py
index 79b87f146b9a7..448f84d58d7a0 100644
--- a/pandas/core/aggregation.py
+++ b/pandas/core/aggregation.py
@@ -98,7 +98,8 @@ def normalize_keyword_aggregation(kwargs: dict) -> Tuple[dict, List[str], List[i
def _make_unique_kwarg_list(
seq: Sequence[Tuple[Any, Any]]
) -> Sequence[Tuple[Any, Any]]:
- """Uniquify aggfunc name of the pairs in the order list
+ """
+ Uniquify aggfunc name of the pairs in the order list
Examples:
--------
diff --git a/pandas/core/arrays/base.py b/pandas/core/arrays/base.py
index c3c91cea43f6b..b5da6d4c11616 100644
--- a/pandas/core/arrays/base.py
+++ b/pandas/core/arrays/base.py
@@ -1,4 +1,5 @@
-"""An interface for extending pandas with custom arrays.
+"""
+An interface for extending pandas with custom arrays.
.. warning::
@@ -213,7 +214,8 @@ def _from_sequence(cls, scalars, dtype=None, copy=False):
@classmethod
def _from_sequence_of_strings(cls, strings, dtype=None, copy=False):
- """Construct a new ExtensionArray from a sequence of strings.
+ """
+ Construct a new ExtensionArray from a sequence of strings.
.. versionadded:: 0.24.0
@@ -961,7 +963,8 @@ def __repr__(self) -> str:
return f"{class_name}{data}\nLength: {len(self)}, dtype: {self.dtype}"
def _formatter(self, boxed: bool = False) -> Callable[[Any], Optional[str]]:
- """Formatting function for scalar values.
+ """
+ Formatting function for scalar values.
This is used in the default '__repr__'. The returned formatting
function receives instances of your scalar type.
diff --git a/pandas/core/arrays/categorical.py b/pandas/core/arrays/categorical.py
index 6c7c35e9b4763..19602010fd882 100644
--- a/pandas/core/arrays/categorical.py
+++ b/pandas/core/arrays/categorical.py
@@ -1891,7 +1891,8 @@ def __contains__(self, key) -> bool:
return contains(self, key, container=self._codes)
def _tidy_repr(self, max_vals=10, footer=True) -> str:
- """ a short repr displaying only max_vals and an optional (but default
+ """
+ a short repr displaying only max_vals and an optional (but default
footer)
"""
num = max_vals // 2
diff --git a/pandas/core/arrays/datetimelike.py b/pandas/core/arrays/datetimelike.py
index 07aa8d49338c8..e39d1dc03adf5 100644
--- a/pandas/core/arrays/datetimelike.py
+++ b/pandas/core/arrays/datetimelike.py
@@ -134,7 +134,8 @@ def _simple_new(cls, values, **kwargs):
@property
def _scalar_type(self) -> Type[DatetimeLikeScalar]:
- """The scalar associated with this datelike
+ """
+ The scalar associated with this datelike
* PeriodArray : Period
* DatetimeArray : Timestamp
diff --git a/pandas/core/arrays/integer.py b/pandas/core/arrays/integer.py
index 642ae6d4deacb..f1e0882def13b 100644
--- a/pandas/core/arrays/integer.py
+++ b/pandas/core/arrays/integer.py
@@ -477,7 +477,8 @@ def astype(self, dtype, copy: bool = True) -> ArrayLike:
@property
def _ndarray_values(self) -> np.ndarray:
- """Internal pandas method for lossy conversion to a NumPy ndarray.
+ """
+ Internal pandas method for lossy conversion to a NumPy ndarray.
This method is not part of the pandas interface.
@@ -492,7 +493,8 @@ def _values_for_factorize(self) -> Tuple[np.ndarray, float]:
return self.to_numpy(na_value=np.nan), np.nan
def _values_for_argsort(self) -> np.ndarray:
- """Return values for sorting.
+ """
+ Return values for sorting.
Returns
-------
diff --git a/pandas/core/arrays/interval.py b/pandas/core/arrays/interval.py
index c8bb0878b564d..ab3ee5bbcdc3a 100644
--- a/pandas/core/arrays/interval.py
+++ b/pandas/core/arrays/interval.py
@@ -460,7 +460,8 @@ def from_tuples(cls, data, closed="right", copy=False, dtype=None):
return cls.from_arrays(left, right, closed, copy=False, dtype=dtype)
def _validate(self):
- """Verify that the IntervalArray is valid.
+ """
+ Verify that the IntervalArray is valid.
Checks that
diff --git a/pandas/core/arrays/sparse/scipy_sparse.py b/pandas/core/arrays/sparse/scipy_sparse.py
index eff9c03386a38..e77256a5aaadd 100644
--- a/pandas/core/arrays/sparse/scipy_sparse.py
+++ b/pandas/core/arrays/sparse/scipy_sparse.py
@@ -17,7 +17,8 @@ def _check_is_partition(parts, whole):
def _to_ijv(ss, row_levels=(0,), column_levels=(1,), sort_labels=False):
- """ For arbitrary (MultiIndexed) sparse Series return
+ """
+ For arbitrary (MultiIndexed) sparse Series return
(v, i, j, ilabels, jlabels) where (v, (i, j)) is suitable for
passing to scipy.sparse.coo constructor.
"""
@@ -44,7 +45,8 @@ def get_indexers(levels):
# labels_to_i[:] = np.arange(labels_to_i.shape[0])
def _get_label_to_i_dict(labels, sort_labels=False):
- """ Return dict of unique labels to number.
+ """
+ Return dict of unique labels to number.
Optionally sort by label.
"""
labels = Index(map(tuple, labels)).unique().tolist() # squish
diff --git a/pandas/core/computation/ops.py b/pandas/core/computation/ops.py
index 5563d3ae27118..7ed089b283903 100644
--- a/pandas/core/computation/ops.py
+++ b/pandas/core/computation/ops.py
@@ -1,4 +1,5 @@
-"""Operator classes for eval.
+"""
+Operator classes for eval.
"""
from datetime import datetime
@@ -248,7 +249,8 @@ def is_datetime(self) -> bool:
def _in(x, y):
- """Compute the vectorized membership of ``x in y`` if possible, otherwise
+ """
+ Compute the vectorized membership of ``x in y`` if possible, otherwise
use Python.
"""
try:
@@ -263,7 +265,8 @@ def _in(x, y):
def _not_in(x, y):
- """Compute the vectorized membership of ``x not in y`` if possible,
+ """
+ Compute the vectorized membership of ``x not in y`` if possible,
otherwise use Python.
"""
try:
@@ -445,7 +448,8 @@ def evaluate(self, env, engine: str, parser, term_type, eval_in_python):
return term_type(name, env=env)
def convert_values(self):
- """Convert datetimes to a comparable value in an expression.
+ """
+ Convert datetimes to a comparable value in an expression.
"""
def stringify(value):
diff --git a/pandas/core/computation/parsing.py b/pandas/core/computation/parsing.py
index ce213c8532834..92a2c20cd2a9e 100644
--- a/pandas/core/computation/parsing.py
+++ b/pandas/core/computation/parsing.py
@@ -1,4 +1,5 @@
-""":func:`~pandas.eval` source string parsing functions
+"""
+:func:`~pandas.eval` source string parsing functions
"""
from io import StringIO
diff --git a/pandas/core/computation/pytables.py b/pandas/core/computation/pytables.py
index 097c3c22aa6c3..828ec11c2bd38 100644
--- a/pandas/core/computation/pytables.py
+++ b/pandas/core/computation/pytables.py
@@ -149,7 +149,8 @@ def is_valid(self) -> bool:
@property
def is_in_table(self) -> bool:
- """ return True if this is a valid column name for generation (e.g. an
+ """
+ return True if this is a valid column name for generation (e.g. an
actual column in the table)
"""
return self.queryables.get(self.lhs) is not None
@@ -175,7 +176,8 @@ def generate(self, v) -> str:
return f"({self.lhs} {self.op} {val})"
def convert_value(self, v) -> "TermValue":
- """ convert the expression that is in the term to something that is
+ """
+ convert the expression that is in the term to something that is
accepted by pytables
"""
diff --git a/pandas/core/computation/scope.py b/pandas/core/computation/scope.py
index 70dcf4defdb52..937c81fdeb8d6 100644
--- a/pandas/core/computation/scope.py
+++ b/pandas/core/computation/scope.py
@@ -31,7 +31,8 @@ def ensure_scope(
def _replacer(x) -> str:
- """Replace a number with its hexadecimal representation. Used to tag
+ """
+ Replace a number with its hexadecimal representation. Used to tag
temporary variables with their calling scope's id.
"""
# get the hex repr of the binary char and remove 0x and pad by pad_size
diff --git a/pandas/core/dtypes/cast.py b/pandas/core/dtypes/cast.py
index 011c09c9ca1ef..1c969d40c2c7f 100644
--- a/pandas/core/dtypes/cast.py
+++ b/pandas/core/dtypes/cast.py
@@ -105,7 +105,8 @@ def is_nested_object(obj) -> bool:
def maybe_downcast_to_dtype(result, dtype):
- """ try to cast to the specified dtype (e.g. convert back to bool/int
+ """
+ try to cast to the specified dtype (e.g. convert back to bool/int
or could be an astype of float64->float32
"""
do_round = False
@@ -750,7 +751,8 @@ def maybe_upcast(values, fill_value=np.nan, dtype=None, copy: bool = False):
def invalidate_string_dtypes(dtype_set):
- """Change string like dtypes to object for
+ """
+ Change string like dtypes to object for
``DataFrame.select_dtypes()``.
"""
non_string_dtypes = dtype_set - {np.dtype("S").type, np.dtype("<U").type}
@@ -1216,7 +1218,8 @@ def try_timedelta(v):
def maybe_cast_to_datetime(value, dtype, errors: str = "raise"):
- """ try to cast the array/value to a datetimelike dtype, converting float
+ """
+ try to cast the array/value to a datetimelike dtype, converting float
nan to iNaT
"""
from pandas.core.tools.timedeltas import to_timedelta
diff --git a/pandas/core/dtypes/concat.py b/pandas/core/dtypes/concat.py
index 003c3505885bb..49034616b374a 100644
--- a/pandas/core/dtypes/concat.py
+++ b/pandas/core/dtypes/concat.py
@@ -136,7 +136,8 @@ def is_nonempty(x) -> bool:
def concat_categorical(to_concat, axis: int = 0):
- """Concatenate an object/categorical array of arrays, each of which is a
+ """
+ Concatenate an object/categorical array of arrays, each of which is a
single dtype
Parameters
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 1a16f8792e9e7..ec72541128708 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -281,21 +281,24 @@ def _validate_dtype(self, dtype):
@property
def _constructor(self: FrameOrSeries) -> Type[FrameOrSeries]:
- """Used when a manipulation result has the same dimensions as the
+ """
+ Used when a manipulation result has the same dimensions as the
original.
"""
raise AbstractMethodError(self)
@property
def _constructor_sliced(self):
- """Used when a manipulation result has one lower dimension(s) as the
+ """
+    Used when a manipulation result has one lower dimension than the
original, such as DataFrame single columns slicing.
"""
raise AbstractMethodError(self)
@property
def _constructor_expanddim(self):
- """Used when a manipulation result has one higher dimension as the
+ """
+    Used when a manipulation result has one higher dimension than the
original, such as Series.to_frame()
"""
raise NotImplementedError
@@ -346,7 +349,8 @@ def _construct_axes_dict(self, axes=None, **kwargs):
def _construct_axes_from_arguments(
self, args, kwargs, require_all: bool = False, sentinel=None
):
- """Construct and returns axes if supplied in args/kwargs.
+ """
+        Construct and return axes if supplied in args/kwargs.
If require_all, raise if all axis arguments are not supplied
return a tuple of (axes, kwargs).
@@ -1735,7 +1739,8 @@ def keys(self):
return self._info_axis
def items(self):
- """Iterate over (label, values) on info axis
+ """
+ Iterate over (label, values) on info axis
This is index for Series and columns for DataFrame.
@@ -3115,18 +3120,22 @@ def to_csv(
# Lookup Caching
def _set_as_cached(self, item, cacher) -> None:
- """Set the _cacher attribute on the calling object with a weakref to
+ """
+ Set the _cacher attribute on the calling object with a weakref to
cacher.
"""
self._cacher = (item, weakref.ref(cacher))
def _reset_cacher(self) -> None:
- """Reset the cacher."""
+ """
+ Reset the cacher.
+ """
if hasattr(self, "_cacher"):
del self._cacher
def _maybe_cache_changed(self, item, value) -> None:
- """The object has called back to us saying maybe it has changed.
+ """
+ The object has called back to us saying maybe it has changed.
"""
self._data.set(item, value)
@@ -5073,7 +5082,8 @@ def __finalize__(
return self
def __getattr__(self, name: str):
- """After regular attribute access, try looking up the name
+ """
+ After regular attribute access, try looking up the name
This allows simpler access to columns for interactive use.
"""
# Note: obj.x will always call obj.__getattribute__('x') prior to
@@ -5091,7 +5101,8 @@ def __getattr__(self, name: str):
return object.__getattribute__(self, name)
def __setattr__(self, name: str, value) -> None:
- """After regular attribute access, try setting the name
+ """
+ After regular attribute access, try setting the name
This allows simpler access to columns for interactive use.
"""
# first try regular attribute access via __getattribute__, so that
@@ -5131,7 +5142,8 @@ def __setattr__(self, name: str, value) -> None:
object.__setattr__(self, name, value)
def _dir_additions(self):
- """ add the string-like attributes from the info_axis.
+ """
+ add the string-like attributes from the info_axis.
If info_axis is a MultiIndex, it's first level values are used.
"""
additions = {
@@ -5145,7 +5157,8 @@ def _dir_additions(self):
# Consolidation of internals
def _protect_consolidate(self, f):
- """Consolidate _data -- if the blocks have changed, then clear the
+ """
+ Consolidate _data -- if the blocks have changed, then clear the
cache
"""
blocks_before = len(self._data.blocks)
diff --git a/pandas/core/indexes/category.py b/pandas/core/indexes/category.py
index bb62d500311df..adb2ed9211bfe 100644
--- a/pandas/core/indexes/category.py
+++ b/pandas/core/indexes/category.py
@@ -519,7 +519,8 @@ def reindex(self, target, method=None, level=None, limit=None, tolerance=None):
return new_target, indexer
def _reindex_non_unique(self, target):
- """ reindex from a non-unique; which CategoricalIndex's are almost
+ """
+        reindex from a non-unique index, which CategoricalIndexes are almost
always
"""
new_target, indexer = self.reindex(target)
diff --git a/pandas/core/indexing.py b/pandas/core/indexing.py
index cb8b9cc04fc24..46017377f2b9c 100755
--- a/pandas/core/indexing.py
+++ b/pandas/core/indexing.py
@@ -85,7 +85,8 @@ class IndexingError(Exception):
class IndexingMixin:
- """Mixin for adding .loc/.iloc/.at/.iat to Datafames and Series.
+ """
+    Mixin for adding .loc/.iloc/.at/.iat to DataFrames and Series.
"""
@property
diff --git a/pandas/core/internals/blocks.py b/pandas/core/internals/blocks.py
index 7d6ef11719b3a..34fa4c0e6544e 100644
--- a/pandas/core/internals/blocks.py
+++ b/pandas/core/internals/blocks.py
@@ -256,7 +256,8 @@ def mgr_locs(self, new_mgr_locs):
@property
def array_dtype(self):
- """ the dtype to return if I want to construct this block as an
+ """
+ the dtype to return if I want to construct this block as an
array
"""
return self.dtype
@@ -374,7 +375,8 @@ def delete(self, loc):
self.mgr_locs = self.mgr_locs.delete(loc)
def apply(self, func, **kwargs) -> List["Block"]:
- """ apply the function to my values; return a block if we are not
+ """
+ apply the function to my values; return a block if we are not
one
"""
with np.errstate(all="ignore"):
@@ -400,7 +402,8 @@ def _split_op_result(self, result) -> List["Block"]:
return [result]
def fillna(self, value, limit=None, inplace=False, downcast=None):
- """ fillna on the block with the value. If we fail, then convert to
+ """
+ fillna on the block with the value. If we fail, then convert to
ObjectBlock and try again
"""
inplace = validate_bool_kwarg(inplace, "inplace")
@@ -648,7 +651,8 @@ def convert(
timedelta: bool = True,
coerce: bool = False,
):
- """ attempt to coerce any object types to better types return a copy
+ """
+        attempt to coerce any object types to better types; return a copy
of the block (if copy = True) by definition we are not an ObjectBlock
here!
"""
@@ -693,7 +697,8 @@ def copy(self, deep=True):
def replace(
self, to_replace, value, inplace=False, filter=None, regex=False, convert=True
):
- """replace the to_replace value with value, possible to create new
+ """
+ replace the to_replace value with value, possible to create new
blocks here this is just a call to putmask. regex is not used here.
It is used in ObjectBlocks. It is here for API compatibility.
"""
@@ -913,7 +918,8 @@ def setitem(self, indexer, value):
return block
def putmask(self, mask, new, align=True, inplace=False, axis=0, transpose=False):
- """ putmask the data to the block; it is possible that we may create a
+ """
+ putmask the data to the block; it is possible that we may create a
new dtype of block
return the resulting block(s)
@@ -1446,7 +1452,8 @@ def equals(self, other) -> bool:
return array_equivalent(self.values, other.values)
def _unstack(self, unstacker_func, new_columns, n_rows, fill_value):
- """Return a list of unstacked blocks of self
+ """
+ Return a list of unstacked blocks of self
Parameters
----------
@@ -1584,7 +1591,8 @@ class NonConsolidatableMixIn:
_validate_ndim = False
def __init__(self, values, placement, ndim=None):
- """Initialize a non-consolidatable block.
+ """
+ Initialize a non-consolidatable block.
'ndim' may be inferred from 'placement'.
@@ -1699,7 +1707,8 @@ def _get_unstack_items(self, unstacker, new_columns):
class ExtensionBlock(NonConsolidatableMixIn, Block):
- """Block for holding extension types.
+ """
+ Block for holding extension types.
Notes
-----
@@ -1757,7 +1766,8 @@ def is_numeric(self):
return self.values.dtype._is_numeric
def setitem(self, indexer, value):
- """Set the value inplace, returning a same-typed block.
+ """
+ Set the value inplace, returning a same-typed block.
This differs from Block.setitem by not allowing setitem to change
the dtype of the Block.
@@ -2291,7 +2301,8 @@ def _holder(self):
return DatetimeArray
def _maybe_coerce_values(self, values):
- """Input validation for values passed to __init__. Ensure that
+ """
+ Input validation for values passed to __init__. Ensure that
we have datetime64TZ, coercing if necessary.
Parameters
@@ -2580,7 +2591,8 @@ def __init__(self, values, placement=None, ndim=2):
@property
def is_bool(self):
- """ we can be a bool if we have only bool values but are of type
+ """
+ we can be a bool if we have only bool values but are of type
object
"""
return lib.is_bool_array(self.values.ravel())
@@ -2593,7 +2605,8 @@ def convert(
timedelta: bool = True,
coerce: bool = False,
):
- """ attempt to coerce any object types to better types return a copy of
+ """
+        attempt to coerce any object types to better types; return a copy of
the block (if copy = True) by definition we ARE an ObjectBlock!!!!!
can return multiple blocks!
@@ -2886,7 +2899,8 @@ def _holder(self):
@property
def array_dtype(self):
- """ the dtype to return if I want to construct this block as an
+ """
+ the dtype to return if I want to construct this block as an
array
"""
return np.object_
diff --git a/pandas/core/internals/construction.py b/pandas/core/internals/construction.py
index 9dd4312a39525..57ed2555761be 100644
--- a/pandas/core/internals/construction.py
+++ b/pandas/core/internals/construction.py
@@ -534,7 +534,8 @@ def _list_of_series_to_arrays(data, columns, coerce_float=False, dtype=None):
def _list_of_dict_to_arrays(data, columns, coerce_float=False, dtype=None):
- """Convert list of dicts to numpy arrays
+ """
+ Convert list of dicts to numpy arrays
if `columns` is not passed, column names are inferred from the records
- for OrderedDict and dicts, the column names match
diff --git a/pandas/core/internals/managers.py b/pandas/core/internals/managers.py
index fb20b5e89ccf3..69ceb95985140 100644
--- a/pandas/core/internals/managers.py
+++ b/pandas/core/internals/managers.py
@@ -1756,7 +1756,8 @@ def form_blocks(arrays, names, axes):
def _simple_blockify(tuples, dtype):
- """ return a single array of a block that has a single dtype; if dtype is
+ """
+ return a single array of a block that has a single dtype; if dtype is
not None, coerce to this dtype
"""
values, placement = _stack_arrays(tuples, dtype)
@@ -1815,7 +1816,8 @@ def _shape_compat(x):
def _interleaved_dtype(
blocks: List[Block],
) -> Optional[Union[np.dtype, ExtensionDtype]]:
- """Find the common dtype for `blocks`.
+ """
+ Find the common dtype for `blocks`.
Parameters
----------
diff --git a/pandas/core/nanops.py b/pandas/core/nanops.py
index 6115c4af41b25..a5c609473760d 100644
--- a/pandas/core/nanops.py
+++ b/pandas/core/nanops.py
@@ -652,7 +652,8 @@ def _get_counts_nanvar(
ddof: int,
dtype: Dtype = float,
) -> Tuple[Union[int, np.ndarray], Union[int, np.ndarray]]:
- """ Get the count of non-null values along an axis, accounting
+ """
+ Get the count of non-null values along an axis, accounting
for degrees of freedom.
Parameters
@@ -956,7 +957,8 @@ def nanskew(
skipna: bool = True,
mask: Optional[np.ndarray] = None,
) -> float:
- """ Compute the sample skewness.
+ """
+ Compute the sample skewness.
The statistic computed here is the adjusted Fisher-Pearson standardized
moment coefficient G1. The algorithm computes this coefficient directly
@@ -1194,7 +1196,8 @@ def _get_counts(
axis: Optional[int],
dtype: Dtype = float,
) -> Union[int, np.ndarray]:
- """ Get the count of non-null values along an axis
+ """
+ Get the count of non-null values along an axis
Parameters
----------
diff --git a/pandas/core/reshape/tile.py b/pandas/core/reshape/tile.py
index e499158a13b0c..86417faf6cd11 100644
--- a/pandas/core/reshape/tile.py
+++ b/pandas/core/reshape/tile.py
@@ -587,7 +587,8 @@ def _round_frac(x, precision: int):
def _infer_precision(base_precision: int, bins) -> int:
- """Infer an appropriate precision for _round_frac
+ """
+ Infer an appropriate precision for _round_frac
"""
for precision in range(base_precision, 20):
levels = [_round_frac(b, precision) for b in bins]
diff --git a/pandas/io/common.py b/pandas/io/common.py
index beb6c9d97aff3..c52583eed27ec 100644
--- a/pandas/io/common.py
+++ b/pandas/io/common.py
@@ -74,8 +74,9 @@ def is_url(url) -> bool:
def _expand_user(
filepath_or_buffer: FilePathOrBuffer[AnyStr],
) -> FilePathOrBuffer[AnyStr]:
- """Return the argument with an initial component of ~ or ~user
- replaced by that user's home directory.
+ """
+ Return the argument with an initial component of ~ or ~user
+ replaced by that user's home directory.
Parameters
----------
@@ -103,7 +104,8 @@ def validate_header_arg(header) -> None:
def stringify_path(
filepath_or_buffer: FilePathOrBuffer[AnyStr],
) -> FilePathOrBuffer[AnyStr]:
- """Attempt to convert a path-like object to a string.
+ """
+ Attempt to convert a path-like object to a string.
Parameters
----------
diff --git a/pandas/io/excel/_base.py b/pandas/io/excel/_base.py
index 70c09151258ff..97959bd125113 100644
--- a/pandas/io/excel/_base.py
+++ b/pandas/io/excel/_base.py
@@ -721,7 +721,8 @@ def _get_sheet_name(self, sheet_name):
return sheet_name
def _value_with_fmt(self, val):
- """Convert numpy types to Python types for the Excel writers.
+ """
+ Convert numpy types to Python types for the Excel writers.
Parameters
----------
@@ -755,7 +756,8 @@ def _value_with_fmt(self, val):
@classmethod
def check_extension(cls, ext):
- """checks that path's extension against the Writer's supported
+ """
+        checks the path's extension against the Writer's supported
extensions. If it isn't supported, raises UnsupportedFiletypeError.
"""
if ext.startswith("."):
diff --git a/pandas/io/excel/_odfreader.py b/pandas/io/excel/_odfreader.py
index ec5f6fcb17ff8..7af776dc1a10f 100644
--- a/pandas/io/excel/_odfreader.py
+++ b/pandas/io/excel/_odfreader.py
@@ -64,7 +64,8 @@ def get_sheet_by_name(self, name: str):
raise ValueError(f"sheet {name} not found")
def get_sheet_data(self, sheet, convert_float: bool) -> List[List[Scalar]]:
- """Parse an ODF Table into a list of lists
+ """
+ Parse an ODF Table into a list of lists
"""
from odf.table import CoveredTableCell, TableCell, TableRow
@@ -120,7 +121,8 @@ def get_sheet_data(self, sheet, convert_float: bool) -> List[List[Scalar]]:
return table
def _get_row_repeat(self, row) -> int:
- """Return number of times this row was repeated
+ """
+ Return number of times this row was repeated
Repeating an empty row appeared to be a common way
of representing sparse rows in the table.
"""
@@ -134,7 +136,8 @@ def _get_column_repeat(self, cell) -> int:
return int(cell.attributes.get((TABLENS, "number-columns-repeated"), 1))
def _is_empty_row(self, row) -> bool:
- """Helper function to find empty rows
+ """
+ Helper function to find empty rows
"""
for column in row.childNodes:
if len(column.childNodes) > 0:
diff --git a/pandas/io/excel/_openpyxl.py b/pandas/io/excel/_openpyxl.py
index d35d466e6c5c9..a96c0f814e2d8 100644
--- a/pandas/io/excel/_openpyxl.py
+++ b/pandas/io/excel/_openpyxl.py
@@ -468,7 +468,8 @@ def write_cells(
class _OpenpyxlReader(_BaseExcelReader):
def __init__(self, filepath_or_buffer: FilePathOrBuffer) -> None:
- """Reader using openpyxl engine.
+ """
+ Reader using openpyxl engine.
Parameters
----------
diff --git a/pandas/io/excel/_util.py b/pandas/io/excel/_util.py
index a33406b6e80d7..c8d40d7141fc8 100644
--- a/pandas/io/excel/_util.py
+++ b/pandas/io/excel/_util.py
@@ -171,7 +171,8 @@ def _trim_excel_header(row):
def _fill_mi_header(row, control_row):
- """Forward fill blank entries in row but only inside the same parent index.
+ """
+ Forward fill blank entries in row but only inside the same parent index.
Used for creating headers in Multiindex.
diff --git a/pandas/io/excel/_xlrd.py b/pandas/io/excel/_xlrd.py
index 16f800a6de2c9..8f7d3b1368fc7 100644
--- a/pandas/io/excel/_xlrd.py
+++ b/pandas/io/excel/_xlrd.py
@@ -9,7 +9,8 @@
class _XlrdReader(_BaseExcelReader):
def __init__(self, filepath_or_buffer):
- """Reader using xlrd engine.
+ """
+ Reader using xlrd engine.
Parameters
----------
diff --git a/pandas/io/excel/_xlwt.py b/pandas/io/excel/_xlwt.py
index d102a885cef0a..78efe77e9fe2d 100644
--- a/pandas/io/excel/_xlwt.py
+++ b/pandas/io/excel/_xlwt.py
@@ -80,7 +80,8 @@ def write_cells(
def _style_to_xlwt(
cls, item, firstlevel: bool = True, field_sep=",", line_sep=";"
) -> str:
- """helper which recursively generate an xlwt easy style string
+ """
+        helper which recursively generates an xlwt easy style string
for example:
hstyle = {"font": {"bold": True},
diff --git a/pandas/io/formats/excel.py b/pandas/io/formats/excel.py
index 28a069bc9fc1b..aac1df5dcd396 100644
--- a/pandas/io/formats/excel.py
+++ b/pandas/io/formats/excel.py
@@ -41,7 +41,8 @@ def __init__(
class CSSToExcelConverter:
- """A callable for converting CSS declarations to ExcelWriter styles
+ """
+ A callable for converting CSS declarations to ExcelWriter styles
Supports parts of CSS 2.2, with minimal CSS 3.0 support (e.g. text-shadow),
focusing on font styling, backgrounds, borders and alignment.
diff --git a/pandas/io/formats/format.py b/pandas/io/formats/format.py
index 55d534f975b68..0693c083c9ddc 100644
--- a/pandas/io/formats/format.py
+++ b/pandas/io/formats/format.py
@@ -1832,7 +1832,8 @@ def __init__(self, accuracy: Optional[int] = None, use_eng_prefix: bool = False)
self.use_eng_prefix = use_eng_prefix
def __call__(self, num: Union[int, float]) -> str:
- """ Formats a number in engineering notation, appending a letter
+ """
+ Formats a number in engineering notation, appending a letter
representing the power of 1000 of the original number. Some examples:
>>> format_eng(0) # for self.accuracy = 0
@@ -1930,7 +1931,8 @@ def _binify(cols: List[int], line_width: int) -> List[int]:
def get_level_lengths(
levels: Any, sentinel: Union[bool, object, str] = ""
) -> List[Dict[int, int]]:
- """For each index in each level the function returns lengths of indexes.
+ """
+ For each index in each level the function returns lengths of indexes.
Parameters
----------
diff --git a/pandas/io/html.py b/pandas/io/html.py
index ee8e96b4b3344..561570f466b68 100644
--- a/pandas/io/html.py
+++ b/pandas/io/html.py
@@ -600,7 +600,8 @@ def _build_doc(self):
def _build_xpath_expr(attrs) -> str:
- """Build an xpath expression to simulate bs4's ability to pass in kwargs to
+ """
+ Build an xpath expression to simulate bs4's ability to pass in kwargs to
search for attributes when using the lxml parser.
Parameters
diff --git a/pandas/io/pytables.py b/pandas/io/pytables.py
index 1390d2d514a5e..048aa8b1915d1 100644
--- a/pandas/io/pytables.py
+++ b/pandas/io/pytables.py
@@ -3224,7 +3224,8 @@ def is_multi_index(self) -> bool:
return isinstance(self.levels, list)
def validate_multiindex(self, obj):
- """validate that we can store the multi-index; reset and return the
+ """
+ validate that we can store the multi-index; reset and return the
new object
"""
levels = [
@@ -3375,7 +3376,8 @@ def validate_version(self, where=None):
warnings.warn(ws, IncompatibilityWarning)
def validate_min_itemsize(self, min_itemsize):
- """validate the min_itemsize doesn't contain items that are not in the
+ """
+ validate the min_itemsize doesn't contain items that are not in the
axes this needs data_columns to be defined
"""
if min_itemsize is None:
@@ -3587,7 +3589,8 @@ def get_object(cls, obj, transposed: bool):
return obj
def validate_data_columns(self, data_columns, min_itemsize, non_index_axes):
- """take the input data_columns and min_itemize and create a data
+ """
+        take the input data_columns and min_itemsize and create a data
columns spec
"""
if not len(non_index_axes):
@@ -3999,7 +4002,8 @@ def create_description(
def read_coordinates(
self, where=None, start: Optional[int] = None, stop: Optional[int] = None,
):
- """select coordinates (row numbers) from a table; return the
+ """
+ select coordinates (row numbers) from a table; return the
coordinates object
"""
# validate the version
@@ -4028,7 +4032,8 @@ def read_column(
start: Optional[int] = None,
stop: Optional[int] = None,
):
- """return a single column from the table, generally only indexables
+ """
+ return a single column from the table, generally only indexables
are interesting
"""
# validate the version
diff --git a/pandas/io/sql.py b/pandas/io/sql.py
index 69e5a973ff706..e7120b1f6da08 100644
--- a/pandas/io/sql.py
+++ b/pandas/io/sql.py
@@ -653,7 +653,8 @@ def create(self):
self._execute_create()
def _execute_insert(self, conn, keys, data_iter):
- """Execute SQL statement inserting data
+ """
+ Execute SQL statement inserting data
Parameters
----------
@@ -667,7 +668,8 @@ def _execute_insert(self, conn, keys, data_iter):
conn.execute(self.table.insert(), data)
def _execute_insert_multi(self, conn, keys, data_iter):
- """Alternative to _execute_insert for DBs support multivalue INSERT.
+ """
+        Alternative to _execute_insert for DBs that support multivalue INSERT.
Note: multi-value insert is usually faster for analytics DBs
and tables containing a few columns
@@ -1092,7 +1094,8 @@ def read_table(
schema=None,
chunksize=None,
):
- """Read SQL database table into a DataFrame.
+ """
+ Read SQL database table into a DataFrame.
Parameters
----------
@@ -1168,7 +1171,8 @@ def read_query(
params=None,
chunksize=None,
):
- """Read SQL query into a DataFrame.
+ """
+ Read SQL query into a DataFrame.
Parameters
----------
diff --git a/pandas/io/stata.py b/pandas/io/stata.py
index cf3251faae979..593228e99477b 100644
--- a/pandas/io/stata.py
+++ b/pandas/io/stata.py
@@ -482,7 +482,8 @@ class InvalidColumnName(Warning):
def _cast_to_stata_types(data: DataFrame) -> DataFrame:
- """Checks the dtypes of the columns of a pandas DataFrame for
+ """
+ Checks the dtypes of the columns of a pandas DataFrame for
compatibility with the data types and ranges supported by Stata, and
converts if necessary.
@@ -2128,7 +2129,8 @@ def _write_bytes(self, value: bytes) -> None:
self._file.write(value)
def _prepare_categoricals(self, data: DataFrame) -> DataFrame:
- """Check for categorical columns, retain categorical information for
+ """
+ Check for categorical columns, retain categorical information for
Stata file and convert categorical data to int
"""
is_cat = [is_categorical_dtype(data[col]) for col in data]
@@ -2170,7 +2172,8 @@ def _prepare_categoricals(self, data: DataFrame) -> DataFrame:
def _replace_nans(self, data: DataFrame) -> DataFrame:
# return data
- """Checks floating point data columns for nans, and replaces these with
+ """
+ Checks floating point data columns for nans, and replaces these with
the generic Stata for missing value (.)
"""
for c in data:
@@ -3035,7 +3038,8 @@ def _write_header(
self._write_bytes(self._tag(bio.read(), "header"))
def _write_map(self) -> None:
- """Called twice during file write. The first populates the values in
+ """
+ Called twice during file write. The first populates the values in
the map with 0s. The second call writes the final map locations when
all blocks have been written.
"""
@@ -3185,7 +3189,8 @@ def _write_file_close_tag(self) -> None:
self._update_map("end-of-file")
def _update_strl_names(self) -> None:
- """Update column names for conversion to strl if they might have been
+ """
+ Update column names for conversion to strl if they might have been
changed to comply with Stata naming rules
"""
# Update convert_strl if names changed
@@ -3195,7 +3200,8 @@ def _update_strl_names(self) -> None:
self._convert_strl[idx] = new
def _convert_strls(self, data: DataFrame) -> DataFrame:
- """Convert columns to StrLs if either very large or in the
+ """
+ Convert columns to StrLs if either very large or in the
convert_strl variable
"""
convert_cols = [
diff --git a/pandas/plotting/_matplotlib/tools.py b/pandas/plotting/_matplotlib/tools.py
index dafdd6eecabc0..5743288982da4 100644
--- a/pandas/plotting/_matplotlib/tools.py
+++ b/pandas/plotting/_matplotlib/tools.py
@@ -100,7 +100,8 @@ def _subplots(
layout_type="box",
**fig_kw,
):
- """Create a figure with a set of subplots already made.
+ """
+ Create a figure with a set of subplots already made.
This utility wrapper makes it convenient to create common layouts of
subplots, including the enclosing figure object, in a single call.
diff --git a/pandas/tests/extension/arrow/arrays.py b/pandas/tests/extension/arrow/arrays.py
index b67ca4cfab83d..cd4b43c83340f 100644
--- a/pandas/tests/extension/arrow/arrays.py
+++ b/pandas/tests/extension/arrow/arrays.py
@@ -1,4 +1,5 @@
-"""Rudimentary Apache Arrow-backed ExtensionArray.
+"""
+Rudimentary Apache Arrow-backed ExtensionArray.
At the moment, just a boolean array / type is implemented.
Eventually, we'll want to parametrize the type and support
diff --git a/pandas/tests/extension/base/__init__.py b/pandas/tests/extension/base/__init__.py
index e2b6ea0304f6a..323cb843b2d74 100644
--- a/pandas/tests/extension/base/__init__.py
+++ b/pandas/tests/extension/base/__init__.py
@@ -1,4 +1,5 @@
-"""Base test suite for extension arrays.
+"""
+Base test suite for extension arrays.
These tests are intended for third-party libraries to subclass to validate
that their extension arrays and dtypes satisfy the interface. Moving or
diff --git a/pandas/tests/extension/base/ops.py b/pandas/tests/extension/base/ops.py
index 0609f19c8e0c3..4009041218ac2 100644
--- a/pandas/tests/extension/base/ops.py
+++ b/pandas/tests/extension/base/ops.py
@@ -51,7 +51,8 @@ def _check_divmod_op(self, s, op, other, exc=Exception):
class BaseArithmeticOpsTests(BaseOpsUtil):
- """Various Series and DataFrame arithmetic ops methods.
+ """
+ Various Series and DataFrame arithmetic ops methods.
Subclasses supporting various ops should set the class variables
to indicate that they support ops of that kind
diff --git a/pandas/tests/extension/conftest.py b/pandas/tests/extension/conftest.py
index d37638d37e4d6..1942d737780da 100644
--- a/pandas/tests/extension/conftest.py
+++ b/pandas/tests/extension/conftest.py
@@ -13,7 +13,8 @@ def dtype():
@pytest.fixture
def data():
- """Length-100 array for this type.
+ """
+ Length-100 array for this type.
* data[0] and data[1] should both be non missing
* data[0] and data[1] should not be equal
@@ -67,7 +68,8 @@ def gen(count):
@pytest.fixture
def data_for_sorting():
- """Length-3 array with a known sort order.
+ """
+ Length-3 array with a known sort order.
This should be three items [B, C, A] with
A < B < C
@@ -77,7 +79,8 @@ def data_for_sorting():
@pytest.fixture
def data_missing_for_sorting():
- """Length-3 array with a known sort order.
+ """
+ Length-3 array with a known sort order.
This should be three items [B, NA, A] with
A < B and NA missing.
@@ -87,7 +90,8 @@ def data_missing_for_sorting():
@pytest.fixture
def na_cmp():
- """Binary operator for comparing NA values.
+ """
+ Binary operator for comparing NA values.
Should return a function of two arguments that returns
True if both arguments are (scalar) NA for your type.
@@ -105,7 +109,8 @@ def na_value():
@pytest.fixture
def data_for_grouping():
- """Data for factorization, grouping, and unique tests.
+ """
+ Data for factorization, grouping, and unique tests.
Expected to be like [B, B, NA, NA, A, A, B, C]
diff --git a/pandas/tests/extension/json/array.py b/pandas/tests/extension/json/array.py
index e6b147e7a4ce7..a229a824d0f9b 100644
--- a/pandas/tests/extension/json/array.py
+++ b/pandas/tests/extension/json/array.py
@@ -1,4 +1,5 @@
-"""Test extension array for storing nested data in a pandas container.
+"""
+Test extension array for storing nested data in a pandas container.
The JSONArray stores lists of dictionaries. The storage mechanism is a list,
not an ndarray.
diff --git a/pandas/tests/groupby/conftest.py b/pandas/tests/groupby/conftest.py
index ebac36c5f8c78..1214734358c80 100644
--- a/pandas/tests/groupby/conftest.py
+++ b/pandas/tests/groupby/conftest.py
@@ -107,7 +107,8 @@ def three_group():
@pytest.fixture(params=sorted(reduction_kernels))
def reduction_func(request):
- """yields the string names of all groupby reduction functions, one at a time.
+ """
+ yields the string names of all groupby reduction functions, one at a time.
"""
return request.param
diff --git a/pandas/tests/indexing/common.py b/pandas/tests/indexing/common.py
index 9d55609d5db00..9cc031001f81c 100644
--- a/pandas/tests/indexing/common.py
+++ b/pandas/tests/indexing/common.py
@@ -101,7 +101,8 @@ def setup_method(self, method):
setattr(self, kind, d)
def generate_indices(self, f, values=False):
- """ generate the indices
+ """
+ generate the indices
if values is True , use the axis values
is False, use the range
"""
diff --git a/pandas/tests/indexing/multiindex/conftest.py b/pandas/tests/indexing/multiindex/conftest.py
index 48e090b242208..0256f5e35e1db 100644
--- a/pandas/tests/indexing/multiindex/conftest.py
+++ b/pandas/tests/indexing/multiindex/conftest.py
@@ -20,7 +20,8 @@ def multiindex_dataframe_random_data():
@pytest.fixture
def multiindex_year_month_day_dataframe_random_data():
- """DataFrame with 3 level MultiIndex (year, month, day) covering
+ """
+ DataFrame with 3 level MultiIndex (year, month, day) covering
first 100 business days from 2000-01-01 with random data
"""
tdf = tm.makeTimeDataFrame(100)
diff --git a/pandas/tests/io/conftest.py b/pandas/tests/io/conftest.py
index 7810778602e12..fe71ca77a7dda 100644
--- a/pandas/tests/io/conftest.py
+++ b/pandas/tests/io/conftest.py
@@ -27,7 +27,8 @@ def salaries_table(datapath):
@pytest.fixture
def s3_resource(tips_file, jsonl_file):
- """Fixture for mocking S3 interaction.
+ """
+ Fixture for mocking S3 interaction.
The primary bucket name is "pandas-test". The following datasets
are loaded.
diff --git a/pandas/tests/io/pytables/common.py b/pandas/tests/io/pytables/common.py
index 7f0b3ab7957e6..aad18890de3ad 100644
--- a/pandas/tests/io/pytables/common.py
+++ b/pandas/tests/io/pytables/common.py
@@ -74,7 +74,8 @@ def ensure_clean_path(path):
def _maybe_remove(store, key):
- """For tests using tables, try removing the table to be sure there is
+ """
+ For tests using tables, try removing the table to be sure there is
no content from previous tests using the same table name.
"""
try:
diff --git a/pandas/tests/resample/conftest.py b/pandas/tests/resample/conftest.py
index a4ac15d9f3b07..d5b71a6e4cee1 100644
--- a/pandas/tests/resample/conftest.py
+++ b/pandas/tests/resample/conftest.py
@@ -98,7 +98,8 @@ def _index_name():
@pytest.fixture
def index(_index_factory, _index_start, _index_end, _index_freq, _index_name):
- """Fixture for parametrization of date_range, period_range and
+ """
+ Fixture for parametrization of date_range, period_range and
timedelta_range indexes
"""
return _index_factory(_index_start, _index_end, freq=_index_freq, name=_index_name)
@@ -106,7 +107,8 @@ def index(_index_factory, _index_start, _index_end, _index_freq, _index_name):
@pytest.fixture
def _static_values(index):
- """Fixture for parametrization of values used in parametrization of
+ """
+ Fixture for parametrization of values used in parametrization of
Series and DataFrames with date_range, period_range and
timedelta_range indexes
"""
@@ -115,7 +117,8 @@ def _static_values(index):
@pytest.fixture
def _series_name():
- """Fixture for parametrization of Series name for Series used with
+ """
+ Fixture for parametrization of Series name for Series used with
date_range, period_range and timedelta_range indexes
"""
return None
@@ -123,7 +126,8 @@ def _series_name():
@pytest.fixture
def series(index, _series_name, _static_values):
- """Fixture for parametrization of Series with date_range, period_range and
+ """
+ Fixture for parametrization of Series with date_range, period_range and
timedelta_range indexes
"""
return Series(_static_values, index=index, name=_series_name)
@@ -131,7 +135,8 @@ def series(index, _series_name, _static_values):
@pytest.fixture
def empty_series(series):
- """Fixture for parametrization of empty Series with date_range,
+ """
+ Fixture for parametrization of empty Series with date_range,
period_range and timedelta_range indexes
"""
return series[:0]
@@ -139,7 +144,8 @@ def empty_series(series):
@pytest.fixture
def frame(index, _series_name, _static_values):
- """Fixture for parametrization of DataFrame with date_range, period_range
+ """
+ Fixture for parametrization of DataFrame with date_range, period_range
and timedelta_range indexes
"""
# _series_name is intentionally unused
@@ -148,7 +154,8 @@ def frame(index, _series_name, _static_values):
@pytest.fixture
def empty_frame(series):
- """Fixture for parametrization of empty DataFrame with date_range,
+ """
+ Fixture for parametrization of empty DataFrame with date_range,
period_range and timedelta_range indexes
"""
index = series.index[:0]
@@ -157,7 +164,8 @@ def empty_frame(series):
@pytest.fixture(params=[Series, DataFrame])
def series_and_frame(request, series, frame):
- """Fixture for parametrization of Series and DataFrame with date_range,
+ """
+ Fixture for parametrization of Series and DataFrame with date_range,
period_range and timedelta_range indexes
"""
if request.param == Series:
diff --git a/pandas/util/_validators.py b/pandas/util/_validators.py
index bfcfd1c5a7101..682575cc9ed48 100644
--- a/pandas/util/_validators.py
+++ b/pandas/util/_validators.py
@@ -216,7 +216,8 @@ def validate_bool_kwarg(value, arg_name):
def validate_axis_style_args(data, args, kwargs, arg_name, method_name):
- """Argument handler for mixed index, columns / axis functions
+ """
+ Argument handler for mixed index, columns / axis functions
In an attempt to handle both `.method(index, columns)`, and
`.method(arg, axis=.)`, we have to do some bad things to argument
@@ -310,7 +311,8 @@ def validate_axis_style_args(data, args, kwargs, arg_name, method_name):
def validate_fillna_kwargs(value, method, validate_scalar_dict_value=True):
- """Validate the keyword arguments to 'fillna'.
+ """
+ Validate the keyword arguments to 'fillna'.
This checks that exactly one of 'value' and 'method' is specified.
If 'method' is specified, this validates that it's a valid method.
| https://api.github.com/repos/pandas-dev/pandas/pulls/31893 | 2020-02-11T19:23:29Z | 2020-02-15T19:03:21Z | 2020-02-15T19:03:21Z | 2020-02-15T20:15:30Z | |
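The hunks in the row above all apply the same mechanical fix: text that sat on the opening `"""` line of a multi-line docstring is moved to the following line. A minimal sketch of the before/after shapes (fixture names here are illustrative, not from the diff) shows the only observable difference is where the docstring string begins:

```python
def index_fixture_before():
    """Fixture for parametrization of date_range, period_range and
    timedelta_range indexes
    """


def index_fixture_after():
    """
    Fixture for parametrization of date_range, period_range and
    timedelta_range indexes
    """
```

The reformatted style means the `__doc__` string now starts with a newline rather than with the summary text itself.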
CLN: D204 1 blank line required after class docstring | diff --git a/pandas/core/window/ewm.py b/pandas/core/window/ewm.py
index 37e3cd42f2115..6da8b0c5ccadd 100644
--- a/pandas/core/window/ewm.py
+++ b/pandas/core/window/ewm.py
@@ -116,6 +116,7 @@ class EWM(_Rolling):
3 1.615385
4 3.670213
"""
+
_attributes = ["com", "min_periods", "adjust", "ignore_na", "axis"]
def __init__(
| https://api.github.com/repos/pandas-dev/pandas/pulls/31892 | 2020-02-11T19:21:23Z | 2020-02-11T22:34:39Z | 2020-02-11T22:34:39Z | 2020-02-11T22:37:33Z | |
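pydocstyle rule D204, which the one-line diff above satisfies, requires exactly one blank line between a class docstring and the first following statement. A toy stand-in for the `EWM` class touched in the diff (simplified, not the real pandas class):

```python
class EWMLike:
    """
    Toy stand-in for the EWM class from the diff above: D204 asks for
    one blank line between this docstring and the first attribute.
    """

    _attributes = ["com", "min_periods", "adjust", "ignore_na", "axis"]
```

The blank line is purely stylistic; the class behaves identically with or without it.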
CLN: D209 Multi-line docstring closing quotes should be on a separate line | diff --git a/pandas/_testing.py b/pandas/_testing.py
index 13af8703cef93..9e71524263a18 100644
--- a/pandas/_testing.py
+++ b/pandas/_testing.py
@@ -2150,7 +2150,8 @@ def optional_args(decorator):
@my_decorator
def function(): pass
- Calls decorator with decorator(f, *args, **kwargs)"""
+ Calls decorator with decorator(f, *args, **kwargs)
+ """
@wraps(decorator)
def wrapper(*args, **kwargs):
diff --git a/pandas/core/arrays/sparse/scipy_sparse.py b/pandas/core/arrays/sparse/scipy_sparse.py
index 17a953fce9ec0..b67f2c9f52c76 100644
--- a/pandas/core/arrays/sparse/scipy_sparse.py
+++ b/pandas/core/arrays/sparse/scipy_sparse.py
@@ -19,7 +19,8 @@ def _check_is_partition(parts, whole):
def _to_ijv(ss, row_levels=(0,), column_levels=(1,), sort_labels=False):
""" For arbitrary (MultiIndexed) sparse Series return
(v, i, j, ilabels, jlabels) where (v, (i, j)) is suitable for
- passing to scipy.sparse.coo constructor. """
+ passing to scipy.sparse.coo constructor.
+ """
# index and column levels must be a partition of the index
_check_is_partition([row_levels, column_levels], range(ss.index.nlevels))
diff --git a/pandas/core/computation/pytables.py b/pandas/core/computation/pytables.py
index 22bc772da8f28..9f209cccd5be6 100644
--- a/pandas/core/computation/pytables.py
+++ b/pandas/core/computation/pytables.py
@@ -151,7 +151,8 @@ def is_valid(self) -> bool:
@property
def is_in_table(self) -> bool:
""" return True if this is a valid column name for generation (e.g. an
- actual column in the table) """
+ actual column in the table)
+ """
return self.queryables.get(self.lhs) is not None
@property
@@ -176,7 +177,8 @@ def generate(self, v) -> str:
def convert_value(self, v) -> "TermValue":
""" convert the expression that is in the term to something that is
- accepted by pytables """
+ accepted by pytables
+ """
def stringify(value):
if self.encoding is not None:
diff --git a/pandas/io/excel/_base.py b/pandas/io/excel/_base.py
index 5ad56e30eeb39..70c09151258ff 100644
--- a/pandas/io/excel/_base.py
+++ b/pandas/io/excel/_base.py
@@ -756,7 +756,8 @@ def _value_with_fmt(self, val):
@classmethod
def check_extension(cls, ext):
"""checks that path's extension against the Writer's supported
- extensions. If it isn't supported, raises UnsupportedFiletypeError."""
+ extensions. If it isn't supported, raises UnsupportedFiletypeError.
+ """
if ext.startswith("."):
ext = ext[1:]
if not any(ext in extension for extension in cls.supported_extensions):
diff --git a/pandas/io/stata.py b/pandas/io/stata.py
index 4e93b62a96ef2..4e1fcb97e7891 100644
--- a/pandas/io/stata.py
+++ b/pandas/io/stata.py
@@ -2129,7 +2129,8 @@ def _write_bytes(self, value: bytes) -> None:
def _prepare_categoricals(self, data: DataFrame) -> DataFrame:
"""Check for categorical columns, retain categorical information for
- Stata file and convert categorical data to int"""
+ Stata file and convert categorical data to int
+ """
is_cat = [is_categorical_dtype(data[col]) for col in data]
self._is_col_cat = is_cat
@@ -2171,7 +2172,8 @@ def _prepare_categoricals(self, data: DataFrame) -> DataFrame:
def _replace_nans(self, data: DataFrame) -> DataFrame:
# return data
"""Checks floating point data columns for nans, and replaces these with
- the generic Stata for missing value (.)"""
+ the generic Stata for missing value (.)
+ """
for c in data:
dtype = data[c].dtype
if dtype in (np.float32, np.float64):
@@ -3037,7 +3039,8 @@ def _write_header(
def _write_map(self) -> None:
"""Called twice during file write. The first populates the values in
the map with 0s. The second call writes the final map locations when
- all blocks have been written."""
+ all blocks have been written.
+ """
assert self._file is not None
if not self._map:
self._map = dict(
@@ -3185,7 +3188,8 @@ def _write_file_close_tag(self) -> None:
def _update_strl_names(self) -> None:
"""Update column names for conversion to strl if they might have been
- changed to comply with Stata naming rules"""
+ changed to comply with Stata naming rules
+ """
# Update convert_strl if names changed
for orig, new in self._converted_names.items():
if orig in self._convert_strl:
@@ -3194,7 +3198,8 @@ def _update_strl_names(self) -> None:
def _convert_strls(self, data: DataFrame) -> DataFrame:
"""Convert columns to StrLs if either very large or in the
- convert_strl variable"""
+ convert_strl variable
+ """
convert_cols = [
col
for i, col in enumerate(data)
diff --git a/pandas/tests/groupby/test_categorical.py b/pandas/tests/groupby/test_categorical.py
index 1c2de8c8c223f..9b07269811d8e 100644
--- a/pandas/tests/groupby/test_categorical.py
+++ b/pandas/tests/groupby/test_categorical.py
@@ -20,7 +20,8 @@
def cartesian_product_for_groupers(result, args, names):
""" Reindex to a cartesian production for the groupers,
- preserving the nature (Categorical) of each grouper """
+ preserving the nature (Categorical) of each grouper
+ """
def f(a):
if isinstance(a, (CategoricalIndex, Categorical)):
diff --git a/pandas/tests/indexing/multiindex/conftest.py b/pandas/tests/indexing/multiindex/conftest.py
index e6d5a9eb84410..48e090b242208 100644
--- a/pandas/tests/indexing/multiindex/conftest.py
+++ b/pandas/tests/indexing/multiindex/conftest.py
@@ -21,7 +21,8 @@ def multiindex_dataframe_random_data():
@pytest.fixture
def multiindex_year_month_day_dataframe_random_data():
"""DataFrame with 3 level MultiIndex (year, month, day) covering
- first 100 business days from 2000-01-01 with random data"""
+ first 100 business days from 2000-01-01 with random data
+ """
tdf = tm.makeTimeDataFrame(100)
ymd = tdf.groupby([lambda x: x.year, lambda x: x.month, lambda x: x.day]).sum()
# use Int64Index, to make sure things work
diff --git a/pandas/tests/indexing/test_floats.py b/pandas/tests/indexing/test_floats.py
index 6cc18a3989266..b3f6d65da5db5 100644
--- a/pandas/tests/indexing/test_floats.py
+++ b/pandas/tests/indexing/test_floats.py
@@ -503,7 +503,8 @@ def test_slice_integer(self):
def test_integer_positional_indexing(self):
""" make sure that we are raising on positional indexing
- w.r.t. an integer index """
+ w.r.t. an integer index
+ """
s = Series(range(2, 6), index=range(2, 6))
diff --git a/pandas/tests/io/pytables/common.py b/pandas/tests/io/pytables/common.py
index d06f467760518..7f0b3ab7957e6 100644
--- a/pandas/tests/io/pytables/common.py
+++ b/pandas/tests/io/pytables/common.py
@@ -75,7 +75,8 @@ def ensure_clean_path(path):
def _maybe_remove(store, key):
"""For tests using tables, try removing the table to be sure there is
- no content from previous tests using the same table name."""
+ no content from previous tests using the same table name.
+ """
try:
store.remove(key)
except (ValueError, KeyError):
diff --git a/pandas/tests/io/test_compression.py b/pandas/tests/io/test_compression.py
index fb81e57912dac..841241d5124e0 100644
--- a/pandas/tests/io/test_compression.py
+++ b/pandas/tests/io/test_compression.py
@@ -129,7 +129,8 @@ def test_with_missing_lzma():
def test_with_missing_lzma_runtime():
"""Tests if RuntimeError is hit when calling lzma without
- having the module available."""
+ having the module available.
+ """
code = textwrap.dedent(
"""
import sys
diff --git a/pandas/tests/resample/conftest.py b/pandas/tests/resample/conftest.py
index bb4f7ced3350f..a4ac15d9f3b07 100644
--- a/pandas/tests/resample/conftest.py
+++ b/pandas/tests/resample/conftest.py
@@ -99,7 +99,8 @@ def _index_name():
@pytest.fixture
def index(_index_factory, _index_start, _index_end, _index_freq, _index_name):
"""Fixture for parametrization of date_range, period_range and
- timedelta_range indexes"""
+ timedelta_range indexes
+ """
return _index_factory(_index_start, _index_end, freq=_index_freq, name=_index_name)
@@ -107,35 +108,40 @@ def index(_index_factory, _index_start, _index_end, _index_freq, _index_name):
def _static_values(index):
"""Fixture for parametrization of values used in parametrization of
Series and DataFrames with date_range, period_range and
- timedelta_range indexes"""
+ timedelta_range indexes
+ """
return np.arange(len(index))
@pytest.fixture
def _series_name():
"""Fixture for parametrization of Series name for Series used with
- date_range, period_range and timedelta_range indexes"""
+ date_range, period_range and timedelta_range indexes
+ """
return None
@pytest.fixture
def series(index, _series_name, _static_values):
"""Fixture for parametrization of Series with date_range, period_range and
- timedelta_range indexes"""
+ timedelta_range indexes
+ """
return Series(_static_values, index=index, name=_series_name)
@pytest.fixture
def empty_series(series):
"""Fixture for parametrization of empty Series with date_range,
- period_range and timedelta_range indexes"""
+ period_range and timedelta_range indexes
+ """
return series[:0]
@pytest.fixture
def frame(index, _series_name, _static_values):
"""Fixture for parametrization of DataFrame with date_range, period_range
- and timedelta_range indexes"""
+ and timedelta_range indexes
+ """
# _series_name is intentionally unused
return DataFrame({"value": _static_values}, index=index)
@@ -143,7 +149,8 @@ def frame(index, _series_name, _static_values):
@pytest.fixture
def empty_frame(series):
"""Fixture for parametrization of empty DataFrame with date_range,
- period_range and timedelta_range indexes"""
+ period_range and timedelta_range indexes
+ """
index = series.index[:0]
return DataFrame(index=index)
@@ -151,7 +158,8 @@ def empty_frame(series):
@pytest.fixture(params=[Series, DataFrame])
def series_and_frame(request, series, frame):
"""Fixture for parametrization of Series and DataFrame with date_range,
- period_range and timedelta_range indexes"""
+ period_range and timedelta_range indexes
+ """
if request.param == Series:
return series
if request.param == DataFrame:
diff --git a/pandas/tests/reshape/merge/test_merge_index_as_string.py b/pandas/tests/reshape/merge/test_merge_index_as_string.py
index 691f2549c0ece..9075a4e791583 100644
--- a/pandas/tests/reshape/merge/test_merge_index_as_string.py
+++ b/pandas/tests/reshape/merge/test_merge_index_as_string.py
@@ -30,7 +30,8 @@ def df2():
@pytest.fixture(params=[[], ["outer"], ["outer", "inner"]])
def left_df(request, df1):
""" Construct left test DataFrame with specified levels
- (any of 'outer', 'inner', and 'v1')"""
+ (any of 'outer', 'inner', and 'v1')
+ """
levels = request.param
if levels:
df1 = df1.set_index(levels)
@@ -41,7 +42,8 @@ def left_df(request, df1):
@pytest.fixture(params=[[], ["outer"], ["outer", "inner"]])
def right_df(request, df2):
""" Construct right test DataFrame with specified levels
- (any of 'outer', 'inner', and 'v2')"""
+ (any of 'outer', 'inner', and 'v2')
+ """
levels = request.param
if levels:
diff --git a/pandas/tests/test_register_accessor.py b/pandas/tests/test_register_accessor.py
index 08a5581886522..d839936f731a3 100644
--- a/pandas/tests/test_register_accessor.py
+++ b/pandas/tests/test_register_accessor.py
@@ -9,7 +9,8 @@
@contextlib.contextmanager
def ensure_removed(obj, attr):
"""Ensure that an attribute added to 'obj' during the test is
- removed when we're done"""
+ removed when we're done
+ """
try:
yield
finally:
| https://api.github.com/repos/pandas-dev/pandas/pulls/31891 | 2020-02-11T19:19:42Z | 2020-02-11T23:01:16Z | 2020-02-11T23:01:16Z | 2020-02-11T23:04:14Z | |
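Rule D209, applied throughout the diff above, only affects multi-line docstrings: the closing `"""` must sit on its own line, while one-line docstrings may keep both sets of quotes together. A sketch of the two cases (function names are illustrative):

```python
def one_liner():
    """A single-line docstring may keep its quotes on one line."""


def multi_liner():
    """Multi-line docstrings should place the closing
    quotes on their own line.
    """
```

After the fix, the last line of a multi-line `__doc__` is just the indentation preceding the closing quotes.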
CLN: D208 Docstring is over-indented | diff --git a/pandas/conftest.py b/pandas/conftest.py
index 131a011c5a101..7463b2b579c0c 100644
--- a/pandas/conftest.py
+++ b/pandas/conftest.py
@@ -106,8 +106,8 @@ def axis(request):
@pytest.fixture(params=[0, "index"], ids=lambda x: f"axis {repr(x)}")
def axis_series(request):
"""
- Fixture for returning the axis numbers of a Series.
- """
+ Fixture for returning the axis numbers of a Series.
+ """
return request.param
diff --git a/pandas/core/computation/pytables.py b/pandas/core/computation/pytables.py
index be652ca0e6a36..22bc772da8f28 100644
--- a/pandas/core/computation/pytables.py
+++ b/pandas/core/computation/pytables.py
@@ -601,8 +601,7 @@ def __init__(self, value, converted, kind: str):
self.kind = kind
def tostring(self, encoding) -> str:
- """ quote the string if not encoded
- else encode and return """
+ """ quote the string if not encoded else encode and return """
if self.kind == "string":
if encoding is not None:
return str(self.converted)
diff --git a/pandas/io/excel/_xlrd.py b/pandas/io/excel/_xlrd.py
index be1b78eeb146e..e7a132b73e076 100644
--- a/pandas/io/excel/_xlrd.py
+++ b/pandas/io/excel/_xlrd.py
@@ -57,8 +57,9 @@ def get_sheet_data(self, sheet, convert_float):
epoch1904 = self.book.datemode
def _parse_cell(cell_contents, cell_typ):
- """converts the contents of the cell into a pandas
- appropriate object"""
+ """
+ converts the contents of the cell into a pandas appropriate object
+ """
if cell_typ == XL_CELL_DATE:
diff --git a/pandas/io/formats/format.py b/pandas/io/formats/format.py
index 149533bf0c238..35a6870c1194b 100644
--- a/pandas/io/formats/format.py
+++ b/pandas/io/formats/format.py
@@ -979,7 +979,7 @@ def to_html(
border : int
A ``border=border`` attribute is included in the opening
``<table>`` tag. Default ``pd.options.display.html.border``.
- """
+ """
from pandas.io.formats.html import HTMLFormatter, NotebookFormatter
Klass = NotebookFormatter if notebook else HTMLFormatter
diff --git a/pandas/io/parsers.py b/pandas/io/parsers.py
index 8bc8470ae7658..b1d032eb1aaff 100755
--- a/pandas/io/parsers.py
+++ b/pandas/io/parsers.py
@@ -1457,8 +1457,10 @@ def _should_parse_dates(self, i):
def _extract_multi_indexer_columns(
self, header, index_names, col_names, passed_names=False
):
- """ extract and return the names, index_names, col_names
- header is a list-of-lists returned from the parsers """
+ """
+ extract and return the names, index_names, col_names
+ header is a list-of-lists returned from the parsers
+ """
if len(header) < 2:
return header[0], index_names, col_names, passed_names
diff --git a/pandas/io/pytables.py b/pandas/io/pytables.py
index 0e2b909d5cdc7..cde2b07e2a21d 100644
--- a/pandas/io/pytables.py
+++ b/pandas/io/pytables.py
@@ -569,9 +569,10 @@ def __getattr__(self, name: str):
)
def __contains__(self, key: str) -> bool:
- """ check for existence of this key
- can match the exact pathname or the pathnm w/o the leading '/'
- """
+ """
+ check for existence of this key
+ can match the exact pathname or the pathnm w/o the leading '/'
+ """
node = self.get_node(key)
if node is not None:
name = node._v_pathname
@@ -1831,18 +1832,19 @@ def get_result(self, coordinates: bool = False):
class IndexCol:
- """ an index column description class
+ """
+ an index column description class
- Parameters
- ----------
+ Parameters
+ ----------
- axis : axis which I reference
- values : the ndarray like converted values
- kind : a string description of this type
- typ : the pytables type
- pos : the position in the pytables
+ axis : axis which I reference
+ values : the ndarray like converted values
+ kind : a string description of this type
+ typ : the pytables type
+ pos : the position in the pytables
- """
+ """
is_an_indexable = True
is_data_indexable = True
@@ -1999,9 +2001,11 @@ def __iter__(self):
return iter(self.values)
def maybe_set_size(self, min_itemsize=None):
- """ maybe set a string col itemsize:
- min_itemsize can be an integer or a dict with this columns name
- with an integer size """
+ """
+ maybe set a string col itemsize:
+ min_itemsize can be an integer or a dict with this columns name
+ with an integer size
+ """
if _ensure_decoded(self.kind) == "string":
if isinstance(min_itemsize, dict):
@@ -2051,8 +2055,10 @@ def validate_attr(self, append: bool):
)
def update_info(self, info):
- """ set/update the info for this indexable with the key/value
- if there is a conflict raise/warn as needed """
+ """
+ set/update the info for this indexable with the key/value
+ if there is a conflict raise/warn as needed
+ """
for key in self._info_fields:
@@ -2140,17 +2146,18 @@ def set_attr(self):
class DataCol(IndexCol):
- """ a data holding column, by definition this is not indexable
+ """
+ a data holding column, by definition this is not indexable
- Parameters
- ----------
+ Parameters
+ ----------
- data : the actual data
- cname : the column name in the table to hold the data (typically
- values)
- meta : a string description of the metadata
- metadata : the actual metadata
- """
+ data : the actual data
+ cname : the column name in the table to hold the data (typically
+ values)
+ meta : a string description of the metadata
+ metadata : the actual metadata
+ """
is_an_indexable = False
is_data_indexable = False
@@ -2460,16 +2467,17 @@ class GenericDataIndexableCol(DataIndexableCol):
class Fixed:
- """ represent an object in my store
- facilitate read/write of various types of objects
- this is an abstract base class
+ """
+ represent an object in my store
+ facilitate read/write of various types of objects
+ this is an abstract base class
- Parameters
- ----------
- parent : HDFStore
- group : Node
- The group node where the table resides.
- """
+ Parameters
+ ----------
+ parent : HDFStore
+ group : Node
+ The group node where the table resides.
+ """
pandas_kind: str
format_type: str = "fixed" # GH#30962 needed by dask
@@ -2596,8 +2604,10 @@ def validate_version(self, where=None):
return True
def infer_axes(self):
- """ infer the axes of my storer
- return a boolean indicating if we have a valid storer or not """
+ """
+ infer the axes of my storer
+ return a boolean indicating if we have a valid storer or not
+ """
s = self.storable
if s is None:
@@ -3105,29 +3115,29 @@ class FrameFixed(BlockManagerFixed):
class Table(Fixed):
- """ represent a table:
- facilitate read/write of various types of tables
-
- Attrs in Table Node
- -------------------
- These are attributes that are store in the main table node, they are
- necessary to recreate these tables when read back in.
-
- index_axes : a list of tuples of the (original indexing axis and
- index column)
- non_index_axes: a list of tuples of the (original index axis and
- columns on a non-indexing axis)
- values_axes : a list of the columns which comprise the data of this
- table
- data_columns : a list of the columns that we are allowing indexing
- (these become single columns in values_axes), or True to force all
- columns
- nan_rep : the string to use for nan representations for string
- objects
- levels : the names of levels
- metadata : the names of the metadata columns
-
- """
+ """
+ represent a table:
+ facilitate read/write of various types of tables
+
+ Attrs in Table Node
+ -------------------
+ These are attributes that are store in the main table node, they are
+ necessary to recreate these tables when read back in.
+
+ index_axes : a list of tuples of the (original indexing axis and
+ index column)
+ non_index_axes: a list of tuples of the (original index axis and
+ columns on a non-indexing axis)
+ values_axes : a list of the columns which comprise the data of this
+ table
+ data_columns : a list of the columns that we are allowing indexing
+ (these become single columns in values_axes), or True to force all
+ columns
+ nan_rep : the string to use for nan representations for string
+ objects
+ levels : the names of levels
+ metadata : the names of the metadata columns
+ """
pandas_kind = "wide_table"
format_type: str = "table" # GH#30962 needed by dask
@@ -4080,10 +4090,11 @@ def read_column(
class WORMTable(Table):
- """ a write-once read-many table: this format DOES NOT ALLOW appending to a
- table. writing is a one-time operation the data are stored in a format
- that allows for searching the data on disk
- """
+ """
+ a write-once read-many table: this format DOES NOT ALLOW appending to a
+ table. writing is a one-time operation the data are stored in a format
+ that allows for searching the data on disk
+ """
table_type = "worm"
@@ -4094,14 +4105,16 @@ def read(
start: Optional[int] = None,
stop: Optional[int] = None,
):
- """ read the indices and the indexing array, calculate offset rows and
- return """
+ """
+ read the indices and the indexing array, calculate offset rows and return
+ """
raise NotImplementedError("WORMTable needs to implement read")
def write(self, **kwargs):
- """ write in a format that we can search later on (but cannot append
- to): write out the indices and the values using _write_array
- (e.g. a CArray) create an indexing table so that we can search
+ """
+ write in a format that we can search later on (but cannot append
+ to): write out the indices and the values using _write_array
+ (e.g. a CArray) create an indexing table so that we can search
"""
raise NotImplementedError("WORMTable needs to implement write")
@@ -4170,8 +4183,9 @@ def write(
table.write_data(chunksize, dropna=dropna)
def write_data(self, chunksize: Optional[int], dropna: bool = False):
- """ we form the data into a 2-d including indexes,values,mask
- write chunk-by-chunk """
+ """
+ we form the data into a 2-d including indexes,values,mask write chunk-by-chunk
+ """
names = self.dtype.names
nrows = self.nrows_expected
diff --git a/pandas/io/sas/sas7bdat.py b/pandas/io/sas/sas7bdat.py
index 9b40778dbcfdf..d47dd2c71b86f 100644
--- a/pandas/io/sas/sas7bdat.py
+++ b/pandas/io/sas/sas7bdat.py
@@ -120,8 +120,10 @@ def column_data_offsets(self):
return np.asarray(self._column_data_offsets, dtype=np.int64)
def column_types(self):
- """Returns a numpy character array of the column types:
- s (string) or d (double)"""
+ """
+ Returns a numpy character array of the column types:
+ s (string) or d (double)
+ """
return np.asarray(self._column_types, dtype=np.dtype("S1"))
def close(self):
diff --git a/pandas/plotting/_matplotlib/core.py b/pandas/plotting/_matplotlib/core.py
index de09460bb833d..3f47d325d86ef 100644
--- a/pandas/plotting/_matplotlib/core.py
+++ b/pandas/plotting/_matplotlib/core.py
@@ -509,10 +509,11 @@ def _adorn_subplots(self):
self.axes[0].set_title(self.title)
def _apply_axis_properties(self, axis, rot=None, fontsize=None):
- """ Tick creation within matplotlib is reasonably expensive and is
- internally deferred until accessed as Ticks are created/destroyed
- multiple times per draw. It's therefore beneficial for us to avoid
- accessing unless we will act on the Tick.
+ """
+ Tick creation within matplotlib is reasonably expensive and is
+ internally deferred until accessed as Ticks are created/destroyed
+ multiple times per draw. It's therefore beneficial for us to avoid
+ accessing unless we will act on the Tick.
"""
if rot is not None or fontsize is not None:
diff --git a/pandas/tests/extension/test_datetime.py b/pandas/tests/extension/test_datetime.py
index a60607d586ada..3aa188098620d 100644
--- a/pandas/tests/extension/test_datetime.py
+++ b/pandas/tests/extension/test_datetime.py
@@ -44,9 +44,9 @@ def data_missing_for_sorting(dtype):
@pytest.fixture
def data_for_grouping(dtype):
"""
- Expected to be like [B, B, NA, NA, A, A, B, C]
+ Expected to be like [B, B, NA, NA, A, A, B, C]
- Where A < B < C and NA is missing
+ Where A < B < C and NA is missing
"""
a = pd.Timestamp("2000-01-01")
b = pd.Timestamp("2000-01-02")
diff --git a/pandas/tests/generic/test_generic.py b/pandas/tests/generic/test_generic.py
index 7645c6b4cf709..0c78facd5fd12 100644
--- a/pandas/tests/generic/test_generic.py
+++ b/pandas/tests/generic/test_generic.py
@@ -23,9 +23,11 @@ def _axes(self):
return self._typ._AXIS_ORDERS
def _construct(self, shape, value=None, dtype=None, **kwargs):
- """ construct an object for the given shape
- if value is specified use that if its a scalar
- if value is an array, repeat it as needed """
+ """
+ construct an object for the given shape
+ if value is specified use that if its a scalar
+ if value is an array, repeat it as needed
+ """
if isinstance(shape, int):
shape = tuple([shape] * self._ndim)
diff --git a/pandas/tests/groupby/test_groupby.py b/pandas/tests/groupby/test_groupby.py
index b7d7124a3a5e5..5662d41e19885 100644
--- a/pandas/tests/groupby/test_groupby.py
+++ b/pandas/tests/groupby/test_groupby.py
@@ -1496,7 +1496,7 @@ def test_groupby_reindex_inside_function():
def agg_before(hour, func, fix=False):
"""
- Run an aggregate func on the subset of data.
+ Run an aggregate func on the subset of data.
"""
def _func(data):
diff --git a/pandas/tests/scalar/timestamp/test_unary_ops.py b/pandas/tests/scalar/timestamp/test_unary_ops.py
index 65066fd0099ba..f968144286bd4 100644
--- a/pandas/tests/scalar/timestamp/test_unary_ops.py
+++ b/pandas/tests/scalar/timestamp/test_unary_ops.py
@@ -225,9 +225,8 @@ def test_round_dst_border_nonexistent(self, method, ts_str, freq):
],
)
def test_round_int64(self, timestamp, freq):
- """check that all rounding modes are accurate to int64 precision
- see GH#22591
- """
+ # check that all rounding modes are accurate to int64 precision
+ # see GH#22591
dt = Timestamp(timestamp)
unit = to_offset(freq).nanos
| https://api.github.com/repos/pandas-dev/pandas/pulls/31890 | 2020-02-11T19:17:35Z | 2020-02-11T21:30:25Z | 2020-02-11T21:30:25Z | 2020-02-11T21:41:08Z | |
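Rule D208 flags docstring bodies indented deeper than the docstring's own quotation marks, as in the `axis_series` fixture at the top of the diff above. A minimal reconstruction of that fixture's before/after indentation (names suffixed here for illustration):

```python
def axis_series_before():
    """
        Fixture for returning the axis numbers of a Series.
        """
    return 0


def axis_series_after():
    """
    Fixture for returning the axis numbers of a Series.
    """
    return 0
```

Over-indentation is legal Python, so the only difference is the leading whitespace carried inside `__doc__`.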
CLN: D201 No blank lines allowed before function docstring | diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index b733970dcf699..c68fa5a3caff6 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -4037,7 +4037,6 @@ def rename(
level: Optional[Level] = None,
errors: str = "ignore",
) -> Optional["DataFrame"]:
-
"""
Alter axes labels.
diff --git a/pandas/io/parsers.py b/pandas/io/parsers.py
index 8bc8470ae7658..b27dd7b31f88d 100755
--- a/pandas/io/parsers.py
+++ b/pandas/io/parsers.py
@@ -705,7 +705,6 @@ def read_fwf(
infer_nrows=100,
**kwds,
):
-
r"""
Read a table of fixed-width formatted lines into DataFrame.
| https://api.github.com/repos/pandas-dev/pandas/pulls/31889 | 2020-02-11T19:15:53Z | 2020-02-11T21:28:28Z | 2020-02-11T21:28:28Z | 2020-02-11T21:37:31Z | |
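Rule D201 forbids blank lines between the `def` line and the docstring, which is all the two hunks above remove. Notably, a leading blank line is style-only: Python still recognizes the string as the docstring either way, as this sketch shows (names are illustrative):

```python
def rename_before():

    """Alter axes labels."""


def rename_after():
    """Alter axes labels."""
```

Both functions end up with identical `__doc__` values, so the diff above has no runtime effect.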
CLN: D300 Use """triple double quotes""" | diff --git a/pandas/core/strings.py b/pandas/core/strings.py
index 18c7504f2c2f8..a4648186477d6 100644
--- a/pandas/core/strings.py
+++ b/pandas/core/strings.py
@@ -2013,7 +2013,7 @@ def wrapper3(self, pat, na=np.nan):
def copy(source):
- "Copy a docstring from another source function (if present)"
+ """Copy a docstring from another source function (if present)"""
def do_copy(target):
if source.__doc__:
diff --git a/pandas/io/clipboard/__init__.py b/pandas/io/clipboard/__init__.py
index f808b7e706afb..6d76d7de407b1 100644
--- a/pandas/io/clipboard/__init__.py
+++ b/pandas/io/clipboard/__init__.py
@@ -126,7 +126,7 @@ def copy_osx_pyobjc(text):
board.setData_forType_(newData, AppKit.NSStringPboardType)
def paste_osx_pyobjc():
- "Returns contents of clipboard"
+ """Returns contents of clipboard"""
board = AppKit.NSPasteboard.generalPasteboard()
content = board.stringForType_(AppKit.NSStringPboardType)
return content
diff --git a/pandas/io/parsers.py b/pandas/io/parsers.py
index 8bc8470ae7658..daafa9eee69fe 100755
--- a/pandas/io/parsers.py
+++ b/pandas/io/parsers.py
@@ -2488,7 +2488,7 @@ def get_chunk(self, size=None):
def _convert_data(self, data):
# apply converters
def _clean_mapping(mapping):
- "converts col numbers to names"
+ """converts col numbers to names"""
clean = {}
for col, v in mapping.items():
if isinstance(col, int) and col not in self.orig_names:
diff --git a/pandas/plotting/_matplotlib/converter.py b/pandas/plotting/_matplotlib/converter.py
index a1035fd0823bb..770f89324badb 100644
--- a/pandas/plotting/_matplotlib/converter.py
+++ b/pandas/plotting/_matplotlib/converter.py
@@ -981,7 +981,7 @@ def __init__(
self.finder = get_finder(freq)
def _get_default_locs(self, vmin, vmax):
- "Returns the default locations of ticks."
+ """Returns the default locations of ticks."""
if self.plot_obj.date_axis_info is None:
self.plot_obj.date_axis_info = self.finder(vmin, vmax, self.freq)
@@ -993,7 +993,7 @@ def _get_default_locs(self, vmin, vmax):
return np.compress(locator["maj"], locator["val"])
def __call__(self):
- "Return the locations of the ticks."
+ """Return the locations of the ticks."""
# axis calls Locator.set_axis inside set_m<xxxx>_formatter
vi = tuple(self.axis.get_view_interval())
@@ -1062,7 +1062,7 @@ def __init__(self, freq, minor_locator=False, dynamic_mode=True, plot_obj=None):
self.finder = get_finder(freq)
def _set_default_format(self, vmin, vmax):
- "Returns the default ticks spacing."
+ """Returns the default ticks spacing."""
if self.plot_obj.date_axis_info is None:
self.plot_obj.date_axis_info = self.finder(vmin, vmax, self.freq)
@@ -1076,7 +1076,7 @@ def _set_default_format(self, vmin, vmax):
return self.formatdict
def set_locs(self, locs):
- "Sets the locations of the ticks"
+ """Sets the locations of the ticks"""
# don't actually use the locs. This is just needed to work with
# matplotlib. Force to use vmin, vmax
diff --git a/pandas/tests/scalar/period/test_period.py b/pandas/tests/scalar/period/test_period.py
index b396a88e6eb6a..3846274dacd75 100644
--- a/pandas/tests/scalar/period/test_period.py
+++ b/pandas/tests/scalar/period/test_period.py
@@ -650,7 +650,7 @@ def test_strftime(self):
class TestPeriodProperties:
- "Test properties such as year, month, weekday, etc...."
+ """Test properties such as year, month, weekday, etc...."""
@pytest.mark.parametrize("freq", ["A", "M", "D", "H"])
def test_is_leap_year(self, freq):
| https://api.github.com/repos/pandas-dev/pandas/pulls/31888 | 2020-02-11T19:14:28Z | 2020-02-11T21:27:49Z | 2020-02-11T21:27:49Z | 2020-02-11T21:35:57Z | |
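The record above patches `Series.combine` to pass `dtype=self.dtype` into `try_cast_to_ea`, so that combining nullable-integer Series no longer alters the result dtype. A minimal sketch of the behavior the fix targets (assuming a pandas version that contains the fix, i.e. 1.1.0 or later):

```python
import pandas as pd

# Two Series backed by the nullable Int64 extension dtype.
s1 = pd.Series([1, 2, 3], dtype="Int64")
s2 = pd.Series([10, 20, 30], dtype="Int64")

# Before the fix, the result of combine could be cast away from Int64;
# with the fix, the calling extension dtype is preserved.
result = s1.combine(s2, lambda a, b: a + b)
print(result.dtype)  # Int64 on versions containing the fix
```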
CLN: simplify _setitem_with_indexer | diff --git a/pandas/core/indexing.py b/pandas/core/indexing.py
index d3e75d43c6bd7..081f87078d9c9 100755
--- a/pandas/core/indexing.py
+++ b/pandas/core/indexing.py
@@ -1564,6 +1564,17 @@ def _convert_to_indexer(self, key, axis: int, is_setter: bool = False):
# -------------------------------------------------------------------
def _setitem_with_indexer(self, indexer, value):
+ """
+ _setitem_with_indexer is for setting values on a Series/DataFrame
+ using positional indexers.
+
+ If the relevant keys are not present, the Series/DataFrame may be
+ expanded.
+
+ This method is currently broken when dealing with non-unique Indexes,
+ since it goes from positional indexers back to labels when calling
+ BlockManager methods, see GH#12991, GH#22046, GH#15686.
+ """
# also has the side effect of consolidating in-place
from pandas import Series
@@ -1678,24 +1689,19 @@ def _setitem_with_indexer(self, indexer, value):
info_idx = [info_idx]
labels = item_labels[info_idx]
- # if we have a partial multiindex, then need to adjust the plane
- # indexer here
- if len(labels) == 1 and isinstance(
- self.obj[labels[0]].axes[0], ABCMultiIndex
- ):
+ if len(labels) == 1:
+ # We can operate on a single column
item = labels[0]
- obj = self.obj[item]
- index = obj.index
- idx = indexer[:info_axis][0]
+ idx = indexer[0]
- plane_indexer = tuple([idx]) + indexer[info_axis + 1 :]
- lplane_indexer = length_of_indexer(plane_indexer[0], index)
+ plane_indexer = tuple([idx])
+ lplane_indexer = length_of_indexer(plane_indexer[0], self.obj.index)
# lplane_indexer gives the expected length of obj[idx]
# require that we are setting the right number of values that
# we are indexing
- if is_list_like_indexer(value) and lplane_indexer != len(value):
-
+ if is_list_like_indexer(value) and 0 != lplane_indexer != len(value):
+ # Exclude zero-len for e.g. boolean masking that is all-false
raise ValueError(
"cannot set using a multi-index "
"selection indexer with a different "
@@ -1704,12 +1710,11 @@ def _setitem_with_indexer(self, indexer, value):
# non-mi
else:
- plane_indexer = indexer[:info_axis] + indexer[info_axis + 1 :]
- plane_axis = self.obj.axes[:info_axis][0]
- lplane_indexer = length_of_indexer(plane_indexer[0], plane_axis)
+ plane_indexer = indexer[:1]
+ lplane_indexer = length_of_indexer(plane_indexer[0], self.obj.index)
def setter(item, v):
- s = self.obj[item]
+ ser = self.obj[item]
pi = plane_indexer[0] if lplane_indexer == 1 else plane_indexer
# perform the equivalent of a setitem on the info axis
@@ -1721,16 +1726,16 @@ def setter(item, v):
com.is_null_slice(idx) or com.is_full_slice(idx, len(self.obj))
for idx in pi
):
- s = v
+ ser = v
else:
# set the item, possibly having a dtype change
- s._consolidate_inplace()
- s = s.copy()
- s._data = s._data.setitem(indexer=pi, value=v)
- s._maybe_update_cacher(clear=True)
+ ser._consolidate_inplace()
+ ser = ser.copy()
+ ser._data = ser._data.setitem(indexer=pi, value=v)
+ ser._maybe_update_cacher(clear=True)
# reset the sliced object if unique
- self.obj[item] = s
+ self.obj[item] = ser
# we need an iterable, with a ndim of at least 1
# eg. don't pass through np.array(0)
| The MultiIndex check it is doing is unnecessary for positional indexing.
Other things being cleaned up here appear to be remnants from Panel | https://api.github.com/repos/pandas-dev/pandas/pulls/31887 | 2020-02-11T19:02:55Z | 2020-02-20T22:37:38Z | 2020-02-20T22:37:38Z | 2020-02-20T23:27:36Z |
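One code path touched by this cleanup is the zero-length exclusion (`0 != lplane_indexer != len(value)`), which relates to boolean masks that select no rows. A small sketch of that positional-setitem situation (an illustration only, not the PR's test case):

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2, 3], "b": [4, 5, 6]})

# An all-False boolean mask selects zero rows, so the assignment below
# goes through _setitem_with_indexer with a zero-length plane indexer
# and leaves the frame's values unchanged.
mask = df["a"] > 10
df.loc[mask, "b"] = 99

print(df["b"].tolist())  # values unchanged: [4, 5, 6]
```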
CLN-29547 replace old string formatting | diff --git a/pandas/_libs/tslibs/c_timestamp.pyx b/pandas/_libs/tslibs/c_timestamp.pyx
index 2c72cec18f096..5be35c13f5737 100644
--- a/pandas/_libs/tslibs/c_timestamp.pyx
+++ b/pandas/_libs/tslibs/c_timestamp.pyx
@@ -59,10 +59,10 @@ def integer_op_not_supported(obj):
# GH#30886 using an fstring raises SystemError
int_addsub_msg = (
- "Addition/subtraction of integers and integer-arrays with {cls} is "
- "no longer supported. Instead of adding/subtracting `n`, "
- "use `n * obj.freq`"
- ).format(cls=cls)
+ f"Addition/subtraction of integers and integer-arrays with {cls} is "
+ f"no longer supported. Instead of adding/subtracting `n`, "
+ f"use `n * obj.freq`"
+ )
return TypeError(int_addsub_msg)
diff --git a/pandas/_libs/tslibs/timedeltas.pyx b/pandas/_libs/tslibs/timedeltas.pyx
index 3742506a7f8af..67bc51892a4e1 100644
--- a/pandas/_libs/tslibs/timedeltas.pyx
+++ b/pandas/_libs/tslibs/timedeltas.pyx
@@ -639,7 +639,7 @@ cdef inline int64_t parse_iso_format_string(object ts) except? -1:
bint have_dot = 0, have_value = 0, neg = 0
list number = [], unit = []
- err_msg = "Invalid ISO 8601 Duration format - {}".format(ts)
+ err_msg = f"Invalid ISO 8601 Duration format - {ts}"
for c in ts:
# number (ascii codes)
diff --git a/pandas/core/arrays/interval.py b/pandas/core/arrays/interval.py
index 0b35a031bc53f..528cc32b7fbeb 100644
--- a/pandas/core/arrays/interval.py
+++ b/pandas/core/arrays/interval.py
@@ -1126,8 +1126,8 @@ def __arrow_array__(self, type=None):
subtype = pyarrow.from_numpy_dtype(self.dtype.subtype)
except TypeError:
raise TypeError(
- "Conversion to arrow with subtype '{}' "
- "is not supported".format(self.dtype.subtype)
+ f"Conversion to arrow with subtype '{self.dtype.subtype}' "
+ f"is not supported"
)
interval_type = ArrowIntervalType(subtype, self.closed)
storage_array = pyarrow.StructArray.from_arrays(
@@ -1155,15 +1155,13 @@ def __arrow_array__(self, type=None):
# ensure we have the same subtype and closed attributes
if not type.equals(interval_type):
raise TypeError(
- "Not supported to convert IntervalArray to type with "
- "different 'subtype' ({0} vs {1}) and 'closed' ({2} vs {3}) "
- "attributes".format(
- self.dtype.subtype, type.subtype, self.closed, type.closed
- )
+ f"Not supported to convert IntervalArray to type with "
+ f"different 'subtype' ({self.dtype.subtype} vs {type.subtype}) "
+ f"and 'closed' ({self.closed} vs {type.closed}) attributes"
)
else:
raise TypeError(
- "Not supported to convert IntervalArray to '{0}' type".format(type)
+ f"Not supported to convert IntervalArray to '{type}' type"
)
return pyarrow.ExtensionArray.from_storage(interval_type, storage_array)
@@ -1175,7 +1173,7 @@ def __arrow_array__(self, type=None):
Parameters
----------
- na_tuple : boolean, default True
+ na_tuple : bool, default True
Returns NA as a tuple if True, ``(nan, nan)``, or just as the NA
value itself if False, ``nan``.
diff --git a/pandas/core/util/hashing.py b/pandas/core/util/hashing.py
index 3366f10b92604..b9cbc6c3ad8bd 100644
--- a/pandas/core/util/hashing.py
+++ b/pandas/core/util/hashing.py
@@ -295,7 +295,7 @@ def hash_array(
elif issubclass(dtype.type, (np.datetime64, np.timedelta64)):
vals = vals.view("i8").astype("u8", copy=False)
elif issubclass(dtype.type, np.number) and dtype.itemsize <= 8:
- vals = vals.view("u{}".format(vals.dtype.itemsize)).astype("u8")
+ vals = vals.view(f"u{vals.dtype.itemsize}").astype("u8")
else:
# With repeated values, its MUCH faster to categorize object dtypes,
# then hash and rename categories. We allow skipping the categorization
diff --git a/pandas/io/formats/format.py b/pandas/io/formats/format.py
index 149533bf0c238..4a429949c9a08 100644
--- a/pandas/io/formats/format.py
+++ b/pandas/io/formats/format.py
@@ -187,7 +187,7 @@ def _get_footer(self) -> str:
if self.length:
if footer:
footer += ", "
- footer += "Length: {length}".format(length=len(self.categorical))
+ footer += f"Length: {len(self.categorical)}"
level_info = self.categorical._repr_categories_info()
@@ -217,7 +217,7 @@ def to_string(self) -> str:
fmt_values = self._get_formatted_values()
- fmt_values = ["{i}".format(i=i) for i in fmt_values]
+ fmt_values = [f"{i}" for i in fmt_values]
fmt_values = [i.strip() for i in fmt_values]
values = ", ".join(fmt_values)
result = ["[" + values + "]"]
@@ -301,28 +301,26 @@ def _get_footer(self) -> str:
assert isinstance(
self.series.index, (ABCDatetimeIndex, ABCPeriodIndex, ABCTimedeltaIndex)
)
- footer += "Freq: {freq}".format(freq=self.series.index.freqstr)
+ footer += f"Freq: {self.series.index.freqstr}"
if self.name is not False and name is not None:
if footer:
footer += ", "
series_name = pprint_thing(name, escape_chars=("\t", "\r", "\n"))
- footer += (
- ("Name: {sname}".format(sname=series_name)) if name is not None else ""
- )
+ footer += f"Name: {series_name}" if name is not None else ""
if self.length is True or (self.length == "truncate" and self.truncate_v):
if footer:
footer += ", "
- footer += "Length: {length}".format(length=len(self.series))
+ footer += f"Length: {len(self.series)}"
if self.dtype is not False and self.dtype is not None:
name = getattr(self.tr_series.dtype, "name", None)
if name:
if footer:
footer += ", "
- footer += "dtype: {typ}".format(typ=pprint_thing(name))
+ footer += f"dtype: {pprint_thing(name)}"
# level infos are added to the end and in a new line, like it is done
# for Categoricals
@@ -359,9 +357,7 @@ def to_string(self) -> str:
footer = self._get_footer()
if len(series) == 0:
- return "{name}([], {footer})".format(
- name=type(self.series).__name__, footer=footer
- )
+ return f"{type(self.series).__name__}([], {footer})"
fmt_index, have_header = self._get_formatted_index()
fmt_values = self._get_formatted_values()
@@ -584,10 +580,8 @@ def __init__(
self.formatters = formatters
else:
raise ValueError(
- (
- "Formatters length({flen}) should match "
- "DataFrame number of columns({dlen})"
- ).format(flen=len(formatters), dlen=len(frame.columns))
+ f"Formatters length({len(formatters)}) should match "
+ f"DataFrame number of columns({len(frame.columns)})"
)
self.na_rep = na_rep
self.decimal = decimal
@@ -816,10 +810,10 @@ def write_result(self, buf: IO[str]) -> None:
frame = self.frame
if len(frame.columns) == 0 or len(frame.index) == 0:
- info_line = "Empty {name}\nColumns: {col}\nIndex: {idx}".format(
- name=type(self.frame).__name__,
- col=pprint_thing(frame.columns),
- idx=pprint_thing(frame.index),
+ info_line = (
+ f"Empty {type(self.frame).__name__}\n"
+ f"Columns: {pprint_thing(frame.columns)}\n"
+ f"Index: {pprint_thing(frame.index)}"
)
text = info_line
else:
@@ -865,11 +859,7 @@ def write_result(self, buf: IO[str]) -> None:
buf.writelines(text)
if self.should_show_dimensions:
- buf.write(
- "\n\n[{nrows} rows x {ncols} columns]".format(
- nrows=len(frame), ncols=len(frame.columns)
- )
- )
+ buf.write(f"\n\n[{len(frame)} rows x {len(frame.columns)} columns]")
def _join_multiline(self, *args) -> str:
lwidth = self.line_width
@@ -1075,7 +1065,7 @@ def _get_formatted_index(self, frame: "DataFrame") -> List[str]:
# empty space for columns
if self.show_col_idx_names:
- col_header = ["{x}".format(x=x) for x in self._get_column_name_list()]
+ col_header = [f"{x}" for x in self._get_column_name_list()]
else:
col_header = [""] * columns.nlevels
@@ -1211,10 +1201,8 @@ def _format_strings(self) -> List[str]:
if self.float_format is None:
float_format = get_option("display.float_format")
if float_format is None:
- fmt_str = "{{x: .{prec:d}g}}".format(
- prec=get_option("display.precision")
- )
- float_format = lambda x: fmt_str.format(x=x)
+ precision = get_option("display.precision")
+ float_format = lambda x: f"{x: .{precision:d}g}"
else:
float_format = self.float_format
@@ -1240,10 +1228,10 @@ def _format(x):
pass
return self.na_rep
elif isinstance(x, PandasObject):
- return "{x}".format(x=x)
+ return f"{x}"
else:
# object dtype
- return "{x}".format(x=formatter(x))
+ return f"{formatter(x)}"
vals = self.values
if isinstance(vals, Index):
@@ -1259,7 +1247,7 @@ def _format(x):
fmt_values = []
for i, v in enumerate(vals):
if not is_float_type[i] and leading_space:
- fmt_values.append(" {v}".format(v=_format(v)))
+ fmt_values.append(f" {_format(v)}")
elif is_float_type[i]:
fmt_values.append(float_format(v))
else:
@@ -1268,8 +1256,8 @@ def _format(x):
# to include a space if we get here.
tpl = "{v}"
else:
- tpl = " {v}"
- fmt_values.append(tpl.format(v=_format(v)))
+ tpl = f" {_format(v)}"
+ fmt_values.append(tpl)
return fmt_values
@@ -1442,7 +1430,7 @@ def _format_strings(self) -> List[str]:
class IntArrayFormatter(GenericArrayFormatter):
def _format_strings(self) -> List[str]:
- formatter = self.formatter or (lambda x: "{x: d}".format(x=x))
+ formatter = self.formatter or (lambda x: f"{x: d}")
fmt_values = [formatter(x) for x in self.values]
return fmt_values
@@ -1726,7 +1714,7 @@ def _formatter(x):
x = Timedelta(x)
result = x._repr_base(format=format)
if box:
- result = "'{res}'".format(res=result)
+ result = f"'{result}'"
return result
return _formatter
@@ -1889,16 +1877,16 @@ def __call__(self, num: Union[int, float]) -> str:
prefix = self.ENG_PREFIXES[int_pow10]
else:
if int_pow10 < 0:
- prefix = "E-{pow10:02d}".format(pow10=-int_pow10)
+ prefix = f"E-{-int_pow10:02d}"
else:
- prefix = "E+{pow10:02d}".format(pow10=int_pow10)
+ prefix = f"E+{int_pow10:02d}"
mant = sign * dnum / (10 ** pow10)
if self.accuracy is None: # pragma: no cover
format_str = "{mant: g}{prefix}"
else:
- format_str = "{{mant: .{acc:d}f}}{{prefix}}".format(acc=self.accuracy)
+ format_str = f"{{mant: .{self.accuracy:d}f}}{{prefix}}"
formatted = format_str.format(mant=mant, prefix=prefix)
diff --git a/pandas/io/formats/html.py b/pandas/io/formats/html.py
index e3161415fe2bc..3bc47cefd45c0 100644
--- a/pandas/io/formats/html.py
+++ b/pandas/io/formats/html.py
@@ -56,7 +56,7 @@ def __init__(
self.table_id = self.fmt.table_id
self.render_links = self.fmt.render_links
if isinstance(self.fmt.col_space, int):
- self.fmt.col_space = "{colspace}px".format(colspace=self.fmt.col_space)
+ self.fmt.col_space = f"{self.fmt.col_space}px"
@property
def show_row_idx_names(self) -> bool:
@@ -124,7 +124,7 @@ def write_th(
"""
if header and self.fmt.col_space is not None:
tags = tags or ""
- tags += 'style="min-width: {colspace};"'.format(colspace=self.fmt.col_space)
+ tags += f'style="min-width: {self.fmt.col_space};"'
self._write_cell(s, kind="th", indent=indent, tags=tags)
@@ -135,9 +135,9 @@ def _write_cell(
self, s: Any, kind: str = "td", indent: int = 0, tags: Optional[str] = None
) -> None:
if tags is not None:
- start_tag = "<{kind} {tags}>".format(kind=kind, tags=tags)
+ start_tag = f"<{kind} {tags}>"
else:
- start_tag = "<{kind}>".format(kind=kind)
+ start_tag = f"<{kind}>"
if self.escape:
# escape & first to prevent double escaping of &
@@ -149,16 +149,13 @@ def _write_cell(
if self.render_links and is_url(rs):
rs_unescaped = pprint_thing(s, escape_chars={}).strip()
- start_tag += '<a href="{url}" target="_blank">'.format(url=rs_unescaped)
+ start_tag += f'<a href="{rs_unescaped}" target="_blank">'
end_a = "</a>"
else:
end_a = ""
self.write(
- "{start}{rs}{end_a}</{kind}>".format(
- start=start_tag, rs=rs, end_a=end_a, kind=kind
- ),
- indent,
+ f"{start_tag}{rs}{end_a}</{kind}>", indent,
)
def write_tr(
@@ -177,7 +174,7 @@ def write_tr(
if align is None:
self.write("<tr>", indent)
else:
- self.write('<tr style="text-align: {align};">'.format(align=align), indent)
+ self.write(f'<tr style="text-align: {align};">', indent)
indent += indent_delta
for i, s in enumerate(line):
@@ -196,9 +193,7 @@ def render(self) -> List[str]:
if self.should_show_dimensions:
by = chr(215) # ×
self.write(
- "<p>{rows} rows {by} {cols} columns</p>".format(
- rows=len(self.frame), by=by, cols=len(self.frame.columns)
- )
+ f"<p>{len(self.frame)} rows {by} {len(self.frame.columns)} columns</p>"
)
return self.elements
@@ -216,7 +211,7 @@ def _write_table(self, indent: int = 0) -> None:
self.classes = self.classes.split()
if not isinstance(self.classes, (list, tuple)):
raise TypeError(
- "classes must be a string, list, "
+ f"classes must be a string, list, "
f"or tuple, not {type(self.classes)}"
)
_classes.extend(self.classes)
@@ -224,12 +219,10 @@ def _write_table(self, indent: int = 0) -> None:
if self.table_id is None:
id_section = ""
else:
- id_section = ' id="{table_id}"'.format(table_id=self.table_id)
+ id_section = f' id="{self.table_id}"'
self.write(
- '<table border="{border}" class="{cls}"{id_section}>'.format(
- border=self.border, cls=" ".join(_classes), id_section=id_section
- ),
+ f'<table border="{self.border}" class="{" ".join(_classes)}"{id_section}>',
indent,
)
diff --git a/pandas/io/formats/latex.py b/pandas/io/formats/latex.py
index 8ab56437d5c05..c6d2055ecfea8 100644
--- a/pandas/io/formats/latex.py
+++ b/pandas/io/formats/latex.py
@@ -59,10 +59,9 @@ def write_result(self, buf: IO[str]) -> None:
# string representation of the columns
if len(self.frame.columns) == 0 or len(self.frame.index) == 0:
- info_line = "Empty {name}\nColumns: {col}\nIndex: {idx}".format(
- name=type(self.frame).__name__,
- col=self.frame.columns,
- idx=self.frame.index,
+ info_line = (
+ f"Empty {type(self.frame).__name__}\n"
+ f"Columns: {self.frame.columns}\nIndex: {self.frame.index}"
)
strcols = [[info_line]]
else:
@@ -141,8 +140,8 @@ def pad_empties(x):
buf.write("\\endhead\n")
buf.write("\\midrule\n")
buf.write(
- "\\multicolumn{{{n}}}{{r}}{{{{Continued on next "
- "page}}}} \\\\\n".format(n=len(row))
+ f"\\multicolumn{{{len(row)}}}{{r}}{{{{Continued on next "
+ f"page}}}} \\\\\n"
)
buf.write("\\midrule\n")
buf.write("\\endfoot\n\n")
@@ -172,7 +171,7 @@ def pad_empties(x):
if self.bold_rows and self.fmt.index:
# bold row labels
crow = [
- "\\textbf{{{x}}}".format(x=x)
+ f"\\textbf{{{x}}}"
if j < ilevels and x.strip() not in ["", "{}"]
else x
for j, x in enumerate(crow)
@@ -211,9 +210,8 @@ def append_col():
# write multicolumn if needed
if ncol > 1:
row2.append(
- "\\multicolumn{{{ncol:d}}}{{{fmt:s}}}{{{txt:s}}}".format(
- ncol=ncol, fmt=self.multicolumn_format, txt=coltext.strip()
- )
+ f"\\multicolumn{{{ncol:d}}}{{{self.multicolumn_format:s}}}"
+ f"{{{coltext.strip():s}}}"
)
# don't modify where not needed
else:
@@ -256,9 +254,7 @@ def _format_multirow(
break
if nrow > 1:
# overwrite non-multirow entry
- row[j] = "\\multirow{{{nrow:d}}}{{*}}{{{row:s}}}".format(
- nrow=nrow, row=row[j].strip()
- )
+ row[j] = f"\\multirow{{{nrow:d}}}{{*}}{{{row[j].strip():s}}}"
# save when to end the current block with \cline
self.clinebuf.append([i + nrow - 1, j + 1])
return row
@@ -269,7 +265,7 @@ def _print_cline(self, buf: IO[str], i: int, icol: int) -> None:
"""
for cl in self.clinebuf:
if cl[0] == i:
- buf.write("\\cline{{{cl:d}-{icol:d}}}\n".format(cl=cl[1], icol=icol))
+ buf.write(f"\\cline{{{cl[1]:d}-{icol:d}}}\n")
# remove entries that have been written to buffer
self.clinebuf = [x for x in self.clinebuf if x[0] != i]
@@ -293,19 +289,19 @@ def _write_tabular_begin(self, buf, column_format: str):
if self.caption is None:
caption_ = ""
else:
- caption_ = "\n\\caption{{{}}}".format(self.caption)
+ caption_ = f"\n\\caption{{{self.caption}}}"
if self.label is None:
label_ = ""
else:
- label_ = "\n\\label{{{}}}".format(self.label)
+ label_ = f"\n\\label{{{self.label}}}"
- buf.write("\\begin{{table}}\n\\centering{}{}\n".format(caption_, label_))
+ buf.write(f"\\begin{{table}}\n\\centering{caption_}{label_}\n")
else:
# then write output only in a tabular environment
pass
- buf.write("\\begin{{tabular}}{{{fmt}}}\n".format(fmt=column_format))
+ buf.write(f"\\begin{{tabular}}{{{column_format}}}\n")
def _write_tabular_end(self, buf):
"""
@@ -341,18 +337,18 @@ def _write_longtable_begin(self, buf, column_format: str):
<https://en.wikibooks.org/wiki/LaTeX/Tables>`__ e.g 'rcl'
for 3 columns
"""
- buf.write("\\begin{{longtable}}{{{fmt}}}\n".format(fmt=column_format))
+ buf.write(f"\\begin{{longtable}}{{{column_format}}}\n")
if self.caption is not None or self.label is not None:
if self.caption is None:
pass
else:
- buf.write("\\caption{{{}}}".format(self.caption))
+ buf.write(f"\\caption{{{self.caption}}}")
if self.label is None:
pass
else:
- buf.write("\\label{{{}}}".format(self.label))
+ buf.write(f"\\label{{{self.label}}}")
# a double-backslash is required at the end of the line
# as discussed here:
diff --git a/pandas/io/formats/printing.py b/pandas/io/formats/printing.py
index 13b18a0b5fb6f..36e774305b577 100644
--- a/pandas/io/formats/printing.py
+++ b/pandas/io/formats/printing.py
@@ -229,7 +229,7 @@ def as_escaped_string(
max_seq_items=max_seq_items,
)
elif isinstance(thing, str) and quote_strings:
- result = "'{thing}'".format(thing=as_escaped_string(thing))
+ result = f"'{as_escaped_string(thing)}'"
else:
result = as_escaped_string(thing)
diff --git a/pandas/io/parsers.py b/pandas/io/parsers.py
index 8bc8470ae7658..b661770dc80a2 100755
--- a/pandas/io/parsers.py
+++ b/pandas/io/parsers.py
@@ -89,7 +89,7 @@
----------
filepath_or_buffer : str, path object or file-like object
Any valid string path is acceptable. The string could be a URL. Valid
- URL schemes include http, ftp, s3, and file. For file URLs, a host is
+ URL schemes include http, ftp, s3, gs, and file. For file URLs, a host is
expected. A local file could be: file://localhost/path/to/table.csv.
If you want to pass in a path object, pandas accepts any ``os.PathLike``.
@@ -1493,10 +1493,8 @@ def extract(r):
for n in range(len(columns[0])):
if all(ensure_str(col[n]) in self.unnamed_cols for col in columns):
raise ParserError(
- "Passed header=[{header}] are too many rows for this "
- "multi_index of columns".format(
- header=",".join(str(x) for x in self.header)
- )
+ f"Passed header=[{','.join(str(x) for x in self.header)}] "
+ f"are too many rows for this multi_index of columns"
)
# Clean the column names (if we have an index_col).
@@ -3613,8 +3611,8 @@ def get_rows(self, infer_nrows, skiprows=None):
def detect_colspecs(self, infer_nrows=100, skiprows=None):
# Regex escape the delimiters
- delimiters = "".join(r"\{}".format(x) for x in self.delimiter)
- pattern = re.compile("([^{}]+)".format(delimiters))
+ delimiters = "".join(fr"\{x}" for x in self.delimiter)
+ pattern = re.compile(f"([^{delimiters}]+)")
rows = self.get_rows(infer_nrows, skiprows)
if not rows:
raise EmptyDataError("No rows from which to infer column width")
diff --git a/pandas/tests/arrays/categorical/test_dtypes.py b/pandas/tests/arrays/categorical/test_dtypes.py
index 19746d7d72162..9922a8863ebc2 100644
--- a/pandas/tests/arrays/categorical/test_dtypes.py
+++ b/pandas/tests/arrays/categorical/test_dtypes.py
@@ -92,22 +92,20 @@ def test_codes_dtypes(self):
result = Categorical(["foo", "bar", "baz"])
assert result.codes.dtype == "int8"
- result = Categorical(["foo{i:05d}".format(i=i) for i in range(400)])
+ result = Categorical([f"foo{i:05d}" for i in range(400)])
assert result.codes.dtype == "int16"
- result = Categorical(["foo{i:05d}".format(i=i) for i in range(40000)])
+ result = Categorical([f"foo{i:05d}" for i in range(40000)])
assert result.codes.dtype == "int32"
# adding cats
result = Categorical(["foo", "bar", "baz"])
assert result.codes.dtype == "int8"
- result = result.add_categories(["foo{i:05d}".format(i=i) for i in range(400)])
+ result = result.add_categories([f"foo{i:05d}" for i in range(400)])
assert result.codes.dtype == "int16"
# removing cats
- result = result.remove_categories(
- ["foo{i:05d}".format(i=i) for i in range(300)]
- )
+ result = result.remove_categories([f"foo{i:05d}" for i in range(300)])
assert result.codes.dtype == "int8"
@pytest.mark.parametrize("ordered", [True, False])
diff --git a/pandas/tests/arrays/categorical/test_operators.py b/pandas/tests/arrays/categorical/test_operators.py
index 0c830c65e0f8b..c3006687ca6dd 100644
--- a/pandas/tests/arrays/categorical/test_operators.py
+++ b/pandas/tests/arrays/categorical/test_operators.py
@@ -338,7 +338,7 @@ def test_compare_unordered_different_order(self):
def test_numeric_like_ops(self):
df = DataFrame({"value": np.random.randint(0, 10000, 100)})
- labels = ["{0} - {1}".format(i, i + 499) for i in range(0, 10000, 500)]
+ labels = [f"{i} - {i + 499}" for i in range(0, 10000, 500)]
cat_labels = Categorical(labels, labels)
df = df.sort_values(by=["value"], ascending=True)
@@ -353,9 +353,7 @@ def test_numeric_like_ops(self):
("__mul__", r"\*"),
("__truediv__", "/"),
]:
- msg = r"Series cannot perform the operation {}|unsupported operand".format(
- str_rep
- )
+ msg = fr"Series cannot perform the operation {str_rep}|unsupported operand"
with pytest.raises(TypeError, match=msg):
getattr(df, op)(df)
@@ -363,7 +361,7 @@ def test_numeric_like_ops(self):
# min/max)
s = df["value_group"]
for op in ["kurt", "skew", "var", "std", "mean", "sum", "median"]:
- msg = "Categorical cannot perform the operation {}".format(op)
+ msg = f"Categorical cannot perform the operation {op}"
with pytest.raises(TypeError, match=msg):
getattr(s, op)(numeric_only=False)
@@ -383,9 +381,7 @@ def test_numeric_like_ops(self):
("__mul__", r"\*"),
("__truediv__", "/"),
]:
- msg = r"Series cannot perform the operation {}|unsupported operand".format(
- str_rep
- )
+ msg = fr"Series cannot perform the operation {str_rep}|unsupported operand"
with pytest.raises(TypeError, match=msg):
getattr(s, op)(2)
| I split PR #31844 into batches; this is the first one.
For this PR I ran the command `grep -l -R -e '%s' -e '%d' -e '\.format(' --include=*.{py,pyx} pandas/`, checked every file it returned for `.format(`, and replaced the old string formatting with the corresponding f-strings, in an attempt at a full cleanup of [#29547](https://github.com/pandas-dev/pandas/issues/29547). I may have missed something, so it is a good idea to double-check just in case.
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
| https://api.github.com/repos/pandas-dev/pandas/pulls/31878 | 2020-02-11T14:43:22Z | 2020-02-11T15:05:13Z | null | 2020-02-11T15:05:19Z |
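The conversions in the record above are mechanical: each `str.format` call becomes an f-string with the same placeholders and format specs. A self-contained sketch of the equivalence being relied on:

```python
# Old-style .format() and the f-string replacement produce identical text,
# including format specs such as field width and precision.
length = 100
precision = 6
x = 0.123456789

old_len = "Length: {length}".format(length=length)
new_len = f"Length: {length}"

old_flt = "{x: .{prec:d}g}".format(x=x, prec=precision)
new_flt = f"{x: .{precision:d}g}"

print(old_len == new_len)  # True
print(old_flt == new_flt)  # True
```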
BUG: fix infer_dtype for StringDtype | diff --git a/doc/source/whatsnew/v1.0.2.rst b/doc/source/whatsnew/v1.0.2.rst
index b31f7c1166dc0..448779e4c3e0b 100644
--- a/doc/source/whatsnew/v1.0.2.rst
+++ b/doc/source/whatsnew/v1.0.2.rst
@@ -36,8 +36,10 @@ Bug fixes
- Using ``pd.NA`` with :meth:`DataFrame.to_json` now correctly outputs a null value instead of an empty object (:issue:`31615`)
+
**Experimental dtypes**
+- Fix bug in :meth:`DataFrame.convert_dtypes` for columns that were already using the ``"string"`` dtype (:issue:`31731`).
- Fixed bug in setting values using a slice indexer with string dtype (:issue:`31772`)
.. ---------------------------------------------------------------------------
diff --git a/pandas/_libs/lib.pyx b/pandas/_libs/lib.pyx
index 9702eb4615909..d2f0b2ffbaeec 100644
--- a/pandas/_libs/lib.pyx
+++ b/pandas/_libs/lib.pyx
@@ -1005,7 +1005,7 @@ _TYPE_MAP = {
'complex64': 'complex',
'complex128': 'complex',
'c': 'complex',
- 'string': 'bytes',
+ 'string': 'string',
'S': 'bytes',
'U': 'string',
'bool': 'boolean',
diff --git a/pandas/conftest.py b/pandas/conftest.py
index 7463b2b579c0c..821bec19d6115 100644
--- a/pandas/conftest.py
+++ b/pandas/conftest.py
@@ -744,6 +744,7 @@ def any_numpy_dtype(request):
# categoricals are handled separately
_any_skipna_inferred_dtype = [
("string", ["a", np.nan, "c"]),
+ ("string", ["a", pd.NA, "c"]),
("bytes", [b"a", np.nan, b"c"]),
("empty", [np.nan, np.nan, np.nan]),
("empty", []),
@@ -754,6 +755,7 @@ def any_numpy_dtype(request):
("mixed-integer-float", [1, np.nan, 2.0]),
("decimal", [Decimal(1), np.nan, Decimal(2)]),
("boolean", [True, np.nan, False]),
+ ("boolean", [True, pd.NA, False]),
("datetime64", [np.datetime64("2013-01-01"), np.nan, np.datetime64("2018-01-01")]),
("datetime", [pd.Timestamp("20130101"), np.nan, pd.Timestamp("20180101")]),
("date", [date(2013, 1, 1), np.nan, date(2018, 1, 1)]),
diff --git a/pandas/tests/dtypes/test_inference.py b/pandas/tests/dtypes/test_inference.py
index 48f9262ad3486..48ae1f67297af 100644
--- a/pandas/tests/dtypes/test_inference.py
+++ b/pandas/tests/dtypes/test_inference.py
@@ -1200,6 +1200,24 @@ def test_interval(self):
inferred = lib.infer_dtype(pd.Series(idx), skipna=False)
assert inferred == "interval"
+ @pytest.mark.parametrize("klass", [pd.array, pd.Series])
+ @pytest.mark.parametrize("skipna", [True, False])
+ @pytest.mark.parametrize("data", [["a", "b", "c"], ["a", "b", pd.NA]])
+ def test_string_dtype(self, data, skipna, klass):
+ # StringArray
+ val = klass(data, dtype="string")
+ inferred = lib.infer_dtype(val, skipna=skipna)
+ assert inferred == "string"
+
+ @pytest.mark.parametrize("klass", [pd.array, pd.Series])
+ @pytest.mark.parametrize("skipna", [True, False])
+ @pytest.mark.parametrize("data", [[True, False, True], [True, False, pd.NA]])
+ def test_boolean_dtype(self, data, skipna, klass):
+ # BooleanArray
+ val = klass(data, dtype="boolean")
+ inferred = lib.infer_dtype(val, skipna=skipna)
+ assert inferred == "boolean"
+
class TestNumberScalar:
def test_is_number(self):
diff --git a/pandas/tests/series/methods/test_convert_dtypes.py b/pandas/tests/series/methods/test_convert_dtypes.py
index 923b5a94c5f41..a6b5fed40a9d7 100644
--- a/pandas/tests/series/methods/test_convert_dtypes.py
+++ b/pandas/tests/series/methods/test_convert_dtypes.py
@@ -246,3 +246,12 @@ def test_convert_dtypes(self, data, maindtype, params, answerdict):
# Make sure original not changed
tm.assert_series_equal(series, copy)
+
+ def test_convert_string_dtype(self):
+ # https://github.com/pandas-dev/pandas/issues/31731 -> converting columns
+ # that are already string dtype
+ df = pd.DataFrame(
+ {"A": ["a", "b", pd.NA], "B": ["ä", "ö", "ü"]}, dtype="string"
+ )
+ result = df.convert_dtypes()
+ tm.assert_frame_equal(df, result)
diff --git a/pandas/tests/test_strings.py b/pandas/tests/test_strings.py
index 62d26dacde67b..1338d801e39f4 100644
--- a/pandas/tests/test_strings.py
+++ b/pandas/tests/test_strings.py
@@ -7,6 +7,7 @@
from pandas._libs import lib
+import pandas as pd
from pandas import DataFrame, Index, MultiIndex, Series, concat, isna, notna
import pandas._testing as tm
import pandas.core.strings as strings
@@ -207,6 +208,9 @@ def test_api_per_dtype(self, index_or_series, dtype, any_skipna_inferred_dtype):
box = index_or_series
inferred_dtype, values = any_skipna_inferred_dtype
+ if dtype == "category" and len(values) and values[1] is pd.NA:
+ pytest.xfail(reason="Categorical does not yet support pd.NA")
+
t = box(values, dtype=dtype) # explicit dtype to avoid casting
# TODO: get rid of these xfails
| Closes #31731 | https://api.github.com/repos/pandas-dev/pandas/pulls/31877 | 2020-02-11T14:10:21Z | 2020-02-12T15:05:29Z | 2020-02-12T15:05:29Z | 2020-02-12T15:14:15Z |
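The `_TYPE_MAP` change in the record above maps the `'string'` dtype name to `'string'` instead of `'bytes'`, so inference on a `StringArray` reports the right kind. A short sketch (assuming pandas 1.0.2 or later, where the fix is present):

```python
import pandas as pd
from pandas.api.types import infer_dtype

# A StringArray, including a missing value, should infer as "string",
# not "bytes", after the _TYPE_MAP fix.
arr = pd.array(["a", pd.NA, "c"], dtype="string")

print(infer_dtype(arr, skipna=True))   # "string" on fixed versions
print(infer_dtype(arr, skipna=False))  # "string" on fixed versions
```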
CLN: Move info | diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 4ad80273f77ba..e4f36e128059b 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -14,7 +14,6 @@
import datetime
from io import StringIO
import itertools
-import sys
from textwrap import dedent
from typing import (
IO,
@@ -131,7 +130,7 @@
from pandas.io.common import get_filepath_or_buffer
from pandas.io.formats import console, format as fmt
-from pandas.io.formats.printing import pprint_thing
+from pandas.io.formats.info import info
import pandas.plotting
if TYPE_CHECKING:
@@ -2225,282 +2224,11 @@ def to_html(
)
# ----------------------------------------------------------------------
-
+ @Appender(info.__doc__)
def info(
self, verbose=None, buf=None, max_cols=None, memory_usage=None, null_counts=None
) -> None:
- """
- Print a concise summary of a DataFrame.
-
- This method prints information about a DataFrame including
- the index dtype and column dtypes, non-null values and memory usage.
-
- Parameters
- ----------
- verbose : bool, optional
- Whether to print the full summary. By default, the setting in
- ``pandas.options.display.max_info_columns`` is followed.
- buf : writable buffer, defaults to sys.stdout
- Where to send the output. By default, the output is printed to
- sys.stdout. Pass a writable buffer if you need to further process
- the output.
- max_cols : int, optional
- When to switch from the verbose to the truncated output. If the
- DataFrame has more than `max_cols` columns, the truncated output
- is used. By default, the setting in
- ``pandas.options.display.max_info_columns`` is used.
- memory_usage : bool, str, optional
- Specifies whether total memory usage of the DataFrame
- elements (including the index) should be displayed. By default,
- this follows the ``pandas.options.display.memory_usage`` setting.
-
-            True always shows memory usage. False never shows memory usage.
- A value of 'deep' is equivalent to "True with deep introspection".
- Memory usage is shown in human-readable units (base-2
- representation). Without deep introspection a memory estimation is
-            made based on column dtype and number of rows assuming values
- consume the same memory amount for corresponding dtypes. With deep
- memory introspection, a real memory usage calculation is performed
- at the cost of computational resources.
- null_counts : bool, optional
- Whether to show the non-null counts. By default, this is shown
- only if the frame is smaller than
- ``pandas.options.display.max_info_rows`` and
- ``pandas.options.display.max_info_columns``. A value of True always
- shows the counts, and False never shows the counts.
-
- Returns
- -------
- None
- This method prints a summary of a DataFrame and returns None.
-
- See Also
- --------
- DataFrame.describe: Generate descriptive statistics of DataFrame
- columns.
- DataFrame.memory_usage: Memory usage of DataFrame columns.
-
- Examples
- --------
- >>> int_values = [1, 2, 3, 4, 5]
- >>> text_values = ['alpha', 'beta', 'gamma', 'delta', 'epsilon']
- >>> float_values = [0.0, 0.25, 0.5, 0.75, 1.0]
- >>> df = pd.DataFrame({"int_col": int_values, "text_col": text_values,
- ... "float_col": float_values})
- >>> df
- int_col text_col float_col
- 0 1 alpha 0.00
- 1 2 beta 0.25
- 2 3 gamma 0.50
- 3 4 delta 0.75
- 4 5 epsilon 1.00
-
- Prints information of all columns:
-
- >>> df.info(verbose=True)
- <class 'pandas.core.frame.DataFrame'>
- RangeIndex: 5 entries, 0 to 4
- Data columns (total 3 columns):
- # Column Non-Null Count Dtype
- --- ------ -------------- -----
- 0 int_col 5 non-null int64
- 1 text_col 5 non-null object
- 2 float_col 5 non-null float64
- dtypes: float64(1), int64(1), object(1)
- memory usage: 248.0+ bytes
-
-        Prints a summary of the column count and dtypes but not per-column
-        information:
-
- >>> df.info(verbose=False)
- <class 'pandas.core.frame.DataFrame'>
- RangeIndex: 5 entries, 0 to 4
- Columns: 3 entries, int_col to float_col
- dtypes: float64(1), int64(1), object(1)
- memory usage: 248.0+ bytes
-
-        Pipe the output of DataFrame.info to a buffer instead of sys.stdout,
-        get the buffer content and write it to a text file:
-
- >>> import io
- >>> buffer = io.StringIO()
- >>> df.info(buf=buffer)
- >>> s = buffer.getvalue()
- >>> with open("df_info.txt", "w",
- ... encoding="utf-8") as f: # doctest: +SKIP
- ... f.write(s)
- 260
-
-        The `memory_usage` parameter allows deep introspection mode, especially
-        useful for big DataFrames and fine-tuning memory usage:
-
- >>> random_strings_array = np.random.choice(['a', 'b', 'c'], 10 ** 6)
- >>> df = pd.DataFrame({
- ... 'column_1': np.random.choice(['a', 'b', 'c'], 10 ** 6),
- ... 'column_2': np.random.choice(['a', 'b', 'c'], 10 ** 6),
- ... 'column_3': np.random.choice(['a', 'b', 'c'], 10 ** 6)
- ... })
- >>> df.info()
- <class 'pandas.core.frame.DataFrame'>
- RangeIndex: 1000000 entries, 0 to 999999
- Data columns (total 3 columns):
- # Column Non-Null Count Dtype
- --- ------ -------------- -----
- 0 column_1 1000000 non-null object
- 1 column_2 1000000 non-null object
- 2 column_3 1000000 non-null object
- dtypes: object(3)
- memory usage: 22.9+ MB
-
- >>> df.info(memory_usage='deep')
- <class 'pandas.core.frame.DataFrame'>
- RangeIndex: 1000000 entries, 0 to 999999
- Data columns (total 3 columns):
- # Column Non-Null Count Dtype
- --- ------ -------------- -----
- 0 column_1 1000000 non-null object
- 1 column_2 1000000 non-null object
- 2 column_3 1000000 non-null object
- dtypes: object(3)
- memory usage: 188.8 MB
- """
- if buf is None: # pragma: no cover
- buf = sys.stdout
-
- lines = []
-
- lines.append(str(type(self)))
- lines.append(self.index._summary())
-
- if len(self.columns) == 0:
- lines.append(f"Empty {type(self).__name__}")
- fmt.buffer_put_lines(buf, lines)
- return
-
- cols = self.columns
- col_count = len(self.columns)
-
- # hack
- if max_cols is None:
- max_cols = get_option("display.max_info_columns", len(self.columns) + 1)
-
- max_rows = get_option("display.max_info_rows", len(self) + 1)
-
- if null_counts is None:
- show_counts = (col_count <= max_cols) and (len(self) < max_rows)
- else:
- show_counts = null_counts
- exceeds_info_cols = col_count > max_cols
-
- def _verbose_repr():
- lines.append(f"Data columns (total {len(self.columns)} columns):")
-
- id_head = " # "
- column_head = "Column"
- col_space = 2
-
- max_col = max(len(pprint_thing(k)) for k in cols)
- len_column = len(pprint_thing(column_head))
- space = max(max_col, len_column) + col_space
-
- max_id = len(pprint_thing(col_count))
- len_id = len(pprint_thing(id_head))
- space_num = max(max_id, len_id) + col_space
- counts = None
-
- header = _put_str(id_head, space_num) + _put_str(column_head, space)
- if show_counts:
- counts = self.count()
- if len(cols) != len(counts): # pragma: no cover
- raise AssertionError(
- f"Columns must equal counts ({len(cols)} != {len(counts)})"
- )
- count_header = "Non-Null Count"
- len_count = len(count_header)
- non_null = " non-null"
- max_count = max(len(pprint_thing(k)) for k in counts) + len(non_null)
- space_count = max(len_count, max_count) + col_space
- count_temp = "{count}" + non_null
- else:
- count_header = ""
- space_count = len(count_header)
- len_count = space_count
- count_temp = "{count}"
-
- dtype_header = "Dtype"
- len_dtype = len(dtype_header)
- max_dtypes = max(len(pprint_thing(k)) for k in self.dtypes)
- space_dtype = max(len_dtype, max_dtypes)
- header += _put_str(count_header, space_count) + _put_str(
- dtype_header, space_dtype
- )
-
- lines.append(header)
- lines.append(
- _put_str("-" * len_id, space_num)
- + _put_str("-" * len_column, space)
- + _put_str("-" * len_count, space_count)
- + _put_str("-" * len_dtype, space_dtype)
- )
-
- for i, col in enumerate(self.columns):
- dtype = self.dtypes.iloc[i]
- col = pprint_thing(col)
-
- line_no = _put_str(f" {i}", space_num)
- count = ""
- if show_counts:
- count = counts.iloc[i]
-
- lines.append(
- line_no
- + _put_str(col, space)
- + _put_str(count_temp.format(count=count), space_count)
- + _put_str(dtype, space_dtype)
- )
-
- def _non_verbose_repr():
- lines.append(self.columns._summary(name="Columns"))
-
- def _sizeof_fmt(num, size_qualifier):
- # returns size in human readable format
- for x in ["bytes", "KB", "MB", "GB", "TB"]:
- if num < 1024.0:
- return f"{num:3.1f}{size_qualifier} {x}"
- num /= 1024.0
- return f"{num:3.1f}{size_qualifier} PB"
-
- if verbose:
- _verbose_repr()
-        elif verbose is False:  # specifically set to False, not just None
- _non_verbose_repr()
- else:
- if exceeds_info_cols:
- _non_verbose_repr()
- else:
- _verbose_repr()
-
- counts = self._data.get_dtype_counts()
- dtypes = [f"{k[0]}({k[1]:d})" for k in sorted(counts.items())]
- lines.append(f"dtypes: {', '.join(dtypes)}")
-
- if memory_usage is None:
- memory_usage = get_option("display.memory_usage")
- if memory_usage:
- # append memory usage of df to display
- size_qualifier = ""
- if memory_usage == "deep":
- deep = True
- else:
- # size_qualifier is just a best effort; not guaranteed to catch
- # all cases (e.g., it misses categorical data even with object
- # categories)
- deep = False
- if "object" in counts or self.index._is_memory_usage_qualified():
- size_qualifier = "+"
- mem_usage = self.memory_usage(index=True, deep=deep).sum()
- lines.append(f"memory usage: {_sizeof_fmt(mem_usage, size_qualifier)}\n")
- fmt.buffer_put_lines(buf, lines)
+ return info(self, verbose, buf, max_cols, memory_usage, null_counts)
def memory_usage(self, index=True, deep=False) -> Series:
"""
@@ -8623,7 +8351,3 @@ def _from_nested_dict(data):
new_data[col] = new_data.get(col, {})
new_data[col][index] = v
return new_data
-
-
-def _put_str(s, space):
- return str(s)[:space].ljust(space)
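The wrapper above relies on `@Appender(info.__doc__)` (from `pandas.util._decorators`) to forward the moved function's docstring onto the thin `DataFrame.info` method. A minimal stand-in for that pattern (my own sketch, not pandas' actual decorator):

```python
def appender(doc):
    """Attach `doc` to the decorated function's docstring (cf. pandas' Appender)."""
    def decorate(func):
        func.__doc__ = (func.__doc__ or "") + (doc or "")
        return func
    return decorate

def module_info(data):
    """Print a concise summary of a Frame."""
    print(f"<{type(data).__name__}> with {len(data.rows)} rows")

class Frame:
    def __init__(self, rows):
        self.rows = rows

    @appender(module_info.__doc__)
    def info(self):
        # thin wrapper delegating to the module-level function
        return module_info(self)

f = Frame([1, 2, 3])
f.info()                    # prints "<Frame> with 3 rows"
print(Frame.info.__doc__)   # docstring forwarded from module_info
```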
diff --git a/pandas/io/formats/info.py b/pandas/io/formats/info.py
new file mode 100644
index 0000000000000..0c08065f55273
--- /dev/null
+++ b/pandas/io/formats/info.py
@@ -0,0 +1,288 @@
+import sys
+
+from pandas._config import get_option
+
+from pandas.io.formats import format as fmt
+from pandas.io.formats.printing import pprint_thing
+
+
+def _put_str(s, space):
+ return str(s)[:space].ljust(space)
+
+
+def info(
+ data, verbose=None, buf=None, max_cols=None, memory_usage=None, null_counts=None
+) -> None:
+ """
+ Print a concise summary of a DataFrame.
+
+ This method prints information about a DataFrame including
+ the index dtype and column dtypes, non-null values and memory usage.
+
+ Parameters
+ ----------
+ data : DataFrame
+ DataFrame to print information about.
+ verbose : bool, optional
+ Whether to print the full summary. By default, the setting in
+ ``pandas.options.display.max_info_columns`` is followed.
+ buf : writable buffer, defaults to sys.stdout
+ Where to send the output. By default, the output is printed to
+ sys.stdout. Pass a writable buffer if you need to further process
+ the output.
+ max_cols : int, optional
+ When to switch from the verbose to the truncated output. If the
+ DataFrame has more than `max_cols` columns, the truncated output
+ is used. By default, the setting in
+ ``pandas.options.display.max_info_columns`` is used.
+ memory_usage : bool, str, optional
+ Specifies whether total memory usage of the DataFrame
+ elements (including the index) should be displayed. By default,
+ this follows the ``pandas.options.display.memory_usage`` setting.
+
+        True always shows memory usage. False never shows memory usage.
+ A value of 'deep' is equivalent to "True with deep introspection".
+ Memory usage is shown in human-readable units (base-2
+ representation). Without deep introspection a memory estimation is
+        made based on column dtype and number of rows assuming values
+ consume the same memory amount for corresponding dtypes. With deep
+ memory introspection, a real memory usage calculation is performed
+ at the cost of computational resources.
+ null_counts : bool, optional
+ Whether to show the non-null counts. By default, this is shown
+ only if the frame is smaller than
+ ``pandas.options.display.max_info_rows`` and
+ ``pandas.options.display.max_info_columns``. A value of True always
+ shows the counts, and False never shows the counts.
+
+ Returns
+ -------
+ None
+ This method prints a summary of a DataFrame and returns None.
+
+ See Also
+ --------
+ DataFrame.describe: Generate descriptive statistics of DataFrame
+ columns.
+ DataFrame.memory_usage: Memory usage of DataFrame columns.
+
+ Examples
+ --------
+ >>> int_values = [1, 2, 3, 4, 5]
+ >>> text_values = ['alpha', 'beta', 'gamma', 'delta', 'epsilon']
+ >>> float_values = [0.0, 0.25, 0.5, 0.75, 1.0]
+ >>> df = pd.DataFrame({"int_col": int_values, "text_col": text_values,
+ ... "float_col": float_values})
+ >>> df
+ int_col text_col float_col
+ 0 1 alpha 0.00
+ 1 2 beta 0.25
+ 2 3 gamma 0.50
+ 3 4 delta 0.75
+ 4 5 epsilon 1.00
+
+ Prints information of all columns:
+
+ >>> df.info(verbose=True)
+ <class 'pandas.core.frame.DataFrame'>
+ RangeIndex: 5 entries, 0 to 4
+ Data columns (total 3 columns):
+ # Column Non-Null Count Dtype
+ --- ------ -------------- -----
+ 0 int_col 5 non-null int64
+ 1 text_col 5 non-null object
+ 2 float_col 5 non-null float64
+ dtypes: float64(1), int64(1), object(1)
+ memory usage: 248.0+ bytes
+
+    Prints a summary of the column count and dtypes but not per-column
+    information:
+
+ >>> df.info(verbose=False)
+ <class 'pandas.core.frame.DataFrame'>
+ RangeIndex: 5 entries, 0 to 4
+ Columns: 3 entries, int_col to float_col
+ dtypes: float64(1), int64(1), object(1)
+ memory usage: 248.0+ bytes
+
+    Pipe the output of DataFrame.info to a buffer instead of sys.stdout,
+    get the buffer content and write it to a text file:
+
+ >>> import io
+ >>> buffer = io.StringIO()
+ >>> df.info(buf=buffer)
+ >>> s = buffer.getvalue()
+ >>> with open("df_info.txt", "w",
+ ... encoding="utf-8") as f: # doctest: +SKIP
+ ... f.write(s)
+ 260
+
+    The `memory_usage` parameter allows deep introspection mode, especially
+    useful for big DataFrames and fine-tuning memory usage:
+
+ >>> random_strings_array = np.random.choice(['a', 'b', 'c'], 10 ** 6)
+ >>> df = pd.DataFrame({
+ ... 'column_1': np.random.choice(['a', 'b', 'c'], 10 ** 6),
+ ... 'column_2': np.random.choice(['a', 'b', 'c'], 10 ** 6),
+ ... 'column_3': np.random.choice(['a', 'b', 'c'], 10 ** 6)
+ ... })
+ >>> df.info()
+ <class 'pandas.core.frame.DataFrame'>
+ RangeIndex: 1000000 entries, 0 to 999999
+ Data columns (total 3 columns):
+ # Column Non-Null Count Dtype
+ --- ------ -------------- -----
+ 0 column_1 1000000 non-null object
+ 1 column_2 1000000 non-null object
+ 2 column_3 1000000 non-null object
+ dtypes: object(3)
+ memory usage: 22.9+ MB
+
+ >>> df.info(memory_usage='deep')
+ <class 'pandas.core.frame.DataFrame'>
+ RangeIndex: 1000000 entries, 0 to 999999
+ Data columns (total 3 columns):
+ # Column Non-Null Count Dtype
+ --- ------ -------------- -----
+ 0 column_1 1000000 non-null object
+ 1 column_2 1000000 non-null object
+ 2 column_3 1000000 non-null object
+ dtypes: object(3)
+ memory usage: 188.8 MB
+ """
+ if buf is None: # pragma: no cover
+ buf = sys.stdout
+
+ lines = []
+
+ lines.append(str(type(data)))
+ lines.append(data.index._summary())
+
+ if len(data.columns) == 0:
+ lines.append(f"Empty {type(data).__name__}")
+ fmt.buffer_put_lines(buf, lines)
+ return
+
+ cols = data.columns
+ col_count = len(data.columns)
+
+ # hack
+ if max_cols is None:
+ max_cols = get_option("display.max_info_columns", len(data.columns) + 1)
+
+ max_rows = get_option("display.max_info_rows", len(data) + 1)
+
+ if null_counts is None:
+ show_counts = (col_count <= max_cols) and (len(data) < max_rows)
+ else:
+ show_counts = null_counts
+ exceeds_info_cols = col_count > max_cols
+
+ def _verbose_repr():
+ lines.append(f"Data columns (total {len(data.columns)} columns):")
+
+ id_head = " # "
+ column_head = "Column"
+ col_space = 2
+
+ max_col = max(len(pprint_thing(k)) for k in cols)
+ len_column = len(pprint_thing(column_head))
+ space = max(max_col, len_column) + col_space
+
+ max_id = len(pprint_thing(col_count))
+ len_id = len(pprint_thing(id_head))
+ space_num = max(max_id, len_id) + col_space
+
+ header = _put_str(id_head, space_num) + _put_str(column_head, space)
+ if show_counts:
+ counts = data.count()
+ if len(cols) != len(counts): # pragma: no cover
+ raise AssertionError(
+ f"Columns must equal counts ({len(cols)} != {len(counts)})"
+ )
+ count_header = "Non-Null Count"
+ len_count = len(count_header)
+ non_null = " non-null"
+ max_count = max(len(pprint_thing(k)) for k in counts) + len(non_null)
+ space_count = max(len_count, max_count) + col_space
+ count_temp = "{count}" + non_null
+ else:
+ count_header = ""
+ space_count = len(count_header)
+ len_count = space_count
+ count_temp = "{count}"
+
+ dtype_header = "Dtype"
+ len_dtype = len(dtype_header)
+ max_dtypes = max(len(pprint_thing(k)) for k in data.dtypes)
+ space_dtype = max(len_dtype, max_dtypes)
+ header += _put_str(count_header, space_count) + _put_str(
+ dtype_header, space_dtype
+ )
+
+ lines.append(header)
+ lines.append(
+ _put_str("-" * len_id, space_num)
+ + _put_str("-" * len_column, space)
+ + _put_str("-" * len_count, space_count)
+ + _put_str("-" * len_dtype, space_dtype)
+ )
+
+ for i, col in enumerate(data.columns):
+ dtype = data.dtypes.iloc[i]
+ col = pprint_thing(col)
+
+ line_no = _put_str(f" {i}", space_num)
+ count = ""
+ if show_counts:
+ count = counts.iloc[i]
+
+ lines.append(
+ line_no
+ + _put_str(col, space)
+ + _put_str(count_temp.format(count=count), space_count)
+ + _put_str(dtype, space_dtype)
+ )
+
+ def _non_verbose_repr():
+ lines.append(data.columns._summary(name="Columns"))
+
+ def _sizeof_fmt(num, size_qualifier):
+ # returns size in human readable format
+ for x in ["bytes", "KB", "MB", "GB", "TB"]:
+ if num < 1024.0:
+ return f"{num:3.1f}{size_qualifier} {x}"
+ num /= 1024.0
+ return f"{num:3.1f}{size_qualifier} PB"
+
+ if verbose:
+ _verbose_repr()
+    elif verbose is False:  # specifically set to False, not just None
+ _non_verbose_repr()
+ else:
+ if exceeds_info_cols:
+ _non_verbose_repr()
+ else:
+ _verbose_repr()
+
+ counts = data._data.get_dtype_counts()
+ dtypes = [f"{k[0]}({k[1]:d})" for k in sorted(counts.items())]
+ lines.append(f"dtypes: {', '.join(dtypes)}")
+
+ if memory_usage is None:
+ memory_usage = get_option("display.memory_usage")
+ if memory_usage:
+ # append memory usage of df to display
+ size_qualifier = ""
+ if memory_usage == "deep":
+ deep = True
+ else:
+ # size_qualifier is just a best effort; not guaranteed to catch
+ # all cases (e.g., it misses categorical data even with object
+ # categories)
+ deep = False
+ if "object" in counts or data.index._is_memory_usage_qualified():
+ size_qualifier = "+"
+ mem_usage = data.memory_usage(index=True, deep=deep).sum()
+ lines.append(f"memory usage: {_sizeof_fmt(mem_usage, size_qualifier)}\n")
+ fmt.buffer_put_lines(buf, lines)
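Since the move is meant to be a pure refactor, behaviour through the public API should be unchanged; a quick sanity sketch (mine, not part of the PR) captures the output via the `buf` argument:

```python
from io import StringIO
import pandas as pd

df = pd.DataFrame({"a": [1, 2, 3], "b": ["x", "y", "z"]})
buf = StringIO()
df.info(buf=buf)          # summary goes to the buffer instead of stdout
text = buf.getvalue()
print("RangeIndex: 3 entries" in text)   # True: index summary line
print("dtypes:" in text)                 # True: dtype-count footer line
```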
diff --git a/pandas/tests/frame/test_repr_info.py b/pandas/tests/frame/test_repr_info.py
index 4ac009ef508c4..c5d4d59adbc35 100644
--- a/pandas/tests/frame/test_repr_info.py
+++ b/pandas/tests/frame/test_repr_info.py
@@ -1,16 +1,10 @@
from datetime import datetime, timedelta
from io import StringIO
-import re
-import sys
-import textwrap
import warnings
import numpy as np
import pytest
-from pandas.compat import PYPY
-
-import pandas as pd
from pandas import (
Categorical,
DataFrame,
@@ -192,357 +186,6 @@ def test_latex_repr(self):
# GH 12182
assert df._repr_latex_() is None
- def test_info(self, float_frame, datetime_frame):
- io = StringIO()
- float_frame.info(buf=io)
- datetime_frame.info(buf=io)
-
- frame = DataFrame(np.random.randn(5, 3))
-
- frame.info()
- frame.info(verbose=False)
-
- def test_info_verbose(self):
- buf = StringIO()
- size = 1001
- start = 5
- frame = DataFrame(np.random.randn(3, size))
- frame.info(verbose=True, buf=buf)
-
- res = buf.getvalue()
- header = " # Column Dtype \n--- ------ ----- "
- assert header in res
-
- frame.info(verbose=True, buf=buf)
- buf.seek(0)
- lines = buf.readlines()
- assert len(lines) > 0
-
- for i, line in enumerate(lines):
- if i >= start and i < start + size:
- line_nr = f" {i - start} "
- assert line.startswith(line_nr)
-
- def test_info_memory(self):
- # https://github.com/pandas-dev/pandas/issues/21056
- df = pd.DataFrame({"a": pd.Series([1, 2], dtype="i8")})
- buf = StringIO()
- df.info(buf=buf)
- result = buf.getvalue()
- bytes = float(df.memory_usage().sum())
-
- expected = textwrap.dedent(
- f"""\
- <class 'pandas.core.frame.DataFrame'>
- RangeIndex: 2 entries, 0 to 1
- Data columns (total 1 columns):
- # Column Non-Null Count Dtype
- --- ------ -------------- -----
- 0 a 2 non-null int64
- dtypes: int64(1)
- memory usage: {bytes} bytes
- """
- )
-
- assert result == expected
-
- def test_info_wide(self):
- from pandas import set_option, reset_option
-
- io = StringIO()
- df = DataFrame(np.random.randn(5, 101))
- df.info(buf=io)
-
- io = StringIO()
- df.info(buf=io, max_cols=101)
- rs = io.getvalue()
- assert len(rs.splitlines()) > 100
- xp = rs
-
- set_option("display.max_info_columns", 101)
- io = StringIO()
- df.info(buf=io)
- assert rs == xp
- reset_option("display.max_info_columns")
-
- def test_info_duplicate_columns(self):
- io = StringIO()
-
- # it works!
- frame = DataFrame(np.random.randn(1500, 4), columns=["a", "a", "b", "b"])
- frame.info(buf=io)
-
- def test_info_duplicate_columns_shows_correct_dtypes(self):
- # GH11761
- io = StringIO()
-
- frame = DataFrame([[1, 2.0]], columns=["a", "a"])
- frame.info(buf=io)
- io.seek(0)
- lines = io.readlines()
- assert " 0 a 1 non-null int64 \n" == lines[5]
- assert " 1 a 1 non-null float64\n" == lines[6]
-
- def test_info_shows_column_dtypes(self):
- dtypes = [
- "int64",
- "float64",
- "datetime64[ns]",
- "timedelta64[ns]",
- "complex128",
- "object",
- "bool",
- ]
- data = {}
- n = 10
- for i, dtype in enumerate(dtypes):
- data[i] = np.random.randint(2, size=n).astype(dtype)
- df = DataFrame(data)
- buf = StringIO()
- df.info(buf=buf)
- res = buf.getvalue()
- header = (
- " # Column Non-Null Count Dtype \n"
- "--- ------ -------------- ----- "
- )
- assert header in res
- for i, dtype in enumerate(dtypes):
- name = f" {i:d} {i:d} {n:d} non-null {dtype}"
- assert name in res
-
- def test_info_max_cols(self):
- df = DataFrame(np.random.randn(10, 5))
- for len_, verbose in [(5, None), (5, False), (12, True)]:
- # For verbose always ^ setting ^ summarize ^ full output
- with option_context("max_info_columns", 4):
- buf = StringIO()
- df.info(buf=buf, verbose=verbose)
- res = buf.getvalue()
- assert len(res.strip().split("\n")) == len_
-
- for len_, verbose in [(12, None), (5, False), (12, True)]:
-
- # max_cols not exceeded
- with option_context("max_info_columns", 5):
- buf = StringIO()
- df.info(buf=buf, verbose=verbose)
- res = buf.getvalue()
- assert len(res.strip().split("\n")) == len_
-
- for len_, max_cols in [(12, 5), (5, 4)]:
- # setting truncates
- with option_context("max_info_columns", 4):
- buf = StringIO()
- df.info(buf=buf, max_cols=max_cols)
- res = buf.getvalue()
- assert len(res.strip().split("\n")) == len_
-
- # setting wouldn't truncate
- with option_context("max_info_columns", 5):
- buf = StringIO()
- df.info(buf=buf, max_cols=max_cols)
- res = buf.getvalue()
- assert len(res.strip().split("\n")) == len_
-
- def test_info_memory_usage(self):
- # Ensure memory usage is displayed, when asserted, on the last line
- dtypes = [
- "int64",
- "float64",
- "datetime64[ns]",
- "timedelta64[ns]",
- "complex128",
- "object",
- "bool",
- ]
- data = {}
- n = 10
- for i, dtype in enumerate(dtypes):
- data[i] = np.random.randint(2, size=n).astype(dtype)
- df = DataFrame(data)
- buf = StringIO()
-
- # display memory usage case
- df.info(buf=buf, memory_usage=True)
- res = buf.getvalue().splitlines()
- assert "memory usage: " in res[-1]
-
- # do not display memory usage case
- df.info(buf=buf, memory_usage=False)
- res = buf.getvalue().splitlines()
- assert "memory usage: " not in res[-1]
-
- df.info(buf=buf, memory_usage=True)
- res = buf.getvalue().splitlines()
-
- # memory usage is a lower bound, so print it as XYZ+ MB
- assert re.match(r"memory usage: [^+]+\+", res[-1])
-
- df.iloc[:, :5].info(buf=buf, memory_usage=True)
- res = buf.getvalue().splitlines()
-
- # excluded column with object dtype, so estimate is accurate
- assert not re.match(r"memory usage: [^+]+\+", res[-1])
-
- # Test a DataFrame with duplicate columns
- dtypes = ["int64", "int64", "int64", "float64"]
- data = {}
- n = 100
- for i, dtype in enumerate(dtypes):
- data[i] = np.random.randint(2, size=n).astype(dtype)
- df = DataFrame(data)
- df.columns = dtypes
-
- df_with_object_index = pd.DataFrame({"a": [1]}, index=["foo"])
- df_with_object_index.info(buf=buf, memory_usage=True)
- res = buf.getvalue().splitlines()
- assert re.match(r"memory usage: [^+]+\+", res[-1])
-
- df_with_object_index.info(buf=buf, memory_usage="deep")
- res = buf.getvalue().splitlines()
- assert re.match(r"memory usage: [^+]+$", res[-1])
-
- # Ensure df size is as expected
- # (cols * rows * bytes) + index size
- df_size = df.memory_usage().sum()
- exp_size = len(dtypes) * n * 8 + df.index.nbytes
- assert df_size == exp_size
-
- # Ensure number of cols in memory_usage is the same as df
- size_df = np.size(df.columns.values) + 1 # index=True; default
- assert size_df == np.size(df.memory_usage())
-
- # assert deep works only on object
- assert df.memory_usage().sum() == df.memory_usage(deep=True).sum()
-
- # test for validity
- DataFrame(1, index=["a"], columns=["A"]).memory_usage(index=True)
- DataFrame(1, index=["a"], columns=["A"]).index.nbytes
- df = DataFrame(
- data=1,
- index=pd.MultiIndex.from_product([["a"], range(1000)]),
- columns=["A"],
- )
- df.index.nbytes
- df.memory_usage(index=True)
- df.index.values.nbytes
-
- mem = df.memory_usage(deep=True).sum()
- assert mem > 0
-
- @pytest.mark.skipif(PYPY, reason="on PyPy deep=True doesn't change result")
- def test_info_memory_usage_deep_not_pypy(self):
- df_with_object_index = pd.DataFrame({"a": [1]}, index=["foo"])
- assert (
- df_with_object_index.memory_usage(index=True, deep=True).sum()
- > df_with_object_index.memory_usage(index=True).sum()
- )
-
- df_object = pd.DataFrame({"a": ["a"]})
- assert df_object.memory_usage(deep=True).sum() > df_object.memory_usage().sum()
-
- @pytest.mark.skipif(not PYPY, reason="on PyPy deep=True does not change result")
- def test_info_memory_usage_deep_pypy(self):
- df_with_object_index = pd.DataFrame({"a": [1]}, index=["foo"])
- assert (
- df_with_object_index.memory_usage(index=True, deep=True).sum()
- == df_with_object_index.memory_usage(index=True).sum()
- )
-
- df_object = pd.DataFrame({"a": ["a"]})
- assert df_object.memory_usage(deep=True).sum() == df_object.memory_usage().sum()
-
- @pytest.mark.skipif(PYPY, reason="PyPy getsizeof() fails by design")
- def test_usage_via_getsizeof(self):
- df = DataFrame(
- data=1,
- index=pd.MultiIndex.from_product([["a"], range(1000)]),
- columns=["A"],
- )
- mem = df.memory_usage(deep=True).sum()
- # sys.getsizeof will call the .memory_usage with
- # deep=True, and add on some GC overhead
- diff = mem - sys.getsizeof(df)
- assert abs(diff) < 100
-
- def test_info_memory_usage_qualified(self):
-
- buf = StringIO()
- df = DataFrame(1, columns=list("ab"), index=[1, 2, 3])
- df.info(buf=buf)
- assert "+" not in buf.getvalue()
-
- buf = StringIO()
- df = DataFrame(1, columns=list("ab"), index=list("ABC"))
- df.info(buf=buf)
- assert "+" in buf.getvalue()
-
- buf = StringIO()
- df = DataFrame(
- 1,
- columns=list("ab"),
- index=pd.MultiIndex.from_product([range(3), range(3)]),
- )
- df.info(buf=buf)
- assert "+" not in buf.getvalue()
-
- buf = StringIO()
- df = DataFrame(
- 1,
- columns=list("ab"),
- index=pd.MultiIndex.from_product([range(3), ["foo", "bar"]]),
- )
- df.info(buf=buf)
- assert "+" in buf.getvalue()
-
- def test_info_memory_usage_bug_on_multiindex(self):
- # GH 14308
- # memory usage introspection should not materialize .values
-
- from string import ascii_uppercase as uppercase
-
- def memory_usage(f):
- return f.memory_usage(deep=True).sum()
-
- N = 100
- M = len(uppercase)
- index = pd.MultiIndex.from_product(
- [list(uppercase), pd.date_range("20160101", periods=N)],
- names=["id", "date"],
- )
- df = DataFrame({"value": np.random.randn(N * M)}, index=index)
-
- unstacked = df.unstack("id")
- assert df.values.nbytes == unstacked.values.nbytes
- assert memory_usage(df) > memory_usage(unstacked)
-
- # high upper bound
- assert memory_usage(unstacked) - memory_usage(df) < 2000
-
- def test_info_categorical(self):
- # GH14298
- idx = pd.CategoricalIndex(["a", "b"])
- df = pd.DataFrame(np.zeros((2, 2)), index=idx, columns=idx)
-
- buf = StringIO()
- df.info(buf=buf)
-
- def test_info_categorical_column(self):
-
- # make sure it works
- n = 2500
- df = DataFrame({"int64": np.random.randint(100, size=n)})
- df["category"] = Series(
- np.array(list("abcdefghij")).take(np.random.randint(0, 10, size=n))
- ).astype("category")
- df.isna()
- buf = StringIO()
- df.info(buf=buf)
-
- df2 = df[df["category"] == "d"]
- buf = StringIO()
- df2.info(buf=buf)
-
def test_repr_categorical_dates_periods(self):
# normal DataFrame
dt = date_range("2011-01-01 09:00", freq="H", periods=5, tz="US/Eastern")
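Several of the relocated tests drive the truncated-vs-verbose switch through `display.max_info_columns`; a minimal sketch of that behaviour using only public API (my own example data):

```python
from io import StringIO
import pandas as pd

df = pd.DataFrame([[1.0] * 6], columns=list("abcdef"))
with pd.option_context("display.max_info_columns", 4):
    buf = StringIO()
    df.info(buf=buf)      # 6 columns exceed the limit of 4 -> truncated output
    summary = buf.getvalue()
print("Columns: 6 entries" in summary)   # True: truncated form lists only the column span
```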
diff --git a/pandas/tests/io/formats/test_info.py b/pandas/tests/io/formats/test_info.py
new file mode 100644
index 0000000000000..877bd1650ae60
--- /dev/null
+++ b/pandas/tests/io/formats/test_info.py
@@ -0,0 +1,405 @@
+from io import StringIO
+import re
+from string import ascii_uppercase as uppercase
+import sys
+import textwrap
+
+import numpy as np
+import pytest
+
+from pandas.compat import PYPY
+
+from pandas import (
+ CategoricalIndex,
+ DataFrame,
+ MultiIndex,
+ Series,
+ date_range,
+ option_context,
+ reset_option,
+ set_option,
+)
+import pandas._testing as tm
+
+
+@pytest.fixture
+def datetime_frame():
+ """
+ Fixture for DataFrame of floats with DatetimeIndex
+
+ Columns are ['A', 'B', 'C', 'D']
+
+ A B C D
+ 2000-01-03 -1.122153 0.468535 0.122226 1.693711
+ 2000-01-04 0.189378 0.486100 0.007864 -1.216052
+ 2000-01-05 0.041401 -0.835752 -0.035279 -0.414357
+ 2000-01-06 0.430050 0.894352 0.090719 0.036939
+ 2000-01-07 -0.620982 -0.668211 -0.706153 1.466335
+ 2000-01-10 -0.752633 0.328434 -0.815325 0.699674
+ 2000-01-11 -2.236969 0.615737 -0.829076 -1.196106
+ ... ... ... ... ...
+ 2000-02-03 1.642618 -0.579288 0.046005 1.385249
+ 2000-02-04 -0.544873 -1.160962 -0.284071 -1.418351
+ 2000-02-07 -2.656149 -0.601387 1.410148 0.444150
+ 2000-02-08 -1.201881 -1.289040 0.772992 -1.445300
+ 2000-02-09 1.377373 0.398619 1.008453 -0.928207
+ 2000-02-10 0.473194 -0.636677 0.984058 0.511519
+ 2000-02-11 -0.965556 0.408313 -1.312844 -0.381948
+
+ [30 rows x 4 columns]
+ """
+ return DataFrame(tm.getTimeSeriesData())
+
+
+def test_info_categorical_column():
+
+ # make sure it works
+ n = 2500
+ df = DataFrame({"int64": np.random.randint(100, size=n)})
+ df["category"] = Series(
+ np.array(list("abcdefghij")).take(np.random.randint(0, 10, size=n))
+ ).astype("category")
+ df.isna()
+ buf = StringIO()
+ df.info(buf=buf)
+
+ df2 = df[df["category"] == "d"]
+ buf = StringIO()
+ df2.info(buf=buf)
+
+
+def test_info(float_frame, datetime_frame):
+ io = StringIO()
+ float_frame.info(buf=io)
+ datetime_frame.info(buf=io)
+
+ frame = DataFrame(np.random.randn(5, 3))
+
+ frame.info()
+ frame.info(verbose=False)
+
+
+def test_info_verbose():
+ buf = StringIO()
+ size = 1001
+ start = 5
+ frame = DataFrame(np.random.randn(3, size))
+ frame.info(verbose=True, buf=buf)
+
+ res = buf.getvalue()
+ header = " # Column Dtype \n--- ------ ----- "
+ assert header in res
+
+ frame.info(verbose=True, buf=buf)
+ buf.seek(0)
+ lines = buf.readlines()
+ assert len(lines) > 0
+
+ for i, line in enumerate(lines):
+ if i >= start and i < start + size:
+ line_nr = f" {i - start} "
+ assert line.startswith(line_nr)
+
+
+def test_info_memory():
+ # https://github.com/pandas-dev/pandas/issues/21056
+ df = DataFrame({"a": Series([1, 2], dtype="i8")})
+ buf = StringIO()
+ df.info(buf=buf)
+ result = buf.getvalue()
+ bytes = float(df.memory_usage().sum())
+ expected = textwrap.dedent(
+ f"""\
+ <class 'pandas.core.frame.DataFrame'>
+ RangeIndex: 2 entries, 0 to 1
+ Data columns (total 1 columns):
+ # Column Non-Null Count Dtype
+ --- ------ -------------- -----
+ 0 a 2 non-null int64
+ dtypes: int64(1)
+ memory usage: {bytes} bytes
+ """
+ )
+ assert result == expected
+
+
+def test_info_wide():
+ io = StringIO()
+ df = DataFrame(np.random.randn(5, 101))
+ df.info(buf=io)
+
+ io = StringIO()
+ df.info(buf=io, max_cols=101)
+ rs = io.getvalue()
+ assert len(rs.splitlines()) > 100
+ xp = rs
+
+ set_option("display.max_info_columns", 101)
+ io = StringIO()
+ df.info(buf=io)
+ assert rs == xp
+ reset_option("display.max_info_columns")
+
+
+def test_info_duplicate_columns():
+ io = StringIO()
+
+ # it works!
+ frame = DataFrame(np.random.randn(1500, 4), columns=["a", "a", "b", "b"])
+ frame.info(buf=io)
+
+
+def test_info_duplicate_columns_shows_correct_dtypes():
+ # GH11761
+ io = StringIO()
+
+ frame = DataFrame([[1, 2.0]], columns=["a", "a"])
+ frame.info(buf=io)
+ io.seek(0)
+ lines = io.readlines()
+ assert " 0 a 1 non-null int64 \n" == lines[5]
+ assert " 1 a 1 non-null float64\n" == lines[6]
+
+
+def test_info_shows_column_dtypes():
+ dtypes = [
+ "int64",
+ "float64",
+ "datetime64[ns]",
+ "timedelta64[ns]",
+ "complex128",
+ "object",
+ "bool",
+ ]
+ data = {}
+ n = 10
+ for i, dtype in enumerate(dtypes):
+ data[i] = np.random.randint(2, size=n).astype(dtype)
+ df = DataFrame(data)
+ buf = StringIO()
+ df.info(buf=buf)
+ res = buf.getvalue()
+ header = (
+ " # Column Non-Null Count Dtype \n"
+ "--- ------ -------------- ----- "
+ )
+ assert header in res
+ for i, dtype in enumerate(dtypes):
+ name = f" {i:d} {i:d} {n:d} non-null {dtype}"
+ assert name in res
+
+
+def test_info_max_cols():
+ df = DataFrame(np.random.randn(10, 5))
+ for len_, verbose in [(5, None), (5, False), (12, True)]:
+ # For verbose always ^ setting ^ summarize ^ full output
+ with option_context("max_info_columns", 4):
+ buf = StringIO()
+ df.info(buf=buf, verbose=verbose)
+ res = buf.getvalue()
+ assert len(res.strip().split("\n")) == len_
+
+ for len_, verbose in [(12, None), (5, False), (12, True)]:
+
+ # max_cols not exceeded
+ with option_context("max_info_columns", 5):
+ buf = StringIO()
+ df.info(buf=buf, verbose=verbose)
+ res = buf.getvalue()
+ assert len(res.strip().split("\n")) == len_
+
+ for len_, max_cols in [(12, 5), (5, 4)]:
+ # setting truncates
+ with option_context("max_info_columns", 4):
+ buf = StringIO()
+ df.info(buf=buf, max_cols=max_cols)
+ res = buf.getvalue()
+ assert len(res.strip().split("\n")) == len_
+
+ # setting wouldn't truncate
+ with option_context("max_info_columns", 5):
+ buf = StringIO()
+ df.info(buf=buf, max_cols=max_cols)
+ res = buf.getvalue()
+ assert len(res.strip().split("\n")) == len_
+
+
+def test_info_memory_usage():
+ # Ensure memory usage is displayed, when asserted, on the last line
+ dtypes = [
+ "int64",
+ "float64",
+ "datetime64[ns]",
+ "timedelta64[ns]",
+ "complex128",
+ "object",
+ "bool",
+ ]
+ data = {}
+ n = 10
+ for i, dtype in enumerate(dtypes):
+ data[i] = np.random.randint(2, size=n).astype(dtype)
+ df = DataFrame(data)
+ buf = StringIO()
+
+ # display memory usage case
+ df.info(buf=buf, memory_usage=True)
+ res = buf.getvalue().splitlines()
+ assert "memory usage: " in res[-1]
+
+ # do not display memory usage case
+ df.info(buf=buf, memory_usage=False)
+ res = buf.getvalue().splitlines()
+ assert "memory usage: " not in res[-1]
+
+ df.info(buf=buf, memory_usage=True)
+ res = buf.getvalue().splitlines()
+
+ # memory usage is a lower bound, so print it as XYZ+ MB
+ assert re.match(r"memory usage: [^+]+\+", res[-1])
+
+ df.iloc[:, :5].info(buf=buf, memory_usage=True)
+ res = buf.getvalue().splitlines()
+
+ # excluded column with object dtype, so estimate is accurate
+ assert not re.match(r"memory usage: [^+]+\+", res[-1])
+
+ # Test a DataFrame with duplicate columns
+ dtypes = ["int64", "int64", "int64", "float64"]
+ data = {}
+ n = 100
+ for i, dtype in enumerate(dtypes):
+ data[i] = np.random.randint(2, size=n).astype(dtype)
+ df = DataFrame(data)
+ df.columns = dtypes
+
+ df_with_object_index = DataFrame({"a": [1]}, index=["foo"])
+ df_with_object_index.info(buf=buf, memory_usage=True)
+ res = buf.getvalue().splitlines()
+ assert re.match(r"memory usage: [^+]+\+", res[-1])
+
+ df_with_object_index.info(buf=buf, memory_usage="deep")
+ res = buf.getvalue().splitlines()
+ assert re.match(r"memory usage: [^+]+$", res[-1])
+
+ # Ensure df size is as expected
+ # (cols * rows * bytes) + index size
+ df_size = df.memory_usage().sum()
+ exp_size = len(dtypes) * n * 8 + df.index.nbytes
+ assert df_size == exp_size
+
+ # Ensure number of cols in memory_usage is the same as df
+ size_df = np.size(df.columns.values) + 1 # index=True; default
+ assert size_df == np.size(df.memory_usage())
+
+ # assert deep works only on object
+ assert df.memory_usage().sum() == df.memory_usage(deep=True).sum()
+
+ # test for validity
+ DataFrame(1, index=["a"], columns=["A"]).memory_usage(index=True)
+ DataFrame(1, index=["a"], columns=["A"]).index.nbytes
+ df = DataFrame(
+ data=1, index=MultiIndex.from_product([["a"], range(1000)]), columns=["A"],
+ )
+ df.index.nbytes
+ df.memory_usage(index=True)
+ df.index.values.nbytes
+
+ mem = df.memory_usage(deep=True).sum()
+ assert mem > 0
+
+
+@pytest.mark.skipif(PYPY, reason="on PyPy deep=True doesn't change result")
+def test_info_memory_usage_deep_not_pypy():
+ df_with_object_index = DataFrame({"a": [1]}, index=["foo"])
+ assert (
+ df_with_object_index.memory_usage(index=True, deep=True).sum()
+ > df_with_object_index.memory_usage(index=True).sum()
+ )
+
+ df_object = DataFrame({"a": ["a"]})
+ assert df_object.memory_usage(deep=True).sum() > df_object.memory_usage().sum()
+
+
+@pytest.mark.skipif(not PYPY, reason="on PyPy deep=True does not change result")
+def test_info_memory_usage_deep_pypy():
+ df_with_object_index = DataFrame({"a": [1]}, index=["foo"])
+ assert (
+ df_with_object_index.memory_usage(index=True, deep=True).sum()
+ == df_with_object_index.memory_usage(index=True).sum()
+ )
+
+ df_object = DataFrame({"a": ["a"]})
+ assert df_object.memory_usage(deep=True).sum() == df_object.memory_usage().sum()
+
+
+@pytest.mark.skipif(PYPY, reason="PyPy getsizeof() fails by design")
+def test_usage_via_getsizeof():
+ df = DataFrame(
+ data=1, index=MultiIndex.from_product([["a"], range(1000)]), columns=["A"],
+ )
+ mem = df.memory_usage(deep=True).sum()
+ # sys.getsizeof will call the .memory_usage with
+ # deep=True, and add on some GC overhead
+ diff = mem - sys.getsizeof(df)
+ assert abs(diff) < 100
+
+
+def test_info_memory_usage_qualified():
+
+ buf = StringIO()
+ df = DataFrame(1, columns=list("ab"), index=[1, 2, 3])
+ df.info(buf=buf)
+ assert "+" not in buf.getvalue()
+
+ buf = StringIO()
+ df = DataFrame(1, columns=list("ab"), index=list("ABC"))
+ df.info(buf=buf)
+ assert "+" in buf.getvalue()
+
+ buf = StringIO()
+ df = DataFrame(
+ 1, columns=list("ab"), index=MultiIndex.from_product([range(3), range(3)]),
+ )
+ df.info(buf=buf)
+ assert "+" not in buf.getvalue()
+
+ buf = StringIO()
+ df = DataFrame(
+ 1,
+ columns=list("ab"),
+ index=MultiIndex.from_product([range(3), ["foo", "bar"]]),
+ )
+ df.info(buf=buf)
+ assert "+" in buf.getvalue()
+
+
+def test_info_memory_usage_bug_on_multiindex():
+ # GH 14308
+ # memory usage introspection should not materialize .values
+
+ def memory_usage(f):
+ return f.memory_usage(deep=True).sum()
+
+ N = 100
+ M = len(uppercase)
+ index = MultiIndex.from_product(
+ [list(uppercase), date_range("20160101", periods=N)], names=["id", "date"],
+ )
+ df = DataFrame({"value": np.random.randn(N * M)}, index=index)
+
+ unstacked = df.unstack("id")
+ assert df.values.nbytes == unstacked.values.nbytes
+ assert memory_usage(df) > memory_usage(unstacked)
+
+ # high upper bound
+ assert memory_usage(unstacked) - memory_usage(df) < 2000
+
+
+def test_info_categorical():
+ # GH14298
+ idx = CategoricalIndex(["a", "b"])
+ df = DataFrame(np.zeros((2, 2)), index=idx, columns=idx)
+
+ buf = StringIO()
+ df.info(buf=buf)
| precursor to xref #31796. Have moved `DataFrame.info` code into `pandas/io/formats/info.py`, along with tests | https://api.github.com/repos/pandas-dev/pandas/pulls/31876 | 2020-02-11T13:50:20Z | 2020-02-19T00:45:03Z | 2020-02-19T00:45:03Z | 2020-02-25T17:11:47Z |
DOC: add redirects from Rolling to rolling.Rolling | diff --git a/doc/redirects.csv b/doc/redirects.csv
index ef93955c14fe6..3669ff4b7cc0b 100644
--- a/doc/redirects.csv
+++ b/doc/redirects.csv
@@ -49,7 +49,25 @@ internals,development/internals
# api moved function
reference/api/pandas.io.json.json_normalize,pandas.json_normalize
-# api rename
+# rename due to refactors
+reference/api/pandas.core.window.Rolling,pandas.core.window.rolling.Rolling
+reference/api/pandas.core.window.Rolling.aggregate,pandas.core.window.rolling.Rolling.aggregate
+reference/api/pandas.core.window.Rolling.apply,pandas.core.window.rolling.Rolling.apply
+reference/api/pandas.core.window.Rolling.corr,pandas.core.window.rolling.Rolling.corr
+reference/api/pandas.core.window.Rolling.count,pandas.core.window.rolling.Rolling.count
+reference/api/pandas.core.window.Rolling.cov,pandas.core.window.rolling.Rolling.cov
+reference/api/pandas.core.window.Rolling.kurt,pandas.core.window.rolling.Rolling.kurt
+reference/api/pandas.core.window.Rolling.max,pandas.core.window.rolling.Rolling.max
+reference/api/pandas.core.window.Rolling.mean,pandas.core.window.rolling.Rolling.mean
+reference/api/pandas.core.window.Rolling.median,pandas.core.window.rolling.Rolling.median
+reference/api/pandas.core.window.Rolling.min,pandas.core.window.rolling.Rolling.min
+reference/api/pandas.core.window.Rolling.quantile,pandas.core.window.rolling.Rolling.quantile
+reference/api/pandas.core.window.Rolling.skew,pandas.core.window.rolling.Rolling.skew
+reference/api/pandas.core.window.Rolling.std,pandas.core.window.rolling.Rolling.std
+reference/api/pandas.core.window.Rolling.sum,pandas.core.window.rolling.Rolling.sum
+reference/api/pandas.core.window.Rolling.var,pandas.core.window.rolling.Rolling.var
+
+# api url change (generated -> reference/api rename)
api,reference/index
generated/pandas.api.extensions.ExtensionArray.argsort,../reference/api/pandas.api.extensions.ExtensionArray.argsort
generated/pandas.api.extensions.ExtensionArray.astype,../reference/api/pandas.api.extensions.ExtensionArray.astype
| - [x] closes #31762 | https://api.github.com/repos/pandas-dev/pandas/pulls/31875 | 2020-02-11T13:48:37Z | 2020-03-10T08:35:48Z | 2020-03-10T08:35:48Z | 2020-03-10T08:35:57Z |
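The redirect rows added by this PR are mechanical: each `pandas.core.window.Rolling.<method>` doc page maps to the same path under `pandas.core.window.rolling.Rolling`. A small sketch of generating those CSV rows programmatically — the method list is copied from the diff, but the helper itself is illustrative and not part of the PR:

```python
# Methods listed in the diff whose doc pages moved under the new module path.
methods = [
    "aggregate", "apply", "corr", "count", "cov", "kurt", "max", "mean",
    "median", "min", "quantile", "skew", "std", "sum", "var",
]

old = "reference/api/pandas.core.window.Rolling"
new = "pandas.core.window.rolling.Rolling"

# One "old,new" CSV row for the class itself, then one per method.
rows = [f"{old},{new}"] + [f"{old}.{m},{new}.{m}" for m in methods]
```

Generating the rows this way avoids copy-paste drift between the old and new paths.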
CLN: f-string formatting | diff --git a/pandas/tests/indexes/common.py b/pandas/tests/indexes/common.py
index da27057a783ab..2073aa0727809 100644
--- a/pandas/tests/indexes/common.py
+++ b/pandas/tests/indexes/common.py
@@ -98,7 +98,7 @@ def test_shift(self):
# GH8083 test the base class for shift
idx = self.create_index()
- msg = "Not supported for type {}".format(type(idx).__name__)
+ msg = f"Not supported for type {type(idx).__name__}"
with pytest.raises(NotImplementedError, match=msg):
idx.shift(1)
with pytest.raises(NotImplementedError, match=msg):
@@ -808,7 +808,7 @@ def test_map_dictlike(self, mapper):
index = self.create_index()
if isinstance(index, (pd.CategoricalIndex, pd.IntervalIndex)):
- pytest.skip("skipping tests for {}".format(type(index)))
+ pytest.skip(f"skipping tests for {type(index)}")
identity = mapper(index.values, index)
diff --git a/pandas/tests/indexes/datetimelike.py b/pandas/tests/indexes/datetimelike.py
index 3c72d34d84b28..ba10976a67e9a 100644
--- a/pandas/tests/indexes/datetimelike.py
+++ b/pandas/tests/indexes/datetimelike.py
@@ -36,7 +36,7 @@ def test_str(self):
# test the string repr
idx = self.create_index()
idx.name = "foo"
- assert not "length={}".format(len(idx)) in str(idx)
+ assert not (f"length={len(idx)}" in str(idx))
assert "'foo'" in str(idx)
assert type(idx).__name__ in str(idx)
@@ -44,7 +44,7 @@ def test_str(self):
if idx.tz is not None:
assert idx.tz in str(idx)
if hasattr(idx, "freq"):
- assert "freq='{idx.freqstr}'".format(idx=idx) in str(idx)
+ assert f"freq='{idx.freqstr}'" in str(idx)
def test_view(self):
i = self.create_index()
diff --git a/pandas/tests/indexing/common.py b/pandas/tests/indexing/common.py
index 4804172a22529..6f6981a30d7e4 100644
--- a/pandas/tests/indexing/common.py
+++ b/pandas/tests/indexing/common.py
@@ -8,7 +8,7 @@
def _mklbl(prefix, n):
- return ["{prefix}{i}".format(prefix=prefix, i=i) for i in range(n)]
+ return [f"{prefix}{i}" for i in range(n)]
def _axify(obj, key, axis):
@@ -96,7 +96,7 @@ def setup_method(self, method):
for kind in self._kinds:
d = dict()
for typ in self._typs:
- d[typ] = getattr(self, "{kind}_{typ}".format(kind=kind, typ=typ))
+ d[typ] = getattr(self, f"{kind}_{typ}")
setattr(self, kind, d)
diff --git a/pandas/tests/indexing/test_coercion.py b/pandas/tests/indexing/test_coercion.py
index b904755b099d0..bea8eae9bb850 100644
--- a/pandas/tests/indexing/test_coercion.py
+++ b/pandas/tests/indexing/test_coercion.py
@@ -943,7 +943,7 @@ class TestReplaceSeriesCoercion(CoercionBase):
for tz in ["UTC", "US/Eastern"]:
# to test tz => different tz replacement
- key = "datetime64[ns, {0}]".format(tz)
+ key = f"datetime64[ns, {tz}]"
rep[key] = [
pd.Timestamp("2011-01-01", tz=tz),
pd.Timestamp("2011-01-03", tz=tz),
@@ -1017,9 +1017,7 @@ def test_replace_series(self, how, to_key, from_key):
):
if compat.is_platform_32bit() or compat.is_platform_windows():
- pytest.skip(
- "32-bit platform buggy: {0} -> {1}".format(from_key, to_key)
- )
+ pytest.skip(f"32-bit platform buggy: {from_key} -> {to_key}")
# Expected: do not downcast by replacement
exp = pd.Series(self.rep[to_key], index=index, name="yyy", dtype=from_key)
diff --git a/pandas/tests/indexing/test_iloc.py b/pandas/tests/indexing/test_iloc.py
index bc5ba3d9b03e5..500bd1853e9a4 100644
--- a/pandas/tests/indexing/test_iloc.py
+++ b/pandas/tests/indexing/test_iloc.py
@@ -246,9 +246,7 @@ def test_iloc_getitem_bool(self):
def test_iloc_getitem_bool_diff_len(self, index):
# GH26658
s = Series([1, 2, 3])
- msg = "Boolean index has wrong length: {} instead of {}".format(
- len(index), len(s)
- )
+ msg = f"Boolean index has wrong length: {len(index)} instead of {len(s)}"
with pytest.raises(IndexError, match=msg):
_ = s.iloc[index]
@@ -612,9 +610,7 @@ def test_iloc_mask(self):
r = expected.get(key)
if r != ans:
raise AssertionError(
- "[{key}] does not match [{ans}], received [{r}]".format(
- key=key, ans=ans, r=r
- )
+ f"[{key}] does not match [{ans}], received [{r}]"
)
def test_iloc_non_unique_indexing(self):
diff --git a/pandas/tests/indexing/test_indexing.py b/pandas/tests/indexing/test_indexing.py
index 96fb1e8204f55..1b3e301b0fef0 100644
--- a/pandas/tests/indexing/test_indexing.py
+++ b/pandas/tests/indexing/test_indexing.py
@@ -514,7 +514,7 @@ def __init__(self, value):
self.value = value
def __str__(self) -> str:
- return "[{0}]".format(self.value)
+ return f"[{self.value}]"
__repr__ = __str__
diff --git a/pandas/tests/indexing/test_loc.py b/pandas/tests/indexing/test_loc.py
index 02652d993e0f3..71d85ed8bda9b 100644
--- a/pandas/tests/indexing/test_loc.py
+++ b/pandas/tests/indexing/test_loc.py
@@ -217,9 +217,7 @@ def test_getitem_label_list_with_missing(self):
def test_loc_getitem_bool_diff_len(self, index):
# GH26658
s = Series([1, 2, 3])
- msg = "Boolean index has wrong length: {} instead of {}".format(
- len(index), len(s)
- )
+ msg = f"Boolean index has wrong length: {len(index)} instead of {len(s)}"
with pytest.raises(IndexError, match=msg):
_ = s.loc[index]
@@ -484,12 +482,8 @@ def test_loc_assign_non_ns_datetime(self, unit):
}
)
- df.loc[:, unit] = df.loc[:, "timestamp"].values.astype(
- "datetime64[{unit}]".format(unit=unit)
- )
- df["expected"] = df.loc[:, "timestamp"].values.astype(
- "datetime64[{unit}]".format(unit=unit)
- )
+ df.loc[:, unit] = df.loc[:, "timestamp"].values.astype(f"datetime64[{unit}]")
+ df["expected"] = df.loc[:, "timestamp"].values.astype(f"datetime64[{unit}]")
expected = Series(df.loc[:, "expected"], name=unit)
tm.assert_series_equal(df.loc[:, unit], expected)
diff --git a/pandas/tests/series/methods/test_argsort.py b/pandas/tests/series/methods/test_argsort.py
index 62273e2d363fb..c7fe6ed19a2eb 100644
--- a/pandas/tests/series/methods/test_argsort.py
+++ b/pandas/tests/series/methods/test_argsort.py
@@ -27,7 +27,7 @@ def test_argsort(self, datetime_series):
assert issubclass(argsorted.dtype.type, np.integer)
# GH#2967 (introduced bug in 0.11-dev I think)
- s = Series([Timestamp("201301{i:02d}".format(i=i)) for i in range(1, 6)])
+ s = Series([Timestamp(f"201301{i:02d}") for i in range(1, 6)])
assert s.dtype == "datetime64[ns]"
shifted = s.shift(-1)
assert shifted.dtype == "datetime64[ns]"
diff --git a/pandas/tests/series/test_constructors.py b/pandas/tests/series/test_constructors.py
index 2651c3d73c9ab..b0d06793dbe13 100644
--- a/pandas/tests/series/test_constructors.py
+++ b/pandas/tests/series/test_constructors.py
@@ -811,7 +811,7 @@ def test_constructor_dtype_datetime64(self):
expected = Series(values2, index=dates)
for dtype in ["s", "D", "ms", "us", "ns"]:
- values1 = dates.view(np.ndarray).astype("M8[{0}]".format(dtype))
+ values1 = dates.view(np.ndarray).astype(f"M8[{dtype}]")
result = Series(values1, dates)
tm.assert_series_equal(result, expected)
@@ -819,7 +819,7 @@ def test_constructor_dtype_datetime64(self):
# coerce to non-ns to object properly
expected = Series(values2, index=dates, dtype=object)
for dtype in ["s", "D", "ms", "us", "ns"]:
- values1 = dates.view(np.ndarray).astype("M8[{0}]".format(dtype))
+ values1 = dates.view(np.ndarray).astype(f"M8[{dtype}]")
result = Series(values1, index=dates, dtype=object)
tm.assert_series_equal(result, expected)
@@ -952,7 +952,7 @@ def test_constructor_with_datetime_tz(self):
def test_construction_to_datetimelike_unit(self, arr_dtype, dtype, unit):
# tests all units
# gh-19223
- dtype = "{}[{}]".format(dtype, unit)
+ dtype = f"{dtype}[{unit}]"
arr = np.array([1, 2, 3], dtype=arr_dtype)
s = Series(arr)
result = s.astype(dtype)
@@ -1347,12 +1347,11 @@ def test_convert_non_ns(self):
def test_constructor_cant_cast_datetimelike(self, index):
# floats are not ok
- msg = "Cannot cast {}.*? to ".format(
- # strip Index to convert PeriodIndex -> Period
- # We don't care whether the error message says
- # PeriodIndex or PeriodArray
- type(index).__name__.rstrip("Index")
- )
+ # strip Index to convert PeriodIndex -> Period
+ # We don't care whether the error message says
+ # PeriodIndex or PeriodArray
+ msg = f"Cannot cast {type(index).__name__.rstrip('Index')}.*? to "
+
with pytest.raises(TypeError, match=msg):
Series(index, dtype=float)
diff --git a/pandas/tests/tseries/frequencies/test_inference.py b/pandas/tests/tseries/frequencies/test_inference.py
index c4660417599a8..c32ad5087ab9e 100644
--- a/pandas/tests/tseries/frequencies/test_inference.py
+++ b/pandas/tests/tseries/frequencies/test_inference.py
@@ -178,7 +178,7 @@ def test_infer_freq_delta(base_delta_code_pair, count):
inc = base_delta * count
index = DatetimeIndex([b + inc * j for j in range(3)])
- exp_freq = "{count:d}{code}".format(count=count, code=code) if count > 1 else code
+ exp_freq = f"{count:d}{code}" if count > 1 else code
assert frequencies.infer_freq(index) == exp_freq
@@ -202,13 +202,11 @@ def test_infer_freq_custom(base_delta_code_pair, constructor):
def test_weekly_infer(periods, day):
- _check_generated_range("1/1/2000", periods, "W-{day}".format(day=day))
+ _check_generated_range("1/1/2000", periods, f"W-{day}")
def test_week_of_month_infer(periods, day, count):
- _check_generated_range(
- "1/1/2000", periods, "WOM-{count}{day}".format(count=count, day=day)
- )
+ _check_generated_range("1/1/2000", periods, f"WOM-{count}{day}")
@pytest.mark.parametrize("freq", ["M", "BM", "BMS"])
@@ -217,14 +215,12 @@ def test_monthly_infer(periods, freq):
def test_quarterly_infer(month, periods):
- _check_generated_range("1/1/2000", periods, "Q-{month}".format(month=month))
+ _check_generated_range("1/1/2000", periods, f"Q-{month}")
@pytest.mark.parametrize("annual", ["A", "BA"])
def test_annually_infer(month, periods, annual):
- _check_generated_range(
- "1/1/2000", periods, "{annual}-{month}".format(annual=annual, month=month)
- )
+ _check_generated_range("1/1/2000", periods, f"{annual}-{month}")
@pytest.mark.parametrize(
| Ref to [#29547](https://github.com/pandas-dev/pandas/issues/29547)
- [X] passes `black pandas`
- [X] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
Replaces strings interpolated with `.format()` with equivalent f-strings. | https://api.github.com/repos/pandas-dev/pandas/pulls/31868 | 2020-02-11T04:16:20Z | 2020-02-11T05:15:33Z | 2020-02-11T05:15:33Z | 2020-02-11T05:15:59Z
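All the conversions in this diff follow the same mechanical pattern. A standalone illustration (toy values, not pandas code) showing that the f-string form interpolates identically to `.format()`, including format specs such as `:d`:

```python
unit = "ns"
count = 3
code = "D"

# Old style, as removed throughout the diff:
old_a = "datetime64[{unit}]".format(unit=unit)
old_b = "{count:d}{code}".format(count=count, code=code)

# f-strings, as introduced by the diff; format specs carry over unchanged:
new_a = f"datetime64[{unit}]"
new_b = f"{count:d}{code}"
```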
API/BUG: raise only KeyError on failed getitem/loc lookups | diff --git a/doc/source/whatsnew/v1.1.0.rst b/doc/source/whatsnew/v1.1.0.rst
index 7449c62a5ad31..3d478df4c0ea8 100644
--- a/doc/source/whatsnew/v1.1.0.rst
+++ b/doc/source/whatsnew/v1.1.0.rst
@@ -67,7 +67,76 @@ Backwards incompatible API changes
now raise a ``TypeError`` if a not-accepted keyword argument is passed into it.
Previously a ``UnsupportedFunctionCall`` was raised (``AssertionError`` if ``min_count`` passed into :meth:`~DataFrameGroupby.median``) (:issue:`31485`)
- :meth:`DataFrame.at` and :meth:`Series.at` will raise a ``TypeError`` instead of a ``ValueError`` if an incompatible key is passed, and ``KeyError`` if a missing key is passed, matching the behavior of ``.loc[]`` (:issue:`31722`)
--
+
+.. _whatsnew_110.api_breaking.indexing_raises_key_errors:
+
+Failed Label-Based Lookups Always Raise KeyError
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Label lookups ``series[key]``, ``series.loc[key]`` and ``frame.loc[key]``
+used to raises either ``KeyError`` or ``TypeError`` depending on the type of
+key and type of :class:`Index`. These now consistently raise ``KeyError`` (:issue:`31867`)
+
+.. ipython:: python
+
+ ser1 = pd.Series(range(3), index=[0, 1, 2])
+ ser2 = pd.Series(range(3), index=pd.date_range("2020-02-01", periods=3))
+
+*Previous behavior*:
+
+.. code-block:: ipython
+
+ In [3]: ser1[1.5]
+ ...
+ TypeError: cannot do label indexing on Int64Index with these indexers [1.5] of type float
+
+ In [4]: ser1["foo"]
+ ...
+ KeyError: 'foo'
+
+ In [5]: ser1.loc[1.5]
+ ...
+ TypeError: cannot do label indexing on Int64Index with these indexers [1.5] of type float
+
+ In [6]: ser1.loc["foo"]
+ ...
+ KeyError: 'foo'
+
+ In [7]: ser2.loc[1]
+ ...
+ TypeError: cannot do label indexing on DatetimeIndex with these indexers [1] of type int
+
+ In [8]: ser2.loc[pd.Timestamp(0)]
+ ...
+ KeyError: Timestamp('1970-01-01 00:00:00')
+
+*New behavior*:
+
+.. code-block:: ipython
+
+ In [3]: ser1[1.5]
+ ...
+ KeyError: 1.5
+
+ In [4]: ser1["foo"]
+ ...
+ KeyError: 'foo'
+
+ In [5]: ser1.loc[1.5]
+ ...
+ KeyError: 1.5
+
+ In [6]: ser1.loc["foo"]
+ ...
+ KeyError: 'foo'
+
+ In [7]: ser2.loc[1]
+ ...
+ KeyError: 1
+
+ In [8]: ser2.loc[pd.Timestamp(0)]
+ ...
+ KeyError: Timestamp('1970-01-01 00:00:00')
.. ---------------------------------------------------------------------------
diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index c896e68f7a188..cbb43317e962f 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -3094,7 +3094,7 @@ def _convert_scalar_indexer(self, key, kind: str_t):
if kind == "getitem" and is_float(key):
if not self.is_floating():
- self._invalid_indexer("label", key)
+ raise KeyError(key)
elif kind == "loc" and is_float(key):
@@ -3108,11 +3108,11 @@ def _convert_scalar_indexer(self, key, kind: str_t):
"string",
"mixed",
]:
- self._invalid_indexer("label", key)
+ raise KeyError(key)
elif kind == "loc" and is_integer(key):
if not (is_integer_dtype(self.dtype) or is_object_dtype(self.dtype)):
- self._invalid_indexer("label", key)
+ raise KeyError(key)
return key
diff --git a/pandas/core/indexes/category.py b/pandas/core/indexes/category.py
index caa6a9a93141f..24c50ea4270a8 100644
--- a/pandas/core/indexes/category.py
+++ b/pandas/core/indexes/category.py
@@ -602,7 +602,7 @@ def _convert_scalar_indexer(self, key, kind: str):
try:
return self.categories._convert_scalar_indexer(key, kind="loc")
except TypeError:
- self._invalid_indexer("label", key)
+ raise KeyError(key)
return super()._convert_scalar_indexer(key, kind=kind)
@Appender(Index._convert_list_indexer.__doc__)
diff --git a/pandas/core/indexes/datetimelike.py b/pandas/core/indexes/datetimelike.py
index 1b3b6934aa53a..88264303f6ceb 100644
--- a/pandas/core/indexes/datetimelike.py
+++ b/pandas/core/indexes/datetimelike.py
@@ -396,9 +396,9 @@ def _convert_scalar_indexer(self, key, kind: str):
is_int = is_integer(key)
is_flt = is_float(key)
if kind == "loc" and (is_int or is_flt):
- self._invalid_indexer("label", key)
+ raise KeyError(key)
elif kind == "getitem" and is_flt:
- self._invalid_indexer("label", key)
+ raise KeyError(key)
return super()._convert_scalar_indexer(key, kind=kind)
diff --git a/pandas/core/indexing.py b/pandas/core/indexing.py
index 1644b4203052b..e302b9ca48f90 100755
--- a/pandas/core/indexing.py
+++ b/pandas/core/indexing.py
@@ -1151,7 +1151,7 @@ def _convert_to_indexer(self, key, axis: int, is_setter: bool = False):
# try to find out correct indexer, if not type correct raise
try:
key = labels._convert_scalar_indexer(key, kind="loc")
- except TypeError:
+ except KeyError:
# but we will allow setting
if not is_setter:
raise
diff --git a/pandas/tests/frame/test_constructors.py b/pandas/tests/frame/test_constructors.py
index 8c9b7cd060059..1e7732611147f 100644
--- a/pandas/tests/frame/test_constructors.py
+++ b/pandas/tests/frame/test_constructors.py
@@ -1859,11 +1859,7 @@ def check(df):
# No NaN found -> error
if len(indexer) == 0:
- msg = (
- "cannot do label indexing on RangeIndex "
- r"with these indexers \[nan\] of type float"
- )
- with pytest.raises(TypeError, match=msg):
+ with pytest.raises(KeyError, match="^nan$"):
df.loc[:, np.nan]
# single nan should result in Series
elif len(indexer) == 1:
diff --git a/pandas/tests/indexing/test_floats.py b/pandas/tests/indexing/test_floats.py
index 4d3f1b0539aee..a9eb7f94bea80 100644
--- a/pandas/tests/indexing/test_floats.py
+++ b/pandas/tests/indexing/test_floats.py
@@ -93,11 +93,9 @@ def test_scalar_non_numeric(self, index_func, klass):
# getting
for idxr, getitem in [(lambda x: x.iloc, False), (lambda x: x, True)]:
- # gettitem on a DataFrame is a KeyError as it is indexing
- # via labels on the columns
- if getitem and isinstance(s, DataFrame):
+ if getitem:
error = KeyError
- msg = r"^3(\.0)?$"
+ msg = r"^3\.0?$"
else:
error = TypeError
msg = (
@@ -116,6 +114,9 @@ def test_scalar_non_numeric(self, index_func, klass):
"string",
"unicode",
"mixed",
+ "period",
+ "timedelta64",
+ "datetime64",
}:
error = KeyError
msg = r"^3\.0$"
@@ -183,12 +184,7 @@ def test_scalar_non_numeric_series_fallback(self, index_func):
i = index_func(5)
s = Series(np.arange(len(i)), index=i)
s[3]
- msg = (
- r"cannot do (label|positional) indexing "
- fr"on {type(i).__name__} with these indexers \[3\.0\] of "
- "type float"
- )
- with pytest.raises(TypeError, match=msg):
+ with pytest.raises(KeyError, match="^3.0$"):
s[3.0]
def test_scalar_with_mixed(self):
@@ -199,12 +195,12 @@ def test_scalar_with_mixed(self):
# lookup in a pure stringstr
# with an invalid indexer
msg = (
- "cannot do label indexing "
- fr"on {Index.__name__} with these indexers \[1\.0\] of "
+ r"cannot do label indexing "
+ r"on Index with these indexers \[1\.0\] of "
r"type float|"
"Cannot index by location index with a non-integer key"
)
- with pytest.raises(TypeError, match=msg):
+ with pytest.raises(KeyError, match="^1.0$"):
s2[1.0]
with pytest.raises(TypeError, match=msg):
s2.iloc[1.0]
@@ -218,12 +214,7 @@ def test_scalar_with_mixed(self):
# mixed index so we have label
# indexing
- msg = (
- "cannot do label indexing "
- fr"on {Index.__name__} with these indexers \[1\.0\] of "
- "type float"
- )
- with pytest.raises(TypeError, match=msg):
+ with pytest.raises(KeyError, match="^1.0$"):
s3[1.0]
result = s3[1]
diff --git a/pandas/tests/indexing/test_loc.py b/pandas/tests/indexing/test_loc.py
index 276d11a67ad18..4d042af8d59b4 100644
--- a/pandas/tests/indexing/test_loc.py
+++ b/pandas/tests/indexing/test_loc.py
@@ -35,7 +35,7 @@ def test_loc_getitem_label_out_of_range(self):
"loc", 20, typs=["ints", "uints", "mixed"], fails=KeyError,
)
self.check_result("loc", 20, typs=["labels"], fails=KeyError)
- self.check_result("loc", 20, typs=["ts"], axes=0, fails=TypeError)
+ self.check_result("loc", 20, typs=["ts"], axes=0, fails=KeyError)
self.check_result("loc", 20, typs=["floats"], axes=0, fails=KeyError)
def test_loc_getitem_label_list(self):
| - [x] closes #21567
- [ ] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
ATM we raise a mix of KeyError and TypeError.
I'm in the process of going through the Indexing issues and will see what else this closes. | https://api.github.com/repos/pandas-dev/pandas/pulls/31867 | 2020-02-11T03:20:07Z | 2020-02-27T23:08:52Z | 2020-02-27T23:08:52Z | 2020-02-28T00:11:25Z
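Outside pandas, the policy this PR adopts can be sketched with a toy lookup wrapper that reports every failed label lookup as `KeyError`, instead of letting badly-typed keys escape as `TypeError`. This is illustrative only, not the pandas implementation:

```python
def loc_lookup(mapping, key):
    """Look up ``key``, reporting any failed lookup as KeyError."""
    try:
        return mapping[key]
    except TypeError:
        # Badly-typed keys (e.g. unhashable ones) previously escaped as
        # TypeError; normalize them to KeyError, mirroring the new policy.
        raise KeyError(key) from None

data = {0: "a", 1: "b", 2: "c"}
```

Callers then only ever need one `except KeyError` branch, which is the consistency the whatsnew entry above describes.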
BUG: catch almost-null-slice in _convert_slice_indexer | diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index 14ee21ea5614c..34bdd12988282 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -3163,8 +3163,7 @@ def _convert_slice_indexer(self, key: slice, kind: str_t):
def is_int(v):
return v is None or is_integer(v)
- is_null_slicer = start is None and stop is None
- is_index_slice = is_int(start) and is_int(stop)
+ is_index_slice = is_int(start) and is_int(stop) and is_int(step)
is_positional = is_index_slice and not (
self.is_integer() or self.is_categorical()
)
@@ -3194,7 +3193,7 @@ def is_int(v):
except KeyError:
pass
- if is_null_slicer:
+ if com.is_null_slice(key):
indexer = key
elif is_positional:
indexer = key
diff --git a/pandas/tests/indexes/test_base.py b/pandas/tests/indexes/test_base.py
index 6327f1b03589b..22f6af2af4aed 100644
--- a/pandas/tests/indexes/test_base.py
+++ b/pandas/tests/indexes/test_base.py
@@ -2611,3 +2611,18 @@ def test_validate_1d_input():
ser = pd.Series(0, range(4))
with pytest.raises(ValueError, match=msg):
ser.index = np.array([[2, 3]] * 4)
+
+
+def test_convert_almost_null_slice(indices):
+ # slice with None at both ends, but not step
+ idx = indices
+
+ key = slice(None, None, "foo")
+
+ if isinstance(idx, pd.IntervalIndex):
+ with pytest.raises(ValueError, match="cannot support not-default step"):
+ idx._convert_slice_indexer(key, "loc")
+ else:
+ msg = "'>=' not supported between instances of 'str' and 'int'"
+ with pytest.raises(TypeError, match=msg):
+ idx._convert_slice_indexer(key, "loc")
| It's a contrived corner case, but I wanted to use `com.is_null_slice` rather than duplicating it in `_convert_slice_indexer` | https://api.github.com/repos/pandas-dev/pandas/pulls/31866 | 2020-02-11T02:26:20Z | 2020-02-23T15:54:29Z | 2020-02-23T15:54:29Z | 2020-02-23T17:11:04Z
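The fix turns on what counts as a null slice: `com.is_null_slice` requires `start`, `stop` *and* `step` to all be `None`, whereas the removed `is_null_slicer` check ignored `step`. A standalone equivalent, written from the behavior the new test exercises (not copied from pandas):

```python
def is_null_slice(obj) -> bool:
    # The null slice is ``[:]``: start, stop *and* step must all be None.
    return (
        isinstance(obj, slice)
        and obj.start is None
        and obj.stop is None
        and obj.step is None
    )
```

With this check, `slice(None, None, "foo")` is no longer mistaken for `[:]`, so the invalid step reaches validation and raises, as the new test asserts.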
CLN: tests.generic | diff --git a/pandas/tests/frame/test_block_internals.py b/pandas/tests/frame/test_block_internals.py
index d301ed969789e..a5f5e6f36cd58 100644
--- a/pandas/tests/frame/test_block_internals.py
+++ b/pandas/tests/frame/test_block_internals.py
@@ -364,14 +364,14 @@ def test_pickle(self, float_string_frame, timezone_frame):
def test_consolidate_datetime64(self):
# numpy vstack bug
- data = """\
-starting,ending,measure
-2012-06-21 00:00,2012-06-23 07:00,77
-2012-06-23 07:00,2012-06-23 16:30,65
-2012-06-23 16:30,2012-06-25 08:00,77
-2012-06-25 08:00,2012-06-26 12:00,0
-2012-06-26 12:00,2012-06-27 08:00,77
-"""
+ data = (
+ "starting,ending,measure\n"
+ "2012-06-21 00:00,2012-06-23 07:00,77\n"
+ "2012-06-23 07:00,2012-06-23 16:30,65\n"
+ "2012-06-23 16:30,2012-06-25 08:00,77\n"
+ "2012-06-25 08:00,2012-06-26 12:00,0\n"
+ "2012-06-26 12:00,2012-06-27 08:00,77\n"
+ )
df = pd.read_csv(StringIO(data), parse_dates=[0, 1])
ser_starting = df.starting
@@ -397,9 +397,6 @@ def test_is_mixed_type(self, float_frame, float_string_frame):
assert float_string_frame._is_mixed_type
def test_get_numeric_data(self):
- # TODO(wesm): unused?
- intname = np.dtype(np.int_).name # noqa
- floatname = np.dtype(np.float_).name # noqa
datetime64name = np.dtype("M8[ns]").name
objectname = np.dtype(np.object_).name
@@ -581,6 +578,7 @@ def test_get_X_columns(self):
tm.assert_index_equal(df._get_numeric_data().columns, pd.Index(["a", "b", "e"]))
def test_strange_column_corruption_issue(self):
+ # FIXME: dont leave commented-out
# (wesm) Unclear how exactly this is related to internal matters
df = DataFrame(index=[0, 1])
df[0] = np.nan
diff --git a/pandas/tests/frame/test_repr_info.py b/pandas/tests/frame/test_repr_info.py
index a7e01d8f1fd6d..4ac009ef508c4 100644
--- a/pandas/tests/frame/test_repr_info.py
+++ b/pandas/tests/frame/test_repr_info.py
@@ -129,9 +129,6 @@ def test_repr_unsortable(self, float_frame):
def test_repr_unicode(self):
uval = "\u03c3\u03c3\u03c3\u03c3"
- # TODO(wesm): is this supposed to be used?
- bval = uval.encode("utf-8") # noqa
-
df = DataFrame({"A": [uval, uval]})
result = repr(df)
diff --git a/pandas/tests/generic/test_frame.py b/pandas/tests/generic/test_frame.py
index 7fe22e77c5bf3..d8f4257566f84 100644
--- a/pandas/tests/generic/test_frame.py
+++ b/pandas/tests/generic/test_frame.py
@@ -160,7 +160,7 @@ def finalize(self, other, method=None, **kwargs):
# reset
DataFrame._metadata = _metadata
- DataFrame.__finalize__ = _finalize
+ DataFrame.__finalize__ = _finalize # FIXME: use monkeypatch
def test_set_attribute(self):
# Test for consistent setattr behavior when an attribute and a column
@@ -174,6 +174,69 @@ def test_set_attribute(self):
assert df.y == 5
tm.assert_series_equal(df["y"], Series([2, 4, 6], name="y"))
+ def test_deepcopy_empty(self):
+ # This test covers empty frame copying with non-empty column sets
+ # as reported in issue GH15370
+ empty_frame = DataFrame(data=[], index=[], columns=["A"])
+ empty_frame_copy = deepcopy(empty_frame)
+
+ self._compare(empty_frame_copy, empty_frame)
+
+
+# formerly in Generic but only test DataFrame
+class TestDataFrame2:
+ def test_validate_bool_args(self):
+ df = DataFrame({"a": [1, 2, 3], "b": [4, 5, 6]})
+ invalid_values = [1, "True", [1, 2, 3], 5.0]
+
+ for value in invalid_values:
+ with pytest.raises(ValueError):
+ super(DataFrame, df).rename_axis(
+ mapper={"a": "x", "b": "y"}, axis=1, inplace=value
+ )
+
+ with pytest.raises(ValueError):
+ super(DataFrame, df).drop("a", axis=1, inplace=value)
+
+ with pytest.raises(ValueError):
+ super(DataFrame, df)._consolidate(inplace=value)
+
+ with pytest.raises(ValueError):
+ super(DataFrame, df).fillna(value=0, inplace=value)
+
+ with pytest.raises(ValueError):
+ super(DataFrame, df).replace(to_replace=1, value=7, inplace=value)
+
+ with pytest.raises(ValueError):
+ super(DataFrame, df).interpolate(inplace=value)
+
+ with pytest.raises(ValueError):
+ super(DataFrame, df)._where(cond=df.a > 2, inplace=value)
+
+ with pytest.raises(ValueError):
+ super(DataFrame, df).mask(cond=df.a > 2, inplace=value)
+
+ def test_unexpected_keyword(self):
+ # GH8597
+ df = DataFrame(np.random.randn(5, 2), columns=["jim", "joe"])
+ ca = pd.Categorical([0, 0, 2, 2, 3, np.nan])
+ ts = df["joe"].copy()
+ ts[2] = np.nan
+
+ with pytest.raises(TypeError, match="unexpected keyword"):
+ df.drop("joe", axis=1, in_place=True)
+
+ with pytest.raises(TypeError, match="unexpected keyword"):
+ df.reindex([1, 0], inplace=True)
+
+ with pytest.raises(TypeError, match="unexpected keyword"):
+ ca.fillna(0, inplace=True)
+
+ with pytest.raises(TypeError, match="unexpected keyword"):
+ ts.fillna(0, in_place=True)
+
+
+class TestToXArray:
@pytest.mark.skipif(
not _XARRAY_INSTALLED
or _XARRAY_INSTALLED
@@ -272,11 +335,3 @@ def test_to_xarray(self):
expected["f"] = expected["f"].astype(object)
expected.columns.name = None
tm.assert_frame_equal(result, expected, check_index_type=False)
-
- def test_deepcopy_empty(self):
- # This test covers empty frame copying with non-empty column sets
- # as reported in issue GH15370
- empty_frame = DataFrame(data=[], index=[], columns=["A"])
- empty_frame_copy = deepcopy(empty_frame)
-
- self._compare(empty_frame_copy, empty_frame)
diff --git a/pandas/tests/generic/test_generic.py b/pandas/tests/generic/test_generic.py
index 7645c6b4cf709..d574660d21c0d 100644
--- a/pandas/tests/generic/test_generic.py
+++ b/pandas/tests/generic/test_generic.py
@@ -103,23 +103,6 @@ def test_get_numeric_data(self):
# _get_numeric_data includes _get_bool_data, so can't test for
# non-inclusion
- def test_get_default(self):
-
- # GH 7725
- d0 = "a", "b", "c", "d"
- d1 = np.arange(4, dtype="int64")
- others = "e", 10
-
- for data, index in ((d0, d1), (d1, d0)):
- s = Series(data, index=index)
- for i, d in zip(index, data):
- assert s.get(i) == d
- assert s.get(i, d) == d
- assert s.get(i, "z") == d
- for other in others:
- assert s.get(other, "z") == "z"
- assert s.get(other, other) == other
-
def test_nonzero(self):
# GH 4633
@@ -469,24 +452,6 @@ def test_split_compat(self):
assert len(np.array_split(o, 5)) == 5
assert len(np.array_split(o, 2)) == 2
- def test_unexpected_keyword(self): # GH8597
- df = DataFrame(np.random.randn(5, 2), columns=["jim", "joe"])
- ca = pd.Categorical([0, 0, 2, 2, 3, np.nan])
- ts = df["joe"].copy()
- ts[2] = np.nan
-
- with pytest.raises(TypeError, match="unexpected keyword"):
- df.drop("joe", axis=1, in_place=True)
-
- with pytest.raises(TypeError, match="unexpected keyword"):
- df.reindex([1, 0], inplace=True)
-
- with pytest.raises(TypeError, match="unexpected keyword"):
- ca.fillna(0, inplace=True)
-
- with pytest.raises(TypeError, match="unexpected keyword"):
- ts.fillna(0, in_place=True)
-
# See gh-12301
def test_stat_unexpected_keyword(self):
obj = self._construct(5)
@@ -544,37 +509,6 @@ def test_truncate_out_of_bounds(self):
self._compare(big.truncate(before=0, after=3e6), big)
self._compare(big.truncate(before=-1, after=2e6), big)
- def test_validate_bool_args(self):
- df = DataFrame({"a": [1, 2, 3], "b": [4, 5, 6]})
- invalid_values = [1, "True", [1, 2, 3], 5.0]
-
- for value in invalid_values:
- with pytest.raises(ValueError):
- super(DataFrame, df).rename_axis(
- mapper={"a": "x", "b": "y"}, axis=1, inplace=value
- )
-
- with pytest.raises(ValueError):
- super(DataFrame, df).drop("a", axis=1, inplace=value)
-
- with pytest.raises(ValueError):
- super(DataFrame, df)._consolidate(inplace=value)
-
- with pytest.raises(ValueError):
- super(DataFrame, df).fillna(value=0, inplace=value)
-
- with pytest.raises(ValueError):
- super(DataFrame, df).replace(to_replace=1, value=7, inplace=value)
-
- with pytest.raises(ValueError):
- super(DataFrame, df).interpolate(inplace=value)
-
- with pytest.raises(ValueError):
- super(DataFrame, df)._where(cond=df.a > 2, inplace=value)
-
- with pytest.raises(ValueError):
- super(DataFrame, df).mask(cond=df.a > 2, inplace=value)
-
def test_copy_and_deepcopy(self):
# GH 15444
for shape in [0, 1, 2]:
diff --git a/pandas/tests/generic/test_series.py b/pandas/tests/generic/test_series.py
index 8ad8355f2d530..ce0daf8522687 100644
--- a/pandas/tests/generic/test_series.py
+++ b/pandas/tests/generic/test_series.py
@@ -181,8 +181,49 @@ def finalize(self, other, method=None, **kwargs):
# reset
Series._metadata = _metadata
- Series.__finalize__ = _finalize
+ Series.__finalize__ = _finalize # FIXME: use monkeypatch
+ @pytest.mark.parametrize(
+ "s",
+ [
+ Series([np.arange(5)]),
+ pd.date_range("1/1/2011", periods=24, freq="H"),
+ pd.Series(range(5), index=pd.date_range("2017", periods=5)),
+ ],
+ )
+ @pytest.mark.parametrize("shift_size", [0, 1, 2])
+ def test_shift_always_copy(self, s, shift_size):
+ # GH22397
+ assert s.shift(shift_size) is not s
+
+ @pytest.mark.parametrize("move_by_freq", [pd.Timedelta("1D"), pd.Timedelta("1M")])
+ def test_datetime_shift_always_copy(self, move_by_freq):
+ # GH22397
+ s = pd.Series(range(5), index=pd.date_range("2017", periods=5))
+ assert s.shift(freq=move_by_freq) is not s
+
+
+class TestSeries2:
+ # moved from Generic
+ def test_get_default(self):
+
+ # GH#7725
+ d0 = ["a", "b", "c", "d"]
+ d1 = np.arange(4, dtype="int64")
+ others = ["e", 10]
+
+ for data, index in ((d0, d1), (d1, d0)):
+ s = Series(data, index=index)
+ for i, d in zip(index, data):
+ assert s.get(i) == d
+ assert s.get(i, d) == d
+ assert s.get(i, "z") == d
+ for other in others:
+ assert s.get(other, "z") == "z"
+ assert s.get(other, other) == other
+
+
+class TestToXArray:
@pytest.mark.skipif(
not _XARRAY_INSTALLED
or _XARRAY_INSTALLED
@@ -242,22 +283,3 @@ def test_to_xarray(self):
tm.assert_almost_equal(list(result.coords.keys()), ["one", "two"])
assert isinstance(result, DataArray)
tm.assert_series_equal(result.to_series(), s)
-
- @pytest.mark.parametrize(
- "s",
- [
- Series([np.arange(5)]),
- pd.date_range("1/1/2011", periods=24, freq="H"),
- pd.Series(range(5), index=pd.date_range("2017", periods=5)),
- ],
- )
- @pytest.mark.parametrize("shift_size", [0, 1, 2])
- def test_shift_always_copy(self, s, shift_size):
- # GH22397
- assert s.shift(shift_size) is not s
-
- @pytest.mark.parametrize("move_by_freq", [pd.Timedelta("1D"), pd.Timedelta("1M")])
- def test_datetime_shift_always_copy(self, move_by_freq):
- # GH22397
- s = pd.Series(range(5), index=pd.date_range("2017", periods=5))
- assert s.shift(freq=move_by_freq) is not s
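The `test_shift_always_copy` cases moved in the hunk above reduce to one observable guarantee; a minimal standalone sketch of that behavior (plain pandas outside the test suite, GH22397):

```python
import pandas as pd

s = pd.Series(range(5), index=pd.date_range("2017", periods=5))

# shift always returns a new object, even for a no-op shift of 0 periods
assert s.shift(0) is not s

# the same holds when shifting the index by a frequency instead of positions
assert s.shift(freq=pd.Timedelta("1D")) is not s
```
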
diff --git a/pandas/tests/test_compat.py b/pandas/tests/test_compat.py
deleted file mode 100644
index 4ff8b0b31e85e..0000000000000
--- a/pandas/tests/test_compat.py
+++ /dev/null
@@ -1,3 +0,0 @@
-"""
-Testing that functions from compat work as expected
-"""
| Many of the tests here really only test either Series or DataFrame, not both. | https://api.github.com/repos/pandas-dev/pandas/pulls/31865 | 2020-02-11T01:22:55Z | 2020-02-11T09:42:02Z | 2020-02-11T09:42:02Z | 2020-02-11T15:43:47Z |
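The `test_get_default` test relocated in the diff above exercises `Series.get` fallback semantics (GH#7725); a small sketch, with illustrative values chosen here rather than taken from the test:

```python
import pandas as pd

s = pd.Series(["a", "b", "c", "d"], index=range(4))

# present labels return the stored value, regardless of any default
assert s.get(1) == "b"
assert s.get(1, "z") == "b"

# missing labels fall back to the default (None when no default is given)
assert s.get(10, "z") == "z"
assert s.get(10) is None
```
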
REF: organize base class Index tests | diff --git a/pandas/tests/indexes/base_class/test_reshape.py b/pandas/tests/indexes/base_class/test_reshape.py
new file mode 100644
index 0000000000000..61826f2403a4b
--- /dev/null
+++ b/pandas/tests/indexes/base_class/test_reshape.py
@@ -0,0 +1,61 @@
+"""
+Tests for ndarray-like method on the base Index class
+"""
+import pytest
+
+import pandas as pd
+from pandas import Index
+import pandas._testing as tm
+
+
+class TestReshape:
+ def test_repeat(self):
+ repeats = 2
+ index = pd.Index([1, 2, 3])
+ expected = pd.Index([1, 1, 2, 2, 3, 3])
+
+ result = index.repeat(repeats)
+ tm.assert_index_equal(result, expected)
+
+ def test_insert(self):
+
+ # GH 7256
+ # validate neg/pos inserts
+ result = Index(["b", "c", "d"])
+
+ # test 0th element
+ tm.assert_index_equal(Index(["a", "b", "c", "d"]), result.insert(0, "a"))
+
+ # test Nth element that follows Python list behavior
+ tm.assert_index_equal(Index(["b", "c", "e", "d"]), result.insert(-1, "e"))
+
+ # test loc +/- neq (0, -1)
+ tm.assert_index_equal(result.insert(1, "z"), result.insert(-2, "z"))
+
+ # test empty
+ null_index = Index([])
+ tm.assert_index_equal(Index(["a"]), null_index.insert(0, "a"))
+
+ @pytest.mark.parametrize(
+ "pos,expected",
+ [
+ (0, Index(["b", "c", "d"], name="index")),
+ (-1, Index(["a", "b", "c"], name="index")),
+ ],
+ )
+ def test_delete(self, pos, expected):
+ index = Index(["a", "b", "c", "d"], name="index")
+ result = index.delete(pos)
+ tm.assert_index_equal(result, expected)
+ assert result.name == expected.name
+
+ def test_append_multiple(self):
+ index = Index(["a", "b", "c", "d", "e", "f"])
+
+ foos = [index[:2], index[2:4], index[4:]]
+ result = foos[0].append(foos[1:])
+ tm.assert_index_equal(result, index)
+
+ # empty
+ result = index.append([])
+ tm.assert_index_equal(result, index)
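The reshape tests collected into the new `test_reshape.py` above cover `insert`, `delete`, and `repeat`; the same assertions can be sketched without the test harness:

```python
import pandas as pd

idx = pd.Index(["b", "c", "d"])

# insert follows Python list semantics, including negative positions (GH 7256)
assert idx.insert(0, "a").tolist() == ["a", "b", "c", "d"]
assert idx.insert(-1, "e").tolist() == ["b", "c", "e", "d"]

# delete and repeat likewise return new Index objects
assert idx.delete(0).tolist() == ["c", "d"]
assert pd.Index([1, 2, 3]).repeat(2).tolist() == [1, 1, 2, 2, 3, 3]
```
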
diff --git a/pandas/tests/indexes/base_class/test_setops.py b/pandas/tests/indexes/base_class/test_setops.py
index e7d5e21d0ba47..ec3ef8050967c 100644
--- a/pandas/tests/indexes/base_class/test_setops.py
+++ b/pandas/tests/indexes/base_class/test_setops.py
@@ -1,12 +1,49 @@
import numpy as np
import pytest
+import pandas as pd
from pandas import Index, Series
import pandas._testing as tm
from pandas.core.algorithms import safe_sort
class TestIndexSetOps:
+ @pytest.mark.parametrize(
+ "method", ["union", "intersection", "difference", "symmetric_difference"]
+ )
+ def test_setops_disallow_true(self, method):
+ idx1 = pd.Index(["a", "b"])
+ idx2 = pd.Index(["b", "c"])
+
+ with pytest.raises(ValueError, match="The 'sort' keyword only takes"):
+ getattr(idx1, method)(idx2, sort=True)
+
+ def test_setops_preserve_object_dtype(self):
+ idx = pd.Index([1, 2, 3], dtype=object)
+ result = idx.intersection(idx[1:])
+ expected = idx[1:]
+ tm.assert_index_equal(result, expected)
+
+ # if other is not monotonic increasing, intersection goes through
+ # a different route
+ result = idx.intersection(idx[1:][::-1])
+ tm.assert_index_equal(result, expected)
+
+ result = idx._union(idx[1:], sort=None)
+ expected = idx
+ tm.assert_index_equal(result, expected)
+
+ result = idx.union(idx[1:], sort=None)
+ tm.assert_index_equal(result, expected)
+
+ # if other is not monotonic increasing, _union goes through
+ # a different route
+ result = idx._union(idx[1:][::-1], sort=None)
+ tm.assert_index_equal(result, expected)
+
+ result = idx.union(idx[1:][::-1], sort=None)
+ tm.assert_index_equal(result, expected)
+
def test_union_base(self):
index = Index([0, "a", 1, "b", 2, "c"])
first = index[3:]
@@ -28,6 +65,32 @@ def test_union_different_type_base(self, klass):
assert tm.equalContents(result, index)
+ def test_union_sort_other_incomparable(self):
+ # https://github.com/pandas-dev/pandas/issues/24959
+ idx = pd.Index([1, pd.Timestamp("2000")])
+ # default (sort=None)
+ with tm.assert_produces_warning(RuntimeWarning):
+ result = idx.union(idx[:1])
+
+ tm.assert_index_equal(result, idx)
+
+ # sort=None
+ with tm.assert_produces_warning(RuntimeWarning):
+ result = idx.union(idx[:1], sort=None)
+ tm.assert_index_equal(result, idx)
+
+ # sort=False
+ result = idx.union(idx[:1], sort=False)
+ tm.assert_index_equal(result, idx)
+
+ @pytest.mark.xfail(reason="Not implemented")
+ def test_union_sort_other_incomparable_true(self):
+ # TODO decide on True behaviour
+ # sort=True
+ idx = pd.Index([1, pd.Timestamp("2000")])
+ with pytest.raises(TypeError, match=".*"):
+ idx.union(idx[:1], sort=True)
+
@pytest.mark.parametrize("sort", [None, False])
def test_intersection_base(self, sort):
# (same results for py2 and py3 but sortedness not tested elsewhere)
@@ -50,6 +113,16 @@ def test_intersection_different_type_base(self, klass, sort):
result = first.intersection(klass(second.values), sort=sort)
assert tm.equalContents(result, second)
+ def test_intersect_nosort(self):
+ result = pd.Index(["c", "b", "a"]).intersection(["b", "a"])
+ expected = pd.Index(["b", "a"])
+ tm.assert_index_equal(result, expected)
+
+ def test_intersection_equal_sort(self):
+ idx = pd.Index(["c", "a", "b"])
+ tm.assert_index_equal(idx.intersection(idx, sort=False), idx)
+ tm.assert_index_equal(idx.intersection(idx, sort=None), idx)
+
@pytest.mark.parametrize("sort", [None, False])
def test_difference_base(self, sort):
# (same results for py2 and py3 but sortedness not tested elsewhere)
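The set-operation tests moved into `test_setops.py` above assert that `intersection` preserves the caller's order rather than sorting; a short sketch of that guarantee:

```python
import pandas as pd

idx = pd.Index(["c", "b", "a"])

# intersection keeps the calling Index's order (no implicit sort)
assert idx.intersection(["b", "a"]).tolist() == ["b", "a"]

# intersecting an Index with itself is effectively a no-op,
# with or without sort=False
assert idx.intersection(idx, sort=False).tolist() == ["c", "b", "a"]
assert idx.intersection(idx, sort=None).tolist() == ["c", "b", "a"]
```
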
diff --git a/pandas/tests/indexes/test_any_index.py b/pandas/tests/indexes/test_any_index.py
index 0db63f615c4f8..86881b8984228 100644
--- a/pandas/tests/indexes/test_any_index.py
+++ b/pandas/tests/indexes/test_any_index.py
@@ -7,7 +7,8 @@
def test_sort(indices):
- with pytest.raises(TypeError):
+ msg = "cannot sort an Index object in-place, use sort_values instead"
+ with pytest.raises(TypeError, match=msg):
indices.sort()
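The tightened `test_sort` above pins down the in-place-sort error message; the behavior can be demonstrated directly (using try/except instead of `pytest.raises`):

```python
import pandas as pd

idx = pd.Index([3, 1, 2])

raised = False
try:
    idx.sort()  # in-place sort is intentionally unsupported on Index
except TypeError as err:
    raised = True
    assert "sort_values" in str(err)
assert raised

# the supported spelling returns a new, sorted Index
assert idx.sort_values().tolist() == [1, 2, 3]
```
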
diff --git a/pandas/tests/indexes/test_base.py b/pandas/tests/indexes/test_base.py
index 04af9b09bbf89..df434ff658fd8 100644
--- a/pandas/tests/indexes/test_base.py
+++ b/pandas/tests/indexes/test_base.py
@@ -14,7 +14,6 @@
from pandas.compat.numpy import np_datetime64_compat
from pandas.util._test_decorators import async_mark
-from pandas.core.dtypes.common import is_unsigned_integer_dtype
from pandas.core.dtypes.generic import ABCIndex
import pandas as pd
@@ -107,15 +106,6 @@ def test_constructor_copy(self, index):
# arr = np.array(5.)
# pytest.raises(Exception, arr.view, Index)
- @pytest.mark.parametrize("na_value", [None, np.nan])
- @pytest.mark.parametrize("vtype", [list, tuple, iter])
- def test_construction_list_tuples_nan(self, na_value, vtype):
- # GH 18505 : valid tuples containing NaN
- values = [(1, "two"), (3.0, na_value)]
- result = Index(vtype(values))
- expected = MultiIndex.from_tuples(values)
- tm.assert_index_equal(result, expected)
-
@pytest.mark.parametrize("cast_as_obj", [True, False])
@pytest.mark.parametrize(
"index",
@@ -236,21 +226,6 @@ def __array__(self, dtype=None) -> np.ndarray:
result = pd.Index(ArrayLike(array))
tm.assert_index_equal(result, expected)
- @pytest.mark.parametrize(
- "dtype",
- [int, "int64", "int32", "int16", "int8", "uint64", "uint32", "uint16", "uint8"],
- )
- def test_constructor_int_dtype_float(self, dtype):
- # GH 18400
- if is_unsigned_integer_dtype(dtype):
- index_type = UInt64Index
- else:
- index_type = Int64Index
-
- expected = index_type([0, 1, 2, 3])
- result = Index([0.0, 1.0, 2.0, 3.0], dtype=dtype)
- tm.assert_index_equal(result, expected)
-
def test_constructor_int_dtype_nan(self):
# see gh-15187
data = [np.nan]
@@ -370,19 +345,6 @@ def test_constructor_dtypes_to_float64(self, vals):
index = Index(vals, dtype=float)
assert isinstance(index, Float64Index)
- @pytest.mark.parametrize("cast_index", [True, False])
- @pytest.mark.parametrize(
- "vals", [[True, False, True], np.array([True, False, True], dtype=bool)]
- )
- def test_constructor_dtypes_to_object(self, cast_index, vals):
- if cast_index:
- index = Index(vals, dtype=bool)
- else:
- index = Index(vals)
-
- assert isinstance(index, Index)
- assert index.dtype == object
-
@pytest.mark.parametrize(
"vals",
[
@@ -587,25 +549,6 @@ def test_equals_object(self):
def test_not_equals_object(self, comp):
assert not Index(["a", "b", "c"]).equals(comp)
- def test_insert(self):
-
- # GH 7256
- # validate neg/pos inserts
- result = Index(["b", "c", "d"])
-
- # test 0th element
- tm.assert_index_equal(Index(["a", "b", "c", "d"]), result.insert(0, "a"))
-
- # test Nth element that follows Python list behavior
- tm.assert_index_equal(Index(["b", "c", "e", "d"]), result.insert(-1, "e"))
-
- # test loc +/- neq (0, -1)
- tm.assert_index_equal(result.insert(1, "z"), result.insert(-2, "z"))
-
- # test empty
- null_index = Index([])
- tm.assert_index_equal(Index(["a"]), null_index.insert(0, "a"))
-
def test_insert_missing(self, nulls_fixture):
# GH 22295
# test there is no mangling of NA values
@@ -613,19 +556,6 @@ def test_insert_missing(self, nulls_fixture):
result = Index(list("abc")).insert(1, nulls_fixture)
tm.assert_index_equal(result, expected)
- @pytest.mark.parametrize(
- "pos,expected",
- [
- (0, Index(["b", "c", "d"], name="index")),
- (-1, Index(["a", "b", "c"], name="index")),
- ],
- )
- def test_delete(self, pos, expected):
- index = Index(["a", "b", "c", "d"], name="index")
- result = index.delete(pos)
- tm.assert_index_equal(result, expected)
- assert result.name == expected.name
-
def test_delete_raises(self):
index = Index(["a", "b", "c", "d"], name="index")
msg = "index 5 is out of bounds for axis 0 with size 4"
@@ -839,16 +769,6 @@ def test_intersect_str_dates(self, sort):
assert len(result) == 0
- def test_intersect_nosort(self):
- result = pd.Index(["c", "b", "a"]).intersection(["b", "a"])
- expected = pd.Index(["b", "a"])
- tm.assert_index_equal(result, expected)
-
- def test_intersection_equal_sort(self):
- idx = pd.Index(["c", "a", "b"])
- tm.assert_index_equal(idx.intersection(idx, sort=False), idx)
- tm.assert_index_equal(idx.intersection(idx, sort=None), idx)
-
@pytest.mark.xfail(reason="Not implemented")
def test_intersection_equal_sort_true(self):
# TODO decide on True behaviour
@@ -910,32 +830,6 @@ def test_union_sort_special_true(self, slice_):
expected = pd.Index([0, 1, 2])
tm.assert_index_equal(result, expected)
- def test_union_sort_other_incomparable(self):
- # https://github.com/pandas-dev/pandas/issues/24959
- idx = pd.Index([1, pd.Timestamp("2000")])
- # default (sort=None)
- with tm.assert_produces_warning(RuntimeWarning):
- result = idx.union(idx[:1])
-
- tm.assert_index_equal(result, idx)
-
- # sort=None
- with tm.assert_produces_warning(RuntimeWarning):
- result = idx.union(idx[:1], sort=None)
- tm.assert_index_equal(result, idx)
-
- # sort=False
- result = idx.union(idx[:1], sort=False)
- tm.assert_index_equal(result, idx)
-
- @pytest.mark.xfail(reason="Not implemented")
- def test_union_sort_other_incomparable_true(self):
- # TODO decide on True behaviour
- # sort=True
- idx = pd.Index([1, pd.Timestamp("2000")])
- with pytest.raises(TypeError, match=".*"):
- idx.union(idx[:1], sort=True)
-
@pytest.mark.parametrize("klass", [np.array, Series, list])
@pytest.mark.parametrize("sort", [None, False])
def test_union_from_iterables(self, index, klass, sort):
@@ -1008,42 +902,6 @@ def test_union_dt_as_obj(self, sort):
tm.assert_contains_all(index, second_cat)
tm.assert_contains_all(date_index, first_cat)
- @pytest.mark.parametrize(
- "method", ["union", "intersection", "difference", "symmetric_difference"]
- )
- def test_setops_disallow_true(self, method):
- idx1 = pd.Index(["a", "b"])
- idx2 = pd.Index(["b", "c"])
-
- with pytest.raises(ValueError, match="The 'sort' keyword only takes"):
- getattr(idx1, method)(idx2, sort=True)
-
- def test_setops_preserve_object_dtype(self):
- idx = pd.Index([1, 2, 3], dtype=object)
- result = idx.intersection(idx[1:])
- expected = idx[1:]
- tm.assert_index_equal(result, expected)
-
- # if other is not monotonic increasing, intersection goes through
- # a different route
- result = idx.intersection(idx[1:][::-1])
- tm.assert_index_equal(result, expected)
-
- result = idx._union(idx[1:], sort=None)
- expected = idx
- tm.assert_index_equal(result, expected)
-
- result = idx.union(idx[1:], sort=None)
- tm.assert_index_equal(result, expected)
-
- # if other is not monotonic increasing, _union goes through
- # a different route
- result = idx._union(idx[1:][::-1], sort=None)
- tm.assert_index_equal(result, expected)
-
- result = idx.union(idx[1:][::-1], sort=None)
- tm.assert_index_equal(result, expected)
-
def test_map_identity_mapping(self, indices):
# GH 12766
tm.assert_index_equal(indices, indices.map(lambda x: x))
@@ -1151,17 +1009,6 @@ def test_map_defaultdict(self):
expected = Index(["stuff", "blank", "blank"])
tm.assert_index_equal(result, expected)
- def test_append_multiple(self):
- index = Index(["a", "b", "c", "d", "e", "f"])
-
- foos = [index[:2], index[2:4], index[4:]]
- result = foos[0].append(foos[1:])
- tm.assert_index_equal(result, index)
-
- # empty
- result = index.append([])
- tm.assert_index_equal(result, index)
-
@pytest.mark.parametrize("name,expected", [("foo", "foo"), ("bar", None)])
def test_append_empty_preserve_name(self, name, expected):
left = Index([], name="foo")
@@ -2437,7 +2284,6 @@ class TestMixedIntIndex(Base):
# Mostly the tests from common.py for which the results differ
# in py2 and py3 because ints and strings are uncomparable in py3
# (GH 13514)
-
_holder = Index
@pytest.fixture(params=[[0, "a", 1, "b", 2, "c"]], ids=["mixedIndex"])
@@ -2573,14 +2419,6 @@ def test_get_combined_index(self):
expected = Index([])
tm.assert_index_equal(result, expected)
- def test_repeat(self):
- repeats = 2
- index = pd.Index([1, 2, 3])
- expected = pd.Index([1, 1, 2, 2, 3, 3])
-
- result = index.repeat(repeats)
- tm.assert_index_equal(result, expected)
-
@pytest.mark.parametrize(
"index",
[
diff --git a/pandas/tests/indexes/test_common.py b/pandas/tests/indexes/test_common.py
index 7e30233353553..b46e6514b4536 100644
--- a/pandas/tests/indexes/test_common.py
+++ b/pandas/tests/indexes/test_common.py
@@ -158,13 +158,6 @@ def test_set_name_methods(self, indices):
assert indices.name == name
assert indices.names == [name]
- def test_hash_error(self, indices):
- index = indices
- with pytest.raises(
- TypeError, match=f"unhashable type: '{type(index).__name__}'"
- ):
- hash(indices)
-
def test_copy_and_deepcopy(self, indices):
from copy import copy, deepcopy
@@ -246,11 +239,6 @@ def test_get_unique_index(self, indices):
result = i._get_unique_index(dropna=dropna)
tm.assert_index_equal(result, expected)
- def test_sort(self, indices):
- msg = "cannot sort an Index object in-place, use sort_values instead"
- with pytest.raises(TypeError, match=msg):
- indices.sort()
-
def test_mutability(self, indices):
if not len(indices):
pytest.skip("Skip check for empty Index")
@@ -261,9 +249,6 @@ def test_mutability(self, indices):
def test_view(self, indices):
assert indices.view().name == indices.name
- def test_compat(self, indices):
- assert indices.tolist() == list(indices)
-
def test_searchsorted_monotonic(self, indices):
# GH17271
# not implemented for tuple searches in MultiIndex
diff --git a/pandas/tests/indexes/test_index_new.py b/pandas/tests/indexes/test_index_new.py
new file mode 100644
index 0000000000000..e150df971da2d
--- /dev/null
+++ b/pandas/tests/indexes/test_index_new.py
@@ -0,0 +1,49 @@
+"""
+Tests for the Index constructor conducting inference.
+"""
+import numpy as np
+import pytest
+
+from pandas.core.dtypes.common import is_unsigned_integer_dtype
+
+from pandas import Index, Int64Index, MultiIndex, UInt64Index
+import pandas._testing as tm
+
+
+class TestIndexConstructorInference:
+ @pytest.mark.parametrize("na_value", [None, np.nan])
+ @pytest.mark.parametrize("vtype", [list, tuple, iter])
+ def test_construction_list_tuples_nan(self, na_value, vtype):
+ # GH#18505 : valid tuples containing NaN
+ values = [(1, "two"), (3.0, na_value)]
+ result = Index(vtype(values))
+ expected = MultiIndex.from_tuples(values)
+ tm.assert_index_equal(result, expected)
+
+ @pytest.mark.parametrize(
+ "dtype",
+ [int, "int64", "int32", "int16", "int8", "uint64", "uint32", "uint16", "uint8"],
+ )
+ def test_constructor_int_dtype_float(self, dtype):
+ # GH#18400
+ if is_unsigned_integer_dtype(dtype):
+ index_type = UInt64Index
+ else:
+ index_type = Int64Index
+
+ expected = index_type([0, 1, 2, 3])
+ result = Index([0.0, 1.0, 2.0, 3.0], dtype=dtype)
+ tm.assert_index_equal(result, expected)
+
+ @pytest.mark.parametrize("cast_index", [True, False])
+ @pytest.mark.parametrize(
+ "vals", [[True, False, True], np.array([True, False, True], dtype=bool)]
+ )
+ def test_constructor_dtypes_to_object(self, cast_index, vals):
+ if cast_index:
+ index = Index(vals, dtype=bool)
+ else:
+ index = Index(vals)
+
+ assert type(index) is Index
+ assert index.dtype == object
| implement test_index_new for testing `Index.__new__` inference | https://api.github.com/repos/pandas-dev/pandas/pulls/31864 | 2020-02-11T01:15:54Z | 2020-02-22T15:43:06Z | 2020-02-22T15:43:05Z | 2020-02-22T15:51:28Z |
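The inference tests collected into `test_index_new.py` above exercise `Index.__new__`; two of those behaviors in a standalone sketch (the tuple values here are illustrative, not taken verbatim from the tests):

```python
import pandas as pd

# a list of tuples is inferred to a MultiIndex (GH#18505)
result = pd.Index([(1, "two"), (3.0, "four")])
assert isinstance(result, pd.MultiIndex)

# float data combined with an explicit integer dtype is cast (GH#18400)
assert pd.Index([0.0, 1.0, 2.0, 3.0], dtype="int64").dtype == "int64"
```
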
CLN: organize MultiIndex indexing tests | diff --git a/pandas/conftest.py b/pandas/conftest.py
index d19bf85877140..f7c6a0c899642 100644
--- a/pandas/conftest.py
+++ b/pandas/conftest.py
@@ -956,6 +956,25 @@ def __len__(self):
return TestNonDictMapping
+def _gen_mi():
+ # a MultiIndex used to test the general functionality of this object
+
+ # See Also: tests.multi.conftest.idx
+ major_axis = Index(["foo", "bar", "baz", "qux"])
+ minor_axis = Index(["one", "two"])
+
+ major_codes = np.array([0, 0, 1, 2, 3, 3])
+ minor_codes = np.array([0, 1, 0, 1, 0, 1])
+ index_names = ["first", "second"]
+ mi = MultiIndex(
+ levels=[major_axis, minor_axis],
+ codes=[major_codes, minor_codes],
+ names=index_names,
+ verify_integrity=False,
+ )
+ return mi
+
+
indices_dict = {
"unicode": tm.makeUnicodeIndex(100),
"string": tm.makeStringIndex(100),
@@ -972,6 +991,7 @@ def __len__(self):
"interval": tm.makeIntervalIndex(100),
"empty": Index([]),
"tuples": MultiIndex.from_tuples(zip(["foo", "bar", "baz"], [1, 2, 3])),
+ "multi": _gen_mi(),
"repeats": Index([0, 0, 1, 1, 2, 2]),
}
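The `_gen_mi` fixture added to `conftest.py` above builds a MultiIndex from explicit levels and codes rather than from tuples; the same construction, inlined:

```python
import numpy as np
import pandas as pd

# mirror of the _gen_mi fixture: codes index into the level arrays
mi = pd.MultiIndex(
    levels=[pd.Index(["foo", "bar", "baz", "qux"]), pd.Index(["one", "two"])],
    codes=[np.array([0, 0, 1, 2, 3, 3]), np.array([0, 1, 0, 1, 0, 1])],
    names=["first", "second"],
)
assert mi[0] == ("foo", "one")
assert mi[3] == ("baz", "two")
assert len(mi) == 6
```
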
diff --git a/pandas/tests/indexes/multi/test_indexing.py b/pandas/tests/indexes/multi/test_indexing.py
index 3c9b34a4a1439..39049006edb7c 100644
--- a/pandas/tests/indexes/multi/test_indexing.py
+++ b/pandas/tests/indexes/multi/test_indexing.py
@@ -4,116 +4,126 @@
import pytest
import pandas as pd
-from pandas import (
- Categorical,
- CategoricalIndex,
- Index,
- IntervalIndex,
- MultiIndex,
- date_range,
-)
+from pandas import Categorical, Index, MultiIndex, date_range
import pandas._testing as tm
from pandas.core.indexes.base import InvalidIndexError
-def test_slice_locs_partial(idx):
- sorted_idx, _ = idx.sortlevel(0)
-
- result = sorted_idx.slice_locs(("foo", "two"), ("qux", "one"))
- assert result == (1, 5)
+class TestSliceLocs:
+ def test_slice_locs_partial(self, idx):
+ sorted_idx, _ = idx.sortlevel(0)
- result = sorted_idx.slice_locs(None, ("qux", "one"))
- assert result == (0, 5)
+ result = sorted_idx.slice_locs(("foo", "two"), ("qux", "one"))
+ assert result == (1, 5)
- result = sorted_idx.slice_locs(("foo", "two"), None)
- assert result == (1, len(sorted_idx))
+ result = sorted_idx.slice_locs(None, ("qux", "one"))
+ assert result == (0, 5)
- result = sorted_idx.slice_locs("bar", "baz")
- assert result == (2, 4)
+ result = sorted_idx.slice_locs(("foo", "two"), None)
+ assert result == (1, len(sorted_idx))
+ result = sorted_idx.slice_locs("bar", "baz")
+ assert result == (2, 4)
-def test_slice_locs():
- df = tm.makeTimeDataFrame()
- stacked = df.stack()
- idx = stacked.index
+ def test_slice_locs(self):
+ df = tm.makeTimeDataFrame()
+ stacked = df.stack()
+ idx = stacked.index
- slob = slice(*idx.slice_locs(df.index[5], df.index[15]))
- sliced = stacked[slob]
- expected = df[5:16].stack()
- tm.assert_almost_equal(sliced.values, expected.values)
+ slob = slice(*idx.slice_locs(df.index[5], df.index[15]))
+ sliced = stacked[slob]
+ expected = df[5:16].stack()
+ tm.assert_almost_equal(sliced.values, expected.values)
- slob = slice(
- *idx.slice_locs(
- df.index[5] + timedelta(seconds=30), df.index[15] - timedelta(seconds=30)
+ slob = slice(
+ *idx.slice_locs(
+ df.index[5] + timedelta(seconds=30),
+ df.index[15] - timedelta(seconds=30),
+ )
)
- )
- sliced = stacked[slob]
- expected = df[6:15].stack()
- tm.assert_almost_equal(sliced.values, expected.values)
-
-
-def test_slice_locs_with_type_mismatch():
- df = tm.makeTimeDataFrame()
- stacked = df.stack()
- idx = stacked.index
- with pytest.raises(TypeError, match="^Level type mismatch"):
- idx.slice_locs((1, 3))
- with pytest.raises(TypeError, match="^Level type mismatch"):
- idx.slice_locs(df.index[5] + timedelta(seconds=30), (5, 2))
- df = tm.makeCustomDataframe(5, 5)
- stacked = df.stack()
- idx = stacked.index
- with pytest.raises(TypeError, match="^Level type mismatch"):
- idx.slice_locs(timedelta(seconds=30))
- # TODO: Try creating a UnicodeDecodeError in exception message
- with pytest.raises(TypeError, match="^Level type mismatch"):
- idx.slice_locs(df.index[1], (16, "a"))
-
-
-def test_slice_locs_not_sorted():
- index = MultiIndex(
- levels=[Index(np.arange(4)), Index(np.arange(4)), Index(np.arange(4))],
- codes=[
- np.array([0, 0, 1, 2, 2, 2, 3, 3]),
- np.array([0, 1, 0, 0, 0, 1, 0, 1]),
- np.array([1, 0, 1, 1, 0, 0, 1, 0]),
- ],
- )
- msg = "[Kk]ey length.*greater than MultiIndex lexsort depth"
- with pytest.raises(KeyError, match=msg):
- index.slice_locs((1, 0, 1), (2, 1, 0))
+ sliced = stacked[slob]
+ expected = df[6:15].stack()
+ tm.assert_almost_equal(sliced.values, expected.values)
+
+ def test_slice_locs_with_type_mismatch(self):
+ df = tm.makeTimeDataFrame()
+ stacked = df.stack()
+ idx = stacked.index
+ with pytest.raises(TypeError, match="^Level type mismatch"):
+ idx.slice_locs((1, 3))
+ with pytest.raises(TypeError, match="^Level type mismatch"):
+ idx.slice_locs(df.index[5] + timedelta(seconds=30), (5, 2))
+ df = tm.makeCustomDataframe(5, 5)
+ stacked = df.stack()
+ idx = stacked.index
+ with pytest.raises(TypeError, match="^Level type mismatch"):
+ idx.slice_locs(timedelta(seconds=30))
+ # TODO: Try creating a UnicodeDecodeError in exception message
+ with pytest.raises(TypeError, match="^Level type mismatch"):
+ idx.slice_locs(df.index[1], (16, "a"))
+
+ def test_slice_locs_not_sorted(self):
+ index = MultiIndex(
+ levels=[Index(np.arange(4)), Index(np.arange(4)), Index(np.arange(4))],
+ codes=[
+ np.array([0, 0, 1, 2, 2, 2, 3, 3]),
+ np.array([0, 1, 0, 0, 0, 1, 0, 1]),
+ np.array([1, 0, 1, 1, 0, 0, 1, 0]),
+ ],
+ )
+ msg = "[Kk]ey length.*greater than MultiIndex lexsort depth"
+ with pytest.raises(KeyError, match=msg):
+ index.slice_locs((1, 0, 1), (2, 1, 0))
- # works
- sorted_index, _ = index.sortlevel(0)
- # should there be a test case here???
- sorted_index.slice_locs((1, 0, 1), (2, 1, 0))
+ # works
+ sorted_index, _ = index.sortlevel(0)
+ # should there be a test case here???
+ sorted_index.slice_locs((1, 0, 1), (2, 1, 0))
+ def test_slice_locs_not_contained(self):
+ # some searchsorted action
-def test_slice_locs_not_contained():
- # some searchsorted action
+ index = MultiIndex(
+ levels=[[0, 2, 4, 6], [0, 2, 4]],
+ codes=[[0, 0, 0, 1, 1, 2, 3, 3, 3], [0, 1, 2, 1, 2, 2, 0, 1, 2]],
+ )
- index = MultiIndex(
- levels=[[0, 2, 4, 6], [0, 2, 4]],
- codes=[[0, 0, 0, 1, 1, 2, 3, 3, 3], [0, 1, 2, 1, 2, 2, 0, 1, 2]],
- )
+ result = index.slice_locs((1, 0), (5, 2))
+ assert result == (3, 6)
- result = index.slice_locs((1, 0), (5, 2))
- assert result == (3, 6)
+ result = index.slice_locs(1, 5)
+ assert result == (3, 6)
- result = index.slice_locs(1, 5)
- assert result == (3, 6)
+ result = index.slice_locs((2, 2), (5, 2))
+ assert result == (3, 6)
- result = index.slice_locs((2, 2), (5, 2))
- assert result == (3, 6)
+ result = index.slice_locs(2, 5)
+ assert result == (3, 6)
- result = index.slice_locs(2, 5)
- assert result == (3, 6)
+ result = index.slice_locs((1, 0), (6, 3))
+ assert result == (3, 8)
- result = index.slice_locs((1, 0), (6, 3))
- assert result == (3, 8)
+ result = index.slice_locs(-1, 10)
+ assert result == (0, len(index))
- result = index.slice_locs(-1, 10)
- assert result == (0, len(index))
+ @pytest.mark.parametrize(
+ "index_arr,expected,start_idx,end_idx",
+ [
+ ([[np.nan, "a", "b"], ["c", "d", "e"]], (0, 3), np.nan, None),
+ ([[np.nan, "a", "b"], ["c", "d", "e"]], (0, 3), np.nan, "b"),
+ ([[np.nan, "a", "b"], ["c", "d", "e"]], (0, 3), np.nan, ("b", "e")),
+ ([["a", "b", "c"], ["d", np.nan, "e"]], (1, 3), ("b", np.nan), None),
+ ([["a", "b", "c"], ["d", np.nan, "e"]], (1, 3), ("b", np.nan), "c"),
+ ([["a", "b", "c"], ["d", np.nan, "e"]], (1, 3), ("b", np.nan), ("c", "e")),
+ ],
+ )
+ def test_slice_locs_with_missing_value(
+ self, index_arr, expected, start_idx, end_idx
+ ):
+ # issue 19132
+ idx = MultiIndex.from_arrays(index_arr)
+ result = idx.slice_locs(start=start_idx, end=end_idx)
+ assert result == expected
def test_putmask_with_wrong_mask(idx):
@@ -130,67 +140,104 @@ def test_putmask_with_wrong_mask(idx):
idx.putmask("foo", 1)
-def test_get_indexer():
- major_axis = Index(np.arange(4))
- minor_axis = Index(np.arange(2))
+class TestGetIndexer:
+ def test_get_indexer(self):
+ major_axis = Index(np.arange(4))
+ minor_axis = Index(np.arange(2))
- major_codes = np.array([0, 0, 1, 2, 2, 3, 3], dtype=np.intp)
- minor_codes = np.array([0, 1, 0, 0, 1, 0, 1], dtype=np.intp)
+ major_codes = np.array([0, 0, 1, 2, 2, 3, 3], dtype=np.intp)
+ minor_codes = np.array([0, 1, 0, 0, 1, 0, 1], dtype=np.intp)
- index = MultiIndex(
- levels=[major_axis, minor_axis], codes=[major_codes, minor_codes]
- )
- idx1 = index[:5]
- idx2 = index[[1, 3, 5]]
+ index = MultiIndex(
+ levels=[major_axis, minor_axis], codes=[major_codes, minor_codes]
+ )
+ idx1 = index[:5]
+ idx2 = index[[1, 3, 5]]
- r1 = idx1.get_indexer(idx2)
- tm.assert_almost_equal(r1, np.array([1, 3, -1], dtype=np.intp))
+ r1 = idx1.get_indexer(idx2)
+ tm.assert_almost_equal(r1, np.array([1, 3, -1], dtype=np.intp))
- r1 = idx2.get_indexer(idx1, method="pad")
- e1 = np.array([-1, 0, 0, 1, 1], dtype=np.intp)
- tm.assert_almost_equal(r1, e1)
+ r1 = idx2.get_indexer(idx1, method="pad")
+ e1 = np.array([-1, 0, 0, 1, 1], dtype=np.intp)
+ tm.assert_almost_equal(r1, e1)
- r2 = idx2.get_indexer(idx1[::-1], method="pad")
- tm.assert_almost_equal(r2, e1[::-1])
+ r2 = idx2.get_indexer(idx1[::-1], method="pad")
+ tm.assert_almost_equal(r2, e1[::-1])
- rffill1 = idx2.get_indexer(idx1, method="ffill")
- tm.assert_almost_equal(r1, rffill1)
+ rffill1 = idx2.get_indexer(idx1, method="ffill")
+ tm.assert_almost_equal(r1, rffill1)
- r1 = idx2.get_indexer(idx1, method="backfill")
- e1 = np.array([0, 0, 1, 1, 2], dtype=np.intp)
- tm.assert_almost_equal(r1, e1)
+ r1 = idx2.get_indexer(idx1, method="backfill")
+ e1 = np.array([0, 0, 1, 1, 2], dtype=np.intp)
+ tm.assert_almost_equal(r1, e1)
- r2 = idx2.get_indexer(idx1[::-1], method="backfill")
- tm.assert_almost_equal(r2, e1[::-1])
+ r2 = idx2.get_indexer(idx1[::-1], method="backfill")
+ tm.assert_almost_equal(r2, e1[::-1])
- rbfill1 = idx2.get_indexer(idx1, method="bfill")
- tm.assert_almost_equal(r1, rbfill1)
+ rbfill1 = idx2.get_indexer(idx1, method="bfill")
+ tm.assert_almost_equal(r1, rbfill1)
- # pass non-MultiIndex
- r1 = idx1.get_indexer(idx2.values)
- rexp1 = idx1.get_indexer(idx2)
- tm.assert_almost_equal(r1, rexp1)
+ # pass non-MultiIndex
+ r1 = idx1.get_indexer(idx2.values)
+ rexp1 = idx1.get_indexer(idx2)
+ tm.assert_almost_equal(r1, rexp1)
- r1 = idx1.get_indexer([1, 2, 3])
- assert (r1 == [-1, -1, -1]).all()
+ r1 = idx1.get_indexer([1, 2, 3])
+ assert (r1 == [-1, -1, -1]).all()
- # create index with duplicates
- idx1 = Index(list(range(10)) + list(range(10)))
- idx2 = Index(list(range(20)))
+ # create index with duplicates
+ idx1 = Index(list(range(10)) + list(range(10)))
+ idx2 = Index(list(range(20)))
- msg = "Reindexing only valid with uniquely valued Index objects"
- with pytest.raises(InvalidIndexError, match=msg):
- idx1.get_indexer(idx2)
+ msg = "Reindexing only valid with uniquely valued Index objects"
+ with pytest.raises(InvalidIndexError, match=msg):
+ idx1.get_indexer(idx2)
+ def test_get_indexer_nearest(self):
+ midx = MultiIndex.from_tuples([("a", 1), ("b", 2)])
+ msg = (
+ "method='nearest' not implemented yet for MultiIndex; "
+ "see GitHub issue 9365"
+ )
+ with pytest.raises(NotImplementedError, match=msg):
+ midx.get_indexer(["a"], method="nearest")
+ msg = "tolerance not implemented yet for MultiIndex"
+ with pytest.raises(NotImplementedError, match=msg):
+ midx.get_indexer(["a"], method="pad", tolerance=2)
+
+ def test_get_indexer_categorical_time(self):
+ # https://github.com/pandas-dev/pandas/issues/21390
+ midx = MultiIndex.from_product(
+ [
+ Categorical(["a", "b", "c"]),
+ Categorical(date_range("2012-01-01", periods=3, freq="H")),
+ ]
+ )
+ result = midx.get_indexer(midx)
+ tm.assert_numpy_array_equal(result, np.arange(9, dtype=np.intp))
-def test_get_indexer_nearest():
- midx = MultiIndex.from_tuples([("a", 1), ("b", 2)])
- msg = "method='nearest' not implemented yet for MultiIndex; see GitHub issue 9365"
- with pytest.raises(NotImplementedError, match=msg):
- midx.get_indexer(["a"], method="nearest")
- msg = "tolerance not implemented yet for MultiIndex"
- with pytest.raises(NotImplementedError, match=msg):
- midx.get_indexer(["a"], method="pad", tolerance=2)
+ @pytest.mark.parametrize(
+ "index_arr,labels,expected",
+ [
+ (
+ [[1, np.nan, 2], [3, 4, 5]],
+ [1, np.nan, 2],
+ np.array([-1, -1, -1], dtype=np.intp),
+ ),
+ ([[1, np.nan, 2], [3, 4, 5]], [(np.nan, 4)], np.array([1], dtype=np.intp)),
+ ([[1, 2, 3], [np.nan, 4, 5]], [(1, np.nan)], np.array([0], dtype=np.intp)),
+ (
+ [[1, 2, 3], [np.nan, 4, 5]],
+ [np.nan, 4, 5],
+ np.array([-1, -1, -1], dtype=np.intp),
+ ),
+ ],
+ )
+ def test_get_indexer_with_missing_value(self, index_arr, labels, expected):
+ # issue 19132
+ idx = MultiIndex.from_arrays(index_arr)
+ result = idx.get_indexer(labels)
+ tm.assert_numpy_array_equal(result, expected)
def test_getitem(idx):
@@ -216,25 +263,6 @@ def test_getitem_group_select(idx):
assert sorted_idx.get_loc("foo") == slice(0, 2)
-def test_get_indexer_consistency(idx):
- # See GH 16819
- if isinstance(idx, IntervalIndex):
- pass
-
- if idx.is_unique or isinstance(idx, CategoricalIndex):
- indexer = idx.get_indexer(idx[0:2])
- assert isinstance(indexer, np.ndarray)
- assert indexer.dtype == np.intp
- else:
- e = "Reindexing only valid with uniquely valued Index objects"
- with pytest.raises(InvalidIndexError, match=e):
- idx.get_indexer(idx[0:2])
-
- indexer, _ = idx.get_indexer_non_unique(idx[0:2])
- assert isinstance(indexer, np.ndarray)
- assert indexer.dtype == np.intp
-
-
@pytest.mark.parametrize("ind1", [[True] * 5, pd.Index([True] * 5)])
@pytest.mark.parametrize(
"ind2",
@@ -263,158 +291,155 @@ def test_getitem_bool_index_single(ind1, ind2):
tm.assert_index_equal(idx[ind2], expected)
-def test_get_loc(idx):
- assert idx.get_loc(("foo", "two")) == 1
- assert idx.get_loc(("baz", "two")) == 3
- with pytest.raises(KeyError, match=r"^10$"):
- idx.get_loc(("bar", "two"))
- with pytest.raises(KeyError, match=r"^'quux'$"):
- idx.get_loc("quux")
-
- msg = "only the default get_loc method is currently supported for MultiIndex"
- with pytest.raises(NotImplementedError, match=msg):
- idx.get_loc("foo", method="nearest")
-
- # 3 levels
- index = MultiIndex(
- levels=[Index(np.arange(4)), Index(np.arange(4)), Index(np.arange(4))],
- codes=[
- np.array([0, 0, 1, 2, 2, 2, 3, 3]),
- np.array([0, 1, 0, 0, 0, 1, 0, 1]),
- np.array([1, 0, 1, 1, 0, 0, 1, 0]),
- ],
- )
- with pytest.raises(KeyError, match=r"^\(1, 1\)$"):
- index.get_loc((1, 1))
- assert index.get_loc((2, 0)) == slice(3, 5)
-
-
-def test_get_loc_duplicates():
- index = Index([2, 2, 2, 2])
- result = index.get_loc(2)
- expected = slice(0, 4)
- assert result == expected
- # pytest.raises(Exception, index.get_loc, 2)
-
- index = Index(["c", "a", "a", "b", "b"])
- rs = index.get_loc("c")
- xp = 0
- assert rs == xp
-
-
-def test_get_loc_level():
- index = MultiIndex(
- levels=[Index(np.arange(4)), Index(np.arange(4)), Index(np.arange(4))],
- codes=[
- np.array([0, 0, 1, 2, 2, 2, 3, 3]),
- np.array([0, 1, 0, 0, 0, 1, 0, 1]),
- np.array([1, 0, 1, 1, 0, 0, 1, 0]),
- ],
- )
- loc, new_index = index.get_loc_level((0, 1))
- expected = slice(1, 2)
- exp_index = index[expected].droplevel(0).droplevel(0)
- assert loc == expected
- assert new_index.equals(exp_index)
-
- loc, new_index = index.get_loc_level((0, 1, 0))
- expected = 1
- assert loc == expected
- assert new_index is None
-
- with pytest.raises(KeyError, match=r"^\(2, 2\)$"):
- index.get_loc_level((2, 2))
- # GH 22221: unused label
- with pytest.raises(KeyError, match=r"^2$"):
- index.drop(2).get_loc_level(2)
- # Unused label on unsorted level:
- with pytest.raises(KeyError, match=r"^2$"):
- index.drop(1, level=2).get_loc_level(2, level=2)
-
- index = MultiIndex(
- levels=[[2000], list(range(4))],
- codes=[np.array([0, 0, 0, 0]), np.array([0, 1, 2, 3])],
- )
- result, new_index = index.get_loc_level((2000, slice(None, None)))
- expected = slice(None, None)
- assert result == expected
- assert new_index.equals(index.droplevel(0))
-
-
-@pytest.mark.parametrize("dtype1", [int, float, bool, str])
-@pytest.mark.parametrize("dtype2", [int, float, bool, str])
-def test_get_loc_multiple_dtypes(dtype1, dtype2):
- # GH 18520
- levels = [np.array([0, 1]).astype(dtype1), np.array([0, 1]).astype(dtype2)]
- idx = pd.MultiIndex.from_product(levels)
- assert idx.get_loc(idx[2]) == 2
-
-
-@pytest.mark.parametrize("level", [0, 1])
-@pytest.mark.parametrize("dtypes", [[int, float], [float, int]])
-def test_get_loc_implicit_cast(level, dtypes):
- # GH 18818, GH 15994 : as flat index, cast int to float and vice-versa
- levels = [["a", "b"], ["c", "d"]]
- key = ["b", "d"]
- lev_dtype, key_dtype = dtypes
- levels[level] = np.array([0, 1], dtype=lev_dtype)
- key[level] = key_dtype(1)
- idx = MultiIndex.from_product(levels)
- assert idx.get_loc(tuple(key)) == 3
-
-
-def test_get_loc_cast_bool():
- # GH 19086 : int is casted to bool, but not vice-versa
- levels = [[False, True], np.arange(2, dtype="int64")]
- idx = MultiIndex.from_product(levels)
-
- assert idx.get_loc((0, 1)) == 1
- assert idx.get_loc((1, 0)) == 2
-
- with pytest.raises(KeyError, match=r"^\(False, True\)$"):
- idx.get_loc((False, True))
- with pytest.raises(KeyError, match=r"^\(True, False\)$"):
- idx.get_loc((True, False))
-
-
-@pytest.mark.parametrize("level", [0, 1])
-def test_get_loc_nan(level, nulls_fixture):
- # GH 18485 : NaN in MultiIndex
- levels = [["a", "b"], ["c", "d"]]
- key = ["b", "d"]
- levels[level] = np.array([0, nulls_fixture], dtype=type(nulls_fixture))
- key[level] = nulls_fixture
-
- if nulls_fixture is pd.NA:
- pytest.xfail("MultiIndex from pd.NA in np.array broken; see GH 31883")
-
- idx = MultiIndex.from_product(levels)
- assert idx.get_loc(tuple(key)) == 3
-
-
-def test_get_loc_missing_nan():
- # GH 8569
- idx = MultiIndex.from_arrays([[1.0, 2.0], [3.0, 4.0]])
- assert isinstance(idx.get_loc(1), slice)
- with pytest.raises(KeyError, match=r"^3$"):
- idx.get_loc(3)
- with pytest.raises(KeyError, match=r"^nan$"):
- idx.get_loc(np.nan)
- with pytest.raises(TypeError, match="unhashable type: 'list'"):
- # listlike/non-hashable raises TypeError
- idx.get_loc([np.nan])
-
-
-def test_get_indexer_categorical_time():
- # https://github.com/pandas-dev/pandas/issues/21390
- midx = MultiIndex.from_product(
- [
- Categorical(["a", "b", "c"]),
- Categorical(date_range("2012-01-01", periods=3, freq="H")),
- ]
- )
- result = midx.get_indexer(midx)
- tm.assert_numpy_array_equal(result, np.arange(9, dtype=np.intp))
+class TestGetLoc:
+ def test_get_loc(self, idx):
+ assert idx.get_loc(("foo", "two")) == 1
+ assert idx.get_loc(("baz", "two")) == 3
+ with pytest.raises(KeyError, match=r"^10$"):
+ idx.get_loc(("bar", "two"))
+ with pytest.raises(KeyError, match=r"^'quux'$"):
+ idx.get_loc("quux")
+
+ msg = "only the default get_loc method is currently supported for MultiIndex"
+ with pytest.raises(NotImplementedError, match=msg):
+ idx.get_loc("foo", method="nearest")
+
+ # 3 levels
+ index = MultiIndex(
+ levels=[Index(np.arange(4)), Index(np.arange(4)), Index(np.arange(4))],
+ codes=[
+ np.array([0, 0, 1, 2, 2, 2, 3, 3]),
+ np.array([0, 1, 0, 0, 0, 1, 0, 1]),
+ np.array([1, 0, 1, 1, 0, 0, 1, 0]),
+ ],
+ )
+ with pytest.raises(KeyError, match=r"^\(1, 1\)$"):
+ index.get_loc((1, 1))
+ assert index.get_loc((2, 0)) == slice(3, 5)
+
+ def test_get_loc_duplicates(self):
+ index = Index([2, 2, 2, 2])
+ result = index.get_loc(2)
+ expected = slice(0, 4)
+ assert result == expected
+ # FIXME: dont leave commented-out
+ # pytest.raises(Exception, index.get_loc, 2)
+
+ index = Index(["c", "a", "a", "b", "b"])
+ rs = index.get_loc("c")
+ xp = 0
+ assert rs == xp
+
+ def test_get_loc_level(self):
+ index = MultiIndex(
+ levels=[Index(np.arange(4)), Index(np.arange(4)), Index(np.arange(4))],
+ codes=[
+ np.array([0, 0, 1, 2, 2, 2, 3, 3]),
+ np.array([0, 1, 0, 0, 0, 1, 0, 1]),
+ np.array([1, 0, 1, 1, 0, 0, 1, 0]),
+ ],
+ )
+ loc, new_index = index.get_loc_level((0, 1))
+ expected = slice(1, 2)
+ exp_index = index[expected].droplevel(0).droplevel(0)
+ assert loc == expected
+ assert new_index.equals(exp_index)
+
+ loc, new_index = index.get_loc_level((0, 1, 0))
+ expected = 1
+ assert loc == expected
+ assert new_index is None
+
+ with pytest.raises(KeyError, match=r"^\(2, 2\)$"):
+ index.get_loc_level((2, 2))
+ # GH 22221: unused label
+ with pytest.raises(KeyError, match=r"^2$"):
+ index.drop(2).get_loc_level(2)
+ # Unused label on unsorted level:
+ with pytest.raises(KeyError, match=r"^2$"):
+ index.drop(1, level=2).get_loc_level(2, level=2)
+
+ index = MultiIndex(
+ levels=[[2000], list(range(4))],
+ codes=[np.array([0, 0, 0, 0]), np.array([0, 1, 2, 3])],
+ )
+ result, new_index = index.get_loc_level((2000, slice(None, None)))
+ expected = slice(None, None)
+ assert result == expected
+ assert new_index.equals(index.droplevel(0))
+
+ @pytest.mark.parametrize("dtype1", [int, float, bool, str])
+ @pytest.mark.parametrize("dtype2", [int, float, bool, str])
+ def test_get_loc_multiple_dtypes(self, dtype1, dtype2):
+ # GH 18520
+ levels = [np.array([0, 1]).astype(dtype1), np.array([0, 1]).astype(dtype2)]
+ idx = pd.MultiIndex.from_product(levels)
+ assert idx.get_loc(idx[2]) == 2
+
+ @pytest.mark.parametrize("level", [0, 1])
+ @pytest.mark.parametrize("dtypes", [[int, float], [float, int]])
+ def test_get_loc_implicit_cast(self, level, dtypes):
+ # GH 18818, GH 15994 : as flat index, cast int to float and vice-versa
+ levels = [["a", "b"], ["c", "d"]]
+ key = ["b", "d"]
+ lev_dtype, key_dtype = dtypes
+ levels[level] = np.array([0, 1], dtype=lev_dtype)
+ key[level] = key_dtype(1)
+ idx = MultiIndex.from_product(levels)
+ assert idx.get_loc(tuple(key)) == 3
+
+ def test_get_loc_cast_bool(self):
+ # GH 19086 : int is casted to bool, but not vice-versa
+ levels = [[False, True], np.arange(2, dtype="int64")]
+ idx = MultiIndex.from_product(levels)
+
+ assert idx.get_loc((0, 1)) == 1
+ assert idx.get_loc((1, 0)) == 2
+
+ with pytest.raises(KeyError, match=r"^\(False, True\)$"):
+ idx.get_loc((False, True))
+ with pytest.raises(KeyError, match=r"^\(True, False\)$"):
+ idx.get_loc((True, False))
+
+ @pytest.mark.parametrize("level", [0, 1])
+ def test_get_loc_nan(self, level, nulls_fixture):
+ # GH 18485 : NaN in MultiIndex
+ levels = [["a", "b"], ["c", "d"]]
+ key = ["b", "d"]
+ levels[level] = np.array([0, nulls_fixture], dtype=type(nulls_fixture))
+ key[level] = nulls_fixture
+
+ if nulls_fixture is pd.NA:
+ pytest.xfail("MultiIndex from pd.NA in np.array broken; see GH 31883")
+
+ idx = MultiIndex.from_product(levels)
+ assert idx.get_loc(tuple(key)) == 3
+
+ def test_get_loc_missing_nan(self):
+ # GH 8569
+ idx = MultiIndex.from_arrays([[1.0, 2.0], [3.0, 4.0]])
+ assert isinstance(idx.get_loc(1), slice)
+ with pytest.raises(KeyError, match=r"^3$"):
+ idx.get_loc(3)
+ with pytest.raises(KeyError, match=r"^nan$"):
+ idx.get_loc(np.nan)
+ with pytest.raises(TypeError, match="unhashable type: 'list'"):
+ # listlike/non-hashable raises TypeError
+ idx.get_loc([np.nan])
+
+ def test_get_loc_with_values_including_missing_values(self):
+ # issue 19132
+ idx = MultiIndex.from_product([[np.nan, 1]] * 2)
+ expected = slice(0, 2, None)
+ assert idx.get_loc(np.nan) == expected
+
+ idx = MultiIndex.from_arrays([[np.nan, 1, 2, np.nan]])
+ expected = np.array([True, False, False, True])
+ tm.assert_numpy_array_equal(idx.get_loc(np.nan), expected)
+
+ idx = MultiIndex.from_product([[np.nan, 1]] * 3)
+ expected = slice(2, 4, None)
+ assert idx.get_loc((np.nan, 1)) == expected
def test_timestamp_multiindex_indexer():
@@ -444,45 +469,6 @@ def test_timestamp_multiindex_indexer():
tm.assert_series_equal(result, should_be)
-def test_get_loc_with_values_including_missing_values():
- # issue 19132
- idx = MultiIndex.from_product([[np.nan, 1]] * 2)
- expected = slice(0, 2, None)
- assert idx.get_loc(np.nan) == expected
-
- idx = MultiIndex.from_arrays([[np.nan, 1, 2, np.nan]])
- expected = np.array([True, False, False, True])
- tm.assert_numpy_array_equal(idx.get_loc(np.nan), expected)
-
- idx = MultiIndex.from_product([[np.nan, 1]] * 3)
- expected = slice(2, 4, None)
- assert idx.get_loc((np.nan, 1)) == expected
-
-
-@pytest.mark.parametrize(
- "index_arr,labels,expected",
- [
- (
- [[1, np.nan, 2], [3, 4, 5]],
- [1, np.nan, 2],
- np.array([-1, -1, -1], dtype=np.intp),
- ),
- ([[1, np.nan, 2], [3, 4, 5]], [(np.nan, 4)], np.array([1], dtype=np.intp)),
- ([[1, 2, 3], [np.nan, 4, 5]], [(1, np.nan)], np.array([0], dtype=np.intp)),
- (
- [[1, 2, 3], [np.nan, 4, 5]],
- [np.nan, 4, 5],
- np.array([-1, -1, -1], dtype=np.intp),
- ),
- ],
-)
-def test_get_indexer_with_missing_value(index_arr, labels, expected):
- # issue 19132
- idx = MultiIndex.from_arrays(index_arr)
- result = idx.get_indexer(labels)
- tm.assert_numpy_array_equal(result, expected)
-
-
@pytest.mark.parametrize(
"index_arr,expected,target,algo",
[
@@ -512,21 +498,3 @@ def test_slice_indexer_with_missing_value(index_arr, expected, start_idx, end_id
idx = MultiIndex.from_arrays(index_arr)
result = idx.slice_indexer(start=start_idx, end=end_idx)
assert result == expected
-
-
-@pytest.mark.parametrize(
- "index_arr,expected,start_idx,end_idx",
- [
- ([[np.nan, "a", "b"], ["c", "d", "e"]], (0, 3), np.nan, None),
- ([[np.nan, "a", "b"], ["c", "d", "e"]], (0, 3), np.nan, "b"),
- ([[np.nan, "a", "b"], ["c", "d", "e"]], (0, 3), np.nan, ("b", "e")),
- ([["a", "b", "c"], ["d", np.nan, "e"]], (1, 3), ("b", np.nan), None),
- ([["a", "b", "c"], ["d", np.nan, "e"]], (1, 3), ("b", np.nan), "c"),
- ([["a", "b", "c"], ["d", np.nan, "e"]], (1, 3), ("b", np.nan), ("c", "e")),
- ],
-)
-def test_slice_locs_with_missing_value(index_arr, expected, start_idx, end_idx):
- # issue 19132
- idx = MultiIndex.from_arrays(index_arr)
- result = idx.slice_locs(start=start_idx, end=end_idx)
- assert result == expected
| https://api.github.com/repos/pandas-dev/pandas/pulls/31863 | 2020-02-11T01:12:33Z | 2020-02-22T16:11:43Z | 2020-02-22T16:11:43Z | 2020-02-22T16:38:28Z | |
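The diff above moves `test_slice_locs_not_contained` into a test class without changing its assertions. As a minimal sketch of the behavior under test — `MultiIndex.slice_locs` uses searchsorted-style lookups, so the bound keys need not be present in the index — the following reproduces two of the assertions from the diff:

```python
import pandas as pd

# Rebuild the index from test_slice_locs_not_contained in the diff above
index = pd.MultiIndex(
    levels=[[0, 2, 4, 6], [0, 2, 4]],
    codes=[[0, 0, 0, 1, 1, 2, 3, 3, 3], [0, 1, 2, 1, 2, 2, 0, 1, 2]],
)

# Neither (1, 0) nor (5, 2) is in the index; slice_locs still returns
# positional bounds via a searchsorted-style lookup.
print(index.slice_locs((1, 0), (5, 2)))  # (3, 6), per the test's assertion
print(index.slice_locs(-1, 10))          # (0, 9) -- the full range
```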
DOC: Specify use of Google Cloud Storage for CSVs | diff --git a/pandas/io/parsers.py b/pandas/io/parsers.py
index 8bc8470ae7658..4e26ceef0af26 100755
--- a/pandas/io/parsers.py
+++ b/pandas/io/parsers.py
@@ -89,7 +89,7 @@
----------
filepath_or_buffer : str, path object or file-like object
Any valid string path is acceptable. The string could be a URL. Valid
- URL schemes include http, ftp, s3, and file. For file URLs, a host is
+ URL schemes include http, ftp, s3, gs, and file. For file URLs, a host is
expected. A local file could be: file://localhost/path/to/table.csv.
If you want to pass in a path object, pandas accepts any ``os.PathLike``.
| - [x] tests added / passed
- [ ] passes `black pandas`
- [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/31860 | 2020-02-10T23:18:12Z | 2020-02-11T02:01:49Z | 2020-02-11T02:01:49Z | 2020-02-11T09:59:01Z |
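The doc change above adds `gs` to the URL schemes accepted by `read_csv`'s `filepath_or_buffer` parameter. A hedged sketch of that parameter's flexibility (the `gs://` bucket path below is hypothetical and requires the optional `gcsfs` dependency, so the runnable part uses an in-memory buffer instead):

```python
import io
import pandas as pd

# read_csv accepts a path string, a URL, or a file-like object.  With gcsfs
# installed, a Google Cloud Storage URL works the same way (hypothetical bucket):
#     df = pd.read_csv("gs://my-bucket/table.csv")

# A self-contained equivalent using the file-like-object form:
buf = io.StringIO("a,b\n1,2\n3,4\n")
df = pd.read_csv(buf)
print(df.shape)  # (2, 2)
```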
REF: move loc-only methods to loc | diff --git a/pandas/core/indexing.py b/pandas/core/indexing.py
index 46017377f2b9c..36140d3213ce1 100755
--- a/pandas/core/indexing.py
+++ b/pandas/core/indexing.py
@@ -577,18 +577,6 @@ def __call__(self, axis=None):
new_self.axis = axis
return new_self
- def _get_label(self, label, axis: int):
- if self.ndim == 1:
- # for perf reasons we want to try _xs first
- # as its basically direct indexing
- # but will fail when the index is not present
- # see GH5667
- return self.obj._xs(label, axis=axis)
- elif isinstance(label, tuple) and isinstance(label[axis], slice):
- raise IndexingError("no slices here, handle elsewhere")
-
- return self.obj._xs(label, axis=axis)
-
def _get_setitem_indexer(self, key):
"""
Convert a potentially-label-based key into a positional indexer.
@@ -700,23 +688,6 @@ def _convert_tuple(self, key, is_setter: bool = False):
keyidx.append(idx)
return tuple(keyidx)
- def _handle_lowerdim_multi_index_axis0(self, tup: Tuple):
- # we have an axis0 multi-index, handle or raise
- axis = self.axis or 0
- try:
- # fast path for series or for tup devoid of slices
- return self._get_label(tup, axis=axis)
- except TypeError:
- # slices are unhashable
- pass
- except KeyError as ek:
- # raise KeyError if number of indexers match
- # else IndexingError will be raised
- if len(tup) <= self.obj.index.nlevels and len(tup) > self.ndim:
- raise ek
-
- return None
-
def _getitem_tuple_same_dim(self, tup: Tuple):
"""
Index with indexers that should return an object of the same dimension
@@ -798,6 +769,9 @@ def _getitem_nested_tuple(self, tup: Tuple):
# multi-index dimension, try to see if we have something like
# a tuple passed to a series with a multi-index
if len(tup) > self.ndim:
+ if self.name != "loc":
+ # This should never be reached, but lets be explicit about it
+ raise ValueError("Too many indices")
result = self._handle_lowerdim_multi_index_axis0(tup)
if result is not None:
return result
@@ -1069,6 +1043,35 @@ def _getitem_tuple(self, tup: Tuple):
return self._getitem_tuple_same_dim(tup)
+ def _get_label(self, label, axis: int):
+ if self.ndim == 1:
+ # for perf reasons we want to try _xs first
+ # as its basically direct indexing
+ # but will fail when the index is not present
+ # see GH5667
+ return self.obj._xs(label, axis=axis)
+ elif isinstance(label, tuple) and isinstance(label[axis], slice):
+ raise IndexingError("no slices here, handle elsewhere")
+
+ return self.obj._xs(label, axis=axis)
+
+ def _handle_lowerdim_multi_index_axis0(self, tup: Tuple):
+ # we have an axis0 multi-index, handle or raise
+ axis = self.axis or 0
+ try:
+ # fast path for series or for tup devoid of slices
+ return self._get_label(tup, axis=axis)
+ except TypeError:
+ # slices are unhashable
+ pass
+ except KeyError as ek:
+ # raise KeyError if number of indexers match
+ # else IndexingError will be raised
+ if len(tup) <= self.obj.index.nlevels and len(tup) > self.ndim:
+ raise ek
+
+ return None
+
def _getitem_axis(self, key, axis: int):
key = item_from_zerodim(key)
if is_iterator(key):
| https://api.github.com/repos/pandas-dev/pandas/pulls/31859 | 2020-02-10T23:07:04Z | 2020-02-17T20:01:52Z | 2020-02-17T20:01:52Z | 2020-02-17T20:06:04Z | |
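The refactor above moves `_get_label` and `_handle_lowerdim_multi_index_axis0` onto the loc indexer, since a tuple key longer than the object's dimension is only meaningful as a label lookup on a `MultiIndex`. A minimal sketch of that label-based path (values and labels here are illustrative, not from the diff):

```python
import pandas as pd

s = pd.Series(
    [10, 20, 30],
    index=pd.MultiIndex.from_tuples([("a", 1), ("a", 2), ("b", 1)]),
)

# The tuple has two elements but the Series is one-dimensional; .loc resolves
# it label-wise against the MultiIndex (the axis-0 lower-dim path in the diff).
print(s.loc[("a", 2)])  # 10? no -- 20, the value at label ("a", 2)
```

`.iloc` is purely positional, which is why the diff makes it raise explicitly instead of attempting this fallback.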
REF: move Loc method that belongs on Index | diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index 719bf13cbd313..f5d6cc19f1332 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -3101,6 +3101,16 @@ def _filter_indexer_tolerance(
# --------------------------------------------------------------------
# Indexer Conversion Methods
+ def _get_partial_string_timestamp_match_key(self, key):
+ """
+ Translate any partial string timestamp matches in key, returning the
+ new key.
+
+ Only relevant for MultiIndex.
+ """
+ # GH#10331
+ return key
+
def _convert_scalar_indexer(self, key, kind: str_t):
"""
Convert a scalar indexer.
diff --git a/pandas/core/indexes/multi.py b/pandas/core/indexes/multi.py
index ac151daac951a..3cfa511d5594b 100644
--- a/pandas/core/indexes/multi.py
+++ b/pandas/core/indexes/multi.py
@@ -2396,6 +2396,35 @@ def _convert_listlike_indexer(self, keyarr):
return indexer, keyarr
+ def _get_partial_string_timestamp_match_key(self, key):
+ """
+ Translate any partial string timestamp matches in key, returning the
+ new key.
+
+ Only relevant for MultiIndex.
+ """
+ # GH#10331
+ if isinstance(key, str) and self.levels[0]._supports_partial_string_indexing:
+ # Convert key '2016-01-01' to
+ # ('2016-01-01'[, slice(None, None, None)]+)
+ key = tuple([key] + [slice(None)] * (len(self.levels) - 1))
+
+ if isinstance(key, tuple):
+ # Convert (..., '2016-01-01', ...) in tuple to
+ # (..., slice('2016-01-01', '2016-01-01', None), ...)
+ new_key = []
+ for i, component in enumerate(key):
+ if (
+ isinstance(component, str)
+ and self.levels[i]._supports_partial_string_indexing
+ ):
+ new_key.append(slice(component, component, None))
+ else:
+ new_key.append(component)
+ key = tuple(new_key)
+
+ return key
+
@Appender(_index_shared_docs["get_indexer"] % _index_doc_kwargs)
def get_indexer(self, target, method=None, limit=None, tolerance=None):
method = missing.clean_reindex_fill_method(method)
diff --git a/pandas/core/indexing.py b/pandas/core/indexing.py
index c7dcccab00d95..e1feabb900000 100755
--- a/pandas/core/indexing.py
+++ b/pandas/core/indexing.py
@@ -980,38 +980,6 @@ def _multi_take(self, tup: Tuple):
# -------------------------------------------------------------------
- def _get_partial_string_timestamp_match_key(self, key, labels):
- """
- Translate any partial string timestamp matches in key, returning the
- new key.
-
- (GH 10331)
- """
- if isinstance(labels, ABCMultiIndex):
- if (
- isinstance(key, str)
- and labels.levels[0]._supports_partial_string_indexing
- ):
- # Convert key '2016-01-01' to
- # ('2016-01-01'[, slice(None, None, None)]+)
- key = tuple([key] + [slice(None)] * (len(labels.levels) - 1))
-
- if isinstance(key, tuple):
- # Convert (..., '2016-01-01', ...) in tuple to
- # (..., slice('2016-01-01', '2016-01-01', None), ...)
- new_key = []
- for i, component in enumerate(key):
- if (
- isinstance(component, str)
- and labels.levels[i]._supports_partial_string_indexing
- ):
- new_key.append(slice(component, component, None))
- else:
- new_key.append(component)
- key = tuple(new_key)
-
- return key
-
def _getitem_iterable(self, key, axis: int):
"""
Index current object with an an iterable collection of keys.
@@ -1072,7 +1040,7 @@ def _getitem_axis(self, key, axis: int):
key = list(key)
labels = self.obj._get_axis(axis)
- key = self._get_partial_string_timestamp_match_key(key, labels)
+ key = labels._get_partial_string_timestamp_match_key(key)
if isinstance(key, slice):
self._validate_key(key, axis)
| https://api.github.com/repos/pandas-dev/pandas/pulls/31857 | 2020-02-10T21:10:49Z | 2020-02-22T16:04:39Z | 2020-02-22T16:04:39Z | 2020-02-22T19:38:00Z | |
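The method moved above expands a partial date string such as `'2016-01-01'` into `('2016-01-01', slice(None), ...)` when the first level of a `MultiIndex` supports partial string indexing. A small sketch of the user-facing behavior (the index values are illustrative):

```python
import pandas as pd

idx = pd.MultiIndex.from_product(
    [pd.date_range("2016-01-01", periods=2), ["x", "y"]]
)
s = pd.Series(range(4), index=idx)

# The bare date string only addresses the first (datetime) level; the key is
# expanded to cover the second level with slice(None), selecting both rows.
result = s.loc["2016-01-01"]
print(len(result))  # 2
```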
TST: parametrize tests.indexing.test_float | diff --git a/pandas/tests/indexing/test_floats.py b/pandas/tests/indexing/test_floats.py
index 80a4d81b20a13..7c4fe286f4416 100644
--- a/pandas/tests/indexing/test_floats.py
+++ b/pandas/tests/indexing/test_floats.py
@@ -5,6 +5,16 @@
import pandas._testing as tm
+def gen_obj(klass, index):
+ if klass is Series:
+ obj = Series(np.arange(len(index)), index=index)
+ else:
+ obj = DataFrame(
+ np.random.randn(len(index), len(index)), index=index, columns=index
+ )
+ return obj
+
+
class TestFloatIndexers:
def check(self, result, original, indexer, getitem):
"""
@@ -70,97 +80,107 @@ def test_scalar_error(self, index_func):
tm.makePeriodIndex,
],
)
- def test_scalar_non_numeric(self, index_func):
+ @pytest.mark.parametrize("klass", [Series, DataFrame])
+ def test_scalar_non_numeric(self, index_func, klass):
# GH 4892
# float_indexers should raise exceptions
# on appropriate Index types & accessors
i = index_func(5)
+ s = gen_obj(klass, i)
- for s in [
- Series(np.arange(len(i)), index=i),
- DataFrame(np.random.randn(len(i), len(i)), index=i, columns=i),
- ]:
-
- # getting
- for idxr, getitem in [(lambda x: x.iloc, False), (lambda x: x, True)]:
+ # getting
+ for idxr, getitem in [(lambda x: x.iloc, False), (lambda x: x, True)]:
- # gettitem on a DataFrame is a KeyError as it is indexing
- # via labels on the columns
- if getitem and isinstance(s, DataFrame):
- error = KeyError
- msg = r"^3(\.0)?$"
- else:
- error = TypeError
- msg = (
- r"cannot do (label|positional) indexing "
- fr"on {type(i).__name__} with these indexers \[3\.0\] of "
- r"type float|"
- "Cannot index by location index with a "
- "non-integer key"
- )
- with pytest.raises(error, match=msg):
- idxr(s)[3.0]
-
- # label based can be a TypeError or KeyError
- if s.index.inferred_type in {
- "categorical",
- "string",
- "unicode",
- "mixed",
- }:
+            # getitem on a DataFrame is a KeyError as it is indexing
+ # via labels on the columns
+ if getitem and isinstance(s, DataFrame):
error = KeyError
- msg = r"^3\.0$"
+ msg = r"^3(\.0)?$"
else:
error = TypeError
msg = (
r"cannot do (label|positional) indexing "
fr"on {type(i).__name__} with these indexers \[3\.0\] of "
- "type float"
+ r"type float|"
+ "Cannot index by location index with a "
+ "non-integer key"
)
with pytest.raises(error, match=msg):
- s.loc[3.0]
-
- # contains
- assert 3.0 not in s
-
- # setting with a float fails with iloc
+ idxr(s)[3.0]
+
+ # label based can be a TypeError or KeyError
+ if s.index.inferred_type in {
+ "categorical",
+ "string",
+ "unicode",
+ "mixed",
+ }:
+ error = KeyError
+ msg = r"^3\.0$"
+ else:
+ error = TypeError
msg = (
r"cannot do (label|positional) indexing "
fr"on {type(i).__name__} with these indexers \[3\.0\] of "
"type float"
)
- with pytest.raises(TypeError, match=msg):
- s.iloc[3.0] = 0
-
- # setting with an indexer
- if s.index.inferred_type in ["categorical"]:
- # Value or Type Error
- pass
- elif s.index.inferred_type in ["datetime64", "timedelta64", "period"]:
-
- # these should prob work
- # and are inconsistent between series/dataframe ATM
- # for idxr in [lambda x: x]:
- # s2 = s.copy()
- #
- # with pytest.raises(TypeError):
- # idxr(s2)[3.0] = 0
- pass
+ with pytest.raises(error, match=msg):
+ s.loc[3.0]
- else:
+ # contains
+ assert 3.0 not in s
+
+ # setting with a float fails with iloc
+ msg = (
+ r"cannot do (label|positional) indexing "
+ fr"on {type(i).__name__} with these indexers \[3\.0\] of "
+ "type float"
+ )
+ with pytest.raises(TypeError, match=msg):
+ s.iloc[3.0] = 0
+
+ # setting with an indexer
+ if s.index.inferred_type in ["categorical"]:
+ # Value or Type Error
+ pass
+ elif s.index.inferred_type in ["datetime64", "timedelta64", "period"]:
+
+ # these should prob work
+ # and are inconsistent between series/dataframe ATM
+ # for idxr in [lambda x: x]:
+ # s2 = s.copy()
+ #
+ # with pytest.raises(TypeError):
+ # idxr(s2)[3.0] = 0
+ pass
+ else:
+
+ s2 = s.copy()
+ s2.loc[3.0] = 10
+ assert s2.index.is_object()
+
+ for idxr in [lambda x: x]:
s2 = s.copy()
- s2.loc[3.0] = 10
+ idxr(s2)[3.0] = 0
assert s2.index.is_object()
- for idxr in [lambda x: x]:
- s2 = s.copy()
- idxr(s2)[3.0] = 0
- assert s2.index.is_object()
-
+ @pytest.mark.parametrize(
+ "index_func",
+ [
+ tm.makeStringIndex,
+ tm.makeUnicodeIndex,
+ tm.makeCategoricalIndex,
+ tm.makeDateIndex,
+ tm.makeTimedeltaIndex,
+ tm.makePeriodIndex,
+ ],
+ )
+ def test_scalar_non_numeric_series_fallback(self, index_func):
        # falls back to position selection, series only
+ i = index_func(5)
s = Series(np.arange(len(i)), index=i)
s[3]
msg = (
@@ -178,16 +198,16 @@ def test_scalar_with_mixed(self):
        # lookup in a pure string index
# with an invalid indexer
- for idxr in [lambda x: x, lambda x: x.iloc]:
-
- msg = (
- "cannot do label indexing "
- fr"on {Index.__name__} with these indexers \[1\.0\] of "
- r"type float|"
- "Cannot index by location index with a non-integer key"
- )
- with pytest.raises(TypeError, match=msg):
- idxr(s2)[1.0]
+ msg = (
+ "cannot do label indexing "
+ fr"on {Index.__name__} with these indexers \[1\.0\] of "
+ r"type float|"
+ "Cannot index by location index with a non-integer key"
+ )
+ with pytest.raises(TypeError, match=msg):
+ s2[1.0]
+ with pytest.raises(TypeError, match=msg):
+ s2.iloc[1.0]
with pytest.raises(KeyError, match=r"^1\.0$"):
s2.loc[1.0]
@@ -198,19 +218,17 @@ def test_scalar_with_mixed(self):
# mixed index so we have label
# indexing
- for idxr in [lambda x: x]:
-
- msg = (
- "cannot do label indexing "
- fr"on {Index.__name__} with these indexers \[1\.0\] of "
- "type float"
- )
- with pytest.raises(TypeError, match=msg):
- idxr(s3)[1.0]
+ msg = (
+ "cannot do label indexing "
+ fr"on {Index.__name__} with these indexers \[1\.0\] of "
+ "type float"
+ )
+ with pytest.raises(TypeError, match=msg):
+ s3[1.0]
- result = idxr(s3)[1]
- expected = 2
- assert result == expected
+ result = s3[1]
+ expected = 2
+ assert result == expected
msg = "Cannot index by location index with a non-integer key"
with pytest.raises(TypeError, match=msg):
@@ -234,6 +252,7 @@ def test_scalar_integer(self, index_func, klass):
i = index_func(5)
if klass is Series:
+ # TODO: Should we be passing index=i here?
obj = Series(np.arange(len(i)))
else:
obj = DataFrame(np.random.randn(len(i), len(i)), index=i, columns=i)
@@ -273,58 +292,54 @@ def compare(x, y):
# coerce to equal int
assert 3.0 in obj
- def test_scalar_float(self):
+ @pytest.mark.parametrize("klass", [Series, DataFrame])
+ def test_scalar_float(self, klass):
# scalar float indexers work on a float index
index = Index(np.arange(5.0))
- for s in [
- Series(np.arange(len(index)), index=index),
- DataFrame(
- np.random.randn(len(index), len(index)), index=index, columns=index
- ),
- ]:
+ s = gen_obj(klass, index)
- # assert all operations except for iloc are ok
- indexer = index[3]
- for idxr, getitem in [(lambda x: x.loc, False), (lambda x: x, True)]:
+ # assert all operations except for iloc are ok
+ indexer = index[3]
+ for idxr, getitem in [(lambda x: x.loc, False), (lambda x: x, True)]:
- # getting
- result = idxr(s)[indexer]
- self.check(result, s, 3, getitem)
+ # getting
+ result = idxr(s)[indexer]
+ self.check(result, s, 3, getitem)
- # setting
- s2 = s.copy()
+ # setting
+ s2 = s.copy()
- result = idxr(s2)[indexer]
- self.check(result, s, 3, getitem)
+ result = idxr(s2)[indexer]
+ self.check(result, s, 3, getitem)
- # random integer is a KeyError
- with pytest.raises(KeyError, match=r"^3\.5$"):
- idxr(s)[3.5]
+        # random float is a KeyError
+ with pytest.raises(KeyError, match=r"^3\.5$"):
+ idxr(s)[3.5]
- # contains
- assert 3.0 in s
+ # contains
+ assert 3.0 in s
- # iloc succeeds with an integer
- expected = s.iloc[3]
- s2 = s.copy()
+ # iloc succeeds with an integer
+ expected = s.iloc[3]
+ s2 = s.copy()
- s2.iloc[3] = expected
- result = s2.iloc[3]
- self.check(result, s, 3, False)
+ s2.iloc[3] = expected
+ result = s2.iloc[3]
+ self.check(result, s, 3, False)
- # iloc raises with a float
- msg = "Cannot index by location index with a non-integer key"
- with pytest.raises(TypeError, match=msg):
- s.iloc[3.0]
+ # iloc raises with a float
+ msg = "Cannot index by location index with a non-integer key"
+ with pytest.raises(TypeError, match=msg):
+ s.iloc[3.0]
- msg = (
- "cannot do positional indexing "
- fr"on {Float64Index.__name__} with these indexers \[3\.0\] of "
- "type float"
- )
- with pytest.raises(TypeError, match=msg):
- s2.iloc[3.0] = 0
+ msg = (
+ "cannot do positional indexing "
+ fr"on {Float64Index.__name__} with these indexers \[3\.0\] of "
+ "type float"
+ )
+ with pytest.raises(TypeError, match=msg):
+ s2.iloc[3.0] = 0
@pytest.mark.parametrize(
"index_func",
@@ -336,60 +351,54 @@ def test_scalar_float(self):
tm.makePeriodIndex,
],
)
- def test_slice_non_numeric(self, index_func):
+ @pytest.mark.parametrize("l", [slice(3.0, 4), slice(3, 4.0), slice(3.0, 4.0)])
+ @pytest.mark.parametrize("klass", [Series, DataFrame])
+ def test_slice_non_numeric(self, index_func, l, klass):
# GH 4892
# float_indexers should raise exceptions
# on appropriate Index types & accessors
index = index_func(5)
- for s in [
- Series(range(5), index=index),
- DataFrame(np.random.randn(5, 2), index=index),
- ]:
-
- # getitem
- for l in [slice(3.0, 4), slice(3, 4.0), slice(3.0, 4.0)]:
-
- msg = (
- "cannot do positional indexing "
- fr"on {type(index).__name__} with these indexers \[(3|4)\.0\] of "
- "type float"
- )
- with pytest.raises(TypeError, match=msg):
- s.iloc[l]
+ s = gen_obj(klass, index)
- for idxr in [lambda x: x.loc, lambda x: x.iloc, lambda x: x]:
+ # getitem
+ msg = (
+ "cannot do positional indexing "
+ fr"on {type(index).__name__} with these indexers \[(3|4)\.0\] of "
+ "type float"
+ )
+ with pytest.raises(TypeError, match=msg):
+ s.iloc[l]
- msg = (
- "cannot do (slice|positional) indexing "
- fr"on {type(index).__name__} with these indexers "
- r"\[(3|4)(\.0)?\] "
- r"of type (float|int)"
- )
- with pytest.raises(TypeError, match=msg):
- idxr(s)[l]
+ msg = (
+ "cannot do (slice|positional) indexing "
+ fr"on {type(index).__name__} with these indexers "
+ r"\[(3|4)(\.0)?\] "
+ r"of type (float|int)"
+ )
+ for idxr in [lambda x: x.loc, lambda x: x.iloc, lambda x: x]:
+ with pytest.raises(TypeError, match=msg):
+ idxr(s)[l]
- # setitem
- for l in [slice(3.0, 4), slice(3, 4.0), slice(3.0, 4.0)]:
+ # setitem
+ msg = (
+ "cannot do positional indexing "
+ fr"on {type(index).__name__} with these indexers \[(3|4)\.0\] of "
+ "type float"
+ )
+ with pytest.raises(TypeError, match=msg):
+ s.iloc[l] = 0
- msg = (
- "cannot do positional indexing "
- fr"on {type(index).__name__} with these indexers \[(3|4)\.0\] of "
- "type float"
- )
- with pytest.raises(TypeError, match=msg):
- s.iloc[l] = 0
-
- for idxr in [lambda x: x.loc, lambda x: x.iloc, lambda x: x]:
- msg = (
- "cannot do (slice|positional) indexing "
- fr"on {type(index).__name__} with these indexers "
- r"\[(3|4)(\.0)?\] "
- r"of type (float|int)"
- )
- with pytest.raises(TypeError, match=msg):
- idxr(s)[l] = 0
+ msg = (
+ "cannot do (slice|positional) indexing "
+ fr"on {type(index).__name__} with these indexers "
+ r"\[(3|4)(\.0)?\] "
+ r"of type (float|int)"
+ )
+ for idxr in [lambda x: x.loc, lambda x: x.iloc, lambda x: x]:
+ with pytest.raises(TypeError, match=msg):
+ idxr(s)[l] = 0
def test_slice_integer(self):
@@ -409,18 +418,16 @@ def test_slice_integer(self):
# getitem
for l in [slice(3.0, 4), slice(3, 4.0), slice(3.0, 4.0)]:
- for idxr in [lambda x: x.loc]:
-
- result = idxr(s)[l]
+ result = s.loc[l]
- # these are all label indexing
- # except getitem which is positional
- # empty
- if oob:
- indexer = slice(0, 0)
- else:
- indexer = slice(3, 5)
- self.check(result, s, indexer, False)
+ # these are all label indexing
+ # except getitem which is positional
+ # empty
+ if oob:
+ indexer = slice(0, 0)
+ else:
+ indexer = slice(3, 5)
+ self.check(result, s, indexer, False)
# positional indexing
msg = (
@@ -434,17 +441,16 @@ def test_slice_integer(self):
# getitem out-of-bounds
for l in [slice(-6, 6), slice(-6.0, 6.0)]:
- for idxr in [lambda x: x.loc]:
- result = idxr(s)[l]
+ result = s.loc[l]
- # these are all label indexing
- # except getitem which is positional
- # empty
- if oob:
- indexer = slice(0, 0)
- else:
- indexer = slice(-6, 6)
- self.check(result, s, indexer, False)
+ # these are all label indexing
+ # except getitem which is positional
+ # empty
+ if oob:
+ indexer = slice(0, 0)
+ else:
+ indexer = slice(-6, 6)
+ self.check(result, s, indexer, False)
# positional indexing
msg = (
@@ -462,15 +468,13 @@ def test_slice_integer(self):
(slice(2.5, 3.5), slice(3, 4)),
]:
- for idxr in [lambda x: x.loc]:
-
- result = idxr(s)[l]
- if oob:
- res = slice(0, 0)
- else:
- res = res1
+ result = s.loc[l]
+ if oob:
+ res = slice(0, 0)
+ else:
+ res = res1
- self.check(result, s, res, False)
+ self.check(result, s, res, False)
# positional indexing
msg = (
@@ -484,11 +488,10 @@ def test_slice_integer(self):
# setitem
for l in [slice(3.0, 4), slice(3, 4.0), slice(3.0, 4.0)]:
- for idxr in [lambda x: x.loc]:
- sc = s.copy()
- idxr(sc)[l] = 0
- result = idxr(sc)[l].values.ravel()
- assert (result == 0).all()
+ sc = s.copy()
+ sc.loc[l] = 0
+ result = sc.loc[l].values.ravel()
+ assert (result == 0).all()
# positional indexing
msg = (
@@ -499,7 +502,8 @@ def test_slice_integer(self):
with pytest.raises(TypeError, match=msg):
s[l] = 0
- def test_integer_positional_indexing(self):
+ @pytest.mark.parametrize("l", [slice(2, 4.0), slice(2.0, 4), slice(2.0, 4.0)])
+ def test_integer_positional_indexing(self, l):
""" make sure that we are raising on positional indexing
w.r.t. an integer index
"""
@@ -509,18 +513,16 @@ def test_integer_positional_indexing(self):
expected = s.iloc[2:4]
tm.assert_series_equal(result, expected)
- for idxr in [lambda x: x, lambda x: x.iloc]:
-
- for l in [slice(2, 4.0), slice(2.0, 4), slice(2.0, 4.0)]:
-
- klass = RangeIndex
- msg = (
- "cannot do (slice|positional) indexing "
- fr"on {klass.__name__} with these indexers \[(2|4)\.0\] of "
- "type float"
- )
- with pytest.raises(TypeError, match=msg):
- idxr(s)[l]
+ klass = RangeIndex
+ msg = (
+ "cannot do (slice|positional) indexing "
+ fr"on {klass.__name__} with these indexers \[(2|4)\.0\] of "
+ "type float"
+ )
+ with pytest.raises(TypeError, match=msg):
+ s[l]
+ with pytest.raises(TypeError, match=msg):
+ s.iloc[l]
@pytest.mark.parametrize(
"index_func", [tm.makeIntIndex, tm.makeRangeIndex],
@@ -532,102 +534,95 @@ def test_slice_integer_frame_getitem(self, index_func):
s = DataFrame(np.random.randn(5, 2), index=index)
- def f(idxr):
-
- # getitem
- for l in [slice(0.0, 1), slice(0, 1.0), slice(0.0, 1.0)]:
-
- result = idxr(s)[l]
- indexer = slice(0, 2)
- self.check(result, s, indexer, False)
-
- # positional indexing
- msg = (
- "cannot do slice indexing "
- fr"on {type(index).__name__} with these indexers \[(0|1)\.0\] of "
- "type float"
- )
- with pytest.raises(TypeError, match=msg):
- s[l]
-
- # getitem out-of-bounds
- for l in [slice(-10, 10), slice(-10.0, 10.0)]:
+ # getitem
+ for l in [slice(0.0, 1), slice(0, 1.0), slice(0.0, 1.0)]:
- result = idxr(s)[l]
- self.check(result, s, slice(-10, 10), True)
+ result = s.loc[l]
+ indexer = slice(0, 2)
+ self.check(result, s, indexer, False)
# positional indexing
msg = (
"cannot do slice indexing "
- fr"on {type(index).__name__} with these indexers \[-10\.0\] of "
+ fr"on {type(index).__name__} with these indexers \[(0|1)\.0\] of "
"type float"
)
with pytest.raises(TypeError, match=msg):
- s[slice(-10.0, 10.0)]
+ s[l]
- # getitem odd floats
- for l, res in [
- (slice(0.5, 1), slice(1, 2)),
- (slice(0, 0.5), slice(0, 1)),
- (slice(0.5, 1.5), slice(1, 2)),
- ]:
+ # getitem out-of-bounds
+ for l in [slice(-10, 10), slice(-10.0, 10.0)]:
- result = idxr(s)[l]
- self.check(result, s, res, False)
+ result = s.loc[l]
+ self.check(result, s, slice(-10, 10), True)
- # positional indexing
- msg = (
- "cannot do slice indexing "
- fr"on {type(index).__name__} with these indexers \[0\.5\] of "
- "type float"
- )
- with pytest.raises(TypeError, match=msg):
- s[l]
+ # positional indexing
+ msg = (
+ "cannot do slice indexing "
+ fr"on {type(index).__name__} with these indexers \[-10\.0\] of "
+ "type float"
+ )
+ with pytest.raises(TypeError, match=msg):
+ s[slice(-10.0, 10.0)]
- # setitem
- for l in [slice(3.0, 4), slice(3, 4.0), slice(3.0, 4.0)]:
+ # getitem odd floats
+ for l, res in [
+ (slice(0.5, 1), slice(1, 2)),
+ (slice(0, 0.5), slice(0, 1)),
+ (slice(0.5, 1.5), slice(1, 2)),
+ ]:
- sc = s.copy()
- idxr(sc)[l] = 0
- result = idxr(sc)[l].values.ravel()
- assert (result == 0).all()
+ result = s.loc[l]
+ self.check(result, s, res, False)
- # positional indexing
- msg = (
- "cannot do slice indexing "
- fr"on {type(index).__name__} with these indexers \[(3|4)\.0\] of "
- "type float"
- )
- with pytest.raises(TypeError, match=msg):
- s[l] = 0
+ # positional indexing
+ msg = (
+ "cannot do slice indexing "
+ fr"on {type(index).__name__} with these indexers \[0\.5\] of "
+ "type float"
+ )
+ with pytest.raises(TypeError, match=msg):
+ s[l]
- f(lambda x: x.loc)
+ # setitem
+ for l in [slice(3.0, 4), slice(3, 4.0), slice(3.0, 4.0)]:
+
+ sc = s.copy()
+ sc.loc[l] = 0
+ result = sc.loc[l].values.ravel()
+ assert (result == 0).all()
+
+ # positional indexing
+ msg = (
+ "cannot do slice indexing "
+ fr"on {type(index).__name__} with these indexers \[(3|4)\.0\] of "
+ "type float"
+ )
+ with pytest.raises(TypeError, match=msg):
+ s[l] = 0
- def test_slice_float(self):
+ @pytest.mark.parametrize("l", [slice(3.0, 4), slice(3, 4.0), slice(3.0, 4.0)])
+ @pytest.mark.parametrize("klass", [Series, DataFrame])
+ def test_slice_float(self, l, klass):
# same as above, but for floats
index = Index(np.arange(5.0)) + 0.1
- for s in [
- Series(range(5), index=index),
- DataFrame(np.random.randn(5, 2), index=index),
- ]:
+ s = gen_obj(klass, index)
- for l in [slice(3.0, 4), slice(3, 4.0), slice(3.0, 4.0)]:
+ expected = s.iloc[3:4]
+ for idxr in [lambda x: x.loc, lambda x: x]:
- expected = s.iloc[3:4]
- for idxr in [lambda x: x.loc, lambda x: x]:
-
- # getitem
- result = idxr(s)[l]
- if isinstance(s, Series):
- tm.assert_series_equal(result, expected)
- else:
- tm.assert_frame_equal(result, expected)
- # setitem
- s2 = s.copy()
- idxr(s2)[l] = 0
- result = idxr(s2)[l].values.ravel()
- assert (result == 0).all()
+ # getitem
+ result = idxr(s)[l]
+ if isinstance(s, Series):
+ tm.assert_series_equal(result, expected)
+ else:
+ tm.assert_frame_equal(result, expected)
+ # setitem
+ s2 = s.copy()
+ idxr(s2)[l] = 0
+ result = idxr(s2)[l].values.ravel()
+ assert (result == 0).all()
def test_floating_index_doc_example(self):
Working on making our exception-raising more consistent; cleaning up these tests will make that easier. | https://api.github.com/repos/pandas-dev/pandas/pulls/31855 | 2020-02-10T20:45:41Z | 2020-02-15T00:05:05Z | 2020-02-15T00:05:05Z | 2020-02-15T01:53:31Z |
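The diff above converts bare `pytest.raises(...)` calls into `pytest.raises(..., match=msg)` assertions. As a hedged, minimal sketch (not pandas code), the `match` argument behaves roughly like an `re.search` against the stringified exception:

```python
import re

def check_raises(func, exc_type, pattern):
    """Rough stand-in for pytest.raises(exc_type, match=pattern):
    True only if func raises exc_type AND the message matches the
    regex (searched, not fully matched)."""
    try:
        func()
    except exc_type as err:
        return re.search(pattern, str(err)) is not None
    return False

def bad_index():
    # message shaped like the ones asserted in the diff above
    raise TypeError(
        "cannot do label indexing on Index with these indexers [3.0] of type float"
    )

assert check_raises(bad_index, TypeError, r"indexers \[3\.0\] of type float")
assert not check_raises(bad_index, TypeError, r"unrelated message")
```

Pinning the message down with `match` makes a test fail loudly if a different error path later starts raising the same exception type.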
REF: make Series/DataFrame _slice always positional | diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 004b92176f030..dfafb1057a543 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -3498,13 +3498,11 @@ def _iget_item_cache(self, item):
def _box_item_values(self, key, values):
raise AbstractMethodError(self)
- def _slice(
- self: FrameOrSeries, slobj: slice, axis=0, kind: str = "getitem"
- ) -> FrameOrSeries:
+ def _slice(self: FrameOrSeries, slobj: slice, axis=0) -> FrameOrSeries:
"""
Construct a slice of this container.
- kind parameter is maintained for compatibility with Series slicing.
+ Slicing with this method is *always* positional.
"""
axis = self._get_block_manager_axis(axis)
result = self._constructor(self._data.get_slice(slobj, axis=axis))
diff --git a/pandas/core/indexing.py b/pandas/core/indexing.py
index c7dcccab00d95..44b3c318366d2 100755
--- a/pandas/core/indexing.py
+++ b/pandas/core/indexing.py
@@ -1146,7 +1146,7 @@ def _get_slice_axis(self, slice_obj: slice, axis: int):
)
if isinstance(indexer, slice):
- return self.obj._slice(indexer, axis=axis, kind="iloc")
+ return self.obj._slice(indexer, axis=axis)
else:
# DatetimeIndex overrides Index.slice_indexer and may
# return a DatetimeIndex instead of a slice object.
@@ -1553,7 +1553,7 @@ def _get_slice_axis(self, slice_obj: slice, axis: int):
labels = obj._get_axis(axis)
labels._validate_positional_slice(slice_obj)
- return self.obj._slice(slice_obj, axis=axis, kind="iloc")
+ return self.obj._slice(slice_obj, axis=axis)
def _convert_to_indexer(self, key, axis: int, is_setter: bool = False):
"""
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 34ebbaf79e5e9..7d74d32bf5e14 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -840,12 +840,9 @@ def _ixs(self, i: int, axis: int = 0):
"""
return self._values[i]
- def _slice(self, slobj: slice, axis: int = 0, kind: str = "getitem") -> "Series":
- assert kind in ["getitem", "iloc"]
- if kind == "getitem":
- # If called from getitem, we need to determine whether
- # this slice is positional or label-based.
- slobj = self.index._convert_slice_indexer(slobj, kind="getitem")
+ def _slice(self, slobj: slice, axis: int = 0) -> "Series":
+ # axis kwarg is retained for compat with NDFrame method
+ # _slice is *always* positional
return self._get_values(slobj)
def __getitem__(self, key):
@@ -889,7 +886,10 @@ def __getitem__(self, key):
def _get_with(self, key):
# other: fancy integer or otherwise
if isinstance(key, slice):
- return self._slice(key, kind="getitem")
+            # _convert_slice_indexer to determine if this slice is positional
+ # or label based, and if the latter, convert to positional
+ slobj = self.index._convert_slice_indexer(key, kind="getitem")
+ return self._slice(slobj)
elif isinstance(key, ABCDataFrame):
raise TypeError(
"Indexing a Series with DataFrame is not "
| ATM we have a loc/iloc kwarg that NDFrame._slice (which is in effect DataFrame._slice) ignores and always slices positionally. This makes Series._slice always slice positionally too. | https://api.github.com/repos/pandas-dev/pandas/pulls/31854 | 2020-02-10T18:08:21Z | 2020-02-11T04:12:20Z | 2020-02-11T04:12:20Z | 2020-02-11T15:44:34Z |
REF: Use nonzero in place of argwhere | diff --git a/pandas/io/stata.py b/pandas/io/stata.py
index d651fe9f67773..4e93b62a96ef2 100644
--- a/pandas/io/stata.py
+++ b/pandas/io/stata.py
@@ -1671,17 +1671,13 @@ def _do_convert_missing(self, data: DataFrame, convert_missing: bool) -> DataFra
continue
if convert_missing: # Replacement follows Stata notation
- missing_loc = np.argwhere(missing._ndarray_values)
+ missing_loc = np.nonzero(missing._ndarray_values)[0]
umissing, umissing_loc = np.unique(series[missing], return_inverse=True)
replacement = Series(series, dtype=np.object)
for j, um in enumerate(umissing):
missing_value = StataMissingValue(um)
loc = missing_loc[umissing_loc == j]
- if loc.ndim == 2 and loc.shape[1] == 1:
- # GH#31813 avoid trying to set Series values with wrong
- # dimension
- loc = loc[:, 0]
replacement.iloc[loc] = missing_value
else: # All replacements are identical
dtype = series.dtype
| Use np.nonzero to preserve a 1-d indexer for Series
xref #31813
| https://api.github.com/repos/pandas-dev/pandas/pulls/31853 | 2020-02-10T16:20:16Z | 2020-02-11T04:10:18Z | 2020-02-11T04:10:18Z | 2020-07-28T14:41:36Z |
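The motivation for the change above is a shape difference between the two NumPy calls: on a 1-D boolean mask, `np.argwhere` returns an `(n, 1)` 2-D array, whereas `np.nonzero` returns a tuple of 1-D index arrays, so `np.nonzero(mask)[0]` can be passed straight to `Series.iloc` without the dimension workaround. A small sketch:

```python
import numpy as np

mask = np.array([False, True, False, True, False])

by_argwhere = np.argwhere(mask)    # 2-D: one row per hit
by_nonzero = np.nonzero(mask)[0]   # 1-D index array

assert by_argwhere.shape == (2, 1)
assert by_nonzero.shape == (2,)
assert by_nonzero.tolist() == [1, 3]
```

With the 1-D result, the `loc.ndim == 2` special-casing removed by the diff is no longer needed.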
add messages to tests | diff --git a/pandas/tests/extension/decimal/test_decimal.py b/pandas/tests/extension/decimal/test_decimal.py
index a78e4bb34e42a..f4ffcb8d0f109 100644
--- a/pandas/tests/extension/decimal/test_decimal.py
+++ b/pandas/tests/extension/decimal/test_decimal.py
@@ -148,7 +148,8 @@ class Reduce:
def check_reduce(self, s, op_name, skipna):
if op_name in ["median", "skew", "kurt"]:
- with pytest.raises(NotImplementedError):
+ msg = r"decimal does not support the .* operation"
+ with pytest.raises(NotImplementedError, match=msg):
getattr(s, op_name)(skipna=skipna)
else:
diff --git a/pandas/tests/extension/json/test_json.py b/pandas/tests/extension/json/test_json.py
index f7ca99be2adea..d086896fb09c3 100644
--- a/pandas/tests/extension/json/test_json.py
+++ b/pandas/tests/extension/json/test_json.py
@@ -136,10 +136,11 @@ def test_custom_asserts(self):
self.assert_frame_equal(a.to_frame(), a.to_frame())
b = pd.Series(data.take([0, 0, 1]))
- with pytest.raises(AssertionError):
+ msg = r"ExtensionArray are different"
+ with pytest.raises(AssertionError, match=msg):
self.assert_series_equal(a, b)
- with pytest.raises(AssertionError):
+ with pytest.raises(AssertionError, match=msg):
self.assert_frame_equal(a.to_frame(), b.to_frame())
diff --git a/pandas/tests/extension/test_boolean.py b/pandas/tests/extension/test_boolean.py
index 0c6b187eac1fc..e2331b69916fb 100644
--- a/pandas/tests/extension/test_boolean.py
+++ b/pandas/tests/extension/test_boolean.py
@@ -112,9 +112,9 @@ def _check_op(self, s, op, other, op_name, exc=NotImplementedError):
# subtraction for bools raises TypeError (but not yet in 1.13)
if _np_version_under1p14:
pytest.skip("__sub__ does not yet raise in numpy 1.13")
- with pytest.raises(TypeError):
+ msg = r"numpy boolean subtract"
+ with pytest.raises(TypeError, match=msg):
op(s, other)
-
return
result = op(s, other)
diff --git a/pandas/tests/extension/test_categorical.py b/pandas/tests/extension/test_categorical.py
index 336b23e54d74c..69a97f5c9fe02 100644
--- a/pandas/tests/extension/test_categorical.py
+++ b/pandas/tests/extension/test_categorical.py
@@ -278,7 +278,8 @@ def _compare_other(self, s, data, op_name, other):
assert (result == expected).all()
else:
- with pytest.raises(TypeError):
+ msg = "Unordered Categoricals can only compare equality or not"
+ with pytest.raises(TypeError, match=msg):
op(data, other)
diff --git a/pandas/tests/frame/indexing/test_categorical.py b/pandas/tests/frame/indexing/test_categorical.py
index 3a472a8b58b6c..f5b3f980cc534 100644
--- a/pandas/tests/frame/indexing/test_categorical.py
+++ b/pandas/tests/frame/indexing/test_categorical.py
@@ -115,7 +115,12 @@ def test_assigning_ops(self):
tm.assert_frame_equal(df, exp_single_cats_value)
# - assign a single value not in the current categories set
- with pytest.raises(ValueError):
+ msg1 = (
+ "Cannot setitem on a Categorical with a new category, "
+ "set the categories first"
+ )
+ msg2 = "Cannot set a Categorical with another, without identical categories"
+ with pytest.raises(ValueError, match=msg1):
df = orig.copy()
df.iloc[2, 0] = "c"
@@ -125,7 +130,7 @@ def test_assigning_ops(self):
tm.assert_frame_equal(df, exp_single_row)
# - assign a complete row (mixed values) not in categories set
- with pytest.raises(ValueError):
+ with pytest.raises(ValueError, match=msg1):
df = orig.copy()
df.iloc[2, :] = ["c", 2]
@@ -134,7 +139,7 @@ def test_assigning_ops(self):
df.iloc[2:4, :] = [["b", 2], ["b", 2]]
tm.assert_frame_equal(df, exp_multi_row)
- with pytest.raises(ValueError):
+ with pytest.raises(ValueError, match=msg1):
df = orig.copy()
df.iloc[2:4, :] = [["c", 2], ["c", 2]]
@@ -144,12 +149,12 @@ def test_assigning_ops(self):
df.iloc[2:4, 0] = Categorical(["b", "b"], categories=["a", "b"])
tm.assert_frame_equal(df, exp_parts_cats_col)
- with pytest.raises(ValueError):
+ with pytest.raises(ValueError, match=msg2):
# different categories -> not sure if this should fail or pass
df = orig.copy()
df.iloc[2:4, 0] = Categorical(list("bb"), categories=list("abc"))
- with pytest.raises(ValueError):
+ with pytest.raises(ValueError, match=msg2):
# different values
df = orig.copy()
df.iloc[2:4, 0] = Categorical(list("cc"), categories=list("abc"))
@@ -160,7 +165,7 @@ def test_assigning_ops(self):
df.iloc[2:4, 0] = ["b", "b"]
tm.assert_frame_equal(df, exp_parts_cats_col)
- with pytest.raises(ValueError):
+ with pytest.raises(ValueError, match=msg1):
df.iloc[2:4, 0] = ["c", "c"]
# loc
@@ -175,7 +180,7 @@ def test_assigning_ops(self):
tm.assert_frame_equal(df, exp_single_cats_value)
# - assign a single value not in the current categories set
- with pytest.raises(ValueError):
+ with pytest.raises(ValueError, match=msg1):
df = orig.copy()
df.loc["j", "cats"] = "c"
@@ -185,7 +190,7 @@ def test_assigning_ops(self):
tm.assert_frame_equal(df, exp_single_row)
# - assign a complete row (mixed values) not in categories set
- with pytest.raises(ValueError):
+ with pytest.raises(ValueError, match=msg1):
df = orig.copy()
df.loc["j", :] = ["c", 2]
@@ -194,7 +199,7 @@ def test_assigning_ops(self):
df.loc["j":"k", :] = [["b", 2], ["b", 2]]
tm.assert_frame_equal(df, exp_multi_row)
- with pytest.raises(ValueError):
+ with pytest.raises(ValueError, match=msg1):
df = orig.copy()
df.loc["j":"k", :] = [["c", 2], ["c", 2]]
@@ -204,14 +209,14 @@ def test_assigning_ops(self):
df.loc["j":"k", "cats"] = Categorical(["b", "b"], categories=["a", "b"])
tm.assert_frame_equal(df, exp_parts_cats_col)
- with pytest.raises(ValueError):
+ with pytest.raises(ValueError, match=msg2):
# different categories -> not sure if this should fail or pass
df = orig.copy()
df.loc["j":"k", "cats"] = Categorical(
["b", "b"], categories=["a", "b", "c"]
)
- with pytest.raises(ValueError):
+ with pytest.raises(ValueError, match=msg2):
# different values
df = orig.copy()
df.loc["j":"k", "cats"] = Categorical(
@@ -224,7 +229,7 @@ def test_assigning_ops(self):
df.loc["j":"k", "cats"] = ["b", "b"]
tm.assert_frame_equal(df, exp_parts_cats_col)
- with pytest.raises(ValueError):
+ with pytest.raises(ValueError, match=msg1):
df.loc["j":"k", "cats"] = ["c", "c"]
# loc
@@ -239,7 +244,7 @@ def test_assigning_ops(self):
tm.assert_frame_equal(df, exp_single_cats_value)
# - assign a single value not in the current categories set
- with pytest.raises(ValueError):
+ with pytest.raises(ValueError, match=msg1):
df = orig.copy()
df.loc["j", df.columns[0]] = "c"
@@ -249,7 +254,7 @@ def test_assigning_ops(self):
tm.assert_frame_equal(df, exp_single_row)
# - assign a complete row (mixed values) not in categories set
- with pytest.raises(ValueError):
+ with pytest.raises(ValueError, match=msg1):
df = orig.copy()
df.loc["j", :] = ["c", 2]
@@ -258,7 +263,7 @@ def test_assigning_ops(self):
df.loc["j":"k", :] = [["b", 2], ["b", 2]]
tm.assert_frame_equal(df, exp_multi_row)
- with pytest.raises(ValueError):
+ with pytest.raises(ValueError, match=msg1):
df = orig.copy()
df.loc["j":"k", :] = [["c", 2], ["c", 2]]
@@ -268,14 +273,14 @@ def test_assigning_ops(self):
df.loc["j":"k", df.columns[0]] = Categorical(["b", "b"], categories=["a", "b"])
tm.assert_frame_equal(df, exp_parts_cats_col)
- with pytest.raises(ValueError):
+ with pytest.raises(ValueError, match=msg2):
# different categories -> not sure if this should fail or pass
df = orig.copy()
df.loc["j":"k", df.columns[0]] = Categorical(
["b", "b"], categories=["a", "b", "c"]
)
- with pytest.raises(ValueError):
+ with pytest.raises(ValueError, match=msg2):
# different values
df = orig.copy()
df.loc["j":"k", df.columns[0]] = Categorical(
@@ -288,7 +293,7 @@ def test_assigning_ops(self):
df.loc["j":"k", df.columns[0]] = ["b", "b"]
tm.assert_frame_equal(df, exp_parts_cats_col)
- with pytest.raises(ValueError):
+ with pytest.raises(ValueError, match=msg1):
df.loc["j":"k", df.columns[0]] = ["c", "c"]
# iat
@@ -297,7 +302,7 @@ def test_assigning_ops(self):
tm.assert_frame_equal(df, exp_single_cats_value)
# - assign a single value not in the current categories set
- with pytest.raises(ValueError):
+ with pytest.raises(ValueError, match=msg1):
df = orig.copy()
df.iat[2, 0] = "c"
@@ -308,7 +313,7 @@ def test_assigning_ops(self):
tm.assert_frame_equal(df, exp_single_cats_value)
# - assign a single value not in the current categories set
- with pytest.raises(ValueError):
+ with pytest.raises(ValueError, match=msg1):
df = orig.copy()
df.at["j", "cats"] = "c"
@@ -332,7 +337,7 @@ def test_assigning_ops(self):
df.at["j", "cats"] = "b"
tm.assert_frame_equal(df, exp_single_cats_value)
- with pytest.raises(ValueError):
+ with pytest.raises(ValueError, match=msg1):
df = orig.copy()
df.at["j", "cats"] = "c"
diff --git a/pandas/tests/frame/indexing/test_indexing.py b/pandas/tests/frame/indexing/test_indexing.py
index d892e3d637772..fcf0a41e0f74e 100644
--- a/pandas/tests/frame/indexing/test_indexing.py
+++ b/pandas/tests/frame/indexing/test_indexing.py
@@ -481,7 +481,8 @@ def test_setitem(self, float_frame):
# so raise/warn
smaller = float_frame[:2]
- with pytest.raises(com.SettingWithCopyError):
+ msg = r"\nA value is trying to be set on a copy of a slice from a DataFrame"
+ with pytest.raises(com.SettingWithCopyError, match=msg):
smaller["col10"] = ["1", "2"]
assert smaller["col10"].dtype == np.object_
@@ -865,7 +866,8 @@ def test_fancy_getitem_slice_mixed(self, float_frame, float_string_frame):
# setting it triggers setting with copy
sliced = float_frame.iloc[:, -3:]
- with pytest.raises(com.SettingWithCopyError):
+ msg = r"\nA value is trying to be set on a copy of a slice from a DataFrame"
+ with pytest.raises(com.SettingWithCopyError, match=msg):
sliced["C"] = 4.0
assert (float_frame["C"] == 4).all()
@@ -992,7 +994,7 @@ def test_getitem_setitem_fancy_exceptions(self, float_frame):
with pytest.raises(IndexingError, match="Too many indexers"):
ix[:, :, :]
- with pytest.raises(IndexingError):
+ with pytest.raises(IndexingError, match="Too many indexers"):
ix[:, :, :] = 1
def test_getitem_setitem_boolean_misaligned(self, float_frame):
@@ -1071,10 +1073,10 @@ def test_getitem_setitem_float_labels(self):
cp = df.copy()
- with pytest.raises(TypeError):
+ with pytest.raises(TypeError, match=msg):
cp.iloc[1.0:5] = 0
- with pytest.raises(TypeError):
+ with pytest.raises(TypeError, match=msg):
result = cp.iloc[1.0:5] == 0 # noqa
assert result.values.all()
@@ -1470,7 +1472,8 @@ def test_iloc_row(self):
# verify slice is view
# setting it makes it raise/warn
- with pytest.raises(com.SettingWithCopyError):
+ msg = r"\nA value is trying to be set on a copy of a slice from a DataFrame"
+ with pytest.raises(com.SettingWithCopyError, match=msg):
result[2] = 0.0
exp_col = df[2].copy()
@@ -1501,7 +1504,8 @@ def test_iloc_col(self):
# verify slice is view
# and that we are setting a copy
- with pytest.raises(com.SettingWithCopyError):
+ msg = r"\nA value is trying to be set on a copy of a slice from a DataFrame"
+ with pytest.raises(com.SettingWithCopyError, match=msg):
result[8] = 0.0
assert (df[8] == 0).all()
diff --git a/pandas/tests/frame/indexing/test_where.py b/pandas/tests/frame/indexing/test_where.py
index 507b2e9cd237b..eee754a47fb8c 100644
--- a/pandas/tests/frame/indexing/test_where.py
+++ b/pandas/tests/frame/indexing/test_where.py
@@ -50,7 +50,8 @@ def _check_get(df, cond, check_dtypes=True):
# check getting
df = where_frame
if df is float_string_frame:
- with pytest.raises(TypeError):
+ msg = "'>' not supported between instances of 'str' and 'int'"
+ with pytest.raises(TypeError, match=msg):
df > 0
return
cond = df > 0
@@ -114,7 +115,8 @@ def _check_align(df, cond, other, check_dtypes=True):
df = where_frame
if df is float_string_frame:
- with pytest.raises(TypeError):
+ msg = "'>' not supported between instances of 'str' and 'int'"
+ with pytest.raises(TypeError, match=msg):
df > 0
return
@@ -172,7 +174,8 @@ def _check_set(df, cond, check_dtypes=True):
df = where_frame
if df is float_string_frame:
- with pytest.raises(TypeError):
+ msg = "'>' not supported between instances of 'str' and 'int'"
+ with pytest.raises(TypeError, match=msg):
df > 0
return
@@ -358,7 +361,8 @@ def test_where_datetime(self):
)
stamp = datetime(2013, 1, 3)
- with pytest.raises(TypeError):
+ msg = "'>' not supported between instances of 'float' and 'datetime.datetime'"
+ with pytest.raises(TypeError, match=msg):
df > stamp
result = df[df.iloc[:, :-1] > stamp]
diff --git a/pandas/tests/frame/methods/test_explode.py b/pandas/tests/frame/methods/test_explode.py
index 76c87ed355492..bad8349ec977b 100644
--- a/pandas/tests/frame/methods/test_explode.py
+++ b/pandas/tests/frame/methods/test_explode.py
@@ -9,11 +9,11 @@ def test_error():
df = pd.DataFrame(
{"A": pd.Series([[0, 1, 2], np.nan, [], (3, 4)], index=list("abcd")), "B": 1}
)
- with pytest.raises(ValueError):
+ with pytest.raises(ValueError, match="column must be a scalar"):
df.explode(list("AA"))
df.columns = list("AA")
- with pytest.raises(ValueError):
+ with pytest.raises(ValueError, match="columns must be unique"):
df.explode("A")
diff --git a/pandas/tests/frame/methods/test_isin.py b/pandas/tests/frame/methods/test_isin.py
index 0eb94afc99d94..6307738021f68 100644
--- a/pandas/tests/frame/methods/test_isin.py
+++ b/pandas/tests/frame/methods/test_isin.py
@@ -60,10 +60,14 @@ def test_isin_with_string_scalar(self):
},
index=["foo", "bar", "baz", "qux"],
)
- with pytest.raises(TypeError):
+ msg = (
+ r"only list-like or dict-like objects are allowed "
+ r"to be passed to DataFrame.isin\(\), you passed a 'str'"
+ )
+ with pytest.raises(TypeError, match=msg):
df.isin("a")
- with pytest.raises(TypeError):
+ with pytest.raises(TypeError, match=msg):
df.isin("aaa")
def test_isin_df(self):
@@ -92,7 +96,8 @@ def test_isin_df_dupe_values(self):
df1 = DataFrame({"A": [1, 2, 3, 4], "B": [2, np.nan, 4, 4]})
# just cols duped
df2 = DataFrame([[0, 2], [12, 4], [2, np.nan], [4, 5]], columns=["B", "B"])
- with pytest.raises(ValueError):
+ msg = r"cannot compute isin with a duplicate axis\."
+ with pytest.raises(ValueError, match=msg):
df1.isin(df2)
# just index duped
@@ -101,12 +106,12 @@ def test_isin_df_dupe_values(self):
columns=["A", "B"],
index=[0, 0, 1, 1],
)
- with pytest.raises(ValueError):
+ with pytest.raises(ValueError, match=msg):
df1.isin(df2)
# cols and index:
df2.columns = ["B", "B"]
- with pytest.raises(ValueError):
+ with pytest.raises(ValueError, match=msg):
df1.isin(df2)
def test_isin_dupe_self(self):
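The pattern repeated throughout these hunks deserves a note: `pytest.raises(..., match=msg)` treats `msg` as a regular expression and applies `re.search` to the stringified exception, which is why the messages above escape metacharacters such as `\(`, `\)`, and `\.`. A minimal standard-library sketch of the same check, reusing the `isin` message from the hunk above:

```python
import re

# The escaped pattern, exactly as passed to pytest.raises(..., match=msg).
msg = (
    r"only list-like or dict-like objects are allowed "
    r"to be passed to DataFrame.isin\(\), you passed a 'str'"
)

# The literal exception text pandas raises.
raised = (
    "only list-like or dict-like objects are allowed "
    "to be passed to DataFrame.isin(), you passed a 'str'"
)

# pytest's match= is effectively re.search(msg, str(excinfo.value)).
assert re.search(msg, raised) is not None

# Without escaping, "()" is an empty regex group, so the literal
# parentheses in the message are never matched.
assert re.search("DataFrame.isin(),", raised) is None
```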
diff --git a/pandas/tests/frame/methods/test_quantile.py b/pandas/tests/frame/methods/test_quantile.py
index 64461c08d34f4..9c52e8ec5620f 100644
--- a/pandas/tests/frame/methods/test_quantile.py
+++ b/pandas/tests/frame/methods/test_quantile.py
@@ -75,7 +75,8 @@ def test_quantile_axis_mixed(self):
tm.assert_series_equal(result, expected)
# must raise
- with pytest.raises(TypeError):
+ msg = "'<' not supported between instances of 'Timestamp' and 'float'"
+ with pytest.raises(TypeError, match=msg):
df.quantile(0.5, axis=1, numeric_only=False)
def test_quantile_axis_parameter(self):
diff --git a/pandas/tests/frame/methods/test_round.py b/pandas/tests/frame/methods/test_round.py
index 0865e03cedc50..6dcdf49e93729 100644
--- a/pandas/tests/frame/methods/test_round.py
+++ b/pandas/tests/frame/methods/test_round.py
@@ -34,7 +34,8 @@ def test_round(self):
# Round with a list
round_list = [1, 2]
- with pytest.raises(TypeError):
+ msg = "decimals must be an integer, a dict-like or a Series"
+ with pytest.raises(TypeError, match=msg):
df.round(round_list)
# Round with a dictionary
@@ -57,34 +58,37 @@ def test_round(self):
# float input to `decimals`
non_int_round_dict = {"col1": 1, "col2": 0.5}
- with pytest.raises(TypeError):
+ msg = "integer argument expected, got float"
+ with pytest.raises(TypeError, match=msg):
df.round(non_int_round_dict)
# String input
non_int_round_dict = {"col1": 1, "col2": "foo"}
- with pytest.raises(TypeError):
+ msg = r"an integer is required \(got type str\)"
+ with pytest.raises(TypeError, match=msg):
df.round(non_int_round_dict)
non_int_round_Series = Series(non_int_round_dict)
- with pytest.raises(TypeError):
+ with pytest.raises(TypeError, match=msg):
df.round(non_int_round_Series)
# List input
non_int_round_dict = {"col1": 1, "col2": [1, 2]}
- with pytest.raises(TypeError):
+ msg = r"an integer is required \(got type list\)"
+ with pytest.raises(TypeError, match=msg):
df.round(non_int_round_dict)
non_int_round_Series = Series(non_int_round_dict)
- with pytest.raises(TypeError):
+ with pytest.raises(TypeError, match=msg):
df.round(non_int_round_Series)
# Non integer Series inputs
non_int_round_Series = Series(non_int_round_dict)
- with pytest.raises(TypeError):
+ with pytest.raises(TypeError, match=msg):
df.round(non_int_round_Series)
non_int_round_Series = Series(non_int_round_dict)
- with pytest.raises(TypeError):
+ with pytest.raises(TypeError, match=msg):
df.round(non_int_round_Series)
# Negative numbers
@@ -103,7 +107,8 @@ def test_round(self):
{"col1": [1.123, 2.123, 3.123], "col2": [1.2, 2.2, 3.2]}
)
- with pytest.raises(TypeError):
+ msg = "integer argument expected, got float"
+ with pytest.raises(TypeError, match=msg):
df.round(nan_round_Series)
# Make sure this doesn't break existing Series.round
diff --git a/pandas/tests/frame/methods/test_sort_values.py b/pandas/tests/frame/methods/test_sort_values.py
index 96f4d6ed90d6b..5a25d1c2c0894 100644
--- a/pandas/tests/frame/methods/test_sort_values.py
+++ b/pandas/tests/frame/methods/test_sort_values.py
@@ -458,7 +458,7 @@ def test_sort_values_na_position_with_categories_raises(self):
}
)
- with pytest.raises(ValueError):
+ with pytest.raises(ValueError, match="invalid na_position: bad_position"):
df.sort_values(by="c", ascending=False, na_position="bad_position")
@pytest.mark.parametrize("inplace", [True, False])
diff --git a/pandas/tests/frame/methods/test_to_dict.py b/pandas/tests/frame/methods/test_to_dict.py
index 40393721c4ac6..cd9bd169322fd 100644
--- a/pandas/tests/frame/methods/test_to_dict.py
+++ b/pandas/tests/frame/methods/test_to_dict.py
@@ -132,7 +132,13 @@ def test_to_dict(self, mapping):
def test_to_dict_errors(self, mapping):
# GH#16122
df = DataFrame(np.random.randn(3, 3))
- with pytest.raises(TypeError):
+ msg = "|".join(
+ [
+ "unsupported type: <class 'list'>",
+ r"to_dict\(\) only accepts initialized defaultdicts",
+ ]
+ )
+ with pytest.raises(TypeError, match=msg):
df.to_dict(into=mapping)
def test_to_dict_not_unique_warning(self):
| - [ ] xref #30999
- [ ] tests added / passed
- [ ] passes `black pandas`
- [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/31852 | 2020-02-10T16:17:21Z | 2020-02-19T23:19:49Z | 2020-02-19T23:19:49Z | 2020-02-19T23:19:56Z |
CLN-29547 replace old string formatting | diff --git a/pandas/_libs/tslibs/c_timestamp.pyx b/pandas/_libs/tslibs/c_timestamp.pyx
index 2c72cec18f096..5be35c13f5737 100644
--- a/pandas/_libs/tslibs/c_timestamp.pyx
+++ b/pandas/_libs/tslibs/c_timestamp.pyx
@@ -59,10 +59,10 @@ def integer_op_not_supported(obj):
# GH#30886 using an fstring raises SystemError
int_addsub_msg = (
- "Addition/subtraction of integers and integer-arrays with {cls} is "
- "no longer supported. Instead of adding/subtracting `n`, "
- "use `n * obj.freq`"
- ).format(cls=cls)
+ f"Addition/subtraction of integers and integer-arrays with {cls} is "
+        "no longer supported. Instead of adding/subtracting `n`, "
+        "use `n * obj.freq`"
+ )
return TypeError(int_addsub_msg)
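The rewrite above is mechanical: each `{name}` placeholder consumed by `.format(name=...)` becomes an inline `{expression}`. One review point, shown in a small hypothetical sketch below (not pandas code), is that continuation lines with no placeholder need no `f` prefix at all; implicit concatenation of an f-string and a plain string literal yields the same result:

```python
cls = "Timestamp"  # stand-in for the class name interpolated above

old = (
    "Addition/subtraction of integers and integer-arrays with {cls} is "
    "no longer supported."
).format(cls=cls)

new = (
    f"Addition/subtraction of integers and integer-arrays with {cls} is "
    "no longer supported."  # no placeholder, so no f prefix needed
)

assert old == new
```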
diff --git a/pandas/_libs/tslibs/timedeltas.pyx b/pandas/_libs/tslibs/timedeltas.pyx
index 3742506a7f8af..67bc51892a4e1 100644
--- a/pandas/_libs/tslibs/timedeltas.pyx
+++ b/pandas/_libs/tslibs/timedeltas.pyx
@@ -639,7 +639,7 @@ cdef inline int64_t parse_iso_format_string(object ts) except? -1:
bint have_dot = 0, have_value = 0, neg = 0
list number = [], unit = []
- err_msg = "Invalid ISO 8601 Duration format - {}".format(ts)
+ err_msg = f"Invalid ISO 8601 Duration format - {ts}"
for c in ts:
# number (ascii codes)
diff --git a/pandas/core/arrays/interval.py b/pandas/core/arrays/interval.py
index c8bb0878b564d..528cc32b7fbeb 100644
--- a/pandas/core/arrays/interval.py
+++ b/pandas/core/arrays/interval.py
@@ -1126,8 +1126,8 @@ def __arrow_array__(self, type=None):
subtype = pyarrow.from_numpy_dtype(self.dtype.subtype)
except TypeError:
raise TypeError(
- "Conversion to arrow with subtype '{}' "
- "is not supported".format(self.dtype.subtype)
+ f"Conversion to arrow with subtype '{self.dtype.subtype}' "
+            "is not supported"
)
interval_type = ArrowIntervalType(subtype, self.closed)
storage_array = pyarrow.StructArray.from_arrays(
@@ -1155,15 +1155,13 @@ def __arrow_array__(self, type=None):
# ensure we have the same subtype and closed attributes
if not type.equals(interval_type):
raise TypeError(
- "Not supported to convert IntervalArray to type with "
- "different 'subtype' ({0} vs {1}) and 'closed' ({2} vs {3}) "
- "attributes".format(
- self.dtype.subtype, type.subtype, self.closed, type.closed
- )
+                "Not supported to convert IntervalArray to type with "
+ f"different 'subtype' ({self.dtype.subtype} vs {type.subtype}) "
+ f"and 'closed' ({self.closed} vs {type.closed}) attributes"
)
else:
raise TypeError(
- "Not supported to convert IntervalArray to '{0}' type".format(type)
+ f"Not supported to convert IntervalArray to '{type}' type"
)
return pyarrow.ExtensionArray.from_storage(interval_type, storage_array)
diff --git a/pandas/core/util/hashing.py b/pandas/core/util/hashing.py
index 3366f10b92604..b9cbc6c3ad8bd 100644
--- a/pandas/core/util/hashing.py
+++ b/pandas/core/util/hashing.py
@@ -295,7 +295,7 @@ def hash_array(
elif issubclass(dtype.type, (np.datetime64, np.timedelta64)):
vals = vals.view("i8").astype("u8", copy=False)
elif issubclass(dtype.type, np.number) and dtype.itemsize <= 8:
- vals = vals.view("u{}".format(vals.dtype.itemsize)).astype("u8")
+ vals = vals.view(f"u{vals.dtype.itemsize}").astype("u8")
else:
# With repeated values, its MUCH faster to categorize object dtypes,
# then hash and rename categories. We allow skipping the categorization
diff --git a/pandas/io/formats/format.py b/pandas/io/formats/format.py
index 149533bf0c238..4a429949c9a08 100644
--- a/pandas/io/formats/format.py
+++ b/pandas/io/formats/format.py
@@ -187,7 +187,7 @@ def _get_footer(self) -> str:
if self.length:
if footer:
footer += ", "
- footer += "Length: {length}".format(length=len(self.categorical))
+ footer += f"Length: {len(self.categorical)}"
level_info = self.categorical._repr_categories_info()
@@ -217,7 +217,7 @@ def to_string(self) -> str:
fmt_values = self._get_formatted_values()
- fmt_values = ["{i}".format(i=i) for i in fmt_values]
+ fmt_values = [f"{i}" for i in fmt_values]
fmt_values = [i.strip() for i in fmt_values]
values = ", ".join(fmt_values)
result = ["[" + values + "]"]
@@ -301,28 +301,26 @@ def _get_footer(self) -> str:
assert isinstance(
self.series.index, (ABCDatetimeIndex, ABCPeriodIndex, ABCTimedeltaIndex)
)
- footer += "Freq: {freq}".format(freq=self.series.index.freqstr)
+ footer += f"Freq: {self.series.index.freqstr}"
if self.name is not False and name is not None:
if footer:
footer += ", "
series_name = pprint_thing(name, escape_chars=("\t", "\r", "\n"))
- footer += (
- ("Name: {sname}".format(sname=series_name)) if name is not None else ""
- )
+ footer += f"Name: {series_name}" if name is not None else ""
if self.length is True or (self.length == "truncate" and self.truncate_v):
if footer:
footer += ", "
- footer += "Length: {length}".format(length=len(self.series))
+ footer += f"Length: {len(self.series)}"
if self.dtype is not False and self.dtype is not None:
name = getattr(self.tr_series.dtype, "name", None)
if name:
if footer:
footer += ", "
- footer += "dtype: {typ}".format(typ=pprint_thing(name))
+ footer += f"dtype: {pprint_thing(name)}"
# level infos are added to the end and in a new line, like it is done
# for Categoricals
@@ -359,9 +357,7 @@ def to_string(self) -> str:
footer = self._get_footer()
if len(series) == 0:
- return "{name}([], {footer})".format(
- name=type(self.series).__name__, footer=footer
- )
+ return f"{type(self.series).__name__}([], {footer})"
fmt_index, have_header = self._get_formatted_index()
fmt_values = self._get_formatted_values()
@@ -584,10 +580,8 @@ def __init__(
self.formatters = formatters
else:
raise ValueError(
- (
- "Formatters length({flen}) should match "
- "DataFrame number of columns({dlen})"
- ).format(flen=len(formatters), dlen=len(frame.columns))
+ f"Formatters length({len(formatters)}) should match "
+ f"DataFrame number of columns({len(frame.columns)})"
)
self.na_rep = na_rep
self.decimal = decimal
@@ -816,10 +810,10 @@ def write_result(self, buf: IO[str]) -> None:
frame = self.frame
if len(frame.columns) == 0 or len(frame.index) == 0:
- info_line = "Empty {name}\nColumns: {col}\nIndex: {idx}".format(
- name=type(self.frame).__name__,
- col=pprint_thing(frame.columns),
- idx=pprint_thing(frame.index),
+ info_line = (
+ f"Empty {type(self.frame).__name__}\n"
+ f"Columns: {pprint_thing(frame.columns)}\n"
+ f"Index: {pprint_thing(frame.index)}"
)
text = info_line
else:
@@ -865,11 +859,7 @@ def write_result(self, buf: IO[str]) -> None:
buf.writelines(text)
if self.should_show_dimensions:
- buf.write(
- "\n\n[{nrows} rows x {ncols} columns]".format(
- nrows=len(frame), ncols=len(frame.columns)
- )
- )
+ buf.write(f"\n\n[{len(frame)} rows x {len(frame.columns)} columns]")
def _join_multiline(self, *args) -> str:
lwidth = self.line_width
@@ -1075,7 +1065,7 @@ def _get_formatted_index(self, frame: "DataFrame") -> List[str]:
# empty space for columns
if self.show_col_idx_names:
- col_header = ["{x}".format(x=x) for x in self._get_column_name_list()]
+ col_header = [f"{x}" for x in self._get_column_name_list()]
else:
col_header = [""] * columns.nlevels
@@ -1211,10 +1201,8 @@ def _format_strings(self) -> List[str]:
if self.float_format is None:
float_format = get_option("display.float_format")
if float_format is None:
- fmt_str = "{{x: .{prec:d}g}}".format(
- prec=get_option("display.precision")
- )
- float_format = lambda x: fmt_str.format(x=x)
+ precision = get_option("display.precision")
+ float_format = lambda x: f"{x: .{precision:d}g}"
else:
float_format = self.float_format
@@ -1240,10 +1228,10 @@ def _format(x):
pass
return self.na_rep
elif isinstance(x, PandasObject):
- return "{x}".format(x=x)
+ return f"{x}"
else:
# object dtype
- return "{x}".format(x=formatter(x))
+ return f"{formatter(x)}"
vals = self.values
if isinstance(vals, Index):
@@ -1259,7 +1247,7 @@ def _format(x):
fmt_values = []
for i, v in enumerate(vals):
if not is_float_type[i] and leading_space:
- fmt_values.append(" {v}".format(v=_format(v)))
+ fmt_values.append(f" {_format(v)}")
elif is_float_type[i]:
fmt_values.append(float_format(v))
else:
@@ -1268,8 +1256,8 @@ def _format(x):
# to include a space if we get here.
tpl = "{v}"
else:
- tpl = " {v}"
- fmt_values.append(tpl.format(v=_format(v)))
+        tpl = " {v}"
+        fmt_values.append(tpl.format(v=_format(v)))
return fmt_values
@@ -1442,7 +1430,7 @@ def _format_strings(self) -> List[str]:
class IntArrayFormatter(GenericArrayFormatter):
def _format_strings(self) -> List[str]:
- formatter = self.formatter or (lambda x: "{x: d}".format(x=x))
+ formatter = self.formatter or (lambda x: f"{x: d}")
fmt_values = [formatter(x) for x in self.values]
return fmt_values
@@ -1726,7 +1714,7 @@ def _formatter(x):
x = Timedelta(x)
result = x._repr_base(format=format)
if box:
- result = "'{res}'".format(res=result)
+ result = f"'{result}'"
return result
return _formatter
@@ -1889,16 +1877,16 @@ def __call__(self, num: Union[int, float]) -> str:
prefix = self.ENG_PREFIXES[int_pow10]
else:
if int_pow10 < 0:
- prefix = "E-{pow10:02d}".format(pow10=-int_pow10)
+ prefix = f"E-{-int_pow10:02d}"
else:
- prefix = "E+{pow10:02d}".format(pow10=int_pow10)
+ prefix = f"E+{int_pow10:02d}"
mant = sign * dnum / (10 ** pow10)
if self.accuracy is None: # pragma: no cover
format_str = "{mant: g}{prefix}"
else:
- format_str = "{{mant: .{acc:d}f}}{{prefix}}".format(acc=self.accuracy)
+ format_str = f"{{mant: .{self.accuracy:d}f}}{{prefix}}"
formatted = format_str.format(mant=mant, prefix=prefix)
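The `EngFormatter` hunk builds a template for a later `.format` call out of an f-string, so braces meant for `.format` must be doubled to survive the f-string pass. A standalone sketch of this two-stage formatting (the sample values are made up):

```python
accuracy = 2
mant, prefix = 1.2345, "k"

# Stage 1: the f-string interpolates only {accuracy}; the doubled
# braces come out as single braces for the later .format call.
format_str = f"{{mant: .{accuracy:d}f}}{{prefix}}"
assert format_str == "{mant: .2f}{prefix}"

# Stage 2: .format fills the surviving placeholders.
assert format_str.format(mant=mant, prefix=prefix) == " 1.23k"
```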
diff --git a/pandas/io/formats/html.py b/pandas/io/formats/html.py
index e3161415fe2bc..3bc47cefd45c0 100644
--- a/pandas/io/formats/html.py
+++ b/pandas/io/formats/html.py
@@ -56,7 +56,7 @@ def __init__(
self.table_id = self.fmt.table_id
self.render_links = self.fmt.render_links
if isinstance(self.fmt.col_space, int):
- self.fmt.col_space = "{colspace}px".format(colspace=self.fmt.col_space)
+ self.fmt.col_space = f"{self.fmt.col_space}px"
@property
def show_row_idx_names(self) -> bool:
@@ -124,7 +124,7 @@ def write_th(
"""
if header and self.fmt.col_space is not None:
tags = tags or ""
- tags += 'style="min-width: {colspace};"'.format(colspace=self.fmt.col_space)
+ tags += f'style="min-width: {self.fmt.col_space};"'
self._write_cell(s, kind="th", indent=indent, tags=tags)
@@ -135,9 +135,9 @@ def _write_cell(
self, s: Any, kind: str = "td", indent: int = 0, tags: Optional[str] = None
) -> None:
if tags is not None:
- start_tag = "<{kind} {tags}>".format(kind=kind, tags=tags)
+ start_tag = f"<{kind} {tags}>"
else:
- start_tag = "<{kind}>".format(kind=kind)
+ start_tag = f"<{kind}>"
if self.escape:
# escape & first to prevent double escaping of &
@@ -149,16 +149,13 @@ def _write_cell(
if self.render_links and is_url(rs):
rs_unescaped = pprint_thing(s, escape_chars={}).strip()
- start_tag += '<a href="{url}" target="_blank">'.format(url=rs_unescaped)
+ start_tag += f'<a href="{rs_unescaped}" target="_blank">'
end_a = "</a>"
else:
end_a = ""
self.write(
- "{start}{rs}{end_a}</{kind}>".format(
- start=start_tag, rs=rs, end_a=end_a, kind=kind
- ),
- indent,
+ f"{start_tag}{rs}{end_a}</{kind}>", indent,
)
def write_tr(
@@ -177,7 +174,7 @@ def write_tr(
if align is None:
self.write("<tr>", indent)
else:
- self.write('<tr style="text-align: {align};">'.format(align=align), indent)
+ self.write(f'<tr style="text-align: {align};">', indent)
indent += indent_delta
for i, s in enumerate(line):
@@ -196,9 +193,7 @@ def render(self) -> List[str]:
if self.should_show_dimensions:
by = chr(215) # ×
self.write(
- "<p>{rows} rows {by} {cols} columns</p>".format(
- rows=len(self.frame), by=by, cols=len(self.frame.columns)
- )
+ f"<p>{len(self.frame)} rows {by} {len(self.frame.columns)} columns</p>"
)
return self.elements
@@ -216,7 +211,7 @@ def _write_table(self, indent: int = 0) -> None:
self.classes = self.classes.split()
if not isinstance(self.classes, (list, tuple)):
raise TypeError(
- "classes must be a string, list, "
+                "classes must be a string, list, "
f"or tuple, not {type(self.classes)}"
)
_classes.extend(self.classes)
@@ -224,12 +219,10 @@ def _write_table(self, indent: int = 0) -> None:
if self.table_id is None:
id_section = ""
else:
- id_section = ' id="{table_id}"'.format(table_id=self.table_id)
+ id_section = f' id="{self.table_id}"'
self.write(
- '<table border="{border}" class="{cls}"{id_section}>'.format(
- border=self.border, cls=" ".join(_classes), id_section=id_section
- ),
+ f'<table border="{self.border}" class="{" ".join(_classes)}"{id_section}>',
indent,
)
diff --git a/pandas/io/formats/latex.py b/pandas/io/formats/latex.py
index 8ab56437d5c05..c6d2055ecfea8 100644
--- a/pandas/io/formats/latex.py
+++ b/pandas/io/formats/latex.py
@@ -59,10 +59,9 @@ def write_result(self, buf: IO[str]) -> None:
# string representation of the columns
if len(self.frame.columns) == 0 or len(self.frame.index) == 0:
- info_line = "Empty {name}\nColumns: {col}\nIndex: {idx}".format(
- name=type(self.frame).__name__,
- col=self.frame.columns,
- idx=self.frame.index,
+ info_line = (
+ f"Empty {type(self.frame).__name__}\n"
+ f"Columns: {self.frame.columns}\nIndex: {self.frame.index}"
)
strcols = [[info_line]]
else:
@@ -141,8 +140,8 @@ def pad_empties(x):
buf.write("\\endhead\n")
buf.write("\\midrule\n")
buf.write(
- "\\multicolumn{{{n}}}{{r}}{{{{Continued on next "
- "page}}}} \\\\\n".format(n=len(row))
+ f"\\multicolumn{{{len(row)}}}{{r}}{{{{Continued on next "
+ f"page}}}} \\\\\n"
)
buf.write("\\midrule\n")
buf.write("\\endfoot\n\n")
@@ -172,7 +171,7 @@ def pad_empties(x):
if self.bold_rows and self.fmt.index:
# bold row labels
crow = [
- "\\textbf{{{x}}}".format(x=x)
+ f"\\textbf{{{x}}}"
if j < ilevels and x.strip() not in ["", "{}"]
else x
for j, x in enumerate(crow)
@@ -211,9 +210,8 @@ def append_col():
# write multicolumn if needed
if ncol > 1:
row2.append(
- "\\multicolumn{{{ncol:d}}}{{{fmt:s}}}{{{txt:s}}}".format(
- ncol=ncol, fmt=self.multicolumn_format, txt=coltext.strip()
- )
+ f"\\multicolumn{{{ncol:d}}}{{{self.multicolumn_format:s}}}"
+ f"{{{coltext.strip():s}}}"
)
# don't modify where not needed
else:
@@ -256,9 +254,7 @@ def _format_multirow(
break
if nrow > 1:
# overwrite non-multirow entry
- row[j] = "\\multirow{{{nrow:d}}}{{*}}{{{row:s}}}".format(
- nrow=nrow, row=row[j].strip()
- )
+ row[j] = f"\\multirow{{{nrow:d}}}{{*}}{{{row[j].strip():s}}}"
# save when to end the current block with \cline
self.clinebuf.append([i + nrow - 1, j + 1])
return row
@@ -269,7 +265,7 @@ def _print_cline(self, buf: IO[str], i: int, icol: int) -> None:
"""
for cl in self.clinebuf:
if cl[0] == i:
- buf.write("\\cline{{{cl:d}-{icol:d}}}\n".format(cl=cl[1], icol=icol))
+ buf.write(f"\\cline{{{cl[1]:d}-{icol:d}}}\n")
# remove entries that have been written to buffer
self.clinebuf = [x for x in self.clinebuf if x[0] != i]
@@ -293,19 +289,19 @@ def _write_tabular_begin(self, buf, column_format: str):
if self.caption is None:
caption_ = ""
else:
- caption_ = "\n\\caption{{{}}}".format(self.caption)
+ caption_ = f"\n\\caption{{{self.caption}}}"
if self.label is None:
label_ = ""
else:
- label_ = "\n\\label{{{}}}".format(self.label)
+ label_ = f"\n\\label{{{self.label}}}"
- buf.write("\\begin{{table}}\n\\centering{}{}\n".format(caption_, label_))
+ buf.write(f"\\begin{{table}}\n\\centering{caption_}{label_}\n")
else:
# then write output only in a tabular environment
pass
- buf.write("\\begin{{tabular}}{{{fmt}}}\n".format(fmt=column_format))
+ buf.write(f"\\begin{{tabular}}{{{column_format}}}\n")
def _write_tabular_end(self, buf):
"""
@@ -341,18 +337,18 @@ def _write_longtable_begin(self, buf, column_format: str):
<https://en.wikibooks.org/wiki/LaTeX/Tables>`__ e.g 'rcl'
for 3 columns
"""
- buf.write("\\begin{{longtable}}{{{fmt}}}\n".format(fmt=column_format))
+ buf.write(f"\\begin{{longtable}}{{{column_format}}}\n")
if self.caption is not None or self.label is not None:
if self.caption is None:
pass
else:
- buf.write("\\caption{{{}}}".format(self.caption))
+ buf.write(f"\\caption{{{self.caption}}}")
if self.label is None:
pass
else:
- buf.write("\\label{{{}}}".format(self.label))
+ buf.write(f"\\label{{{self.label}}}")
# a double-backslash is required at the end of the line
# as discussed here:
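The LaTeX writer hunks lean on the same brace-doubling rule: inside an f-string, `{{` and `}}` emit literal braces, so emitting `\multicolumn{3}{r}{...}` takes triple braces around each interpolated value. A small sketch (the values are invented):

```python
ncol = 3
fmt = "r"
text = "Continued on next page"

# Triple braces = one literal brace ({{ -> {) plus one
# interpolation brace pair around the expression.
row = f"\\multicolumn{{{ncol:d}}}{{{fmt}}}{{{text}}}"
assert row == "\\multicolumn{3}{r}{Continued on next page}"
```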
diff --git a/pandas/io/formats/printing.py b/pandas/io/formats/printing.py
index 13b18a0b5fb6f..36e774305b577 100644
--- a/pandas/io/formats/printing.py
+++ b/pandas/io/formats/printing.py
@@ -229,7 +229,7 @@ def as_escaped_string(
max_seq_items=max_seq_items,
)
elif isinstance(thing, str) and quote_strings:
- result = "'{thing}'".format(thing=as_escaped_string(thing))
+ result = f"'{as_escaped_string(thing)}'"
else:
result = as_escaped_string(thing)
diff --git a/pandas/io/parsers.py b/pandas/io/parsers.py
index 4e26ceef0af26..b661770dc80a2 100755
--- a/pandas/io/parsers.py
+++ b/pandas/io/parsers.py
@@ -1493,10 +1493,8 @@ def extract(r):
for n in range(len(columns[0])):
if all(ensure_str(col[n]) in self.unnamed_cols for col in columns):
raise ParserError(
- "Passed header=[{header}] are too many rows for this "
- "multi_index of columns".format(
- header=",".join(str(x) for x in self.header)
- )
+ f"Passed header=[{','.join(str(x) for x in self.header)}] "
+                    "are too many rows for this multi_index of columns"
)
# Clean the column names (if we have an index_col).
@@ -3613,8 +3611,8 @@ def get_rows(self, infer_nrows, skiprows=None):
def detect_colspecs(self, infer_nrows=100, skiprows=None):
# Regex escape the delimiters
- delimiters = "".join(r"\{}".format(x) for x in self.delimiter)
- pattern = re.compile("([^{}]+)".format(delimiters))
+ delimiters = "".join(fr"\{x}" for x in self.delimiter)
+ pattern = re.compile(f"([^{delimiters}]+)")
rows = self.get_rows(infer_nrows, skiprows)
if not rows:
raise EmptyDataError("No rows from which to infer column width")
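`detect_colspecs` above backslash-escapes each delimiter character and builds a negated character class that captures runs of non-delimiter text. A self-contained sketch of the same construction (`re.escape` would be an equivalent, arguably clearer, way to do the per-character escaping):

```python
import re

delimiter = " \t"  # hypothetical delimiter set for illustration

# Same construction as detect_colspecs: escape each delimiter,
# then capture maximal runs of anything that is not a delimiter.
delimiters = "".join(fr"\{x}" for x in delimiter)
pattern = re.compile(f"([^{delimiters}]+)")

assert pattern.findall("col1  col2\tcol3") == ["col1", "col2", "col3"]
```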
diff --git a/pandas/tests/arrays/categorical/test_dtypes.py b/pandas/tests/arrays/categorical/test_dtypes.py
index 19746d7d72162..9922a8863ebc2 100644
--- a/pandas/tests/arrays/categorical/test_dtypes.py
+++ b/pandas/tests/arrays/categorical/test_dtypes.py
@@ -92,22 +92,20 @@ def test_codes_dtypes(self):
result = Categorical(["foo", "bar", "baz"])
assert result.codes.dtype == "int8"
- result = Categorical(["foo{i:05d}".format(i=i) for i in range(400)])
+ result = Categorical([f"foo{i:05d}" for i in range(400)])
assert result.codes.dtype == "int16"
- result = Categorical(["foo{i:05d}".format(i=i) for i in range(40000)])
+ result = Categorical([f"foo{i:05d}" for i in range(40000)])
assert result.codes.dtype == "int32"
# adding cats
result = Categorical(["foo", "bar", "baz"])
assert result.codes.dtype == "int8"
- result = result.add_categories(["foo{i:05d}".format(i=i) for i in range(400)])
+ result = result.add_categories([f"foo{i:05d}" for i in range(400)])
assert result.codes.dtype == "int16"
# removing cats
- result = result.remove_categories(
- ["foo{i:05d}".format(i=i) for i in range(300)]
- )
+ result = result.remove_categories([f"foo{i:05d}" for i in range(300)])
assert result.codes.dtype == "int8"
@pytest.mark.parametrize("ordered", [True, False])
diff --git a/pandas/tests/arrays/categorical/test_operators.py b/pandas/tests/arrays/categorical/test_operators.py
index 0c830c65e0f8b..c3006687ca6dd 100644
--- a/pandas/tests/arrays/categorical/test_operators.py
+++ b/pandas/tests/arrays/categorical/test_operators.py
@@ -338,7 +338,7 @@ def test_compare_unordered_different_order(self):
def test_numeric_like_ops(self):
df = DataFrame({"value": np.random.randint(0, 10000, 100)})
- labels = ["{0} - {1}".format(i, i + 499) for i in range(0, 10000, 500)]
+ labels = [f"{i} - {i + 499}" for i in range(0, 10000, 500)]
cat_labels = Categorical(labels, labels)
df = df.sort_values(by=["value"], ascending=True)
@@ -353,9 +353,7 @@ def test_numeric_like_ops(self):
("__mul__", r"\*"),
("__truediv__", "/"),
]:
- msg = r"Series cannot perform the operation {}|unsupported operand".format(
- str_rep
- )
+ msg = fr"Series cannot perform the operation {str_rep}|unsupported operand"
with pytest.raises(TypeError, match=msg):
getattr(df, op)(df)
@@ -363,7 +361,7 @@ def test_numeric_like_ops(self):
# min/max)
s = df["value_group"]
for op in ["kurt", "skew", "var", "std", "mean", "sum", "median"]:
- msg = "Categorical cannot perform the operation {}".format(op)
+ msg = f"Categorical cannot perform the operation {op}"
with pytest.raises(TypeError, match=msg):
getattr(s, op)(numeric_only=False)
@@ -383,9 +381,7 @@ def test_numeric_like_ops(self):
("__mul__", r"\*"),
("__truediv__", "/"),
]:
- msg = r"Series cannot perform the operation {}|unsupported operand".format(
- str_rep
- )
+ msg = fr"Series cannot perform the operation {str_rep}|unsupported operand"
with pytest.raises(TypeError, match=msg):
getattr(s, op)(2)
diff --git a/pandas/tests/extension/decimal/test_decimal.py b/pandas/tests/extension/decimal/test_decimal.py
index bd9b77a2bc419..a78e4bb34e42a 100644
--- a/pandas/tests/extension/decimal/test_decimal.py
+++ b/pandas/tests/extension/decimal/test_decimal.py
@@ -99,7 +99,7 @@ def assert_frame_equal(cls, left, right, *args, **kwargs):
check_names=kwargs.get("check_names", True),
check_exact=kwargs.get("check_exact", False),
check_categorical=kwargs.get("check_categorical", True),
- obj="{obj}.columns".format(obj=kwargs.get("obj", "DataFrame")),
+ obj=f"{kwargs.get('obj', 'DataFrame')}.columns",
)
decimals = (left.dtypes == "decimal").index
diff --git a/pandas/tests/frame/indexing/test_categorical.py b/pandas/tests/frame/indexing/test_categorical.py
index a29c193676db2..3a472a8b58b6c 100644
--- a/pandas/tests/frame/indexing/test_categorical.py
+++ b/pandas/tests/frame/indexing/test_categorical.py
@@ -14,9 +14,7 @@ def test_assignment(self):
df = DataFrame(
{"value": np.array(np.random.randint(0, 10000, 100), dtype="int32")}
)
- labels = Categorical(
- ["{0} - {1}".format(i, i + 499) for i in range(0, 10000, 500)]
- )
+ labels = Categorical([f"{i} - {i + 499}" for i in range(0, 10000, 500)])
df = df.sort_values(by=["value"], ascending=True)
s = pd.cut(df.value, range(0, 10500, 500), right=False, labels=labels)
@@ -348,7 +346,7 @@ def test_assigning_ops(self):
def test_functions_no_warnings(self):
df = DataFrame({"value": np.random.randint(0, 100, 20)})
- labels = ["{0} - {1}".format(i, i + 9) for i in range(0, 100, 10)]
+ labels = [f"{i} - {i + 9}" for i in range(0, 100, 10)]
with tm.assert_produces_warning(False):
df["group"] = pd.cut(
df.value, range(0, 105, 10), right=False, labels=labels
diff --git a/pandas/tests/frame/methods/test_describe.py b/pandas/tests/frame/methods/test_describe.py
index 127233ed2713e..8a75e80a12f52 100644
--- a/pandas/tests/frame/methods/test_describe.py
+++ b/pandas/tests/frame/methods/test_describe.py
@@ -86,7 +86,7 @@ def test_describe_bool_frame(self):
def test_describe_categorical(self):
df = DataFrame({"value": np.random.randint(0, 10000, 100)})
- labels = ["{0} - {1}".format(i, i + 499) for i in range(0, 10000, 500)]
+ labels = [f"{i} - {i + 499}" for i in range(0, 10000, 500)]
cat_labels = Categorical(labels, labels)
df = df.sort_values(by=["value"], ascending=True)
diff --git a/pandas/tests/frame/methods/test_duplicated.py b/pandas/tests/frame/methods/test_duplicated.py
index 72eec8753315c..38b9d7fd049ab 100644
--- a/pandas/tests/frame/methods/test_duplicated.py
+++ b/pandas/tests/frame/methods/test_duplicated.py
@@ -22,9 +22,7 @@ def test_duplicated_do_not_fail_on_wide_dataframes():
# gh-21524
# Given the wide dataframe with a lot of columns
# with different (important!) values
- data = {
- "col_{0:02d}".format(i): np.random.randint(0, 1000, 30000) for i in range(100)
- }
+ data = {f"col_{i:02d}": np.random.randint(0, 1000, 30000) for i in range(100)}
df = DataFrame(data).T
result = df.duplicated()
diff --git a/pandas/tests/frame/methods/test_to_dict.py b/pandas/tests/frame/methods/test_to_dict.py
index 7b0adceb57668..40393721c4ac6 100644
--- a/pandas/tests/frame/methods/test_to_dict.py
+++ b/pandas/tests/frame/methods/test_to_dict.py
@@ -236,9 +236,9 @@ def test_to_dict_numeric_names(self):
def test_to_dict_wide(self):
# GH#24939
- df = DataFrame({("A_{:d}".format(i)): [i] for i in range(256)})
+        df = DataFrame({f"A_{i:d}": [i] for i in range(256)})
result = df.to_dict("records")[0]
- expected = {"A_{:d}".format(i): i for i in range(256)}
+ expected = {f"A_{i:d}": i for i in range(256)}
assert result == expected
def test_to_dict_orient_dtype(self):
diff --git a/pandas/tests/frame/test_alter_axes.py b/pandas/tests/frame/test_alter_axes.py
index 602ea9ca0471a..ad244d8f359a9 100644
--- a/pandas/tests/frame/test_alter_axes.py
+++ b/pandas/tests/frame/test_alter_axes.py
@@ -383,7 +383,7 @@ class Thing(frozenset):
def __repr__(self) -> str:
tmp = sorted(self)
# double curly brace prints one brace in format string
- return "frozenset({{{}}})".format(", ".join(map(repr, tmp)))
+ return f"frozenset({{{', '.join(map(repr, tmp))}}})"
thing1 = Thing(["One", "red"])
thing2 = Thing(["Two", "blue"])
diff --git a/pandas/tests/frame/test_api.py b/pandas/tests/frame/test_api.py
index 17cc50661e3cb..a021dd91a7d26 100644
--- a/pandas/tests/frame/test_api.py
+++ b/pandas/tests/frame/test_api.py
@@ -46,19 +46,19 @@ def test_get_value(self, float_frame):
def test_add_prefix_suffix(self, float_frame):
with_prefix = float_frame.add_prefix("foo#")
- expected = pd.Index(["foo#{c}".format(c=c) for c in float_frame.columns])
+ expected = pd.Index([f"foo#{c}" for c in float_frame.columns])
tm.assert_index_equal(with_prefix.columns, expected)
with_suffix = float_frame.add_suffix("#foo")
- expected = pd.Index(["{c}#foo".format(c=c) for c in float_frame.columns])
+ expected = pd.Index([f"{c}#foo" for c in float_frame.columns])
tm.assert_index_equal(with_suffix.columns, expected)
with_pct_prefix = float_frame.add_prefix("%")
- expected = pd.Index(["%{c}".format(c=c) for c in float_frame.columns])
+ expected = pd.Index([f"%{c}" for c in float_frame.columns])
tm.assert_index_equal(with_pct_prefix.columns, expected)
with_pct_suffix = float_frame.add_suffix("%")
- expected = pd.Index(["{c}%".format(c=c) for c in float_frame.columns])
+ expected = pd.Index([f"{c}%" for c in float_frame.columns])
tm.assert_index_equal(with_pct_suffix.columns, expected)
def test_get_axis(self, float_frame):
diff --git a/pandas/tests/frame/test_constructors.py b/pandas/tests/frame/test_constructors.py
index 5f4c78449f71d..8c9b7cd060059 100644
--- a/pandas/tests/frame/test_constructors.py
+++ b/pandas/tests/frame/test_constructors.py
@@ -278,7 +278,7 @@ def test_constructor_ordereddict(self):
nitems = 100
nums = list(range(nitems))
random.shuffle(nums)
- expected = ["A{i:d}".format(i=i) for i in nums]
+ expected = [f"A{i:d}" for i in nums]
df = DataFrame(OrderedDict(zip(expected, [[0]] * nitems)))
assert expected == list(df.columns)
diff --git a/pandas/tests/frame/test_dtypes.py b/pandas/tests/frame/test_dtypes.py
index 966f0d416676c..8b63f0614eebf 100644
--- a/pandas/tests/frame/test_dtypes.py
+++ b/pandas/tests/frame/test_dtypes.py
@@ -702,7 +702,7 @@ def test_astype_categorical(self, dtype):
@pytest.mark.parametrize("cls", [CategoricalDtype, DatetimeTZDtype, IntervalDtype])
def test_astype_categoricaldtype_class_raises(self, cls):
df = DataFrame({"A": ["a", "a", "b", "c"]})
- xpr = "Expected an instance of {}".format(cls.__name__)
+ xpr = f"Expected an instance of {cls.__name__}"
with pytest.raises(TypeError, match=xpr):
df.astype({"A": cls})
@@ -827,7 +827,7 @@ def test_df_where_change_dtype(self):
def test_astype_from_datetimelike_to_objectt(self, dtype, unit):
# tests astype to object dtype
# gh-19223 / gh-12425
- dtype = "{}[{}]".format(dtype, unit)
+ dtype = f"{dtype}[{unit}]"
arr = np.array([[1, 2, 3]], dtype=dtype)
df = DataFrame(arr)
result = df.astype(object)
@@ -844,7 +844,7 @@ def test_astype_from_datetimelike_to_objectt(self, dtype, unit):
def test_astype_to_datetimelike_unit(self, arr_dtype, dtype, unit):
# tests all units from numeric origination
# gh-19223 / gh-12425
- dtype = "{}[{}]".format(dtype, unit)
+ dtype = f"{dtype}[{unit}]"
arr = np.array([[1, 2, 3]], dtype=arr_dtype)
df = DataFrame(arr)
result = df.astype(dtype)
@@ -856,7 +856,7 @@ def test_astype_to_datetimelike_unit(self, arr_dtype, dtype, unit):
def test_astype_to_datetime_unit(self, unit):
# tests all units from datetime origination
# gh-19223
- dtype = "M8[{}]".format(unit)
+ dtype = f"M8[{unit}]"
arr = np.array([[1, 2, 3]], dtype=dtype)
df = DataFrame(arr)
result = df.astype(dtype)
@@ -868,7 +868,7 @@ def test_astype_to_datetime_unit(self, unit):
def test_astype_to_timedelta_unit_ns(self, unit):
# preserver the timedelta conversion
# gh-19223
- dtype = "m8[{}]".format(unit)
+ dtype = f"m8[{unit}]"
arr = np.array([[1, 2, 3]], dtype=dtype)
df = DataFrame(arr)
result = df.astype(dtype)
@@ -880,7 +880,7 @@ def test_astype_to_timedelta_unit_ns(self, unit):
def test_astype_to_timedelta_unit(self, unit):
# coerce to float
# gh-19223
- dtype = "m8[{}]".format(unit)
+ dtype = f"m8[{unit}]"
arr = np.array([[1, 2, 3]], dtype=dtype)
df = DataFrame(arr)
result = df.astype(dtype)
@@ -892,21 +892,21 @@ def test_astype_to_timedelta_unit(self, unit):
def test_astype_to_incorrect_datetimelike(self, unit):
# trying to astype a m to a M, or vice-versa
# gh-19224
- dtype = "M8[{}]".format(unit)
- other = "m8[{}]".format(unit)
+ dtype = f"M8[{unit}]"
+ other = f"m8[{unit}]"
df = DataFrame(np.array([[1, 2, 3]], dtype=dtype))
msg = (
- r"cannot astype a datetimelike from \[datetime64\[ns\]\] to "
- r"\[timedelta64\[{}\]\]"
- ).format(unit)
+            r"cannot astype a datetimelike from \[datetime64\[ns\]\] to "
+ fr"\[timedelta64\[{unit}\]\]"
+ )
with pytest.raises(TypeError, match=msg):
df.astype(other)
msg = (
- r"cannot astype a timedelta from \[timedelta64\[ns\]\] to "
- r"\[datetime64\[{}\]\]"
- ).format(unit)
+            r"cannot astype a timedelta from \[timedelta64\[ns\]\] to "
+ fr"\[datetime64\[{unit}\]\]"
+ )
df = DataFrame(np.array([[1, 2, 3]], dtype=other))
with pytest.raises(TypeError, match=msg):
df.astype(dtype)
diff --git a/pandas/tests/frame/test_join.py b/pandas/tests/frame/test_join.py
index c6e28f3c64f12..8c388a887158f 100644
--- a/pandas/tests/frame/test_join.py
+++ b/pandas/tests/frame/test_join.py
@@ -161,7 +161,7 @@ def test_join_overlap(float_frame):
def test_join_period_index(frame_with_period_index):
- other = frame_with_period_index.rename(columns=lambda x: "{key}{key}".format(key=x))
+ other = frame_with_period_index.rename(columns=lambda key: f"{key}{key}")
joined_values = np.concatenate([frame_with_period_index.values] * 2, axis=1)
diff --git a/pandas/tests/frame/test_operators.py b/pandas/tests/frame/test_operators.py
index 162f3c114fa5d..df40c2e7e2a11 100644
--- a/pandas/tests/frame/test_operators.py
+++ b/pandas/tests/frame/test_operators.py
@@ -840,8 +840,8 @@ def test_inplace_ops_identity2(self, op):
df["a"] = [True, False, True]
df_copy = df.copy()
- iop = "__i{}__".format(op)
- op = "__{}__".format(op)
+ iop = f"__i{op}__"
+ op = f"__{op}__"
# no id change and value is correct
getattr(df, iop)(operand)
diff --git a/pandas/tests/frame/test_query_eval.py b/pandas/tests/frame/test_query_eval.py
index 703e05998e93c..fede07e28dbef 100644
--- a/pandas/tests/frame/test_query_eval.py
+++ b/pandas/tests/frame/test_query_eval.py
@@ -101,10 +101,10 @@ def test_ops(self):
np.tile(m.values, n).reshape(n, -1), columns=list("abcd")
)
- expected = eval("base{op}df".format(op=op_str))
+ expected = eval(f"base{op_str}df")
# ops as strings
- result = eval("m{op}df".format(op=op_str))
+ result = eval(f"m{op_str}df")
tm.assert_frame_equal(result, expected)
# these are commutative
@@ -451,9 +451,7 @@ def test_date_query_with_non_date(self):
for op in ["<", ">", "<=", ">="]:
with pytest.raises(TypeError):
- df.query(
- "dates {op} nondate".format(op=op), parser=parser, engine=engine
- )
+ df.query(f"dates {op} nondate", parser=parser, engine=engine)
def test_query_syntax_error(self):
engine, parser = self.engine, self.parser
@@ -690,7 +688,7 @@ def test_inf(self):
ops = "==", "!="
d = dict(zip(ops, (operator.eq, operator.ne)))
for op, f in d.items():
- q = "a {op} inf".format(op=op)
+ q = f"a {op} inf"
expected = df[f(df.a, np.inf)]
result = df.query(q, engine=self.engine, parser=self.parser)
tm.assert_frame_equal(result, expected)
@@ -854,7 +852,7 @@ def test_str_query_method(self, parser, engine):
ops = 2 * ([eq] + [ne])
for lhs, op, rhs in zip(lhs, ops, rhs):
- ex = "{lhs} {op} {rhs}".format(lhs=lhs, op=op, rhs=rhs)
+ ex = f"{lhs} {op} {rhs}"
msg = r"'(Not)?In' nodes are not implemented"
with pytest.raises(NotImplementedError, match=msg):
df.query(
@@ -895,7 +893,7 @@ def test_str_list_query_method(self, parser, engine):
ops = 2 * ([eq] + [ne])
for lhs, op, rhs in zip(lhs, ops, rhs):
- ex = "{lhs} {op} {rhs}".format(lhs=lhs, op=op, rhs=rhs)
+ ex = f"{lhs} {op} {rhs}"
with pytest.raises(NotImplementedError):
df.query(ex, engine=engine, parser=parser)
else:
@@ -1042,7 +1040,7 @@ def test_invalid_type_for_operator_raises(self, parser, engine, op):
msg = r"unsupported operand type\(s\) for .+: '.+' and '.+'"
with pytest.raises(TypeError, match=msg):
- df.eval("a {0} b".format(op), engine=engine, parser=parser)
+ df.eval(f"a {op} b", engine=engine, parser=parser)
class TestDataFrameQueryBacktickQuoting:
diff --git a/pandas/tests/frame/test_reshape.py b/pandas/tests/frame/test_reshape.py
index b3af5a7b7317e..68519d1dfa33c 100644
--- a/pandas/tests/frame/test_reshape.py
+++ b/pandas/tests/frame/test_reshape.py
@@ -765,7 +765,7 @@ def test_unstack_unused_level(self, cols):
tm.assert_frame_equal(result, expected)
def test_unstack_nan_index(self): # GH7466
- cast = lambda val: "{0:1}".format("" if val != val else val)
+ cast = lambda val: f"{'' if val != val else val:1}"
def verify(df):
mk_list = lambda a: list(a) if isinstance(a, tuple) else [a]
diff --git a/pandas/tests/frame/test_timeseries.py b/pandas/tests/frame/test_timeseries.py
index e89f4ee07ea00..5e06b6402c34f 100644
--- a/pandas/tests/frame/test_timeseries.py
+++ b/pandas/tests/frame/test_timeseries.py
@@ -54,7 +54,7 @@ def test_frame_append_datetime64_col_other_units(self):
ns_dtype = np.dtype("M8[ns]")
for unit in units:
- dtype = np.dtype("M8[{unit}]".format(unit=unit))
+ dtype = np.dtype(f"M8[{unit}]")
vals = np.arange(n, dtype=np.int64).view(dtype)
df = DataFrame({"ints": np.arange(n)}, index=np.arange(n))
@@ -70,7 +70,7 @@ def test_frame_append_datetime64_col_other_units(self):
df["dates"] = np.arange(n, dtype=np.int64).view(ns_dtype)
for unit in units:
- dtype = np.dtype("M8[{unit}]".format(unit=unit))
+ dtype = np.dtype(f"M8[{unit}]")
vals = np.arange(n, dtype=np.int64).view(dtype)
tmp = df.copy()
diff --git a/pandas/tests/indexes/datetimes/test_scalar_compat.py b/pandas/tests/indexes/datetimes/test_scalar_compat.py
index 84eee2419f0b8..21ee8649172da 100644
--- a/pandas/tests/indexes/datetimes/test_scalar_compat.py
+++ b/pandas/tests/indexes/datetimes/test_scalar_compat.py
@@ -248,21 +248,21 @@ def test_round_int64(self, start, index_freq, periods, round_freq):
result = dt.floor(round_freq)
diff = dt.asi8 - result.asi8
mod = result.asi8 % unit
- assert (mod == 0).all(), "floor not a {} multiple".format(round_freq)
+ assert (mod == 0).all(), f"floor not a {round_freq} multiple"
assert (0 <= diff).all() and (diff < unit).all(), "floor error"
# test ceil
result = dt.ceil(round_freq)
diff = result.asi8 - dt.asi8
mod = result.asi8 % unit
- assert (mod == 0).all(), "ceil not a {} multiple".format(round_freq)
+ assert (mod == 0).all(), f"ceil not a {round_freq} multiple"
assert (0 <= diff).all() and (diff < unit).all(), "ceil error"
# test round
result = dt.round(round_freq)
diff = abs(result.asi8 - dt.asi8)
mod = result.asi8 % unit
- assert (mod == 0).all(), "round not a {} multiple".format(round_freq)
+ assert (mod == 0).all(), f"round not a {round_freq} multiple"
assert (diff <= unit // 2).all(), "round error"
if unit % 2 == 0:
assert (
diff --git a/pandas/tests/indexes/datetimes/test_tools.py b/pandas/tests/indexes/datetimes/test_tools.py
index df3a49fb7c292..13723f6455bff 100644
--- a/pandas/tests/indexes/datetimes/test_tools.py
+++ b/pandas/tests/indexes/datetimes/test_tools.py
@@ -199,7 +199,7 @@ def test_to_datetime_format_microsecond(self, cache):
# these are locale dependent
lang, _ = locale.getlocale()
month_abbr = calendar.month_abbr[4]
- val = "01-{}-2011 00:00:01.978".format(month_abbr)
+ val = f"01-{month_abbr}-2011 00:00:01.978"
format = "%d-%b-%Y %H:%M:%S.%f"
result = to_datetime(val, format=format, cache=cache)
@@ -551,7 +551,7 @@ def test_to_datetime_dt64s(self, cache):
)
@pytest.mark.parametrize("cache", [True, False])
def test_to_datetime_dt64s_out_of_bounds(self, cache, dt):
- msg = "Out of bounds nanosecond timestamp: {}".format(dt)
+ msg = f"Out of bounds nanosecond timestamp: {dt}"
with pytest.raises(OutOfBoundsDatetime, match=msg):
pd.to_datetime(dt, errors="raise")
with pytest.raises(OutOfBoundsDatetime, match=msg):
diff --git a/pandas/tests/indexes/interval/test_indexing.py b/pandas/tests/indexes/interval/test_indexing.py
index 87b72f702e2aa..0e5721bfd83fd 100644
--- a/pandas/tests/indexes/interval/test_indexing.py
+++ b/pandas/tests/indexes/interval/test_indexing.py
@@ -24,11 +24,7 @@ def test_get_loc_interval(self, closed, side):
for bound in [[0, 1], [1, 2], [2, 3], [3, 4], [0, 2], [2.5, 3], [-1, 4]]:
# if get_loc is supplied an interval, it should only search
# for exact matches, not overlaps or covers, else KeyError.
- msg = re.escape(
- "Interval({bound[0]}, {bound[1]}, closed='{side}')".format(
- bound=bound, side=side
- )
- )
+ msg = re.escape(f"Interval({bound[0]}, {bound[1]}, closed='{side}')")
if closed == side:
if bound == [0, 1]:
assert idx.get_loc(Interval(0, 1, closed=side)) == 0
@@ -86,11 +82,7 @@ def test_get_loc_length_one_interval(self, left, right, closed, other_closed):
else:
with pytest.raises(
KeyError,
- match=re.escape(
- "Interval({left}, {right}, closed='{other_closed}')".format(
- left=left, right=right, other_closed=other_closed
- )
- ),
+ match=re.escape(f"Interval({left}, {right}, closed='{other_closed}')"),
):
index.get_loc(interval)
diff --git a/pandas/tests/indexes/interval/test_interval.py b/pandas/tests/indexes/interval/test_interval.py
index d010060880703..c2b209c810af9 100644
--- a/pandas/tests/indexes/interval/test_interval.py
+++ b/pandas/tests/indexes/interval/test_interval.py
@@ -845,7 +845,7 @@ def test_set_closed(self, name, closed, new_closed):
def test_set_closed_errors(self, bad_closed):
# GH 21670
index = interval_range(0, 5)
- msg = "invalid option for 'closed': {closed}".format(closed=bad_closed)
+ msg = f"invalid option for 'closed': {bad_closed}"
with pytest.raises(ValueError, match=msg):
index.set_closed(bad_closed)
diff --git a/pandas/tests/indexes/interval/test_setops.py b/pandas/tests/indexes/interval/test_setops.py
index 3246ac6bafde9..b9eb8b7c41018 100644
--- a/pandas/tests/indexes/interval/test_setops.py
+++ b/pandas/tests/indexes/interval/test_setops.py
@@ -180,8 +180,8 @@ def test_set_incompatible_types(self, closed, op_name, sort):
# GH 19016: incompatible dtypes
other = interval_range(Timestamp("20180101"), periods=9, closed=closed)
msg = (
- "can only do {op} between two IntervalIndex objects that have "
- "compatible dtypes"
- ).format(op=op_name)
+ f"can only do {op_name} between two IntervalIndex objects that have "
+        "compatible dtypes"
+ )
with pytest.raises(TypeError, match=msg):
set_op(other, sort=sort)
diff --git a/pandas/tests/indexes/multi/test_compat.py b/pandas/tests/indexes/multi/test_compat.py
index 9a76f0623eb31..ef549beccda5d 100644
--- a/pandas/tests/indexes/multi/test_compat.py
+++ b/pandas/tests/indexes/multi/test_compat.py
@@ -29,7 +29,7 @@ def test_numeric_compat(idx):
@pytest.mark.parametrize("method", ["all", "any"])
def test_logical_compat(idx, method):
- msg = "cannot perform {method}".format(method=method)
+ msg = f"cannot perform {method}"
with pytest.raises(TypeError, match=msg):
getattr(idx, method)()
diff --git a/pandas/tests/indexes/period/test_constructors.py b/pandas/tests/indexes/period/test_constructors.py
index fcbadce3d63b1..418f53591b913 100644
--- a/pandas/tests/indexes/period/test_constructors.py
+++ b/pandas/tests/indexes/period/test_constructors.py
@@ -364,7 +364,7 @@ def test_constructor_year_and_quarter(self):
year = pd.Series([2001, 2002, 2003])
quarter = year - 2000
idx = PeriodIndex(year=year, quarter=quarter)
- strs = ["{t[0]:d}Q{t[1]:d}".format(t=t) for t in zip(quarter, year)]
+ strs = [f"{t[0]:d}Q{t[1]:d}" for t in zip(quarter, year)]
lops = list(map(Period, strs))
p = PeriodIndex(lops)
tm.assert_index_equal(p, idx)
diff --git a/pandas/tests/indexes/timedeltas/test_constructors.py b/pandas/tests/indexes/timedeltas/test_constructors.py
index 0de10b5d82171..8e54561df1624 100644
--- a/pandas/tests/indexes/timedeltas/test_constructors.py
+++ b/pandas/tests/indexes/timedeltas/test_constructors.py
@@ -155,7 +155,7 @@ def test_constructor(self):
def test_constructor_iso(self):
# GH #21877
expected = timedelta_range("1s", periods=9, freq="s")
- durations = ["P0DT0H0M{}S".format(i) for i in range(1, 10)]
+ durations = [f"P0DT0H0M{i}S" for i in range(1, 10)]
result = to_timedelta(durations)
tm.assert_index_equal(result, expected)
diff --git a/pandas/tests/indexing/test_floats.py b/pandas/tests/indexing/test_floats.py
index 6cc18a3989266..e85561fce0668 100644
--- a/pandas/tests/indexing/test_floats.py
+++ b/pandas/tests/indexing/test_floats.py
@@ -53,8 +53,8 @@ def test_scalar_error(self, index_func):
s.iloc[3.0]
msg = (
- "cannot do positional indexing on {klass} with these "
- r"indexers \[3\.0\] of type float".format(klass=type(i).__name__)
+ fr"cannot do positional indexing on {type(i).__name__} with these "
+        r"indexers \[3\.0\] of type float"
)
with pytest.raises(TypeError, match=msg):
s.iloc[3.0] = 0
@@ -94,11 +94,11 @@ def test_scalar_non_numeric(self, index_func):
else:
error = TypeError
msg = (
- r"cannot do (label|positional) indexing "
- r"on {klass} with these indexers \[3\.0\] of "
- r"type float|"
- "Cannot index by location index with a "
- "non-integer key".format(klass=type(i).__name__)
+            r"cannot do (label|positional) indexing "
+ fr"on {type(i).__name__} with these indexers \[3\.0\] of "
+            r"type float|"
+            "Cannot index by location index with a "
+            "non-integer key"
)
with pytest.raises(error, match=msg):
idxr(s)[3.0]
@@ -115,9 +115,9 @@ def test_scalar_non_numeric(self, index_func):
else:
error = TypeError
msg = (
- r"cannot do (label|positional) indexing "
- r"on {klass} with these indexers \[3\.0\] of "
- r"type float".format(klass=type(i).__name__)
+            r"cannot do (label|positional) indexing "
+            fr"on {type(i).__name__} with these indexers \[3\.0\] of "
+            r"type float"
)
with pytest.raises(error, match=msg):
s.loc[3.0]
@@ -127,9 +127,9 @@ def test_scalar_non_numeric(self, index_func):
# setting with a float fails with iloc
msg = (
- r"cannot do (label|positional) indexing "
- r"on {klass} with these indexers \[3\.0\] of "
- r"type float".format(klass=type(i).__name__)
+            r"cannot do (label|positional) indexing "
+            fr"on {type(i).__name__} with these indexers \[3\.0\] of "
+            r"type float"
)
with pytest.raises(TypeError, match=msg):
s.iloc[3.0] = 0
@@ -164,9 +164,9 @@ def test_scalar_non_numeric(self, index_func):
s = Series(np.arange(len(i)), index=i)
s[3]
msg = (
- r"cannot do (label|positional) indexing "
- r"on {klass} with these indexers \[3\.0\] of "
- r"type float".format(klass=type(i).__name__)
+            r"cannot do (label|positional) indexing "
+            fr"on {type(i).__name__} with these indexers \[3\.0\] of "
+            r"type float"
)
with pytest.raises(TypeError, match=msg):
s[3.0]
@@ -181,12 +181,10 @@ def test_scalar_with_mixed(self):
for idxr in [lambda x: x, lambda x: x.iloc]:
msg = (
- r"cannot do label indexing "
- r"on {klass} with these indexers \[1\.0\] of "
- r"type float|"
- "Cannot index by location index with a non-integer key".format(
- klass=Index.__name__
- )
+            r"cannot do label indexing "
+            fr"on {Index.__name__} with these indexers \[1\.0\] of "
+            r"type float|"
+            "Cannot index by location index with a non-integer key"
)
with pytest.raises(TypeError, match=msg):
idxr(s2)[1.0]
@@ -203,9 +201,9 @@ def test_scalar_with_mixed(self):
for idxr in [lambda x: x]:
msg = (
- r"cannot do label indexing "
- r"on {klass} with these indexers \[1\.0\] of "
- r"type float".format(klass=Index.__name__)
+            r"cannot do label indexing "
+            fr"on {Index.__name__} with these indexers \[1\.0\] of "
+            r"type float"
)
with pytest.raises(TypeError, match=msg):
idxr(s3)[1.0]
@@ -321,9 +319,9 @@ def test_scalar_float(self):
s.iloc[3.0]
msg = (
- r"cannot do positional indexing "
- r"on {klass} with these indexers \[3\.0\] of "
- r"type float".format(klass=Float64Index.__name__)
+            r"cannot do positional indexing "
+            fr"on {Float64Index.__name__} with these indexers \[3\.0\] of "
+            r"type float"
)
with pytest.raises(TypeError, match=msg):
s2.iloc[3.0] = 0
@@ -354,9 +352,9 @@ def test_slice_non_numeric(self, index_func):
for l in [slice(3.0, 4), slice(3, 4.0), slice(3.0, 4.0)]:
msg = (
- "cannot do positional indexing "
- r"on {klass} with these indexers \[(3|4)\.0\] of "
- "type float".format(klass=type(index).__name__)
+            "cannot do positional indexing "
+            fr"on {type(index).__name__} with these indexers \[(3|4)\.0\] of "
+            "type float"
)
with pytest.raises(TypeError, match=msg):
s.iloc[l]
@@ -364,10 +362,10 @@ def test_slice_non_numeric(self, index_func):
for idxr in [lambda x: x.loc, lambda x: x.iloc, lambda x: x]:
msg = (
- "cannot do (slice|positional) indexing "
- r"on {klass} with these indexers "
- r"\[(3|4)(\.0)?\] "
- r"of type (float|int)".format(klass=type(index).__name__)
+            "cannot do (slice|positional) indexing "
+            fr"on {type(index).__name__} with these indexers "
+            r"\[(3|4)(\.0)?\] "
+            r"of type (float|int)"
)
with pytest.raises(TypeError, match=msg):
idxr(s)[l]
@@ -376,19 +374,19 @@ def test_slice_non_numeric(self, index_func):
for l in [slice(3.0, 4), slice(3, 4.0), slice(3.0, 4.0)]:
msg = (
- "cannot do positional indexing "
- r"on {klass} with these indexers \[(3|4)\.0\] of "
- "type float".format(klass=type(index).__name__)
+            "cannot do positional indexing "
+            fr"on {type(index).__name__} with these indexers \[(3|4)\.0\] of "
+            "type float"
)
with pytest.raises(TypeError, match=msg):
s.iloc[l] = 0
for idxr in [lambda x: x.loc, lambda x: x.iloc, lambda x: x]:
msg = (
- "cannot do (slice|positional) indexing "
- r"on {klass} with these indexers "
- r"\[(3|4)(\.0)?\] "
- r"of type (float|int)".format(klass=type(index).__name__)
+            "cannot do (slice|positional) indexing "
+            fr"on {type(index).__name__} with these indexers "
+            r"\[(3|4)(\.0)?\] "
+            r"of type (float|int)"
)
with pytest.raises(TypeError, match=msg):
idxr(s)[l] = 0
@@ -426,9 +424,9 @@ def test_slice_integer(self):
# positional indexing
msg = (
- "cannot do slice indexing "
- r"on {klass} with these indexers \[(3|4)\.0\] of "
- "type float".format(klass=type(index).__name__)
+            "cannot do slice indexing "
+            fr"on {type(index).__name__} with these indexers \[(3|4)\.0\] of "
+            "type float"
)
with pytest.raises(TypeError, match=msg):
s[l]
@@ -450,9 +448,9 @@ def test_slice_integer(self):
# positional indexing
msg = (
- "cannot do slice indexing "
- r"on {klass} with these indexers \[-6\.0\] of "
- "type float".format(klass=type(index).__name__)
+            "cannot do slice indexing "
+            fr"on {type(index).__name__} with these indexers \[-6\.0\] of "
+            "type float"
)
with pytest.raises(TypeError, match=msg):
s[slice(-6.0, 6.0)]
@@ -476,9 +474,9 @@ def test_slice_integer(self):
# positional indexing
msg = (
- "cannot do slice indexing "
- r"on {klass} with these indexers \[(2|3)\.5\] of "
- "type float".format(klass=type(index).__name__)
+            "cannot do slice indexing "
+            fr"on {type(index).__name__} with these indexers \[(2|3)\.5\] of "
+            "type float"
)
with pytest.raises(TypeError, match=msg):
s[l]
@@ -494,9 +492,9 @@ def test_slice_integer(self):
# positional indexing
msg = (
- "cannot do slice indexing "
- r"on {klass} with these indexers \[(3|4)\.0\] of "
- "type float".format(klass=type(index).__name__)
+            "cannot do slice indexing "
+            fr"on {type(index).__name__} with these indexers \[(3|4)\.0\] of "
+            "type float"
)
with pytest.raises(TypeError, match=msg):
s[l] = 0
@@ -517,9 +515,9 @@ def test_integer_positional_indexing(self):
klass = RangeIndex
msg = (
- "cannot do (slice|positional) indexing "
- r"on {klass} with these indexers \[(2|4)\.0\] of "
- "type float".format(klass=klass.__name__)
+            "cannot do (slice|positional) indexing "
+            fr"on {klass.__name__} with these indexers \[(2|4)\.0\] of "
+            "type float"
)
with pytest.raises(TypeError, match=msg):
idxr(s)[l]
@@ -545,9 +543,9 @@ def f(idxr):
# positional indexing
msg = (
- "cannot do slice indexing "
- r"on {klass} with these indexers \[(0|1)\.0\] of "
- "type float".format(klass=type(index).__name__)
+            "cannot do slice indexing "
+            fr"on {type(index).__name__} with these indexers \[(0|1)\.0\] of "
+            "type float"
)
with pytest.raises(TypeError, match=msg):
s[l]
@@ -560,9 +558,9 @@ def f(idxr):
# positional indexing
msg = (
- "cannot do slice indexing "
- r"on {klass} with these indexers \[-10\.0\] of "
- "type float".format(klass=type(index).__name__)
+            "cannot do slice indexing "
+            fr"on {type(index).__name__} with these indexers \[-10\.0\] of "
+            "type float"
)
with pytest.raises(TypeError, match=msg):
s[slice(-10.0, 10.0)]
@@ -579,9 +577,9 @@ def f(idxr):
# positional indexing
msg = (
- "cannot do slice indexing "
- r"on {klass} with these indexers \[0\.5\] of "
- "type float".format(klass=type(index).__name__)
+            "cannot do slice indexing "
+            fr"on {type(index).__name__} with these indexers \[0\.5\] of "
+            "type float"
)
with pytest.raises(TypeError, match=msg):
s[l]
@@ -596,9 +594,9 @@ def f(idxr):
# positional indexing
msg = (
- "cannot do slice indexing "
- r"on {klass} with these indexers \[(3|4)\.0\] of "
- "type float".format(klass=type(index).__name__)
+            "cannot do slice indexing "
+            fr"on {type(index).__name__} with these indexers \[(3|4)\.0\] of "
+            "type float"
)
with pytest.raises(TypeError, match=msg):
s[l] = 0
diff --git a/pandas/tests/internals/test_internals.py b/pandas/tests/internals/test_internals.py
index aa966caa63238..e12167c980dec 100644
--- a/pandas/tests/internals/test_internals.py
+++ b/pandas/tests/internals/test_internals.py
@@ -91,9 +91,7 @@ def create_block(typestr, placement, item_shape=None, num_offset=0):
elif typestr in ("complex", "c16", "c8"):
values = 1.0j * (mat.astype(typestr) + num_offset)
elif typestr in ("object", "string", "O"):
- values = np.reshape(
- ["A{i:d}".format(i=i) for i in mat.ravel() + num_offset], shape
- )
+ values = np.reshape([f"A{i:d}" for i in mat.ravel() + num_offset], shape)
elif typestr in ("b", "bool"):
values = np.ones(shape, dtype=np.bool_)
elif typestr in ("datetime", "dt", "M8[ns]"):
@@ -101,7 +99,7 @@ def create_block(typestr, placement, item_shape=None, num_offset=0):
elif typestr.startswith("M8[ns"):
# datetime with tz
m = re.search(r"M8\[ns,\s*(\w+\/?\w*)\]", typestr)
- assert m is not None, "incompatible typestr -> {0}".format(typestr)
+ assert m is not None, f"incompatible typestr -> {typestr}"
tz = m.groups()[0]
assert num_items == 1, "must have only 1 num items for a tz-aware"
values = DatetimeIndex(np.arange(N) * 1e9, tz=tz)
@@ -610,9 +608,9 @@ def test_interleave(self):
# self
for dtype in ["f8", "i8", "object", "bool", "complex", "M8[ns]", "m8[ns]"]:
- mgr = create_mgr("a: {0}".format(dtype))
+ mgr = create_mgr(f"a: {dtype}")
assert mgr.as_array().dtype == dtype
- mgr = create_mgr("a: {0}; b: {0}".format(dtype))
+ mgr = create_mgr(f"a: {dtype}; b: {dtype}")
assert mgr.as_array().dtype == dtype
# will be converted according the actual dtype of the underlying
@@ -1164,7 +1162,7 @@ def __array__(self):
return np.array(self.value, dtype=self.dtype)
def __str__(self) -> str:
- return "DummyElement({}, {})".format(self.value, self.dtype)
+ return f"DummyElement({self.value}, {self.dtype})"
def __repr__(self) -> str:
return str(self)
diff --git a/pandas/tests/io/excel/test_readers.py b/pandas/tests/io/excel/test_readers.py
index 8d00ef1b7fe3e..d18f83982ce25 100644
--- a/pandas/tests/io/excel/test_readers.py
+++ b/pandas/tests/io/excel/test_readers.py
@@ -596,7 +596,7 @@ def test_read_from_file_url(self, read_ext, datapath):
# fails on some systems
import platform
- pytest.skip("failing on {}".format(" ".join(platform.uname()).strip()))
+ pytest.skip(f"failing on {' '.join(platform.uname()).strip()}")
tm.assert_frame_equal(url_table, local_table)
@@ -957,7 +957,7 @@ def test_excel_passes_na_filter(self, read_ext, na_filter):
def test_unexpected_kwargs_raises(self, read_ext, arg):
# gh-17964
kwarg = {arg: "Sheet1"}
- msg = r"unexpected keyword argument `{}`".format(arg)
+ msg = fr"unexpected keyword argument `{arg}`"
with pd.ExcelFile("test1" + read_ext) as excel:
with pytest.raises(TypeError, match=msg):
diff --git a/pandas/tests/io/excel/test_style.py b/pandas/tests/io/excel/test_style.py
index 88f4c3736bc0d..31b033f381f0c 100644
--- a/pandas/tests/io/excel/test_style.py
+++ b/pandas/tests/io/excel/test_style.py
@@ -45,10 +45,7 @@ def style(df):
def assert_equal_style(cell1, cell2, engine):
if engine in ["xlsxwriter", "openpyxl"]:
pytest.xfail(
- reason=(
- "GH25351: failing on some attribute "
- "comparisons in {}".format(engine)
- )
+            reason=f"GH25351: failing on some attribute comparisons in {engine}"
)
# XXX: should find a better way to check equality
assert cell1.alignment.__dict__ == cell2.alignment.__dict__
@@ -108,7 +105,7 @@ def custom_converter(css):
for col1, col2 in zip(wb["frame"].columns, wb["styled"].columns):
assert len(col1) == len(col2)
for cell1, cell2 in zip(col1, col2):
- ref = "{cell2.column}{cell2.row:d}".format(cell2=cell2)
+ ref = f"{cell2.column}{cell2.row:d}"
# XXX: this isn't as strong a test as ideal; we should
# confirm that differences are exclusive
if ref == "B2":
@@ -156,7 +153,7 @@ def custom_converter(css):
for col1, col2 in zip(wb["frame"].columns, wb["custom"].columns):
assert len(col1) == len(col2)
for cell1, cell2 in zip(col1, col2):
- ref = "{cell2.column}{cell2.row:d}".format(cell2=cell2)
+ ref = f"{cell2.column}{cell2.row:d}"
if ref in ("B2", "C3", "D4", "B5", "C6", "D7", "B8", "B9"):
assert not cell1.font.bold
assert cell2.font.bold
diff --git a/pandas/tests/io/excel/test_writers.py b/pandas/tests/io/excel/test_writers.py
index 91665a24fc4c5..506d223dbedb4 100644
--- a/pandas/tests/io/excel/test_writers.py
+++ b/pandas/tests/io/excel/test_writers.py
@@ -41,7 +41,7 @@ def set_engine(engine, ext):
which engine should be used to write Excel files. After executing
the test it rolls back said change to the global option.
"""
- option_name = "io.excel.{ext}.writer".format(ext=ext.strip("."))
+ option_name = f"io.excel.{ext.strip('.')}.writer"
prev_engine = get_option(option_name)
set_option(option_name, engine)
yield
@@ -1206,7 +1206,7 @@ def test_path_path_lib(self, engine, ext):
writer = partial(df.to_excel, engine=engine)
reader = partial(pd.read_excel, index_col=0)
- result = tm.round_trip_pathlib(writer, reader, path="foo.{ext}".format(ext=ext))
+ result = tm.round_trip_pathlib(writer, reader, path=f"foo.{ext}")
tm.assert_frame_equal(result, df)
def test_path_local_path(self, engine, ext):
@@ -1214,7 +1214,7 @@ def test_path_local_path(self, engine, ext):
writer = partial(df.to_excel, engine=engine)
reader = partial(pd.read_excel, index_col=0)
- result = tm.round_trip_pathlib(writer, reader, path="foo.{ext}".format(ext=ext))
+ result = tm.round_trip_pathlib(writer, reader, path=f"foo.{ext}")
tm.assert_frame_equal(result, df)
def test_merged_cell_custom_objects(self, merge_cells, path):
diff --git a/pandas/tests/io/excel/test_xlrd.py b/pandas/tests/io/excel/test_xlrd.py
index cc7e2311f362a..d456afe4ed351 100644
--- a/pandas/tests/io/excel/test_xlrd.py
+++ b/pandas/tests/io/excel/test_xlrd.py
@@ -37,7 +37,7 @@ def test_read_xlrd_book(read_ext, frame):
# TODO: test for openpyxl as well
def test_excel_table_sheet_by_index(datapath, read_ext):
- path = datapath("io", "data", "excel", "test1{}".format(read_ext))
+ path = datapath("io", "data", "excel", f"test1{read_ext}")
with pd.ExcelFile(path) as excel:
with pytest.raises(xlrd.XLRDError):
pd.read_excel(excel, "asdf")
diff --git a/pandas/tests/io/formats/test_console.py b/pandas/tests/io/formats/test_console.py
index e56d14885f11e..b57a2393461a2 100644
--- a/pandas/tests/io/formats/test_console.py
+++ b/pandas/tests/io/formats/test_console.py
@@ -34,8 +34,8 @@ def test_detect_console_encoding_from_stdout_stdin(monkeypatch, empty, filled):
# they have values filled.
# GH 21552
with monkeypatch.context() as context:
- context.setattr("sys.{}".format(empty), MockEncoding(""))
- context.setattr("sys.{}".format(filled), MockEncoding(filled))
+ context.setattr(f"sys.{empty}", MockEncoding(""))
+ context.setattr(f"sys.{filled}", MockEncoding(filled))
assert detect_console_encoding() == filled
diff --git a/pandas/tests/io/formats/test_to_html.py b/pandas/tests/io/formats/test_to_html.py
index d3f044a42eb28..9a14022d6f776 100644
--- a/pandas/tests/io/formats/test_to_html.py
+++ b/pandas/tests/io/formats/test_to_html.py
@@ -300,7 +300,7 @@ def test_to_html_border(option, result, expected):
else:
with option_context("display.html.border", option):
result = result(df)
- expected = 'border="{}"'.format(expected)
+ expected = f'border="{expected}"'
assert expected in result
@@ -318,7 +318,7 @@ def test_to_html(biggie_df_fixture):
assert isinstance(s, str)
df.to_html(columns=["B", "A"], col_space=17)
- df.to_html(columns=["B", "A"], formatters={"A": lambda x: "{x:.1f}".format(x=x)})
+ df.to_html(columns=["B", "A"], formatters={"A": lambda x: f"{x:.1f}"})
df.to_html(columns=["B", "A"], float_format=str)
df.to_html(columns=["B", "A"], col_space=12, float_format=str)
@@ -745,7 +745,7 @@ def test_to_html_with_col_space_units(unit):
if isinstance(unit, int):
unit = str(unit) + "px"
for h in hdrs:
- expected = '<th style="min-width: {unit};">'.format(unit=unit)
+ expected = f'<th style="min-width: {unit};">'
assert expected in h
diff --git a/pandas/tests/io/formats/test_to_latex.py b/pandas/tests/io/formats/test_to_latex.py
index bd681032f155d..c2fbc59b8f482 100644
--- a/pandas/tests/io/formats/test_to_latex.py
+++ b/pandas/tests/io/formats/test_to_latex.py
@@ -117,10 +117,10 @@ def test_to_latex_with_formatters(self):
formatters = {
"datetime64": lambda x: x.strftime("%Y-%m"),
- "float": lambda x: "[{x: 4.1f}]".format(x=x),
- "int": lambda x: "0x{x:x}".format(x=x),
- "object": lambda x: "-{x!s}-".format(x=x),
- "__index__": lambda x: "index: {x}".format(x=x),
+ "float": lambda x: f"[{x: 4.1f}]",
+ "int": lambda x: f"0x{x:x}",
+ "object": lambda x: f"-{x!s}-",
+ "__index__": lambda x: f"index: {x}",
}
result = df.to_latex(formatters=dict(formatters))
@@ -744,9 +744,7 @@ def test_to_latex_multiindex_names(self, name0, name1, axes):
idx_names = tuple(n or "{}" for n in names)
idx_names_row = (
- "{idx_names[0]} & {idx_names[1]} & & & & \\\\\n".format(
- idx_names=idx_names
- )
+ f"{idx_names[0]} & {idx_names[1]} & & & & \\\\\n"
if (0 in axes and any(names))
else ""
)
diff --git a/pandas/tests/io/generate_legacy_storage_files.py b/pandas/tests/io/generate_legacy_storage_files.py
index 67b767a337a89..62cbcacc7e1f9 100755
--- a/pandas/tests/io/generate_legacy_storage_files.py
+++ b/pandas/tests/io/generate_legacy_storage_files.py
@@ -324,17 +324,17 @@ def write_legacy_pickles(output_dir):
"This script generates a storage file for the current arch, system, "
"and python version"
)
- print(" pandas version: {0}".format(version))
- print(" output dir : {0}".format(output_dir))
+ print(f" pandas version: {version}")
+ print(f" output dir : {output_dir}")
print(" storage format: pickle")
- pth = "{0}.pickle".format(platform_name())
+ pth = f"{platform_name()}.pickle"
fh = open(os.path.join(output_dir, pth), "wb")
pickle.dump(create_pickle_data(), fh, pickle.HIGHEST_PROTOCOL)
fh.close()
- print("created pickle file: {pth}".format(pth=pth))
+ print(f"created pickle file: {pth}")
def write_legacy_file():
diff --git a/pandas/tests/io/parser/test_c_parser_only.py b/pandas/tests/io/parser/test_c_parser_only.py
index 1737f14e7adf9..5bbabc8e18c47 100644
--- a/pandas/tests/io/parser/test_c_parser_only.py
+++ b/pandas/tests/io/parser/test_c_parser_only.py
@@ -158,7 +158,7 @@ def test_precise_conversion(c_parser_only):
# test numbers between 1 and 2
for num in np.linspace(1.0, 2.0, num=500):
# 25 decimal digits of precision
- text = "a\n{0:.25}".format(num)
+ text = f"a\n{num:.25}"
normal_val = float(parser.read_csv(StringIO(text))["a"][0])
precise_val = float(
@@ -170,7 +170,7 @@ def test_precise_conversion(c_parser_only):
actual_val = Decimal(text[2:])
def error(val):
- return abs(Decimal("{0:.100}".format(val)) - actual_val)
+ return abs(Decimal(f"{val:.100}") - actual_val)
normal_errors.append(error(normal_val))
precise_errors.append(error(precise_val))
@@ -299,9 +299,7 @@ def test_grow_boundary_at_cap(c_parser_only):
def test_empty_header_read(count):
s = StringIO("," * count)
- expected = DataFrame(
- columns=["Unnamed: {i}".format(i=i) for i in range(count + 1)]
- )
+ expected = DataFrame(columns=[f"Unnamed: {i}" for i in range(count + 1)])
df = parser.read_csv(s)
tm.assert_frame_equal(df, expected)
@@ -489,7 +487,7 @@ def test_comment_whitespace_delimited(c_parser_only, capsys):
captured = capsys.readouterr()
# skipped lines 2, 3, 4, 9
for line_num in (2, 3, 4, 9):
- assert "Skipping line {}".format(line_num) in captured.err
+ assert f"Skipping line {line_num}" in captured.err
expected = DataFrame([[1, 2], [5, 2], [6, 2], [7, np.nan], [8, np.nan]])
tm.assert_frame_equal(df, expected)
diff --git a/pandas/tests/io/parser/test_common.py b/pandas/tests/io/parser/test_common.py
index c19056d434ec3..b3aa1aa14a509 100644
--- a/pandas/tests/io/parser/test_common.py
+++ b/pandas/tests/io/parser/test_common.py
@@ -957,7 +957,7 @@ def test_nonexistent_path(all_parsers):
# gh-14086: raise more helpful FileNotFoundError
# GH#29233 "File foo" instead of "File b'foo'"
parser = all_parsers
- path = "{}.csv".format(tm.rands(10))
+ path = f"{tm.rands(10)}.csv"
msg = f"File {path} does not exist" if parser.engine == "c" else r"\[Errno 2\]"
with pytest.raises(FileNotFoundError, match=msg) as e:
@@ -1872,7 +1872,7 @@ def test_internal_eof_byte_to_file(all_parsers):
parser = all_parsers
data = b'c1,c2\r\n"test \x1a test", test\r\n'
expected = DataFrame([["test \x1a test", " test"]], columns=["c1", "c2"])
- path = "__{}__.csv".format(tm.rands(10))
+ path = f"__{tm.rands(10)}__.csv"
with tm.ensure_clean(path) as path:
with open(path, "wb") as f:
diff --git a/pandas/tests/io/parser/test_compression.py b/pandas/tests/io/parser/test_compression.py
index dc03370daa1e2..41bf022b7458c 100644
--- a/pandas/tests/io/parser/test_compression.py
+++ b/pandas/tests/io/parser/test_compression.py
@@ -145,7 +145,7 @@ def test_invalid_compression(all_parsers, invalid_compression):
parser = all_parsers
compress_kwargs = dict(compression=invalid_compression)
- msg = "Unrecognized compression type: {compression}".format(**compress_kwargs)
+ msg = f"Unrecognized compression type: {compress_kwargs['compression']}"
with pytest.raises(ValueError, match=msg):
parser.read_csv("test_file.zip", **compress_kwargs)
diff --git a/pandas/tests/io/parser/test_encoding.py b/pandas/tests/io/parser/test_encoding.py
index 13f72a0414bac..3661e4e056db2 100644
--- a/pandas/tests/io/parser/test_encoding.py
+++ b/pandas/tests/io/parser/test_encoding.py
@@ -45,7 +45,7 @@ def test_utf16_bom_skiprows(all_parsers, sep, encoding):
4,5,6""".replace(
",", sep
)
- path = "__{}__.csv".format(tm.rands(10))
+ path = f"__{tm.rands(10)}__.csv"
kwargs = dict(sep=sep, skiprows=2)
utf8 = "utf-8"
diff --git a/pandas/tests/io/parser/test_multi_thread.py b/pandas/tests/io/parser/test_multi_thread.py
index 64ccaf60ec230..458ff4da55ed3 100644
--- a/pandas/tests/io/parser/test_multi_thread.py
+++ b/pandas/tests/io/parser/test_multi_thread.py
@@ -41,9 +41,7 @@ def test_multi_thread_string_io_read_csv(all_parsers):
num_files = 100
bytes_to_df = [
- "\n".join(
- ["{i:d},{i:d},{i:d}".format(i=i) for i in range(max_row_range)]
- ).encode()
+ "\n".join([f"{i:d},{i:d},{i:d}" for i in range(max_row_range)]).encode()
for _ in range(num_files)
]
files = [BytesIO(b) for b in bytes_to_df]
diff --git a/pandas/tests/io/parser/test_na_values.py b/pandas/tests/io/parser/test_na_values.py
index f9a083d7f5d22..da9930d910043 100644
--- a/pandas/tests/io/parser/test_na_values.py
+++ b/pandas/tests/io/parser/test_na_values.py
@@ -111,10 +111,10 @@ def f(i, v):
elif i > 0:
buf = "".join([","] * i)
- buf = "{0}{1}".format(buf, v)
+ buf = f"{buf}{v}"
if i < nv - 1:
- buf = "{0}{1}".format(buf, "".join([","] * (nv - i - 1)))
+ buf = f"{buf}{''.join([','] * (nv - i - 1))}"
return buf
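The rewrite in the hunk above embeds a `''.join(...)` call directly in the f-string expression. A sketch of why the quoting works (values are illustrative): before Python 3.12, quotes inside an f-string expression must differ from the quotes delimiting the string itself.

```python
# Inner single quotes, outer double quotes: required before Python 3.12,
# where an f-string expression cannot reuse the enclosing quote character.
i, nv = 1, 4  # illustrative values
buf = f"{''.join([','] * (nv - i - 1))}"
assert buf == ",,"
```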
diff --git a/pandas/tests/io/parser/test_parse_dates.py b/pandas/tests/io/parser/test_parse_dates.py
index b01b22e811ee3..31573e4e6ecce 100644
--- a/pandas/tests/io/parser/test_parse_dates.py
+++ b/pandas/tests/io/parser/test_parse_dates.py
@@ -1101,7 +1101,7 @@ def test_bad_date_parse(all_parsers, cache_dates, value):
# if we have an invalid date make sure that we handle this with
# and w/o the cache properly
parser = all_parsers
- s = StringIO(("{value},\n".format(value=value)) * 50000)
+ s = StringIO(f"{value},\n" * 50000)
parser.read_csv(
s,
diff --git a/pandas/tests/io/parser/test_read_fwf.py b/pandas/tests/io/parser/test_read_fwf.py
index 27aef2376e87d..e982667f06f31 100644
--- a/pandas/tests/io/parser/test_read_fwf.py
+++ b/pandas/tests/io/parser/test_read_fwf.py
@@ -260,7 +260,7 @@ def test_fwf_regression():
# Turns out "T060" is parsable as a datetime slice!
tz_list = [1, 10, 20, 30, 60, 80, 100]
widths = [16] + [8] * len(tz_list)
- names = ["SST"] + ["T{z:03d}".format(z=z) for z in tz_list[1:]]
+ names = ["SST"] + [f"T{z:03d}" for z in tz_list[1:]]
data = """ 2009164202000 9.5403 9.4105 8.6571 7.8372 6.0612 5.8843 5.5192
2009164203000 9.5435 9.2010 8.6167 7.8176 6.0804 5.8728 5.4869
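Conversions such as `"T{z:03d}".format(z=z)` to `f"T{z:03d}"` in the hunk above rely on the format spec after the colon being interpreted identically in both spellings; a quick sketch:

```python
# The text after ':' is the same format-spec mini-language in both forms,
# so zero-padding (03d), precision (.25), etc. carry over unchanged.
z = 60
assert f"T{z:03d}" == "T{z:03d}".format(z=z) == "T060"
```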
diff --git a/pandas/tests/io/pytables/conftest.py b/pandas/tests/io/pytables/conftest.py
index 214f95c6fb441..38ffcb3b0e8ec 100644
--- a/pandas/tests/io/pytables/conftest.py
+++ b/pandas/tests/io/pytables/conftest.py
@@ -6,7 +6,7 @@
@pytest.fixture
def setup_path():
"""Fixture for setup path"""
- return "tmp.__{}__.h5".format(tm.rands(10))
+ return f"tmp.__{tm.rands(10)}__.h5"
@pytest.fixture(scope="module", autouse=True)
diff --git a/pandas/tests/io/pytables/test_store.py b/pandas/tests/io/pytables/test_store.py
index 547de39eec5e0..7f5217cc0d566 100644
--- a/pandas/tests/io/pytables/test_store.py
+++ b/pandas/tests/io/pytables/test_store.py
@@ -653,7 +653,7 @@ def test_getattr(self, setup_path):
# not stores
for x in ["mode", "path", "handle", "complib"]:
- getattr(store, "_{x}".format(x=x))
+ getattr(store, f"_{x}")
def test_put(self, setup_path):
@@ -690,9 +690,7 @@ def test_put_string_index(self, setup_path):
with ensure_clean_store(setup_path) as store:
- index = Index(
- ["I am a very long string index: {i}".format(i=i) for i in range(20)]
- )
+ index = Index([f"I am a very long string index: {i}" for i in range(20)])
s = Series(np.arange(20), index=index)
df = DataFrame({"A": s, "B": s})
@@ -705,7 +703,7 @@ def test_put_string_index(self, setup_path):
# mixed length
index = Index(
["abcdefghijklmnopqrstuvwxyz1234567890"]
- + ["I am a very long string index: {i}".format(i=i) for i in range(20)]
+ + [f"I am a very long string index: {i}" for i in range(20)]
)
s = Series(np.arange(21), index=index)
df = DataFrame({"A": s, "B": s})
@@ -2044,7 +2042,7 @@ def test_unimplemented_dtypes_table_columns(self, setup_path):
df = tm.makeDataFrame()
df[n] = f
with pytest.raises(TypeError):
- store.append("df1_{n}".format(n=n), df)
+ store.append(f"df1_{n}", df)
# frame
df = tm.makeDataFrame()
@@ -2689,16 +2687,12 @@ def test_select_dtypes(self, setup_path):
expected = df[df.boolv == True].reindex(columns=["A", "boolv"]) # noqa
for v in [True, "true", 1]:
- result = store.select(
- "df", "boolv == {v!s}".format(v=v), columns=["A", "boolv"]
- )
+ result = store.select("df", f"boolv == {v!s}", columns=["A", "boolv"])
tm.assert_frame_equal(expected, result)
expected = df[df.boolv == False].reindex(columns=["A", "boolv"]) # noqa
for v in [False, "false", 0]:
- result = store.select(
- "df", "boolv == {v!s}".format(v=v), columns=["A", "boolv"]
- )
+ result = store.select("df", f"boolv == {v!s}", columns=["A", "boolv"])
tm.assert_frame_equal(expected, result)
# integer index
@@ -2784,7 +2778,7 @@ def test_select_with_many_inputs(self, setup_path):
users=["a"] * 50
+ ["b"] * 50
+ ["c"] * 100
- + ["a{i:03d}".format(i=i) for i in range(100)],
+ + [f"a{i:03d}" for i in range(100)],
)
)
_maybe_remove(store, "df")
@@ -2805,7 +2799,7 @@ def test_select_with_many_inputs(self, setup_path):
tm.assert_frame_equal(expected, result)
# big selector along the columns
- selector = ["a", "b", "c"] + ["a{i:03d}".format(i=i) for i in range(60)]
+ selector = ["a", "b", "c"] + [f"a{i:03d}" for i in range(60)]
result = store.select(
"df", "ts>=Timestamp('2012-02-01') and users=selector"
)
@@ -2914,21 +2908,19 @@ def test_select_iterator_complete_8014(self, setup_path):
# select w/o iterator and where clause, single term, begin
# of range, works
- where = "index >= '{beg_dt}'".format(beg_dt=beg_dt)
+ where = f"index >= '{beg_dt}'"
result = store.select("df", where=where)
tm.assert_frame_equal(expected, result)
# select w/o iterator and where clause, single term, end
# of range, works
- where = "index <= '{end_dt}'".format(end_dt=end_dt)
+ where = f"index <= '{end_dt}'"
result = store.select("df", where=where)
tm.assert_frame_equal(expected, result)
# select w/o iterator and where clause, inclusive range,
# works
- where = "index >= '{beg_dt}' & index <= '{end_dt}'".format(
- beg_dt=beg_dt, end_dt=end_dt
- )
+ where = f"index >= '{beg_dt}' & index <= '{end_dt}'"
result = store.select("df", where=where)
tm.assert_frame_equal(expected, result)
@@ -2948,21 +2940,19 @@ def test_select_iterator_complete_8014(self, setup_path):
tm.assert_frame_equal(expected, result)
# select w/iterator and where clause, single term, begin of range
- where = "index >= '{beg_dt}'".format(beg_dt=beg_dt)
+ where = f"index >= '{beg_dt}'"
results = list(store.select("df", where=where, chunksize=chunksize))
result = concat(results)
tm.assert_frame_equal(expected, result)
# select w/iterator and where clause, single term, end of range
- where = "index <= '{end_dt}'".format(end_dt=end_dt)
+ where = f"index <= '{end_dt}'"
results = list(store.select("df", where=where, chunksize=chunksize))
result = concat(results)
tm.assert_frame_equal(expected, result)
# select w/iterator and where clause, inclusive range
- where = "index >= '{beg_dt}' & index <= '{end_dt}'".format(
- beg_dt=beg_dt, end_dt=end_dt
- )
+ where = f"index >= '{beg_dt}' & index <= '{end_dt}'"
results = list(store.select("df", where=where, chunksize=chunksize))
result = concat(results)
tm.assert_frame_equal(expected, result)
@@ -2984,23 +2974,21 @@ def test_select_iterator_non_complete_8014(self, setup_path):
end_dt = expected.index[-2]
# select w/iterator and where clause, single term, begin of range
- where = "index >= '{beg_dt}'".format(beg_dt=beg_dt)
+ where = f"index >= '{beg_dt}'"
results = list(store.select("df", where=where, chunksize=chunksize))
result = concat(results)
rexpected = expected[expected.index >= beg_dt]
tm.assert_frame_equal(rexpected, result)
# select w/iterator and where clause, single term, end of range
- where = "index <= '{end_dt}'".format(end_dt=end_dt)
+ where = f"index <= '{end_dt}'"
results = list(store.select("df", where=where, chunksize=chunksize))
result = concat(results)
rexpected = expected[expected.index <= end_dt]
tm.assert_frame_equal(rexpected, result)
# select w/iterator and where clause, inclusive range
- where = "index >= '{beg_dt}' & index <= '{end_dt}'".format(
- beg_dt=beg_dt, end_dt=end_dt
- )
+ where = f"index >= '{beg_dt}' & index <= '{end_dt}'"
results = list(store.select("df", where=where, chunksize=chunksize))
result = concat(results)
rexpected = expected[
@@ -3018,7 +3006,7 @@ def test_select_iterator_non_complete_8014(self, setup_path):
end_dt = expected.index[-1]
# select w/iterator and where clause, single term, begin of range
- where = "index > '{end_dt}'".format(end_dt=end_dt)
+ where = f"index > '{end_dt}'"
results = list(store.select("df", where=where, chunksize=chunksize))
assert 0 == len(results)
@@ -3040,14 +3028,14 @@ def test_select_iterator_many_empty_frames(self, setup_path):
end_dt = expected.index[chunksize - 1]
# select w/iterator and where clause, single term, begin of range
- where = "index >= '{beg_dt}'".format(beg_dt=beg_dt)
+ where = f"index >= '{beg_dt}'"
results = list(store.select("df", where=where, chunksize=chunksize))
result = concat(results)
rexpected = expected[expected.index >= beg_dt]
tm.assert_frame_equal(rexpected, result)
# select w/iterator and where clause, single term, end of range
- where = "index <= '{end_dt}'".format(end_dt=end_dt)
+ where = f"index <= '{end_dt}'"
results = list(store.select("df", where=where, chunksize=chunksize))
assert len(results) == 1
@@ -3056,9 +3044,7 @@ def test_select_iterator_many_empty_frames(self, setup_path):
tm.assert_frame_equal(rexpected, result)
# select w/iterator and where clause, inclusive range
- where = "index >= '{beg_dt}' & index <= '{end_dt}'".format(
- beg_dt=beg_dt, end_dt=end_dt
- )
+ where = f"index >= '{beg_dt}' & index <= '{end_dt}'"
results = list(store.select("df", where=where, chunksize=chunksize))
# should be 1, is 10
@@ -3076,9 +3062,7 @@ def test_select_iterator_many_empty_frames(self, setup_path):
# return [] e.g. `for e in []: print True` never prints
# True.
- where = "index <= '{beg_dt}' & index >= '{end_dt}'".format(
- beg_dt=beg_dt, end_dt=end_dt
- )
+ where = f"index <= '{beg_dt}' & index >= '{end_dt}'"
results = list(store.select("df", where=where, chunksize=chunksize))
# should be []
@@ -3807,8 +3791,8 @@ def test_start_stop_fixed(self, setup_path):
def test_select_filter_corner(self, setup_path):
df = DataFrame(np.random.randn(50, 100))
- df.index = ["{c:3d}".format(c=c) for c in df.index]
- df.columns = ["{c:3d}".format(c=c) for c in df.columns]
+ df.index = [f"{c:3d}" for c in df.index]
+ df.columns = [f"{c:3d}" for c in df.columns]
with ensure_clean_store(setup_path) as store:
store.put("frame", df, format="table")
@@ -4259,7 +4243,7 @@ def test_append_with_diff_col_name_types_raises_value_error(self, setup_path):
df5 = DataFrame({("1", 2, object): np.random.randn(10)})
with ensure_clean_store(setup_path) as store:
- name = "df_{}".format(tm.rands(10))
+ name = f"df_{tm.rands(10)}"
store.append(name, df)
for d in (df2, df3, df4, df5):
@@ -4543,9 +4527,7 @@ def test_to_hdf_with_object_column_names(self, setup_path):
with ensure_clean_path(setup_path) as path:
with catch_warnings(record=True):
df.to_hdf(path, "df", format="table", data_columns=True)
- result = pd.read_hdf(
- path, "df", where="index = [{0}]".format(df.index[0])
- )
+ result = pd.read_hdf(path, "df", where=f"index = [{df.index[0]}]")
assert len(result)
def test_read_hdf_open_store(self, setup_path):
@@ -4678,16 +4660,16 @@ def test_query_long_float_literal(self, setup_path):
store.append("test", df, format="table", data_columns=True)
cutoff = 1000000000.0006
- result = store.select("test", "A < {cutoff:.4f}".format(cutoff=cutoff))
+ result = store.select("test", f"A < {cutoff:.4f}")
assert result.empty
cutoff = 1000000000.0010
- result = store.select("test", "A > {cutoff:.4f}".format(cutoff=cutoff))
+ result = store.select("test", f"A > {cutoff:.4f}")
expected = df.loc[[1, 2], :]
tm.assert_frame_equal(expected, result)
exact = 1000000000.0011
- result = store.select("test", "A == {exact:.4f}".format(exact=exact))
+ result = store.select("test", f"A == {exact:.4f}")
expected = df.loc[[1], :]
tm.assert_frame_equal(expected, result)
@@ -4714,21 +4696,21 @@ def test_query_compare_column_type(self, setup_path):
for op in ["<", ">", "=="]:
# non strings to string column always fail
for v in [2.1, True, pd.Timestamp("2014-01-01"), pd.Timedelta(1, "s")]:
- query = "date {op} v".format(op=op)
+ query = f"date {op} v"
with pytest.raises(TypeError):
store.select("test", where=query)
# strings to other columns must be convertible to type
v = "a"
for col in ["int", "float", "real_date"]:
- query = "{col} {op} v".format(op=op, col=col)
+ query = f"{col} {op} v"
with pytest.raises(ValueError):
store.select("test", where=query)
for v, col in zip(
["1", "1.1", "2014-01-01"], ["int", "float", "real_date"]
):
- query = "{col} {op} v".format(op=op, col=col)
+ query = f"{col} {op} v"
result = store.select("test", where=query)
if op == "==":
diff --git a/pandas/tests/io/pytables/test_timezones.py b/pandas/tests/io/pytables/test_timezones.py
index 2bf22d982e5fe..74d5a77f86827 100644
--- a/pandas/tests/io/pytables/test_timezones.py
+++ b/pandas/tests/io/pytables/test_timezones.py
@@ -24,9 +24,7 @@ def _compare_with_tz(a, b):
a_e = a.loc[i, c]
b_e = b.loc[i, c]
if not (a_e == b_e and a_e.tz == b_e.tz):
- raise AssertionError(
- "invalid tz comparison [{a_e}] [{b_e}]".format(a_e=a_e, b_e=b_e)
- )
+ raise AssertionError(f"invalid tz comparison [{a_e}] [{b_e}]")
def test_append_with_timezones_dateutil(setup_path):
diff --git a/pandas/tests/io/test_html.py b/pandas/tests/io/test_html.py
index b649e394c780b..3fa7f8a966bda 100644
--- a/pandas/tests/io/test_html.py
+++ b/pandas/tests/io/test_html.py
@@ -39,9 +39,9 @@ def html_encoding_file(request, datapath):
def assert_framelist_equal(list1, list2, *args, **kwargs):
assert len(list1) == len(list2), (
- "lists are not of equal size "
- "len(list1) == {0}, "
- "len(list2) == {1}".format(len(list1), len(list2))
+ "lists are not of equal size "
+ f"len(list1) == {len(list1)}, "
+ f"len(list2) == {len(list2)}"
)
msg = "not all list elements are DataFrames"
both_frames = all(
diff --git a/pandas/tests/io/test_stata.py b/pandas/tests/io/test_stata.py
index cb2112b481952..1c9da43d4ddd6 100644
--- a/pandas/tests/io/test_stata.py
+++ b/pandas/tests/io/test_stata.py
@@ -1715,7 +1715,7 @@ def test_invalid_file_not_written(self, version):
"'ascii' codec can't decode byte 0xef in position 14: "
r"ordinal not in range\(128\)"
)
- with pytest.raises(UnicodeEncodeError, match=r"{}|{}".format(msg1, msg2)):
+ with pytest.raises(UnicodeEncodeError, match=fr"{msg1}|{msg2}"):
with tm.assert_produces_warning(ResourceWarning):
df.to_stata(path)
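The `match=fr"{msg1}|{msg2}"` rewrite above combines the raw and f-string prefixes so regex backslashes in the literal text survive while the message fragments are interpolated. A sketch with hypothetical stand-in fragments (not the actual test messages):

```python
import re

# msg1/msg2 are hypothetical stand-ins for the pre-escaped test messages.
msg1 = "writing failed"
msg2 = r"ordinal not in range\(128\)"
pattern = fr"{msg1}|{msg2}"  # raw: literal backslashes pass through; f: names interpolate
assert re.search(pattern, "ordinal not in range(128)")
```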
diff --git a/pandas/tests/resample/test_period_index.py b/pandas/tests/resample/test_period_index.py
index ff303b808f6f5..f9461860a5eed 100644
--- a/pandas/tests/resample/test_period_index.py
+++ b/pandas/tests/resample/test_period_index.py
@@ -96,9 +96,7 @@ def test_selection(self, index, freq, kind, kwargs):
def test_annual_upsample_cases(
self, targ, conv, meth, month, simple_period_range_series
):
- ts = simple_period_range_series(
- "1/1/1990", "12/31/1991", freq="A-{month}".format(month=month)
- )
+ ts = simple_period_range_series("1/1/1990", "12/31/1991", freq=f"A-{month}")
result = getattr(ts.resample(targ, convention=conv), meth)()
expected = result.to_timestamp(targ, how=conv)
@@ -130,9 +128,9 @@ def test_not_subperiod(self, simple_period_range_series, rule, expected_error_ms
# These are incompatible period rules for resampling
ts = simple_period_range_series("1/1/1990", "6/30/1995", freq="w-wed")
msg = (
- "Frequency <Week: weekday=2> cannot be resampled to {}, as they "
- "are not sub or super periods"
- ).format(expected_error_msg)
+ "Frequency <Week: weekday=2> cannot be resampled to "
+ f"{expected_error_msg}, as they are not sub or super periods"
+ )
with pytest.raises(IncompatibleFrequency, match=msg):
ts.resample(rule).mean()
@@ -176,7 +174,7 @@ def test_annual_upsample(self, simple_period_range_series):
def test_quarterly_upsample(
self, month, target, convention, simple_period_range_series
):
- freq = "Q-{month}".format(month=month)
+ freq = f"Q-{month}"
ts = simple_period_range_series("1/1/1990", "12/31/1995", freq=freq)
result = ts.resample(target, convention=convention).ffill()
expected = result.to_timestamp(target, how=convention)
@@ -351,7 +349,7 @@ def test_fill_method_and_how_upsample(self):
@pytest.mark.parametrize("target", ["D", "B"])
@pytest.mark.parametrize("convention", ["start", "end"])
def test_weekly_upsample(self, day, target, convention, simple_period_range_series):
- freq = "W-{day}".format(day=day)
+ freq = f"W-{day}"
ts = simple_period_range_series("1/1/1990", "12/31/1995", freq=freq)
result = ts.resample(target, convention=convention).ffill()
expected = result.to_timestamp(target, how=convention)
@@ -367,16 +365,14 @@ def test_resample_to_timestamps(self, simple_period_range_series):
def test_resample_to_quarterly(self, simple_period_range_series):
for month in MONTHS:
- ts = simple_period_range_series(
- "1990", "1992", freq="A-{month}".format(month=month)
- )
- quar_ts = ts.resample("Q-{month}".format(month=month)).ffill()
+ ts = simple_period_range_series("1990", "1992", freq=f"A-{month}")
+ quar_ts = ts.resample(f"Q-{month}").ffill()
stamps = ts.to_timestamp("D", how="start")
qdates = period_range(
ts.index[0].asfreq("D", "start"),
ts.index[-1].asfreq("D", "end"),
- freq="Q-{month}".format(month=month),
+ freq=f"Q-{month}",
)
expected = stamps.reindex(qdates.to_timestamp("D", "s"), method="ffill")
diff --git a/pandas/tests/reshape/merge/test_join.py b/pandas/tests/reshape/merge/test_join.py
index 7020d373caf82..7be60590966db 100644
--- a/pandas/tests/reshape/merge/test_join.py
+++ b/pandas/tests/reshape/merge/test_join.py
@@ -262,8 +262,9 @@ def test_join_on_fails_with_wrong_object_type(self, wrong_type):
# Edited test to remove the Series object from test parameters
df = DataFrame({"a": [1, 1]})
- msg = "Can only merge Series or DataFrame objects, a {} was passed".format(
- str(type(wrong_type))
+ msg = (
+ "Can only merge Series or DataFrame objects, "
+ f"a {type(wrong_type)} was passed"
)
with pytest.raises(TypeError, match=msg):
merge(wrong_type, df, left_on="a", right_on="a")
@@ -812,9 +813,7 @@ def _check_join(left, right, result, join_col, how="left", lsuffix="_x", rsuffix
except KeyError:
if how in ("left", "inner"):
raise AssertionError(
- "key {group_key!s} should not have been in the join".format(
- group_key=group_key
- )
+ f"key {group_key!s} should not have been in the join"
)
_assert_all_na(l_joined, left.columns, join_col)
@@ -826,9 +825,7 @@ def _check_join(left, right, result, join_col, how="left", lsuffix="_x", rsuffix
except KeyError:
if how in ("right", "inner"):
raise AssertionError(
- "key {group_key!s} should not have been in the join".format(
- group_key=group_key
- )
+ f"key {group_key!s} should not have been in the join"
)
_assert_all_na(r_joined, right.columns, join_col)
diff --git a/pandas/tests/reshape/merge/test_merge.py b/pandas/tests/reshape/merge/test_merge.py
index fd189c7435b29..8fb903cc5acc1 100644
--- a/pandas/tests/reshape/merge/test_merge.py
+++ b/pandas/tests/reshape/merge/test_merge.py
@@ -710,7 +710,7 @@ def test_other_timedelta_unit(self, unit):
df1 = pd.DataFrame({"entity_id": [101, 102]})
s = pd.Series([None, None], index=[101, 102], name="days")
- dtype = "m8[{}]".format(unit)
+ dtype = f"m8[{unit}]"
df2 = s.astype(dtype).to_frame("days")
assert df2["days"].dtype == "m8[ns]"
@@ -1011,10 +1011,10 @@ def test_indicator(self):
df_badcolumn = DataFrame({"col1": [1, 2], i: [2, 2]})
msg = (
- "Cannot use `indicator=True` option when data contains a "
- "column named {}|"
- "Cannot use name of an existing column for indicator column"
- ).format(i)
+ "Cannot use `indicator=True` option when data contains a "
+ f"column named {i}|"
+ "Cannot use name of an existing column for indicator column"
+ )
with pytest.raises(ValueError, match=msg):
merge(df1, df_badcolumn, on="col1", how="outer", indicator=True)
with pytest.raises(ValueError, match=msg):
@@ -1555,11 +1555,9 @@ def test_merge_incompat_dtypes_error(self, df1_vals, df2_vals):
df2 = DataFrame({"A": df2_vals})
msg = (
- "You are trying to merge on {lk_dtype} and "
- "{rk_dtype} columns. If you wish to proceed "
- "you should use pd.concat".format(
- lk_dtype=df1["A"].dtype, rk_dtype=df2["A"].dtype
- )
+ f"You are trying to merge on {df1['A'].dtype} and "
+ f"{df2['A'].dtype} columns. If you wish to proceed "
+ "you should use pd.concat"
)
msg = re.escape(msg)
with pytest.raises(ValueError, match=msg):
@@ -1567,11 +1565,9 @@ def test_merge_incompat_dtypes_error(self, df1_vals, df2_vals):
# Check that error still raised when swapping order of dataframes
msg = (
- "You are trying to merge on {lk_dtype} and "
- "{rk_dtype} columns. If you wish to proceed "
- "you should use pd.concat".format(
- lk_dtype=df2["A"].dtype, rk_dtype=df1["A"].dtype
- )
+ f"You are trying to merge on {df2['A'].dtype} and "
+ f"{df1['A'].dtype} columns. If you wish to proceed "
+ "you should use pd.concat"
)
msg = re.escape(msg)
with pytest.raises(ValueError, match=msg):
diff --git a/pandas/tests/reshape/merge/test_merge_asof.py b/pandas/tests/reshape/merge/test_merge_asof.py
index 8037095aff0b9..3275520bff4de 100644
--- a/pandas/tests/reshape/merge/test_merge_asof.py
+++ b/pandas/tests/reshape/merge/test_merge_asof.py
@@ -1198,7 +1198,7 @@ def test_merge_groupby_multiple_column_with_categorical_column(self):
@pytest.mark.parametrize("side", ["left", "right"])
def test_merge_on_nans(self, func, side):
# GH 23189
- msg = "Merge keys contain null values on {} side".format(side)
+ msg = f"Merge keys contain null values on {side} side"
nulls = func([1.0, 5.0, np.nan])
non_nulls = func([1.0, 5.0, 10.0])
df_null = pd.DataFrame({"a": nulls, "left_val": ["a", "b", "c"]})
diff --git a/pandas/tests/reshape/test_melt.py b/pandas/tests/reshape/test_melt.py
index 814325844cb4c..6a670e6c729e9 100644
--- a/pandas/tests/reshape/test_melt.py
+++ b/pandas/tests/reshape/test_melt.py
@@ -364,8 +364,8 @@ def test_pairs(self):
df = DataFrame(data)
spec = {
- "visitdt": ["visitdt{i:d}".format(i=i) for i in range(1, 4)],
- "wt": ["wt{i:d}".format(i=i) for i in range(1, 4)],
+ "visitdt": [f"visitdt{i:d}" for i in range(1, 4)],
+ "wt": [f"wt{i:d}" for i in range(1, 4)],
}
result = lreshape(df, spec)
@@ -557,8 +557,8 @@ def test_pairs(self):
result = lreshape(df, spec, dropna=False, label="foo")
spec = {
- "visitdt": ["visitdt{i:d}".format(i=i) for i in range(1, 3)],
- "wt": ["wt{i:d}".format(i=i) for i in range(1, 4)],
+ "visitdt": [f"visitdt{i:d}" for i in range(1, 3)],
+ "wt": [f"wt{i:d}" for i in range(1, 4)],
}
msg = "All column lists must be same length"
with pytest.raises(ValueError, match=msg):
diff --git a/pandas/tests/reshape/test_pivot.py b/pandas/tests/reshape/test_pivot.py
index fe75aef1ca3d7..bf761b515767a 100644
--- a/pandas/tests/reshape/test_pivot.py
+++ b/pandas/tests/reshape/test_pivot.py
@@ -1161,9 +1161,9 @@ def test_margins_no_values_two_row_two_cols(self):
def test_pivot_table_with_margins_set_margin_name(self, margin_name):
# see gh-3335
msg = (
- r'Conflicting name "{}" in margins|'
- "margins_name argument must be a string"
- ).format(margin_name)
+ fr'Conflicting name "{margin_name}" in margins|'
+ "margins_name argument must be a string"
+ )
with pytest.raises(ValueError, match=msg):
# multi-index index
pivot_table(
diff --git a/pandas/tests/scalar/timedelta/test_constructors.py b/pandas/tests/scalar/timedelta/test_constructors.py
index 25c9fc19981be..d32d1994cac74 100644
--- a/pandas/tests/scalar/timedelta/test_constructors.py
+++ b/pandas/tests/scalar/timedelta/test_constructors.py
@@ -239,7 +239,7 @@ def test_iso_constructor(fmt, exp):
],
)
def test_iso_constructor_raises(fmt):
- msg = "Invalid ISO 8601 Duration format - {}".format(fmt)
+ msg = f"Invalid ISO 8601 Duration format - {fmt}"
with pytest.raises(ValueError, match=msg):
Timedelta(fmt)
diff --git a/pandas/tests/scalar/timestamp/test_constructors.py b/pandas/tests/scalar/timestamp/test_constructors.py
index 737a85faa4c9b..b4a7173da84d0 100644
--- a/pandas/tests/scalar/timestamp/test_constructors.py
+++ b/pandas/tests/scalar/timestamp/test_constructors.py
@@ -314,7 +314,7 @@ def test_constructor_nanosecond(self, result):
def test_constructor_invalid_Z0_isostring(self, z):
# GH 8910
with pytest.raises(ValueError):
- Timestamp("2014-11-02 01:00{}".format(z))
+ Timestamp(f"2014-11-02 01:00{z}")
@pytest.mark.parametrize(
"arg",
@@ -455,9 +455,7 @@ def test_disallow_setting_tz(self, tz):
@pytest.mark.parametrize("offset", ["+0300", "+0200"])
def test_construct_timestamp_near_dst(self, offset):
# GH 20854
- expected = Timestamp(
- "2016-10-30 03:00:00{}".format(offset), tz="Europe/Helsinki"
- )
+ expected = Timestamp(f"2016-10-30 03:00:00{offset}", tz="Europe/Helsinki")
result = Timestamp(expected).tz_convert("Europe/Helsinki")
assert result == expected
diff --git a/pandas/tests/scalar/timestamp/test_rendering.py b/pandas/tests/scalar/timestamp/test_rendering.py
index cab6946bb8d02..a27d233d5ab88 100644
--- a/pandas/tests/scalar/timestamp/test_rendering.py
+++ b/pandas/tests/scalar/timestamp/test_rendering.py
@@ -17,7 +17,7 @@ class TestTimestampRendering:
)
def test_repr(self, date, freq, tz):
# avoid to match with timezone name
- freq_repr = "'{0}'".format(freq)
+ freq_repr = f"'{freq}'"
if tz.startswith("dateutil"):
tz_repr = tz.replace("dateutil", "")
else:
diff --git a/pandas/tests/scalar/timestamp/test_unary_ops.py b/pandas/tests/scalar/timestamp/test_unary_ops.py
index 65066fd0099ba..726695401cb91 100644
--- a/pandas/tests/scalar/timestamp/test_unary_ops.py
+++ b/pandas/tests/scalar/timestamp/test_unary_ops.py
@@ -233,17 +233,17 @@ def test_round_int64(self, timestamp, freq):
# test floor
result = dt.floor(freq)
- assert result.value % unit == 0, "floor not a {} multiple".format(freq)
+ assert result.value % unit == 0, f"floor not a {freq} multiple"
assert 0 <= dt.value - result.value < unit, "floor error"
# test ceil
result = dt.ceil(freq)
- assert result.value % unit == 0, "ceil not a {} multiple".format(freq)
+ assert result.value % unit == 0, f"ceil not a {freq} multiple"
assert 0 <= result.value - dt.value < unit, "ceil error"
# test round
result = dt.round(freq)
- assert result.value % unit == 0, "round not a {} multiple".format(freq)
+ assert result.value % unit == 0, f"round not a {freq} multiple"
assert abs(result.value - dt.value) <= unit // 2, "round error"
if unit % 2 == 0 and abs(result.value - dt.value) == unit // 2:
# round half to even
diff --git a/pandas/tests/series/methods/test_nlargest.py b/pandas/tests/series/methods/test_nlargest.py
index a029965c7394f..b1aa09f387a13 100644
--- a/pandas/tests/series/methods/test_nlargest.py
+++ b/pandas/tests/series/methods/test_nlargest.py
@@ -98,7 +98,7 @@ class TestSeriesNLargestNSmallest:
)
def test_nlargest_error(self, r):
dt = r.dtype
- msg = "Cannot use method 'n(larg|small)est' with dtype {dt}".format(dt=dt)
+ msg = f"Cannot use method 'n(larg|small)est' with dtype {dt}"
args = 2, len(r), 0, -1
methods = r.nlargest, r.nsmallest
for method, arg in product(methods, args):
diff --git a/pandas/tests/series/test_analytics.py b/pandas/tests/series/test_analytics.py
index e6e91b5d4f5f4..1f6330b1a625d 100644
--- a/pandas/tests/series/test_analytics.py
+++ b/pandas/tests/series/test_analytics.py
@@ -169,10 +169,10 @@ def test_validate_any_all_out_keepdims_raises(self, kwargs, func):
name = func.__name__
msg = (
- r"the '{arg}' parameter is not "
- r"supported in the pandas "
- r"implementation of {fname}\(\)"
- ).format(arg=param, fname=name)
+ fr"the '{param}' parameter is not "
+ fr"supported in the pandas "
+ fr"implementation of {name}\(\)"
+ )
with pytest.raises(ValueError, match=msg):
func(s, **kwargs)
diff --git a/pandas/tests/series/test_api.py b/pandas/tests/series/test_api.py
index f96d6ddfc357e..33706c00c53f4 100644
--- a/pandas/tests/series/test_api.py
+++ b/pandas/tests/series/test_api.py
@@ -136,9 +136,7 @@ def test_constructor_subclass_dict(self, dict_subclass):
def test_constructor_ordereddict(self):
# GH3283
- data = OrderedDict(
- ("col{i}".format(i=i), np.random.random()) for i in range(12)
- )
+ data = OrderedDict((f"col{i}", np.random.random()) for i in range(12))
series = Series(data)
expected = Series(list(data.values()), list(data.keys()))
@@ -258,7 +256,7 @@ def get_dir(s):
tm.makeIntIndex(10),
tm.makeFloatIndex(10),
Index([True, False]),
- Index(["a{}".format(i) for i in range(101)]),
+ Index([f"a{i}" for i in range(101)]),
pd.MultiIndex.from_tuples(zip("ABCD", "EFGH")),
pd.MultiIndex.from_tuples(zip([0, 1, 2, 3], "EFGH")),
],
diff --git a/pandas/tests/series/test_constructors.py b/pandas/tests/series/test_constructors.py
index b0d06793dbe13..7f14a388b85c6 100644
--- a/pandas/tests/series/test_constructors.py
+++ b/pandas/tests/series/test_constructors.py
@@ -1351,7 +1351,6 @@ def test_constructor_cant_cast_datetimelike(self, index):
# We don't care whether the error message says
# PeriodIndex or PeriodArray
msg = f"Cannot cast {type(index).__name__.rstrip('Index')}.*? to "
-
with pytest.raises(TypeError, match=msg):
Series(index, dtype=float)
diff --git a/pandas/tests/series/test_dtypes.py b/pandas/tests/series/test_dtypes.py
index 1fc582156a884..80a024eda7848 100644
--- a/pandas/tests/series/test_dtypes.py
+++ b/pandas/tests/series/test_dtypes.py
@@ -261,7 +261,7 @@ def test_astype_categorical_to_other(self):
value = np.random.RandomState(0).randint(0, 10000, 100)
df = DataFrame({"value": value})
- labels = ["{0} - {1}".format(i, i + 499) for i in range(0, 10000, 500)]
+ labels = [f"{i} - {i + 499}" for i in range(0, 10000, 500)]
cat_labels = Categorical(labels, labels)
df = df.sort_values(by=["value"], ascending=True)
@@ -384,9 +384,9 @@ def test_astype_generic_timestamp_no_frequency(self, dtype):
s = Series(data)
msg = (
- r"The '{dtype}' dtype has no unit\. "
- r"Please pass in '{dtype}\[ns\]' instead."
- ).format(dtype=dtype.__name__)
+ fr"The '{dtype.__name__}' dtype has no unit\. "
+ fr"Please pass in '{dtype.__name__}\[ns\]' instead."
+ )
with pytest.raises(ValueError, match=msg):
s.astype(dtype)
diff --git a/pandas/tests/series/test_ufunc.py b/pandas/tests/series/test_ufunc.py
index ece7f1f21ab23..536f15ea75d69 100644
--- a/pandas/tests/series/test_ufunc.py
+++ b/pandas/tests/series/test_ufunc.py
@@ -287,7 +287,7 @@ def __eq__(self, other) -> bool:
return type(other) is Thing and self.value == other.value
def __repr__(self) -> str:
- return "Thing({})".format(self.value)
+ return f"Thing({self.value})"
s = pd.Series([Thing(1), Thing(2)])
result = np.add(s, Thing(1))
diff --git a/pandas/tests/test_downstream.py b/pandas/tests/test_downstream.py
index 02898988ca8aa..122ef1f47968e 100644
--- a/pandas/tests/test_downstream.py
+++ b/pandas/tests/test_downstream.py
@@ -19,7 +19,7 @@ def import_module(name):
try:
return importlib.import_module(name)
except ModuleNotFoundError: # noqa
- pytest.skip("skipping as {} not available".format(name))
+ pytest.skip(f"skipping as {name} not available")
@pytest.fixture
diff --git a/pandas/tests/test_multilevel.py b/pandas/tests/test_multilevel.py
index b377ca2869bd3..bd07fd3edc24d 100644
--- a/pandas/tests/test_multilevel.py
+++ b/pandas/tests/test_multilevel.py
@@ -895,10 +895,7 @@ def test_stack_unstack_unordered_multiindex(self):
# GH 18265
values = np.arange(5)
data = np.vstack(
- [
- ["b{}".format(x) for x in values], # b0, b1, ..
- ["a{}".format(x) for x in values],
- ]
+ [[f"b{x}" for x in values], [f"a{x}" for x in values]] # b0, b1, ..
) # a0, a1, ..
df = pd.DataFrame(data.T, columns=["b", "a"])
df.columns.name = "first"
diff --git a/pandas/tests/tools/test_numeric.py b/pandas/tests/tools/test_numeric.py
index 2fd39d5a7b703..19385e797467c 100644
--- a/pandas/tests/tools/test_numeric.py
+++ b/pandas/tests/tools/test_numeric.py
@@ -308,7 +308,7 @@ def test_really_large_in_arr_consistent(large_val, signed, multiple_elts, errors
if errors in (None, "raise"):
index = int(multiple_elts)
- msg = "Integer out of range. at position {index}".format(index=index)
+ msg = f"Integer out of range. at position {index}"
with pytest.raises(ValueError, match=msg):
to_numeric(arr, **kwargs)
diff --git a/pandas/tests/tslibs/test_parse_iso8601.py b/pandas/tests/tslibs/test_parse_iso8601.py
index a58f227c20c7f..1c01e826d9794 100644
--- a/pandas/tests/tslibs/test_parse_iso8601.py
+++ b/pandas/tests/tslibs/test_parse_iso8601.py
@@ -51,7 +51,7 @@ def test_parsers_iso8601(date_str, exp):
],
)
def test_parsers_iso8601_invalid(date_str):
- msg = 'Error parsing datetime string "{s}"'.format(s=date_str)
+ msg = f'Error parsing datetime string "{date_str}"'
with pytest.raises(ValueError, match=msg):
tslib._test_parse_iso8601(date_str)
diff --git a/pandas/tests/window/moments/test_moments_rolling.py b/pandas/tests/window/moments/test_moments_rolling.py
index 83e4ee25558b5..4e70cc8ad5cd8 100644
--- a/pandas/tests/window/moments/test_moments_rolling.py
+++ b/pandas/tests/window/moments/test_moments_rolling.py
@@ -860,7 +860,7 @@ def get_result(obj, window, min_periods=None, center=False):
tm.assert_series_equal(result, expected)
# shifter index
- s = ["x{x:d}".format(x=x) for x in range(12)]
+ s = [f"x{x:d}" for x in range(12)]
if has_min_periods:
minp = 10
@@ -1438,13 +1438,9 @@ def test_rolling_median_memory_error(self):
def test_rolling_min_max_numeric_types(self):
# GH12373
- types_test = [np.dtype("f{}".format(width)) for width in [4, 8]]
+ types_test = [np.dtype(f"f{width}") for width in [4, 8]]
types_test.extend(
- [
- np.dtype("{}{}".format(sign, width))
- for width in [1, 2, 4, 8]
- for sign in "ui"
- ]
+ [np.dtype(f"{sign}{width}") for width in [1, 2, 4, 8] for sign in "ui"]
)
for data_type in types_test:
# Just testing that these don't throw exceptions and that
| https://api.github.com/repos/pandas-dev/pandas/pulls/31844 | 2020-02-10T06:45:14Z | 2020-02-11T15:05:34Z | null | 2020-02-12T17:25:10Z | |
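The mechanical pattern applied throughout the diff above — replacing `str.format` calls with f-strings — is behavior-preserving. A minimal sketch (variable names here are illustrative, not from the diff):

```python
# f-strings evaluate the same expressions as str.format, inline.
freq = "D"
old_style = "'{0}'".format(freq)   # pre-refactor spelling
new_style = f"'{freq}'"            # post-refactor spelling

# Format specs carry over too, e.g. "x{x:d}".format(x=x) -> f"x{x:d}"
x = 7
assert "x{x:d}".format(x=x) == f"x{x:d}"
assert old_style == new_style
```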
BUG: Fix raw parameter not being respected in groupby.rolling.apply | diff --git a/doc/source/whatsnew/v1.0.2.rst b/doc/source/whatsnew/v1.0.2.rst
index b055b44274bd8..cf33b2f8de3f3 100644
--- a/doc/source/whatsnew/v1.0.2.rst
+++ b/doc/source/whatsnew/v1.0.2.rst
@@ -17,6 +17,7 @@ Fixed regressions
- Fixed regression in :meth:`DataFrame.to_excel` when ``columns`` kwarg is passed (:issue:`31677`)
- Fixed regression in :meth:`Series.align` when ``other`` is a DataFrame and ``method`` is not None (:issue:`31785`)
+- Fixed regression in :meth:`pandas.core.groupby.RollingGroupby.apply` where the ``raw`` parameter was ignored (:issue:`31754`)
-
.. ---------------------------------------------------------------------------
diff --git a/pandas/core/window/rolling.py b/pandas/core/window/rolling.py
index 580c7cc0554d0..8506b2ff6ee9e 100644
--- a/pandas/core/window/rolling.py
+++ b/pandas/core/window/rolling.py
@@ -1296,13 +1296,14 @@ def apply(
raise ValueError("engine must be either 'numba' or 'cython'")
# TODO: Why do we always pass center=False?
- # name=func for WindowGroupByMixin._apply
+ # name=func & raw=raw for WindowGroupByMixin._apply
return self._apply(
apply_func,
center=False,
floor=0,
name=func,
use_numba_cache=engine == "numba",
+ raw=raw,
)
def _generate_cython_apply_func(self, args, kwargs, raw, offset, func):
diff --git a/pandas/tests/window/test_grouper.py b/pandas/tests/window/test_grouper.py
index 355ef3a90d424..5b2687271f9d6 100644
--- a/pandas/tests/window/test_grouper.py
+++ b/pandas/tests/window/test_grouper.py
@@ -190,3 +190,21 @@ def test_expanding_apply(self, raw):
result = r.apply(lambda x: x.sum(), raw=raw)
expected = g.apply(lambda x: x.expanding().apply(lambda y: y.sum(), raw=raw))
tm.assert_frame_equal(result, expected)
+
+ @pytest.mark.parametrize("expected_value,raw_value", [[1.0, True], [0.0, False]])
+ def test_groupby_rolling(self, expected_value, raw_value):
+ # GH 31754
+
+ def foo(x):
+ return int(isinstance(x, np.ndarray))
+
+ df = pd.DataFrame({"id": [1, 1, 1], "value": [1, 2, 3]})
+ result = df.groupby("id").value.rolling(1).apply(foo, raw=raw_value)
+ expected = Series(
+ [expected_value] * 3,
+ index=pd.MultiIndex.from_tuples(
+ ((1, 0), (1, 1), (1, 2)), names=["id", None]
+ ),
+ name="value",
+ )
+ tm.assert_series_equal(result, expected)
| - [x] closes #31754
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/31842 | 2020-02-10T05:31:20Z | 2020-02-10T12:50:32Z | 2020-02-10T12:50:32Z | 2020-02-19T17:36:05Z |
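The regression fixed above can be sketched as follows — a minimal reproduction of the PR's own test, assuming a pandas release that includes the fix (1.0.2 or later). With `raw=True` each rolling window should reach the function as a NumPy array; with `raw=False`, as a Series:

```python
import numpy as np
import pandas as pd

def got_ndarray(x):
    # Returns 1 if the window arrived as a raw ndarray, 0 if as a Series.
    return int(isinstance(x, np.ndarray))

df = pd.DataFrame({"id": [1, 1, 1], "value": [1, 2, 3]})

raw_true = df.groupby("id").value.rolling(1).apply(got_ndarray, raw=True)
raw_false = df.groupby("id").value.rolling(1).apply(got_ndarray, raw=False)

# Before the fix, raw was silently ignored inside groupby.rolling;
# after it, raw=True yields all 1.0 and raw=False all 0.0.
```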
BUG: Fix rolling.corr with time frequency | diff --git a/doc/source/whatsnew/v1.0.2.rst b/doc/source/whatsnew/v1.0.2.rst
index cf33b2f8de3f3..f4bb8c580fb08 100644
--- a/doc/source/whatsnew/v1.0.2.rst
+++ b/doc/source/whatsnew/v1.0.2.rst
@@ -18,6 +18,7 @@ Fixed regressions
- Fixed regression in :meth:`DataFrame.to_excel` when ``columns`` kwarg is passed (:issue:`31677`)
- Fixed regression in :meth:`Series.align` when ``other`` is a DataFrame and ``method`` is not None (:issue:`31785`)
- Fixed regression in :meth:`pandas.core.groupby.RollingGroupby.apply` where the ``raw`` parameter was ignored (:issue:`31754`)
+- Fixed regression in :meth:`rolling(..).corr() <pandas.core.window.Rolling.corr>` when using a time offset (:issue:`31789`)
-
.. ---------------------------------------------------------------------------
diff --git a/pandas/core/window/rolling.py b/pandas/core/window/rolling.py
index 8506b2ff6ee9e..5c18796deb07a 100644
--- a/pandas/core/window/rolling.py
+++ b/pandas/core/window/rolling.py
@@ -1782,7 +1782,7 @@ def corr(self, other=None, pairwise=None, **kwargs):
# only default unset
pairwise = True if pairwise is None else pairwise
other = self._shallow_copy(other)
- window = self._get_window(other)
+ window = self._get_window(other) if not self.is_freq_type else self.win_freq
def _get_corr(a, b):
a = a.rolling(
diff --git a/pandas/tests/window/test_pairwise.py b/pandas/tests/window/test_pairwise.py
index 717273cff64ea..bb305e93a3cf1 100644
--- a/pandas/tests/window/test_pairwise.py
+++ b/pandas/tests/window/test_pairwise.py
@@ -1,8 +1,9 @@
import warnings
+import numpy as np
import pytest
-from pandas import DataFrame, Series
+from pandas import DataFrame, Series, date_range
import pandas._testing as tm
from pandas.core.algorithms import safe_sort
@@ -181,3 +182,10 @@ def test_pairwise_with_series(self, f):
for i, result in enumerate(results):
if i > 0:
self.compare(result, results[0])
+
+ def test_corr_freq_memory_error(self):
+ # GH 31789
+ s = Series(range(5), index=date_range("2020", periods=5))
+ result = s.rolling("12H").corr(s)
+ expected = Series([np.nan] * 5, index=date_range("2020", periods=5))
+ tm.assert_series_equal(result, expected)
| - [x] closes #31789
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/31841 | 2020-02-10T04:25:10Z | 2020-02-10T17:51:32Z | 2020-02-10T17:51:31Z | 2020-02-19T07:40:50Z |
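The scenario covered by the regression fix above, sketched from the PR's test (assuming a pandas release with the fix; `"12h"` is used as the offset alias). On daily data, a 12-hour window holds a single observation, so the correlation is undefined and the result is all-NaN rather than a `MemoryError`:

```python
import pandas as pd

s = pd.Series(range(5), index=pd.date_range("2020", periods=5))

# A time-offset window: each 12-hour window contains only one point,
# so pairwise correlation is NaN everywhere.
result = s.rolling("12h").corr(s)
```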
BUG/DEPR: loc.__setitem__ incorrectly accepting positional slices | diff --git a/doc/source/whatsnew/v1.1.0.rst b/doc/source/whatsnew/v1.1.0.rst
index 44deab25db695..121db0f86f7ae 100644
--- a/doc/source/whatsnew/v1.1.0.rst
+++ b/doc/source/whatsnew/v1.1.0.rst
@@ -170,7 +170,7 @@ Deprecations
~~~~~~~~~~~~
- Lookups on a :class:`Series` with a single-item list containing a slice (e.g. ``ser[[slice(0, 4)]]``) are deprecated, will raise in a future version. Either convert the list to tuple, or pass the slice directly instead (:issue:`31333`)
- :meth:`DataFrame.mean` and :meth:`DataFrame.median` with ``numeric_only=None`` will include datetime64 and datetime64tz columns in a future version (:issue:`29941`)
--
+- Setting values with ``.loc`` using a positional slice is deprecated and will raise in a future version. Use ``.loc`` with labels or ``.iloc`` with positions instead (:issue:`31840`)
-
.. ---------------------------------------------------------------------------
diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index 6f44b5abf5b04..e32f597c9e378 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -3137,8 +3137,18 @@ def is_int(v):
pass
if com.is_null_slice(key):
+ # It doesn't matter if we are positional or label based
indexer = key
elif is_positional:
+ if kind == "loc":
+ # GH#16121, GH#24612, GH#31810
+ warnings.warn(
+ "Slicing a positional slice with .loc is not supported, "
+ "and will raise TypeError in a future version. "
+ "Use .loc with labels or .iloc with positions instead.",
+ FutureWarning,
+ stacklevel=6,
+ )
indexer = key
else:
indexer = self.slice_indexer(start, stop, step, kind=kind)
diff --git a/pandas/tests/frame/conftest.py b/pandas/tests/frame/conftest.py
index 03598b6bb5eca..e89b2c6f1fec0 100644
--- a/pandas/tests/frame/conftest.py
+++ b/pandas/tests/frame/conftest.py
@@ -40,8 +40,8 @@ def float_frame_with_na():
"""
df = DataFrame(tm.getSeriesData())
# set some NAs
- df.loc[5:10] = np.nan
- df.loc[15:20, -2:] = np.nan
+ df.iloc[5:10] = np.nan
+ df.iloc[15:20, -2:] = np.nan
return df
@@ -74,8 +74,8 @@ def bool_frame_with_na():
df = DataFrame(tm.getSeriesData()) > 0
df = df.astype(object)
# set some NAs
- df.loc[5:10] = np.nan
- df.loc[15:20, -2:] = np.nan
+ df.iloc[5:10] = np.nan
+ df.iloc[15:20, -2:] = np.nan
return df
diff --git a/pandas/tests/frame/indexing/test_indexing.py b/pandas/tests/frame/indexing/test_indexing.py
index b0cb988720c25..ade17860a99b7 100644
--- a/pandas/tests/frame/indexing/test_indexing.py
+++ b/pandas/tests/frame/indexing/test_indexing.py
@@ -1209,7 +1209,7 @@ def test_setitem_frame_mixed(self, float_string_frame):
piece = DataFrame(
[[1.0, 2.0], [3.0, 4.0]], index=f.index[0:2], columns=["A", "B"]
)
- key = (slice(None, 2), ["A", "B"])
+ key = (f.index[slice(None, 2)], ["A", "B"])
f.loc[key] = piece
tm.assert_almost_equal(f.loc[f.index[0:2], ["A", "B"]].values, piece.values)
@@ -1220,7 +1220,7 @@ def test_setitem_frame_mixed(self, float_string_frame):
index=list(f.index[0:2]) + ["foo", "bar"],
columns=["A", "B"],
)
- key = (slice(None, 2), ["A", "B"])
+ key = (f.index[slice(None, 2)], ["A", "B"])
f.loc[key] = piece
tm.assert_almost_equal(
f.loc[f.index[0:2:], ["A", "B"]].values, piece.values[0:2]
@@ -1230,7 +1230,7 @@ def test_setitem_frame_mixed(self, float_string_frame):
f = float_string_frame.copy()
piece = f.loc[f.index[:2], ["A"]]
piece.index = f.index[-2:]
- key = (slice(-2, None), ["A", "B"])
+ key = (f.index[slice(-2, None)], ["A", "B"])
f.loc[key] = piece
piece["B"] = np.nan
tm.assert_almost_equal(f.loc[f.index[-2:], ["A", "B"]].values, piece.values)
@@ -1238,7 +1238,7 @@ def test_setitem_frame_mixed(self, float_string_frame):
# ndarray
f = float_string_frame.copy()
piece = float_string_frame.loc[f.index[:2], ["A", "B"]]
- key = (slice(-2, None), ["A", "B"])
+ key = (f.index[slice(-2, None)], ["A", "B"])
f.loc[key] = piece.values
tm.assert_almost_equal(f.loc[f.index[-2:], ["A", "B"]].values, piece.values)
@@ -1873,7 +1873,7 @@ def test_setitem_datetimelike_with_inference(self):
df = DataFrame(index=date_range("20130101", periods=4))
df["A"] = np.array([1 * one_hour] * 4, dtype="m8[ns]")
df.loc[:, "B"] = np.array([2 * one_hour] * 4, dtype="m8[ns]")
- df.loc[:3, "C"] = np.array([3 * one_hour] * 3, dtype="m8[ns]")
+ df.loc[df.index[:3], "C"] = np.array([3 * one_hour] * 3, dtype="m8[ns]")
df.loc[:, "D"] = np.array([4 * one_hour] * 4, dtype="m8[ns]")
df.loc[df.index[:3], "E"] = np.array([5 * one_hour] * 3, dtype="m8[ns]")
df["F"] = np.timedelta64("NaT")
diff --git a/pandas/tests/frame/methods/test_asof.py b/pandas/tests/frame/methods/test_asof.py
index e2b417972638e..91d920c706bb6 100644
--- a/pandas/tests/frame/methods/test_asof.py
+++ b/pandas/tests/frame/methods/test_asof.py
@@ -21,7 +21,7 @@ class TestFrameAsof:
def test_basic(self, date_range_frame):
df = date_range_frame
N = 50
- df.loc[15:30, "A"] = np.nan
+ df.loc[df.index[15:30], "A"] = np.nan
dates = date_range("1/1/1990", periods=N * 3, freq="25s")
result = df.asof(dates)
@@ -41,7 +41,7 @@ def test_basic(self, date_range_frame):
def test_subset(self, date_range_frame):
N = 10
df = date_range_frame.iloc[:N].copy()
- df.loc[4:8, "A"] = np.nan
+ df.loc[df.index[4:8], "A"] = np.nan
dates = date_range("1/1/1990", periods=N * 3, freq="25s")
# with a subset of A should be the same
@@ -149,7 +149,7 @@ def test_is_copy(self, date_range_frame):
# doesn't track the parent dataframe / doesn't give SettingWithCopy warnings
df = date_range_frame
N = 50
- df.loc[15:30, "A"] = np.nan
+ df.loc[df.index[15:30], "A"] = np.nan
dates = date_range("1/1/1990", periods=N * 3, freq="25s")
result = df.asof(dates)
diff --git a/pandas/tests/frame/test_analytics.py b/pandas/tests/frame/test_analytics.py
index 61802956addeb..07e30d41c216d 100644
--- a/pandas/tests/frame/test_analytics.py
+++ b/pandas/tests/frame/test_analytics.py
@@ -913,8 +913,8 @@ def test_sum_bools(self):
def test_idxmin(self, float_frame, int_frame):
frame = float_frame
- frame.loc[5:10] = np.nan
- frame.loc[15:20, -2:] = np.nan
+ frame.iloc[5:10] = np.nan
+ frame.iloc[15:20, -2:] = np.nan
for skipna in [True, False]:
for axis in [0, 1]:
for df in [frame, int_frame]:
@@ -928,8 +928,8 @@ def test_idxmin(self, float_frame, int_frame):
def test_idxmax(self, float_frame, int_frame):
frame = float_frame
- frame.loc[5:10] = np.nan
- frame.loc[15:20, -2:] = np.nan
+ frame.iloc[5:10] = np.nan
+ frame.iloc[15:20, -2:] = np.nan
for skipna in [True, False]:
for axis in [0, 1]:
for df in [frame, int_frame]:
diff --git a/pandas/tests/frame/test_apply.py b/pandas/tests/frame/test_apply.py
index fe6abef97acc4..11705cd77a325 100644
--- a/pandas/tests/frame/test_apply.py
+++ b/pandas/tests/frame/test_apply.py
@@ -339,7 +339,7 @@ def test_apply_yield_list(self, float_frame):
tm.assert_frame_equal(result, float_frame)
def test_apply_reduce_Series(self, float_frame):
- float_frame.loc[::2, "A"] = np.nan
+ float_frame["A"].iloc[::2] = np.nan
expected = float_frame.mean(1)
result = float_frame.apply(np.mean, axis=1)
tm.assert_series_equal(result, expected)
diff --git a/pandas/tests/frame/test_block_internals.py b/pandas/tests/frame/test_block_internals.py
index a5f5e6f36cd58..e67fef9efef6d 100644
--- a/pandas/tests/frame/test_block_internals.py
+++ b/pandas/tests/frame/test_block_internals.py
@@ -478,7 +478,7 @@ def test_convert_objects(self, float_string_frame):
length = len(float_string_frame)
float_string_frame["J"] = "1."
float_string_frame["K"] = "1"
- float_string_frame.loc[0:5, ["J", "K"]] = "garbled"
+ float_string_frame.loc[float_string_frame.index[0:5], ["J", "K"]] = "garbled"
converted = float_string_frame._convert(datetime=True, numeric=True)
assert converted["H"].dtype == "float64"
assert converted["I"].dtype == "int64"
diff --git a/pandas/tests/frame/test_cumulative.py b/pandas/tests/frame/test_cumulative.py
index 486cbfb2761e0..1b7e70dd28c63 100644
--- a/pandas/tests/frame/test_cumulative.py
+++ b/pandas/tests/frame/test_cumulative.py
@@ -23,9 +23,9 @@ def test_cumsum_corner(self):
result = dm.cumsum() # noqa
def test_cumsum(self, datetime_frame):
- datetime_frame.loc[5:10, 0] = np.nan
- datetime_frame.loc[10:15, 1] = np.nan
- datetime_frame.loc[15:, 2] = np.nan
+ datetime_frame.iloc[5:10, 0] = np.nan
+ datetime_frame.iloc[10:15, 1] = np.nan
+ datetime_frame.iloc[15:, 2] = np.nan
# axis = 0
cumsum = datetime_frame.cumsum()
@@ -46,9 +46,9 @@ def test_cumsum(self, datetime_frame):
assert np.shape(cumsum_xs) == np.shape(datetime_frame)
def test_cumprod(self, datetime_frame):
- datetime_frame.loc[5:10, 0] = np.nan
- datetime_frame.loc[10:15, 1] = np.nan
- datetime_frame.loc[15:, 2] = np.nan
+ datetime_frame.iloc[5:10, 0] = np.nan
+ datetime_frame.iloc[10:15, 1] = np.nan
+ datetime_frame.iloc[15:, 2] = np.nan
# axis = 0
cumprod = datetime_frame.cumprod()
@@ -80,9 +80,9 @@ def test_cumprod(self, datetime_frame):
strict=False,
)
def test_cummin(self, datetime_frame):
- datetime_frame.loc[5:10, 0] = np.nan
- datetime_frame.loc[10:15, 1] = np.nan
- datetime_frame.loc[15:, 2] = np.nan
+ datetime_frame.iloc[5:10, 0] = np.nan
+ datetime_frame.iloc[10:15, 1] = np.nan
+ datetime_frame.iloc[15:, 2] = np.nan
# axis = 0
cummin = datetime_frame.cummin()
@@ -108,9 +108,9 @@ def test_cummin(self, datetime_frame):
strict=False,
)
def test_cummax(self, datetime_frame):
- datetime_frame.loc[5:10, 0] = np.nan
- datetime_frame.loc[10:15, 1] = np.nan
- datetime_frame.loc[15:, 2] = np.nan
+ datetime_frame.iloc[5:10, 0] = np.nan
+ datetime_frame.iloc[10:15, 1] = np.nan
+ datetime_frame.iloc[15:, 2] = np.nan
# axis = 0
cummax = datetime_frame.cummax()
diff --git a/pandas/tests/frame/test_to_csv.py b/pandas/tests/frame/test_to_csv.py
index 86c9a98377f3f..cec2bd4b634c1 100644
--- a/pandas/tests/frame/test_to_csv.py
+++ b/pandas/tests/frame/test_to_csv.py
@@ -761,7 +761,7 @@ def create_cols(name):
)
# add in some nans
- df_float.loc[30:50, 1:3] = np.nan
+ df_float.iloc[30:50, 1:3] = np.nan
# ## this is a bug in read_csv right now ####
# df_dt.loc[30:50,1:3] = np.nan
diff --git a/pandas/tests/indexing/test_loc.py b/pandas/tests/indexing/test_loc.py
index 4d042af8d59b4..3073fe085de15 100644
--- a/pandas/tests/indexing/test_loc.py
+++ b/pandas/tests/indexing/test_loc.py
@@ -863,6 +863,7 @@ def test_loc_setitem_empty_append_raises(self):
data = [1, 2]
df = DataFrame(columns=["x", "y"])
+ df.index = df.index.astype(np.int64)
msg = (
r"None of \[Int64Index\(\[0, 1\], dtype='int64'\)\] "
r"are in the \[index\]"
@@ -975,3 +976,42 @@ def test_loc_mixed_int_float():
result = ser.loc[1]
assert result == 0
+
+
+def test_loc_with_positional_slice_deprecation():
+ # GH#31840
+ ser = pd.Series(range(4), index=["A", "B", "C", "D"])
+
+ with tm.assert_produces_warning(FutureWarning, check_stacklevel=False):
+ ser.loc[:3] = 2
+
+ expected = pd.Series([2, 2, 2, 3], index=["A", "B", "C", "D"])
+ tm.assert_series_equal(ser, expected)
+
+
+def test_loc_slice_disallows_positional():
+ # GH#16121, GH#24612, GH#31810
+ dti = pd.date_range("2016-01-01", periods=3)
+ df = pd.DataFrame(np.random.random((3, 2)), index=dti)
+
+ ser = df[0]
+
+ msg = (
+ "cannot do slice indexing on DatetimeIndex with these "
+ r"indexers \[1\] of type int"
+ )
+
+ for obj in [df, ser]:
+ with pytest.raises(TypeError, match=msg):
+ obj.loc[1:3]
+
+ with tm.assert_produces_warning(FutureWarning, check_stacklevel=False):
+ # GH#31840 deprecated incorrect behavior
+ obj.loc[1:3] = 1
+
+ with pytest.raises(TypeError, match=msg):
+ df.loc[1:3, 1]
+
+ with tm.assert_produces_warning(FutureWarning):
+ # GH#31840 deprecated incorrect behavior
+ df.loc[1:3, 1] = 2
diff --git a/pandas/tests/io/pytables/test_store.py b/pandas/tests/io/pytables/test_store.py
index fd585a73f6ce6..fcc48daa59a40 100644
--- a/pandas/tests/io/pytables/test_store.py
+++ b/pandas/tests/io/pytables/test_store.py
@@ -342,7 +342,7 @@ def test_repr(self, setup_path):
df["timestamp2"] = Timestamp("20010103")
df["datetime1"] = datetime.datetime(2001, 1, 2, 0, 0)
df["datetime2"] = datetime.datetime(2001, 1, 3, 0, 0)
- df.loc[3:6, ["obj1"]] = np.nan
+ df.loc[df.index[3:6], ["obj1"]] = np.nan
df = df._consolidate()._convert(datetime=True)
with catch_warnings(record=True):
@@ -846,7 +846,7 @@ def test_put_mixed_type(self, setup_path):
df["timestamp2"] = Timestamp("20010103")
df["datetime1"] = datetime.datetime(2001, 1, 2, 0, 0)
df["datetime2"] = datetime.datetime(2001, 1, 3, 0, 0)
- df.loc[3:6, ["obj1"]] = np.nan
+ df.loc[df.index[3:6], ["obj1"]] = np.nan
df = df._consolidate()._convert(datetime=True)
with ensure_clean_store(setup_path) as store:
@@ -1372,11 +1372,11 @@ def check_col(key, name, size):
_maybe_remove(store, "df")
df = tm.makeTimeDataFrame()
df["string"] = "foo"
- df.loc[1:4, "string"] = np.nan
+ df.loc[df.index[1:4], "string"] = np.nan
df["string2"] = "bar"
- df.loc[4:8, "string2"] = np.nan
+ df.loc[df.index[4:8], "string2"] = np.nan
df["string3"] = "bah"
- df.loc[1:, "string3"] = np.nan
+ df.loc[df.index[1:], "string3"] = np.nan
store.append("df", df)
result = store.select("df")
tm.assert_frame_equal(result, df)
@@ -1492,8 +1492,8 @@ def test_append_with_data_columns(self, setup_path):
# data column selection with a string data_column
df_new = df.copy()
df_new["string"] = "foo"
- df_new.loc[1:4, "string"] = np.nan
- df_new.loc[5:6, "string"] = "bar"
+ df_new.loc[df_new.index[1:4], "string"] = np.nan
+ df_new.loc[df_new.index[5:6], "string"] = "bar"
_maybe_remove(store, "df")
store.append("df", df_new, data_columns=["string"])
result = store.select("df", "string='foo'")
@@ -1574,12 +1574,12 @@ def check_col(key, name, size):
# doc example
df_dc = df.copy()
df_dc["string"] = "foo"
- df_dc.loc[4:6, "string"] = np.nan
- df_dc.loc[7:9, "string"] = "bar"
+ df_dc.loc[df_dc.index[4:6], "string"] = np.nan
+ df_dc.loc[df_dc.index[7:9], "string"] = "bar"
df_dc["string2"] = "cool"
df_dc["datetime"] = Timestamp("20010102")
df_dc = df_dc._convert(datetime=True)
- df_dc.loc[3:5, ["A", "B", "datetime"]] = np.nan
+ df_dc.loc[df_dc.index[3:5], ["A", "B", "datetime"]] = np.nan
_maybe_remove(store, "df_dc")
store.append(
@@ -1602,8 +1602,8 @@ def check_col(key, name, size):
np.random.randn(8, 3), index=index, columns=["A", "B", "C"]
)
df_dc["string"] = "foo"
- df_dc.loc[4:6, "string"] = np.nan
- df_dc.loc[7:9, "string"] = "bar"
+ df_dc.loc[df_dc.index[4:6], "string"] = np.nan
+ df_dc.loc[df_dc.index[7:9], "string"] = "bar"
df_dc.loc[:, ["B", "C"]] = df_dc.loc[:, ["B", "C"]].abs()
df_dc["string2"] = "cool"
@@ -2024,7 +2024,7 @@ def test_table_mixed_dtypes(self, setup_path):
df["timestamp2"] = Timestamp("20010103")
df["datetime1"] = datetime.datetime(2001, 1, 2, 0, 0)
df["datetime2"] = datetime.datetime(2001, 1, 3, 0, 0)
- df.loc[3:6, ["obj1"]] = np.nan
+ df.loc[df.index[3:6], ["obj1"]] = np.nan
df = df._consolidate()._convert(datetime=True)
with ensure_clean_store(setup_path) as store:
@@ -2200,7 +2200,7 @@ def test_invalid_terms(self, setup_path):
df = tm.makeTimeDataFrame()
df["string"] = "foo"
- df.loc[0:4, "string"] = "bar"
+ df.loc[df.index[0:4], "string"] = "bar"
store.put("df", df, format="table")
@@ -3343,7 +3343,7 @@ def test_string_select(self, setup_path):
# test string ==/!=
df["x"] = "none"
- df.loc[2:7, "x"] = ""
+ df.loc[df.index[2:7], "x"] = ""
store.append("df", df, data_columns=["x"])
@@ -3365,7 +3365,7 @@ def test_string_select(self, setup_path):
# int ==/!=
df["int"] = 1
- df.loc[2:7, "int"] = 2
+ df.loc[df.index[2:7], "int"] = 2
store.append("df3", df, data_columns=["int"])
@@ -3419,7 +3419,7 @@ def test_read_column(self, setup_path):
# a data column with NaNs, result excludes the NaNs
df3 = df.copy()
df3["string"] = "foo"
- df3.loc[4:6, "string"] = np.nan
+ df3.loc[df3.index[4:6], "string"] = np.nan
store.append("df3", df3, data_columns=["string"])
result = store.select_column("df3", "string")
tm.assert_almost_equal(result.values, df3["string"].values)
diff --git a/pandas/tests/series/indexing/test_datetime.py b/pandas/tests/series/indexing/test_datetime.py
index fc9d4ec5290a5..b5d04fd499c08 100644
--- a/pandas/tests/series/indexing/test_datetime.py
+++ b/pandas/tests/series/indexing/test_datetime.py
@@ -293,7 +293,7 @@ def test_getitem_setitem_datetimeindex():
result = ts.copy()
result[ts.index[4:8]] = 0
- result[4:8] = ts[4:8]
+ result.iloc[4:8] = ts.iloc[4:8]
tm.assert_series_equal(result, ts)
# also test partial date slicing
@@ -349,7 +349,7 @@ def test_getitem_setitem_periodindex():
result = ts.copy()
result[ts.index[4:8]] = 0
- result[4:8] = ts[4:8]
+ result.iloc[4:8] = ts.iloc[4:8]
tm.assert_series_equal(result, ts)
diff --git a/pandas/tests/series/methods/test_asof.py b/pandas/tests/series/methods/test_asof.py
index b121efd202744..40fb605bf0ae1 100644
--- a/pandas/tests/series/methods/test_asof.py
+++ b/pandas/tests/series/methods/test_asof.py
@@ -12,7 +12,7 @@ def test_basic(self):
N = 50
rng = date_range("1/1/1990", periods=N, freq="53s")
ts = Series(np.random.randn(N), index=rng)
- ts[15:30] = np.nan
+ ts.iloc[15:30] = np.nan
dates = date_range("1/1/1990", periods=N * 3, freq="25s")
result = ts.asof(dates)
@@ -37,8 +37,8 @@ def test_scalar(self):
N = 30
rng = date_range("1/1/1990", periods=N, freq="53s")
ts = Series(np.arange(N), index=rng)
- ts[5:10] = np.NaN
- ts[15:20] = np.NaN
+ ts.iloc[5:10] = np.NaN
+ ts.iloc[15:20] = np.NaN
val1 = ts.asof(ts.index[7])
val2 = ts.asof(ts.index[19])
@@ -94,7 +94,7 @@ def test_periodindex(self):
N = 50
rng = period_range("1/1/1990", periods=N, freq="H")
ts = Series(np.random.randn(N), index=rng)
- ts[15:30] = np.nan
+ ts.iloc[15:30] = np.nan
dates = date_range("1/1/1990", periods=N * 3, freq="37min")
result = ts.asof(dates)
@@ -112,8 +112,8 @@ def test_periodindex(self):
rs = result[mask]
assert (rs == ts[lb]).all()
- ts[5:10] = np.nan
- ts[15:20] = np.nan
+ ts.iloc[5:10] = np.nan
+ ts.iloc[15:20] = np.nan
val1 = ts.asof(ts.index[7])
val2 = ts.asof(ts.index[19])
diff --git a/pandas/tests/series/test_arithmetic.py b/pandas/tests/series/test_arithmetic.py
index 10197766ce4a6..95d04c9a45d25 100644
--- a/pandas/tests/series/test_arithmetic.py
+++ b/pandas/tests/series/test_arithmetic.py
@@ -74,7 +74,7 @@ def test_add_series_with_period_index(self):
result = ts + ts[::2]
expected = ts + ts
- expected[1::2] = np.nan
+ expected.iloc[1::2] = np.nan
tm.assert_series_equal(result, expected)
result = ts + _permute(ts[::2])
| - [x] closes #26412
- [x] closes #16121
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
I plan to flesh out the tests before this goes in.
cc @toobaz | https://api.github.com/repos/pandas-dev/pandas/pulls/31840 | 2020-02-10T02:42:55Z | 2020-03-08T15:45:26Z | 2020-03-08T15:45:26Z | 2020-03-08T16:08:01Z |