title stringlengths 1 185 | diff stringlengths 0 32.2M | body stringlengths 0 123k ⌀ | url stringlengths 57 58 | created_at stringlengths 20 20 | closed_at stringlengths 20 20 | merged_at stringlengths 20 20 ⌀ | updated_at stringlengths 20 20 |
|---|---|---|---|---|---|---|---|
DOC: update style.ipynb for 2.0 | diff --git a/doc/source/user_guide/style.ipynb b/doc/source/user_guide/style.ipynb
index c612751dc5a1d..90e57ad4ad90f 100644
--- a/doc/source/user_guide/style.ipynb
+++ b/doc/source/user_guide/style.ipynb
@@ -9,23 +9,33 @@
"This section demonstrates visualization of tabular data using the [Styler][styler]\n",
"class. For information on visualization with charting please see [Chart Visualization][viz]. This document is written as a Jupyter Notebook, and can be viewed or downloaded [here][download].\n",
"\n",
- "[styler]: ../reference/api/pandas.io.formats.style.Styler.rst\n",
- "[viz]: visualization.rst\n",
- "[download]: https://nbviewer.ipython.org/github/pandas-dev/pandas/blob/main/doc/source/user_guide/style.ipynb"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "## Styler Object and HTML \n",
+ "## Styler Object and Customising the Display\n",
+ "Styling and output display customisation should be performed **after** the data in a DataFrame has been processed. The Styler is **not** dynamically updated if further changes to the DataFrame are made. The `DataFrame.style` attribute is a property that returns a [Styler][styler] object. It has a `_repr_html_` method defined on it so it is rendered automatically in Jupyter Notebook.\n",
"\n",
- "Styling should be performed after the data in a DataFrame has been processed. The [Styler][styler] creates an HTML `<table>` and leverages CSS styling language to manipulate many parameters including colors, fonts, borders, background, etc. See [here][w3schools] for more information on styling HTML tables. This allows a lot of flexibility out of the box, and even enables web developers to integrate DataFrames into their exiting user interface designs.\n",
- " \n",
- "The `DataFrame.style` attribute is a property that returns a [Styler][styler] object. It has a `_repr_html_` method defined on it so they are rendered automatically in Jupyter Notebook.\n",
+ "The Styler, which can be used for large data but is primarily designed for small data, currently has the ability to output to these formats:\n",
+ "\n",
+ " - HTML\n",
+ " - LaTeX\n",
+ " - String (and CSV by extension)\n",
+ " - Excel\n",
+ " - (JSON is not currently available)\n",
+ "\n",
+ "The first three of these have display customisation methods designed to format and customise the output. These include:\n",
"\n",
+ " - Formatting values, the index and columns headers, using [.format()][formatfunc] and [.format_index()][formatfuncindex],\n",
+ " - Renaming the index or column header labels, using [.relabel_index()][relabelfunc]\n",
+ " - Hiding certain columns, the index and/or column headers, or index names, using [.hide()][hidefunc]\n",
+ " - Concatenating similar DataFrames, using [.concat()][concatfunc]\n",
+ " \n",
"[styler]: ../reference/api/pandas.io.formats.style.Styler.rst\n",
- "[w3schools]: https://www.w3schools.com/html/html_tables.asp"
+ "[viz]: visualization.rst\n",
+ "[download]: https://nbviewer.ipython.org/github/pandas-dev/pandas/blob/main/doc/source/user_guide/style.ipynb\n",
+ "[format]: https://docs.python.org/3/library/string.html#format-specification-mini-language\n",
+ "[formatfunc]: ../reference/api/pandas.io.formats.style.Styler.format.rst\n",
+ "[formatfuncindex]: ../reference/api/pandas.io.formats.style.Styler.format_index.rst\n",
+ "[relabelfunc]: ../reference/api/pandas.io.formats.style.Styler.relabel_index.rst\n",
+ "[hidefunc]: ../reference/api/pandas.io.formats.style.Styler.hide.rst\n",
+ "[concatfunc]: ../reference/api/pandas.io.formats.style.Styler.concat.rst"
]
},
{
@@ -41,6 +51,25 @@
"# This cell is hidden from the output"
]
},
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Formatting the Display\n",
+ "\n",
+ "### Formatting Values\n",
+ "\n",
+ "The [Styler][styler] distinguishes the *display* value from the *actual* value, in both data values and index or column headers. To control the display value, the text is printed in each cell as a string, and we can use the [.format()][formatfunc] and [.format_index()][formatfuncindex] methods to manipulate this according to a [format spec string][format] or a callable that takes a single value and returns a string. It is possible to define this for the whole table or index, or for individual columns or MultiIndex levels. We can also overwrite index names.\n",
+ "\n",
+ "Additionally, the format function has a **precision** argument to specifically help format floats, as well as **decimal** and **thousands** separators to support other locales, an **na_rep** argument to display missing data, and **escape** and **hyperlinks** arguments to help display safe HTML or safe LaTeX. The default formatter is configured to adopt pandas' global options, such as the `styler.format.precision` option, controllable using `with pd.option_context('format.precision', 2):`\n",
+ "\n",
+ "[styler]: ../reference/api/pandas.io.formats.style.Styler.rst\n",
+ "[format]: https://docs.python.org/3/library/string.html#format-specification-mini-language\n",
+ "[formatfunc]: ../reference/api/pandas.io.formats.style.Styler.format.rst\n",
+ "[formatfuncindex]: ../reference/api/pandas.io.formats.style.Styler.format_index.rst\n",
+ "[relabelfunc]: ../reference/api/pandas.io.formats.style.Styler.relabel_index.rst"
+ ]
+ },
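The precision and separator handling described in the cell above builds on Python's format-spec mini-language. A rough stdlib-only sketch of the same effects (an illustration, not the pandas implementation):

```python
# The format-spec mini-language underlies the precision / thousands
# handling that Styler.format exposes.  Stdlib illustration only:
value = 1234.5678

# precision=2 with a thousands separator:
assert format(value, ",.2f") == "1,234.57"

# A European-style rendering (thousands=".", decimal=",") can be
# emulated by swapping separators after formatting:
swapped = format(value, ",.3f").replace(",", "_").replace(".", ",").replace("_", ".")
assert swapped == "1.234,568"
```

The separator-swapping trick is only a stand-in; `Styler.format(thousands=".", decimal=",")` handles this natively.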
{
"cell_type": "code",
"execution_count": null,
@@ -51,19 +80,157 @@
"import numpy as np\n",
"import matplotlib as mpl\n",
"\n",
- "df = pd.DataFrame([[38.0, 2.0, 18.0, 22.0, 21, np.nan],[19, 439, 6, 452, 226,232]], \n",
- " index=pd.Index(['Tumour (Positive)', 'Non-Tumour (Negative)'], name='Actual Label:'), \n",
- " columns=pd.MultiIndex.from_product([['Decision Tree', 'Regression', 'Random'],['Tumour', 'Non-Tumour']], names=['Model:', 'Predicted:']))\n",
- "df.style"
+ "df = pd.DataFrame({\n",
+ " \"strings\": [\"Adam\", \"Mike\"],\n",
+ " \"ints\": [1, 3],\n",
+ " \"floats\": [1.123, 1000.23]\n",
+ "})\n",
+ "df.style \\\n",
+ " .format(precision=3, thousands=\".\", decimal=\",\") \\\n",
+ " .format_index(str.upper, axis=1) \\\n",
+ " .relabel_index([\"row 1\", \"row 2\"], axis=0)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
- "The above output looks very similar to the standard DataFrame HTML representation. But the HTML here has already attached some CSS classes to each cell, even if we haven't yet created any styles. We can view these by calling the [.to_html()][tohtml] method, which returns the raw HTML as string, which is useful for further processing or adding to a file - read on in [More about CSS and HTML](#More-About-CSS-and-HTML). Below we will show how we can use these to format the DataFrame to be more communicative. For example how we can build `s`:\n",
+ "Using Styler to manipulate the display is a useful feature because maintaining the indexing and data values for other purposes gives greater control. You do not have to overwrite your DataFrame to display it how you like. Here is a more comprehensive example of using the formatting functions whilst still relying on the underlying data for indexing and calculations."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "weather_df = pd.DataFrame(np.random.rand(10,2)*5, \n",
+ " index=pd.date_range(start=\"2021-01-01\", periods=10),\n",
+ " columns=[\"Tokyo\", \"Beijing\"])\n",
"\n",
- "[tohtml]: ../reference/api/pandas.io.formats.style.Styler.to_html.rst"
+ "def rain_condition(v): \n",
+ " if v < 1.75:\n",
+ " return \"Dry\"\n",
+ " elif v < 2.75:\n",
+ " return \"Rain\"\n",
+ " return \"Heavy Rain\"\n",
+ "\n",
+ "def make_pretty(styler):\n",
+ " styler.set_caption(\"Weather Conditions\")\n",
+ " styler.format(rain_condition)\n",
+ " styler.format_index(lambda v: v.strftime(\"%A\"))\n",
+ " styler.background_gradient(axis=None, vmin=1, vmax=5, cmap=\"YlGnBu\")\n",
+ " return styler\n",
+ "\n",
+ "weather_df"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "weather_df.loc[\"2021-01-04\":\"2021-01-08\"].style.pipe(make_pretty)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Hiding Data\n",
+ "\n",
+ "The index and column headers can be completely hidden, as well as subselecting rows or columns that one wishes to exclude. Both these options are performed using the same methods.\n",
+ "\n",
+ "The index can be hidden from rendering by calling [.hide()][hideidx] without any arguments, which might be useful if your index is integer-based. Similarly, column headers can be hidden by calling [.hide(axis=\"columns\")][hideidx] without any further arguments.\n",
+ "\n",
+ "Specific rows or columns can be hidden from rendering by calling the same [.hide()][hideidx] method and passing in a row/column label, a list-like or a slice of row/column labels for the ``subset`` argument.\n",
+ "\n",
+ "Hiding does not change the integer arrangement of CSS classes, e.g. hiding the first two columns of a DataFrame means the column class indexing will still start at `col2`, since `col0` and `col1` are simply ignored.\n",
+ "\n",
+ "[hideidx]: ../reference/api/pandas.io.formats.style.Styler.hide.rst"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "df = pd.DataFrame(np.random.randn(5, 5))\n",
+ "df.style \\\n",
+ " .hide(subset=[0, 2, 4], axis=0) \\\n",
+ " .hide(subset=[0, 2, 4], axis=1)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "To invert this into a **show** functionality, it is best practice to compose a list of the items to hide."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "show = [0, 2, 4]\n",
+ "df.style \\\n",
+ " .hide([row for row in df.index if row not in show], axis=0) \\\n",
+ " .hide([col for col in df.columns if col not in show], axis=1)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "### Concatenating DataFrame Outputs\n",
+ "\n",
+ "Two or more Stylers can be concatenated together provided they share the same columns. This is very useful for showing summary statistics for a DataFrame, and is often used in combination with DataFrame.agg.\n",
+ "\n",
+ "Since the concatenated objects are Stylers, they can be styled independently, as will be shown below, and their concatenation preserves those styles."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "summary_styler = df.agg([\"sum\", \"mean\"]).style \\\n",
+ " .format(precision=3) \\\n",
+ " .relabel_index([\"Sum\", \"Average\"])\n",
+ "df.style.format(precision=1).concat(summary_styler)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Styler Object and HTML \n",
+ "\n",
+ "The [Styler][styler] was originally constructed to support the wide array of HTML formatting options. Its HTML output creates an HTML `<table>` and leverages the CSS styling language to manipulate many parameters including colors, fonts, borders, background, etc. See [here][w3schools] for more information on styling HTML tables. This allows a lot of flexibility out of the box, and even enables web developers to integrate DataFrames into their existing user interface designs.\n",
+ "\n",
+ "Below we demonstrate the default output, which looks very similar to the standard DataFrame HTML representation. But the HTML here has already attached some CSS classes to each cell, even if we haven't yet created any styles. We can view these by calling the [.to_html()][tohtml] method, which returns the raw HTML as a string; this is useful for further processing or adding to a file - read on in [More about CSS and HTML](#More-About-CSS-and-HTML). This section will also provide a walkthrough of how to convert this default output into a more communicative DataFrame display. For example, how we can build `s`:\n",
+ "\n",
+ "[tohtml]: ../reference/api/pandas.io.formats.style.Styler.to_html.rst\n",
+ "\n",
+ "[styler]: ../reference/api/pandas.io.formats.style.Styler.rst\n",
+ "[w3schools]: https://www.w3schools.com/html/html_tables.asp"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "df = pd.DataFrame([[38.0, 2.0, 18.0, 22.0, 21, np.nan],[19, 439, 6, 452, 226,232]], \n",
+ " index=pd.Index(['Tumour (Positive)', 'Non-Tumour (Negative)'], name='Actual Label:'), \n",
+ " columns=pd.MultiIndex.from_product([['Decision Tree', 'Regression', 'Random'],['Tumour', 'Non-Tumour']], names=['Model:', 'Predicted:']))\n",
+ "df.style"
]
},
{
@@ -147,90 +314,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "## Formatting the Display\n",
- "\n",
- "### Formatting Values\n",
- "\n",
- "Before adding styles it is useful to show that the [Styler][styler] can distinguish the *display* value from the *actual* value, in both datavalues and index or columns headers. To control the display value, the text is printed in each cell as string, and we can use the [.format()][formatfunc] and [.format_index()][formatfuncindex] methods to manipulate this according to a [format spec string][format] or a callable that takes a single value and returns a string. It is possible to define this for the whole table, or index, or for individual columns, or MultiIndex levels. \n",
- "\n",
- "Additionally, the format function has a **precision** argument to specifically help formatting floats, as well as **decimal** and **thousands** separators to support other locales, an **na_rep** argument to display missing data, and an **escape** argument to help displaying safe-HTML or safe-LaTeX. The default formatter is configured to adopt pandas' `styler.format.precision` option, controllable using `with pd.option_context('format.precision', 2):` \n",
- "\n",
- "[styler]: ../reference/api/pandas.io.formats.style.Styler.rst\n",
- "[format]: https://docs.python.org/3/library/string.html#format-specification-mini-language\n",
- "[formatfunc]: ../reference/api/pandas.io.formats.style.Styler.format.rst\n",
- "[formatfuncindex]: ../reference/api/pandas.io.formats.style.Styler.format_index.rst"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "df.style.format(precision=0, na_rep='MISSING', thousands=\" \",\n",
- " formatter={('Decision Tree', 'Tumour'): \"{:.2f}\",\n",
- " ('Regression', 'Non-Tumour'): lambda x: \"$ {:,.1f}\".format(x*-1e6)\n",
- " })"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "Using Styler to manipulate the display is a useful feature because maintaining the indexing and datavalues for other purposes gives greater control. You do not have to overwrite your DataFrame to display it how you like. Here is an example of using the formatting functions whilst still relying on the underlying data for indexing and calculations."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "weather_df = pd.DataFrame(np.random.rand(10,2)*5, \n",
- " index=pd.date_range(start=\"2021-01-01\", periods=10),\n",
- " columns=[\"Tokyo\", \"Beijing\"])\n",
- "\n",
- "def rain_condition(v): \n",
- " if v < 1.75:\n",
- " return \"Dry\"\n",
- " elif v < 2.75:\n",
- " return \"Rain\"\n",
- " return \"Heavy Rain\"\n",
- "\n",
- "def make_pretty(styler):\n",
- " styler.set_caption(\"Weather Conditions\")\n",
- " styler.format(rain_condition)\n",
- " styler.format_index(lambda v: v.strftime(\"%A\"))\n",
- " styler.background_gradient(axis=None, vmin=1, vmax=5, cmap=\"YlGnBu\")\n",
- " return styler\n",
- "\n",
- "weather_df"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "weather_df.loc[\"2021-01-04\":\"2021-01-08\"].style.pipe(make_pretty)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "### Hiding Data\n",
- "\n",
- "The index and column headers can be completely hidden, as well subselecting rows or columns that one wishes to exclude. Both these options are performed using the same methods.\n",
- "\n",
- "The index can be hidden from rendering by calling [.hide()][hideidx] without any arguments, which might be useful if your index is integer based. Similarly column headers can be hidden by calling [.hide(axis=\"columns\")][hideidx] without any further arguments.\n",
- "\n",
- "Specific rows or columns can be hidden from rendering by calling the same [.hide()][hideidx] method and passing in a row/column label, a list-like or a slice of row/column labels to for the ``subset`` argument.\n",
- "\n",
- "Hiding does not change the integer arrangement of CSS classes, e.g. hiding the first two columns of a DataFrame means the column class indexing will still start at `col2`, since `col0` and `col1` are simply ignored.\n",
- "\n",
- "We can update our `Styler` object from before to hide some data and format the values.\n",
+ "The first step we have taken is to create the Styler object from the DataFrame and then select the range of interest by hiding unwanted columns with [.hide()][hideidx].\n",
"\n",
"[hideidx]: ../reference/api/pandas.io.formats.style.Styler.hide.rst"
]
| This PR tries to frame Styler more as a general display output tool, by including the more generalist "formatting" functions at the start,


| https://api.github.com/repos/pandas-dev/pandas/pulls/50973 | 2023-01-25T19:13:37Z | 2023-02-09T21:42:39Z | 2023-02-09T21:42:39Z | 2023-02-15T19:06:47Z |
TST: Test ArrowExtensionArray with decimal types | diff --git a/pandas/_testing/__init__.py b/pandas/_testing/__init__.py
index 00e949b1dd318..69ca809e4f498 100644
--- a/pandas/_testing/__init__.py
+++ b/pandas/_testing/__init__.py
@@ -215,6 +215,7 @@
FLOAT_PYARROW_DTYPES_STR_REPR = [
str(ArrowDtype(typ)) for typ in FLOAT_PYARROW_DTYPES
]
+ DECIMAL_PYARROW_DTYPES = [pa.decimal128(7, 3)]
STRING_PYARROW_DTYPES = [pa.string()]
BINARY_PYARROW_DTYPES = [pa.binary()]
@@ -239,6 +240,7 @@
ALL_PYARROW_DTYPES = (
ALL_INT_PYARROW_DTYPES
+ FLOAT_PYARROW_DTYPES
+ + DECIMAL_PYARROW_DTYPES
+ STRING_PYARROW_DTYPES
+ BINARY_PYARROW_DTYPES
+ TIME_PYARROW_DTYPES
diff --git a/pandas/core/arrays/arrow/array.py b/pandas/core/arrays/arrow/array.py
index ab4f87d4cf374..5c09a7a28856f 100644
--- a/pandas/core/arrays/arrow/array.py
+++ b/pandas/core/arrays/arrow/array.py
@@ -1098,6 +1098,7 @@ def _reduce(self, name: str, *, skipna: bool = True, **kwargs):
pa.types.is_integer(pa_type)
or pa.types.is_floating(pa_type)
or pa.types.is_duration(pa_type)
+ or pa.types.is_decimal(pa_type)
):
# pyarrow only supports any/all for boolean dtype, we allow
# for other dtypes, matching our non-pyarrow behavior
diff --git a/pandas/core/arrays/arrow/dtype.py b/pandas/core/arrays/arrow/dtype.py
index fdb9ac877831b..e6c1b2107e7f6 100644
--- a/pandas/core/arrays/arrow/dtype.py
+++ b/pandas/core/arrays/arrow/dtype.py
@@ -201,7 +201,7 @@ def construct_from_string(cls, string: str) -> ArrowDtype:
try:
pa_dtype = pa.type_for_alias(base_type)
except ValueError as err:
- has_parameters = re.search(r"\[.*\]", base_type)
+ has_parameters = re.search(r"[\[\(].*[\]\)]", base_type)
if has_parameters:
# Fallback to try common temporal types
try:
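The regex change in `construct_from_string` above widens the parameter detection from square brackets only to brackets or parentheses, which is what lets parenthesised type strings like decimal be recognised as parameterized. A small sketch of the two patterns on assumed example type strings:

```python
import re

OLD = r"\[.*\]"            # previous pattern: square brackets only
NEW = r"[\[\(].*[\]\)]"    # widened pattern: brackets or parentheses

# Parenthesised parameters such as decimal types were missed before:
assert re.search(OLD, "decimal128(7, 3)") is None
assert re.search(NEW, "decimal128(7, 3)") is not None

# Bracket-parameterised types still match:
assert re.search(NEW, "timestamp[s]") is not None
```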
diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index bd631c0c0d948..acebe8a498f03 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -810,7 +810,11 @@ def _engine(
target_values = self._get_engine_target()
if isinstance(target_values, ExtensionArray):
if isinstance(target_values, (BaseMaskedArray, ArrowExtensionArray)):
- return _masked_engines[target_values.dtype.name](target_values)
+ try:
+ return _masked_engines[target_values.dtype.name](target_values)
+ except KeyError:
+ # Not supported yet e.g. decimal
+ pass
elif self._engine_type is libindex.ObjectEngine:
return libindex.ExtensionEngine(target_values)
@@ -4948,6 +4952,8 @@ def _get_engine_target(self) -> ArrayLike:
and not (
isinstance(self._values, ArrowExtensionArray)
and is_numeric_dtype(self.dtype)
+ # Exclude decimal
+ and self.dtype.kind != "O"
)
):
# TODO(ExtensionIndex): remove special-case, just use self._values
diff --git a/pandas/tests/extension/test_arrow.py b/pandas/tests/extension/test_arrow.py
index 75e5744f498b4..8ccf63541658c 100644
--- a/pandas/tests/extension/test_arrow.py
+++ b/pandas/tests/extension/test_arrow.py
@@ -16,6 +16,7 @@
time,
timedelta,
)
+from decimal import Decimal
from io import (
BytesIO,
StringIO,
@@ -79,6 +80,14 @@ def data(dtype):
data = [1, 0] * 4 + [None] + [-2, -1] * 44 + [None] + [1, 99]
elif pa.types.is_unsigned_integer(pa_dtype):
data = [1, 0] * 4 + [None] + [2, 1] * 44 + [None] + [1, 99]
+ elif pa.types.is_decimal(pa_dtype):
+ data = (
+ [Decimal("1"), Decimal("0.0")] * 4
+ + [None]
+ + [Decimal("-2.0"), Decimal("-1.0")] * 44
+ + [None]
+ + [Decimal("0.5"), Decimal("33.123")]
+ )
elif pa.types.is_date(pa_dtype):
data = (
[date(2022, 1, 1), date(1999, 12, 31)] * 4
@@ -188,6 +197,10 @@ def data_for_grouping(dtype):
A = b"a"
B = b"b"
C = b"c"
+ elif pa.types.is_decimal(pa_dtype):
+ A = Decimal("-1.1")
+ B = Decimal("0.0")
+ C = Decimal("1.1")
else:
raise NotImplementedError
return pd.array([B, B, None, None, A, A, B, C], dtype=dtype)
@@ -250,9 +263,12 @@ def test_astype_str(self, data, request):
class TestConstructors(base.BaseConstructorsTests):
def test_from_dtype(self, data, request):
pa_dtype = data.dtype.pyarrow_dtype
+ if pa.types.is_string(pa_dtype) or pa.types.is_decimal(pa_dtype):
+ if pa.types.is_string(pa_dtype):
+ reason = "ArrowDtype(pa.string()) != StringDtype('pyarrow')"
+ else:
+ reason = f"pyarrow.type_for_alias cannot infer {pa_dtype}"
- if pa.types.is_string(pa_dtype):
- reason = "ArrowDtype(pa.string()) != StringDtype('pyarrow')"
request.node.add_marker(
pytest.mark.xfail(
reason=reason,
@@ -260,7 +276,7 @@ def test_from_dtype(self, data, request):
)
super().test_from_dtype(data)
- def test_from_sequence_pa_array(self, data, request):
+ def test_from_sequence_pa_array(self, data):
# https://github.com/pandas-dev/pandas/pull/47034#discussion_r955500784
# data._data = pa.ChunkedArray
result = type(data)._from_sequence(data._data)
@@ -285,7 +301,9 @@ def test_from_sequence_of_strings_pa_array(self, data, request):
reason="Nanosecond time parsing not supported.",
)
)
- elif pa_version_under11p0 and pa.types.is_duration(pa_dtype):
+ elif pa_version_under11p0 and (
+ pa.types.is_duration(pa_dtype) or pa.types.is_decimal(pa_dtype)
+ ):
request.node.add_marker(
pytest.mark.xfail(
raises=pa.ArrowNotImplementedError,
@@ -384,7 +402,9 @@ def test_accumulate_series(self, data, all_numeric_accumulations, skipna, reques
# renders the exception messages even when not showing them
pytest.skip(f"{all_numeric_accumulations} not implemented for pyarrow < 9")
- elif all_numeric_accumulations == "cumsum" and pa.types.is_boolean(pa_type):
+ elif all_numeric_accumulations == "cumsum" and (
+ pa.types.is_boolean(pa_type) or pa.types.is_decimal(pa_type)
+ ):
request.node.add_marker(
pytest.mark.xfail(
reason=f"{all_numeric_accumulations} not implemented for {pa_type}",
@@ -468,6 +488,12 @@ def test_reduce_series(self, data, all_numeric_reductions, skipna, request):
)
if all_numeric_reductions in {"skew", "kurt"}:
request.node.add_marker(xfail_mark)
+ elif (
+ all_numeric_reductions in {"var", "std", "median"}
+ and pa_version_under7p0
+ and pa.types.is_decimal(pa_dtype)
+ ):
+ request.node.add_marker(xfail_mark)
elif all_numeric_reductions == "sem" and pa_version_under8p0:
request.node.add_marker(xfail_mark)
@@ -590,8 +616,26 @@ def test_in_numeric_groupby(self, data_for_grouping):
class TestBaseDtype(base.BaseDtypeTests):
+ def test_check_dtype(self, data, request):
+ pa_dtype = data.dtype.pyarrow_dtype
+ if pa.types.is_decimal(pa_dtype) and pa_version_under8p0:
+ request.node.add_marker(
+ pytest.mark.xfail(
+ raises=ValueError,
+ reason="decimal string repr affects numpy comparison",
+ )
+ )
+ super().test_check_dtype(data)
+
def test_construct_from_string_own_name(self, dtype, request):
pa_dtype = dtype.pyarrow_dtype
+ if pa.types.is_decimal(pa_dtype):
+ request.node.add_marker(
+ pytest.mark.xfail(
+ raises=NotImplementedError,
+ reason=f"pyarrow.type_for_alias cannot infer {pa_dtype}",
+ )
+ )
if pa.types.is_string(pa_dtype):
# We still support StringDtype('pyarrow') over ArrowDtype(pa.string())
@@ -609,6 +653,13 @@ def test_is_dtype_from_name(self, dtype, request):
# We still support StringDtype('pyarrow') over ArrowDtype(pa.string())
assert not type(dtype).is_dtype(dtype.name)
else:
+ if pa.types.is_decimal(pa_dtype):
+ request.node.add_marker(
+ pytest.mark.xfail(
+ raises=NotImplementedError,
+ reason=f"pyarrow.type_for_alias cannot infer {pa_dtype}",
+ )
+ )
super().test_is_dtype_from_name(dtype)
def test_construct_from_string_another_type_raises(self, dtype):
@@ -627,6 +678,7 @@ def test_get_common_dtype(self, dtype, request):
)
or (pa.types.is_duration(pa_dtype) and pa_dtype.unit != "ns")
or pa.types.is_binary(pa_dtype)
+ or pa.types.is_decimal(pa_dtype)
):
request.node.add_marker(
pytest.mark.xfail(
@@ -700,6 +752,13 @@ def test_EA_types(self, engine, data, request):
request.node.add_marker(
pytest.mark.xfail(raises=TypeError, reason="GH 47534")
)
+ elif pa.types.is_decimal(pa_dtype):
+ request.node.add_marker(
+ pytest.mark.xfail(
+ raises=NotImplementedError,
+ reason=f"Parameterized types {pa_dtype} not supported.",
+ )
+ )
elif pa.types.is_timestamp(pa_dtype) and pa_dtype.unit in ("us", "ns"):
request.node.add_marker(
pytest.mark.xfail(
@@ -782,6 +841,13 @@ def test_argmin_argmax(
reason=f"{pa_dtype} only has 2 unique possible values",
)
)
+ elif pa.types.is_decimal(pa_dtype) and pa_version_under7p0:
+ request.node.add_marker(
+ pytest.mark.xfail(
+ reason=f"No pyarrow kernel for {pa_dtype}",
+ raises=pa.ArrowNotImplementedError,
+ )
+ )
super().test_argmin_argmax(data_for_sorting, data_missing_for_sorting, na_value)
@pytest.mark.parametrize(
@@ -800,6 +866,14 @@ def test_argmin_argmax(
def test_argreduce_series(
self, data_missing_for_sorting, op_name, skipna, expected, request
):
+ pa_dtype = data_missing_for_sorting.dtype.pyarrow_dtype
+ if pa.types.is_decimal(pa_dtype) and pa_version_under7p0 and skipna:
+ request.node.add_marker(
+ pytest.mark.xfail(
+ reason=f"No pyarrow kernel for {pa_dtype}",
+ raises=pa.ArrowNotImplementedError,
+ )
+ )
super().test_argreduce_series(
data_missing_for_sorting, op_name, skipna, expected
)
@@ -888,6 +962,21 @@ def test_basic_equals(self, data):
class TestBaseArithmeticOps(base.BaseArithmeticOpsTests):
divmod_exc = NotImplementedError
+ @classmethod
+ def assert_equal(cls, left, right, **kwargs):
+ if isinstance(left, pd.DataFrame):
+ left_pa_type = left.iloc[:, 0].dtype.pyarrow_dtype
+ right_pa_type = right.iloc[:, 0].dtype.pyarrow_dtype
+ else:
+ left_pa_type = left.dtype.pyarrow_dtype
+ right_pa_type = right.dtype.pyarrow_dtype
+ if pa.types.is_decimal(left_pa_type) or pa.types.is_decimal(right_pa_type):
+ # decimal precision can resize in the result type depending on data
+ # just compare the float values
+ left = left.astype("float[pyarrow]")
+ right = right.astype("float[pyarrow]")
+ tm.assert_equal(left, right, **kwargs)
+
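The float comparison in `assert_equal` above works around decimal result types diverging from their inputs. Python's `decimal` module shows the same scale-widening behaviour (a stdlib illustration of why the workaround is needed, not the pyarrow mechanics):

```python
from decimal import Decimal

# Multiplying two scale-1 decimals yields a scale-2 result, so the
# result dtype of a decimal operation need not match its inputs.
product = Decimal("1.5") * Decimal("2.0")
assert str(product) == "3.00"      # scale widened from 1 to 2

# Comparing through floats sidesteps the precision/scale mismatch:
assert float(product) == float(Decimal("3.0"))
```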
def get_op_from_name(self, op_name):
short_opname = op_name.strip("_")
if short_opname == "rtruediv":
@@ -967,7 +1056,11 @@ def _get_scalar_exception(self, opname, pa_dtype):
pa.types.is_string(pa_dtype) or pa.types.is_binary(pa_dtype)
):
exc = None
- elif not (pa.types.is_floating(pa_dtype) or pa.types.is_integer(pa_dtype)):
+ elif not (
+ pa.types.is_floating(pa_dtype)
+ or pa.types.is_integer(pa_dtype)
+ or pa.types.is_decimal(pa_dtype)
+ ):
exc = pa.ArrowNotImplementedError
else:
exc = None
@@ -980,7 +1073,11 @@ def _get_arith_xfail_marker(self, opname, pa_dtype):
if (
opname == "__rpow__"
- and (pa.types.is_floating(pa_dtype) or pa.types.is_integer(pa_dtype))
+ and (
+ pa.types.is_floating(pa_dtype)
+ or pa.types.is_integer(pa_dtype)
+ or pa.types.is_decimal(pa_dtype)
+ )
and not pa_version_under7p0
):
mark = pytest.mark.xfail(
@@ -998,14 +1095,32 @@ def _get_arith_xfail_marker(self, opname, pa_dtype):
),
)
elif (
- opname in {"__rfloordiv__"}
- and pa.types.is_integer(pa_dtype)
+ opname == "__rfloordiv__"
+ and (pa.types.is_integer(pa_dtype) or pa.types.is_decimal(pa_dtype))
and not pa_version_under7p0
):
mark = pytest.mark.xfail(
raises=pa.ArrowInvalid,
reason="divide by 0",
)
+ elif (
+ opname == "__rtruediv__"
+ and pa.types.is_decimal(pa_dtype)
+ and not pa_version_under7p0
+ ):
+ mark = pytest.mark.xfail(
+ raises=pa.ArrowInvalid,
+ reason="divide by 0",
+ )
+ elif (
+ opname == "__pow__"
+ and pa.types.is_decimal(pa_dtype)
+ and pa_version_under7p0
+ ):
+ mark = pytest.mark.xfail(
+ raises=pa.ArrowInvalid,
+ reason="Invalid decimal function: power_checked",
+ )
return mark
@@ -1226,6 +1341,9 @@ def test_arrowdtype_construct_from_string_type_with_unsupported_parameters():
expected = ArrowDtype(pa.timestamp("s", "UTC"))
assert dtype == expected
+ with pytest.raises(NotImplementedError, match="Passing pyarrow type"):
+ ArrowDtype.construct_from_string("decimal(7, 2)[pyarrow]")
+
@pytest.mark.parametrize(
"interpolation", ["linear", "lower", "higher", "nearest", "midpoint"]
@@ -1252,7 +1370,11 @@ def test_quantile(data, interpolation, quantile, request):
ser.quantile(q=quantile, interpolation=interpolation)
return
- if pa.types.is_integer(pa_dtype) or pa.types.is_floating(pa_dtype):
+ if (
+ pa.types.is_integer(pa_dtype)
+ or pa.types.is_floating(pa_dtype)
+ or (pa.types.is_decimal(pa_dtype) and not pa_version_under7p0)
+ ):
pass
elif pa.types.is_temporal(data._data.type):
pass
@@ -1293,7 +1415,11 @@ def test_quantile(data, interpolation, quantile, request):
else:
# Just check the values
expected = pd.Series(data.take([0, 0]), index=[0.5, 0.5])
- if pa.types.is_integer(pa_dtype) or pa.types.is_floating(pa_dtype):
+ if (
+ pa.types.is_integer(pa_dtype)
+ or pa.types.is_floating(pa_dtype)
+ or pa.types.is_decimal(pa_dtype)
+ ):
expected = expected.astype("float64[pyarrow]")
result = result.astype("float64[pyarrow]")
tm.assert_series_equal(result, expected)
diff --git a/pandas/tests/indexes/test_common.py b/pandas/tests/indexes/test_common.py
index 40440bd8e0ff8..d8e17e48a19b3 100644
--- a/pandas/tests/indexes/test_common.py
+++ b/pandas/tests/indexes/test_common.py
@@ -449,12 +449,7 @@ def test_hasnans_isnans(self, index_flat):
@pytest.mark.parametrize("na_position", [None, "middle"])
def test_sort_values_invalid_na_position(index_with_missing, na_position):
with pytest.raises(ValueError, match=f"invalid na_position: {na_position}"):
- with tm.maybe_produces_warning(
- PerformanceWarning,
- getattr(index_with_missing.dtype, "storage", "") == "pyarrow",
- check_stacklevel=False,
- ):
- index_with_missing.sort_values(na_position=na_position)
+ index_with_missing.sort_values(na_position=na_position)
@pytest.mark.parametrize("na_position", ["first", "last"])
| closes #34166 | https://api.github.com/repos/pandas-dev/pandas/pulls/50964 | 2023-01-24T22:33:36Z | 2023-02-22T20:31:24Z | 2023-02-22T20:31:23Z | 2023-02-22T23:52:50Z |
BUG: is_string_dtype returns True for ArrowDtype(pa.string()) | diff --git a/doc/source/whatsnew/v2.0.0.rst b/doc/source/whatsnew/v2.0.0.rst
index 3c9c861afd989..c28c9fdad1804 100644
--- a/doc/source/whatsnew/v2.0.0.rst
+++ b/doc/source/whatsnew/v2.0.0.rst
@@ -1021,7 +1021,7 @@ Conversion
Strings
^^^^^^^
-- Bug in :func:`pandas.api.dtypes.is_string_dtype` that would not return ``True`` for :class:`StringDtype` (:issue:`15585`)
+- Bug in :func:`pandas.api.dtypes.is_string_dtype` that would not return ``True`` for :class:`StringDtype` or :class:`ArrowDtype` with ``pyarrow.string()`` (:issue:`15585`)
- Bug in converting string dtypes to "datetime64[ns]" or "timedelta64[ns]" incorrectly raising ``TypeError`` (:issue:`36153`)
-
diff --git a/pandas/core/arrays/arrow/dtype.py b/pandas/core/arrays/arrow/dtype.py
index f5f87bea83b8f..3e3213b48670f 100644
--- a/pandas/core/arrays/arrow/dtype.py
+++ b/pandas/core/arrays/arrow/dtype.py
@@ -95,6 +95,9 @@ def name(self) -> str: # type: ignore[override]
@cache_readonly
def numpy_dtype(self) -> np.dtype:
"""Return an instance of the related numpy dtype"""
+ if pa.types.is_string(self.pyarrow_dtype):
+ # pa.string().to_pandas_dtype() = object which we don't want
+ return np.dtype(str)
try:
return np.dtype(self.pyarrow_dtype.to_pandas_dtype())
except (NotImplementedError, TypeError):
diff --git a/pandas/tests/extension/test_arrow.py b/pandas/tests/extension/test_arrow.py
index eef77ceabb6fe..9db49470edaf2 100644
--- a/pandas/tests/extension/test_arrow.py
+++ b/pandas/tests/extension/test_arrow.py
@@ -46,6 +46,7 @@
is_integer_dtype,
is_numeric_dtype,
is_signed_integer_dtype,
+ is_string_dtype,
is_unsigned_integer_dtype,
)
from pandas.tests.extension import base
@@ -651,6 +652,24 @@ def test_groupby_extension_agg(self, as_index, data_for_grouping, request):
):
super().test_groupby_extension_agg(as_index, data_for_grouping)
+ def test_in_numeric_groupby(self, data_for_grouping):
+ if is_string_dtype(data_for_grouping.dtype):
+ df = pd.DataFrame(
+ {
+ "A": [1, 1, 2, 2, 3, 3, 1, 4],
+ "B": data_for_grouping,
+ "C": [1, 1, 1, 1, 1, 1, 1, 1],
+ }
+ )
+
+ expected = pd.Index(["C"])
+ with pytest.raises(TypeError, match="does not support"):
+ df.groupby("A").sum().columns
+ result = df.groupby("A").sum(numeric_only=True).columns
+ tm.assert_index_equal(result, expected)
+ else:
+ super().test_in_numeric_groupby(data_for_grouping)
+
class TestBaseDtype(base.BaseDtypeTests):
def test_construct_from_string_own_name(self, dtype, request):
@@ -730,7 +749,6 @@ def test_get_common_dtype(self, dtype, request):
and (pa_dtype.unit != "ns" or pa_dtype.tz is not None)
)
or (pa.types.is_duration(pa_dtype) and pa_dtype.unit != "ns")
- or pa.types.is_string(pa_dtype)
or pa.types.is_binary(pa_dtype)
):
request.node.add_marker(
@@ -743,6 +761,13 @@ def test_get_common_dtype(self, dtype, request):
)
super().test_get_common_dtype(dtype)
+ def test_is_not_string_type(self, dtype):
+ pa_dtype = dtype.pyarrow_dtype
+ if pa.types.is_string(pa_dtype):
+ assert is_string_dtype(dtype)
+ else:
+ super().test_is_not_string_type(dtype)
+
class TestBaseIndex(base.BaseIndexTests):
pass
| Broken off #50325 | https://api.github.com/repos/pandas-dev/pandas/pulls/50963 | 2023-01-24T19:59:18Z | 2023-01-26T00:40:53Z | 2023-01-26T00:40:53Z | 2023-01-26T00:53:13Z |
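As a rough illustration of what `is_string_dtype` checks — shown with NumPy dtypes, whose behaviour is unchanged by this PR (the change itself concerns `ArrowDtype(pa.string())`, which needs pyarrow installed):

```python
import numpy as np
from pandas.api.types import is_string_dtype

# object dtype has long counted as string-like
assert is_string_dtype(np.dtype(object))
# numeric dtypes do not
assert not is_string_dtype(np.dtype("int64"))
print("ok")
```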
Enable pylint undefined-loop-variable warning | diff --git a/pandas/core/util/hashing.py b/pandas/core/util/hashing.py
index e0b18047aa0ec..350914cc50556 100644
--- a/pandas/core/util/hashing.py
+++ b/pandas/core/util/hashing.py
@@ -71,12 +71,14 @@ def combine_hash_arrays(
mult = np.uint64(1000003)
out = np.zeros_like(first) + np.uint64(0x345678)
+ last_i = 0
for i, a in enumerate(arrays):
inverse_i = num_items - i
out ^= a
out *= mult
mult += np.uint64(82520 + inverse_i + inverse_i)
- assert i + 1 == num_items, "Fed in wrong num_items"
+ last_i = i
+ assert last_i + 1 == num_items, "Fed in wrong num_items"
out += np.uint64(97531)
return out
diff --git a/pandas/io/excel/_base.py b/pandas/io/excel/_base.py
index d44bdc466aed9..6fb6d72bab099 100644
--- a/pandas/io/excel/_base.py
+++ b/pandas/io/excel/_base.py
@@ -736,7 +736,9 @@ def parse(
output = {}
+ last_sheetname = None
for asheetname in sheets:
+ last_sheetname = asheetname
if verbose:
print(f"Reading sheet {asheetname}")
@@ -888,10 +890,13 @@ def parse(
err.args = (f"{err.args[0]} (sheet: {asheetname})", *err.args[1:])
raise err
+ if last_sheetname is None:
+ raise ValueError("Sheet name is an empty list")
+
if ret_dict:
return output
else:
- return output[asheetname]
+ return output[last_sheetname]
@doc(storage_options=_shared_docs["storage_options"])
diff --git a/pandas/tests/frame/methods/test_to_dict_of_blocks.py b/pandas/tests/frame/methods/test_to_dict_of_blocks.py
index 9705f24d0286c..cc4860beea491 100644
--- a/pandas/tests/frame/methods/test_to_dict_of_blocks.py
+++ b/pandas/tests/frame/methods/test_to_dict_of_blocks.py
@@ -20,30 +20,34 @@ def test_copy_blocks(self, float_frame):
column = df.columns[0]
# use the default copy=True, change a column
+ _last_df = None
blocks = df._to_dict_of_blocks(copy=True)
for _df in blocks.values():
+ _last_df = _df
if column in _df:
_df.loc[:, column] = _df[column] + 1
# make sure we did not change the original DataFrame
- assert not _df[column].equals(df[column])
+ assert _last_df is not None and not _last_df[column].equals(df[column])
def test_no_copy_blocks(self, float_frame, using_copy_on_write):
# GH#9607
df = DataFrame(float_frame, copy=True)
column = df.columns[0]
+ _last_df = None
# use the copy=False, change a column
blocks = df._to_dict_of_blocks(copy=False)
for _df in blocks.values():
+ _last_df = _df
if column in _df:
_df.loc[:, column] = _df[column] + 1
if not using_copy_on_write:
# make sure we did change the original DataFrame
- assert _df[column].equals(df[column])
+ assert _last_df is not None and _last_df[column].equals(df[column])
else:
- assert not _df[column].equals(df[column])
+ assert _last_df is not None and not _last_df[column].equals(df[column])
def test_to_dict_of_blocks_item_cache(request, using_copy_on_write):
diff --git a/pandas/tests/io/formats/test_printing.py b/pandas/tests/io/formats/test_printing.py
index 3532f979665ec..803ed8d342a2f 100644
--- a/pandas/tests/io/formats/test_printing.py
+++ b/pandas/tests/io/formats/test_printing.py
@@ -122,14 +122,16 @@ class TestTableSchemaRepr:
def test_publishes(self, ip):
ipython = ip.instance(config=ip.config)
df = pd.DataFrame({"A": [1, 2]})
- objects = [df["A"], df, df] # dataframe / series
+ objects = [df["A"], df] # dataframe / series
expected_keys = [
{"text/plain", "application/vnd.dataresource+json"},
{"text/plain", "text/html", "application/vnd.dataresource+json"},
]
opt = pd.option_context("display.html.table_schema", True)
+ last_obj = None
for obj, expected in zip(objects, expected_keys):
+ last_obj = obj
with opt:
formatted = ipython.display_formatter.format(obj)
assert set(formatted[0].keys()) == expected
@@ -137,7 +139,7 @@ def test_publishes(self, ip):
with_latex = pd.option_context("styler.render.repr", "latex")
with opt, with_latex:
- formatted = ipython.display_formatter.format(obj)
+ formatted = ipython.display_formatter.format(last_obj)
expected = {
"text/plain",
diff --git a/pyproject.toml b/pyproject.toml
index dc237d32c022c..b8a2cb89ff3a0 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -287,7 +287,6 @@ disable = [
"too-many-public-methods",
"too-many-return-statements",
"too-many-statements",
- "undefined-loop-variable",
"unexpected-keyword-arg",
"ungrouped-imports",
"unsubscriptable-object",
| - [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
Enables the Pylint warning `undefined-loop-variable`. This contributes to (now-closed) #48855 and extends #49740.
Replaces several instances of
```
for x in my_list:
foo(x)
bar(x) # undefined-loop-variable
```
with
```
last_element = None
for x in my_list:
last_element = x
foo(x)
bar(last_element) # no undefined-loop-variable
``` | https://api.github.com/repos/pandas-dev/pandas/pulls/50961 | 2023-01-24T17:00:29Z | 2023-01-26T17:47:16Z | 2023-01-26T17:47:16Z | 2023-01-28T19:48:15Z |
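A runnable sketch of that rewrite, loosely modelled on the `combine_hash_arrays` change above (`combine` and its assertion message are illustrative, not pandas API):

```python
def combine(items):
    # track the loop index in a variable that is defined even when
    # `items` is empty, so the post-loop assertion never touches an
    # unbound loop variable
    last_i = -1
    total = 0
    for i, x in enumerate(items):
        last_i = i
        total += x
    assert last_i + 1 == len(items), "Fed in wrong number of items"
    return total

print(combine([1, 2, 3]))  # 6
print(combine([]))         # 0 -- no NameError on the empty case
```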
DOC: add documentation to DataFrameGroupBy.skew and SeriesGroupBy.skew | diff --git a/pandas/core/groupby/generic.py b/pandas/core/groupby/generic.py
index d6482b61a536d..815f9936057f4 100644
--- a/pandas/core/groupby/generic.py
+++ b/pandas/core/groupby/generic.py
@@ -1006,7 +1006,6 @@ def take(
result = self._op_via_apply("take", indices=indices, axis=axis, **kwargs)
return result
- @doc(Series.skew.__doc__)
def skew(
self,
axis: Axis | lib.NoDefault = lib.no_default,
@@ -1014,6 +1013,58 @@ def skew(
numeric_only: bool = False,
**kwargs,
) -> Series:
+ """
+ Return unbiased skew within groups.
+
+ Normalized by N-1.
+
+ Parameters
+ ----------
+ axis : {0 or 'index', 1 or 'columns', None}, default 0
+ Axis for the function to be applied on.
+ This parameter is only for compatibility with DataFrame and is unused.
+
+ skipna : bool, default True
+ Exclude NA/null values when computing the result.
+
+ numeric_only : bool, default False
+ Include only float, int, boolean columns. Not implemented for Series.
+
+ **kwargs
+ Additional keyword arguments to be passed to the function.
+
+ Returns
+ -------
+ Series
+
+ See Also
+ --------
+ Series.skew : Return unbiased skew over requested axis.
+
+ Examples
+ --------
+ >>> ser = pd.Series([390., 350., 357., np.nan, 22., 20., 30.],
+ ... index=['Falcon', 'Falcon', 'Falcon', 'Falcon',
+ ... 'Parrot', 'Parrot', 'Parrot'],
+ ... name="Max Speed")
+ >>> ser
+ Falcon 390.0
+ Falcon 350.0
+ Falcon 357.0
+ Falcon NaN
+ Parrot 22.0
+ Parrot 20.0
+ Parrot 30.0
+ Name: Max Speed, dtype: float64
+ >>> ser.groupby(level=0).skew()
+ Falcon 1.525174
+ Parrot 1.457863
+ Name: Max Speed, dtype: float64
+ >>> ser.groupby(level=0).skew(skipna=False)
+ Falcon NaN
+ Parrot 1.457863
+ Name: Max Speed, dtype: float64
+ """
result = self._op_via_apply(
"skew",
axis=axis,
@@ -2473,7 +2524,6 @@ def take(
result = self._op_via_apply("take", indices=indices, axis=axis, **kwargs)
return result
- @doc(DataFrame.skew.__doc__)
def skew(
self,
axis: Axis | None | lib.NoDefault = lib.no_default,
@@ -2481,6 +2531,69 @@ def skew(
numeric_only: bool = False,
**kwargs,
) -> DataFrame:
+ """
+ Return unbiased skew within groups.
+
+ Normalized by N-1.
+
+ Parameters
+ ----------
+ axis : {0 or 'index', 1 or 'columns', None}, default 0
+ Axis for the function to be applied on.
+
+ Specifying ``axis=None`` will apply the aggregation across both axes.
+
+ .. versionadded:: 2.0.0
+
+ skipna : bool, default True
+ Exclude NA/null values when computing the result.
+
+ numeric_only : bool, default False
+ Include only float, int, boolean columns.
+
+ **kwargs
+ Additional keyword arguments to be passed to the function.
+
+ Returns
+ -------
+ DataFrame
+
+ See Also
+ --------
+ DataFrame.skew : Return unbiased skew over requested axis.
+
+ Examples
+ --------
+ >>> arrays = [['falcon', 'parrot', 'cockatoo', 'kiwi',
+ ... 'lion', 'monkey', 'rabbit'],
+ ... ['bird', 'bird', 'bird', 'bird',
+ ... 'mammal', 'mammal', 'mammal']]
+ >>> index = pd.MultiIndex.from_arrays(arrays, names=('name', 'class'))
+ >>> df = pd.DataFrame({'max_speed': [389.0, 24.0, 70.0, np.nan,
+ ... 80.5, 21.5, 15.0]},
+ ... index=index)
+ >>> df
+ max_speed
+ name class
+ falcon bird 389.0
+ parrot bird 24.0
+ cockatoo bird 70.0
+ kiwi bird NaN
+ lion mammal 80.5
+ monkey mammal 21.5
+ rabbit mammal 15.0
+ >>> gb = df.groupby(["class"])
+ >>> gb.skew()
+ max_speed
+ class
+ bird 1.628296
+ mammal 1.669046
+ >>> gb.skew(skipna=False)
+ max_speed
+ class
+ bird NaN
+ mammal 1.669046
+ """
result = self._op_via_apply(
"skew",
axis=axis,
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
Contributes to https://github.com/noatamir/pyladies-berlin-sprints/issues/13
I copied the documentation from [here](https://github.com/pandas-dev/pandas/blob/main/pandas/core/generic.py#L11496) and [here](https://github.com/pandas-dev/pandas/blob/main/pandas/core/generic.py#L11825). Let me know if I should delete the lines in SeriesGroupBy.skew that apply to DataFrames and vice versa.
ping @phofl @noatamir
#PyLadies Berlin sprint
| https://api.github.com/repos/pandas-dev/pandas/pulls/50958 | 2023-01-24T15:37:41Z | 2023-02-05T13:21:52Z | 2023-02-05T13:21:52Z | 2023-02-06T20:25:41Z |
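The new docstring example can be exercised directly (values as in the docstring above; `skew` is the sample skewness normalized by N-1, with NaNs skipped by default):

```python
import numpy as np
import pandas as pd

ser = pd.Series(
    [390.0, 350.0, 357.0, np.nan, 22.0, 20.0, 30.0],
    index=["Falcon"] * 4 + ["Parrot"] * 3,
    name="Max Speed",
)
result = ser.groupby(level=0).skew()
print(result.round(6))  # Falcon 1.525174, Parrot 1.457863
```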
DOC: improve the wording | diff --git a/doc/source/reference/window.rst b/doc/source/reference/window.rst
index 0be3184a9356c..0839bb2e52efd 100644
--- a/doc/source/reference/window.rst
+++ b/doc/source/reference/window.rst
@@ -6,9 +6,9 @@
Window
======
-Rolling objects are returned by ``.rolling`` calls: :func:`pandas.DataFrame.rolling`, :func:`pandas.Series.rolling`, etc.
-Expanding objects are returned by ``.expanding`` calls: :func:`pandas.DataFrame.expanding`, :func:`pandas.Series.expanding`, etc.
-ExponentialMovingWindow objects are returned by ``.ewm`` calls: :func:`pandas.DataFrame.ewm`, :func:`pandas.Series.ewm`, etc.
+Rolling objects are returned by ``.rolling`` calls: :func:`pandas.DataFrame.rolling` and :func:`pandas.Series.rolling`.
+Expanding objects are returned by ``.expanding`` calls: :func:`pandas.DataFrame.expanding` and :func:`pandas.Series.expanding`.
+ExponentialMovingWindow objects are returned by ``.ewm`` calls: :func:`pandas.DataFrame.ewm` and :func:`pandas.Series.ewm`.
.. _api.functions_rolling:
each of `.rolling()`, `.expanding()` and `.ewm()` exists only on `DataFrame` and `Series`, so using "etc." is not appropriate.
- [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/50957 | 2023-01-24T15:04:00Z | 2023-01-24T18:42:58Z | 2023-01-24T18:42:58Z | 2023-01-24T20:36:47Z |
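A minimal usage sketch of the symmetry the corrected sentence describes — `.rolling` on a `Series` returns a Rolling object, just as on a `DataFrame`:

```python
import pandas as pd

s = pd.Series([1.0, 2.0, 3.0, 4.0])

# window of 2: the first position has no full window, hence NaN
rolled = s.rolling(window=2).sum()
print(list(rolled))  # [nan, 3.0, 5.0, 7.0]
```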
DEPR: execute deprecations for str.cat in v1.0 | diff --git a/doc/source/whatsnew/v1.0.0.rst b/doc/source/whatsnew/v1.0.0.rst
index cc4bab8b9a923..85d087082171f 100644
--- a/doc/source/whatsnew/v1.0.0.rst
+++ b/doc/source/whatsnew/v1.0.0.rst
@@ -62,6 +62,8 @@ Removal of prior version deprecations/changes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
- Removed the previously deprecated :meth:`Series.get_value`, :meth:`Series.set_value`, :meth:`DataFrame.get_value`, :meth:`DataFrame.set_value` (:issue:`17739`)
- Changed the default value of `inplace` in :meth:`DataFrame.set_index` and :meth:`Series.set_axis`. It now defaults to False (:issue:`27600`)
+- :meth:`pandas.Series.str.cat` now defaults to aligning ``others``, using ``join='left'`` (:issue:`27611`)
+- :meth:`pandas.Series.str.cat` does not accept list-likes *within* list-likes anymore (:issue:`27611`)
-
.. _whatsnew_1000.performance:
diff --git a/pandas/core/strings.py b/pandas/core/strings.py
index 54882d039f135..a0e73172be1e4 100644
--- a/pandas/core/strings.py
+++ b/pandas/core/strings.py
@@ -21,7 +21,12 @@
is_scalar,
is_string_like,
)
-from pandas.core.dtypes.generic import ABCIndexClass, ABCMultiIndex, ABCSeries
+from pandas.core.dtypes.generic import (
+ ABCDataFrame,
+ ABCIndexClass,
+ ABCMultiIndex,
+ ABCSeries,
+)
from pandas.core.dtypes.missing import isna
from pandas.core.algorithms import take_1d
@@ -2058,7 +2063,7 @@ def cons_row(x):
cons = self._orig._constructor
return cons(result, name=name, index=index)
- def _get_series_list(self, others, ignore_index=False):
+ def _get_series_list(self, others):
"""
Auxiliary function for :meth:`str.cat`. Turn potentially mixed input
into a list of Series (elements without an index must match the length
@@ -2066,122 +2071,56 @@ def _get_series_list(self, others, ignore_index=False):
Parameters
----------
- others : Series, Index, DataFrame, np.ndarray, list-like or list-like
- of objects that are Series, Index or np.ndarray (1-dim)
- ignore_index : boolean, default False
- Determines whether to forcefully align others with index of caller
+ others : Series, DataFrame, np.ndarray, list-like or list-like of
+ objects that are either Series, Index or np.ndarray (1-dim)
Returns
-------
- tuple : (others transformed into list of Series,
- boolean whether FutureWarning should be raised)
+ list : others transformed into list of Series
"""
-
- # Once str.cat defaults to alignment, this function can be simplified;
- # will not need `ignore_index` and the second boolean output anymore
-
from pandas import Series, DataFrame
# self._orig is either Series or Index
idx = self._orig if isinstance(self._orig, ABCIndexClass) else self._orig.index
- err_msg = (
- "others must be Series, Index, DataFrame, np.ndarray or "
- "list-like (either containing only strings or containing "
- "only objects of type Series/Index/list-like/np.ndarray)"
- )
-
# Generally speaking, all objects without an index inherit the index
# `idx` of the calling Series/Index - i.e. must have matching length.
- # Objects with an index (i.e. Series/Index/DataFrame) keep their own
- # index, *unless* ignore_index is set to True.
+ # Objects with an index (i.e. Series/Index/DataFrame) keep their own.
if isinstance(others, ABCSeries):
- warn = not others.index.equals(idx)
- # only reconstruct Series when absolutely necessary
- los = [
- Series(others.values, index=idx) if ignore_index and warn else others
- ]
- return (los, warn)
+ return [others]
elif isinstance(others, ABCIndexClass):
- warn = not others.equals(idx)
- los = [Series(others.values, index=(idx if ignore_index else others))]
- return (los, warn)
- elif isinstance(others, DataFrame):
- warn = not others.index.equals(idx)
- if ignore_index and warn:
- # without copy, this could change "others"
- # that was passed to str.cat
- others = others.copy()
- others.index = idx
- return ([others[x] for x in others], warn)
+ return [Series(others.values, index=others)]
+ elif isinstance(others, ABCDataFrame):
+ return [others[x] for x in others]
elif isinstance(others, np.ndarray) and others.ndim == 2:
others = DataFrame(others, index=idx)
- return ([others[x] for x in others], False)
+ return [others[x] for x in others]
elif is_list_like(others, allow_sets=False):
others = list(others) # ensure iterators do not get read twice etc
# in case of list-like `others`, all elements must be
- # either one-dimensional list-likes or scalars
- if all(is_list_like(x, allow_sets=False) for x in others):
+ # either Series/Index/np.ndarray (1-dim)...
+ if all(
+ isinstance(x, (ABCSeries, ABCIndexClass))
+ or (isinstance(x, np.ndarray) and x.ndim == 1)
+ for x in others
+ ):
los = []
- join_warn = False
- depr_warn = False
- # iterate through list and append list of series for each
- # element (which we check to be one-dimensional and non-nested)
- while others:
- nxt = others.pop(0) # nxt is guaranteed list-like by above
-
- # GH 21950 - DeprecationWarning
- # only allowing Series/Index/np.ndarray[1-dim] will greatly
- # simply this function post-deprecation.
- if not (
- isinstance(nxt, (Series, ABCIndexClass))
- or (isinstance(nxt, np.ndarray) and nxt.ndim == 1)
- ):
- depr_warn = True
-
- if not isinstance(
- nxt, (DataFrame, Series, ABCIndexClass, np.ndarray)
- ):
- # safety for non-persistent list-likes (e.g. iterators)
- # do not map indexed/typed objects; info needed below
- nxt = list(nxt)
-
- # known types for which we can avoid deep inspection
- no_deep = (
- isinstance(nxt, np.ndarray) and nxt.ndim == 1
- ) or isinstance(nxt, (Series, ABCIndexClass))
- # nested list-likes are forbidden:
- # -> elements of nxt must not be list-like
- is_legal = (no_deep and nxt.dtype == object) or all(
- not is_list_like(x) for x in nxt
- )
-
- # DataFrame is false positive of is_legal
- # because "x in df" returns column names
- if not is_legal or isinstance(nxt, DataFrame):
- raise TypeError(err_msg)
-
- nxt, wnx = self._get_series_list(nxt, ignore_index=ignore_index)
- los = los + nxt
- join_warn = join_warn or wnx
-
- if depr_warn:
- warnings.warn(
- "list-likes other than Series, Index, or "
- "np.ndarray WITHIN another list-like are "
- "deprecated and will be removed in a future "
- "version.",
- FutureWarning,
- stacklevel=4,
- )
- return (los, join_warn)
+ while others: # iterate through list and append each element
+ los = los + self._get_series_list(others.pop(0))
+ return los
+ # ... or just strings
elif all(not is_list_like(x) for x in others):
- return ([Series(others, index=idx)], False)
- raise TypeError(err_msg)
+ return [Series(others, index=idx)]
+ raise TypeError(
+ "others must be Series, Index, DataFrame, np.ndarray "
+ "or list-like (either containing only strings or "
+ "containing only objects of type Series/Index/"
+ "np.ndarray[1-dim])"
+ )
@forbid_nonstring_types(["bytes", "mixed", "mixed-integer"])
- def cat(self, others=None, sep=None, na_rep=None, join=None):
+ def cat(self, others=None, sep=None, na_rep=None, join="left"):
"""
Concatenate strings in the Series/Index with given separator.
@@ -2215,16 +2154,15 @@ def cat(self, others=None, sep=None, na_rep=None, join=None):
- If `na_rep` is None, and `others` is not None, a row containing a
missing value in any of the columns (before concatenation) will
have a missing value in the result.
- join : {'left', 'right', 'outer', 'inner'}, default None
+ join : {'left', 'right', 'outer', 'inner'}, default 'left'
Determines the join-style between the calling Series/Index and any
Series/Index/DataFrame in `others` (objects without an index need
- to match the length of the calling Series/Index). If None,
- alignment is disabled, but this option will be removed in a future
- version of pandas and replaced with a default of `'left'`. To
- disable alignment, use `.values` on any Series/Index/DataFrame in
- `others`.
+ to match the length of the calling Series/Index). To disable
+ alignment, use `.values` on any Series/Index/DataFrame in `others`.
.. versionadded:: 0.23.0
+ .. versionchanged:: 1.0.0
+ Changed default of `join` from None to `'left'`.
Returns
-------
@@ -2340,39 +2278,14 @@ def cat(self, others=None, sep=None, na_rep=None, join=None):
try:
# turn anything in "others" into lists of Series
- others, warn = self._get_series_list(others, ignore_index=(join is None))
+ others = self._get_series_list(others)
except ValueError: # do not catch TypeError raised by _get_series_list
- if join is None:
- raise ValueError(
- "All arrays must be same length, except "
- "those having an index if `join` is not None"
- )
- else:
- raise ValueError(
- "If `others` contains arrays or lists (or "
- "other list-likes without an index), these "
- "must all be of the same length as the "
- "calling Series/Index."
- )
-
- if join is None and warn:
- warnings.warn(
- "A future version of pandas will perform index "
- "alignment when `others` is a Series/Index/"
- "DataFrame (or a list-like containing one). To "
- "disable alignment (the behavior before v.0.23) and "
- "silence this warning, use `.values` on any Series/"
- "Index/DataFrame in `others`. To enable alignment "
- "and silence this warning, pass `join='left'|"
- "'outer'|'inner'|'right'`. The future default will "
- "be `join='left'`.",
- FutureWarning,
- stacklevel=3,
+ raise ValueError(
+ "If `others` contains arrays or lists (or other "
+ "list-likes without an index), these must all be "
+ "of the same length as the calling Series/Index."
)
- # if join is None, _get_series_list already force-aligned indexes
- join = "left" if join is None else join
-
# align if required
if any(not data.index.equals(x.index) for x in others):
# Need to add keys for uniqueness in case of duplicate columns
diff --git a/pandas/tests/test_strings.py b/pandas/tests/test_strings.py
index bc848a528f2fd..bc8dc7272a83a 100644
--- a/pandas/tests/test_strings.py
+++ b/pandas/tests/test_strings.py
@@ -384,7 +384,7 @@ def test_str_cat_name(self, box, other):
other = other(values)
else:
other = values
- result = box(values, name="name").str.cat(other, sep=",", join="left")
+ result = box(values, name="name").str.cat(other, sep=",")
assert result.name == "name"
@pytest.mark.parametrize("box", [Series, Index])
@@ -418,12 +418,9 @@ def test_str_cat(self, box):
assert_series_or_index_equal(result, expected)
# errors for incorrect lengths
- rgx = "All arrays must be same length, except those having an index.*"
+ rgx = r"If `others` contains arrays or lists \(or other list-likes.*"
z = Series(["1", "2", "3"])
- with pytest.raises(ValueError, match=rgx):
- s.str.cat(z)
-
with pytest.raises(ValueError, match=rgx):
s.str.cat(z.values)
@@ -452,14 +449,12 @@ def test_str_cat_categorical(self, box, dtype_caller, dtype_target, sep):
expected = Index(["ab", "aa", "bb", "ac"])
expected = expected if box == Index else Series(expected, index=s)
- # Series/Index with unaligned Index
- with tm.assert_produces_warning(expected_warning=FutureWarning):
- # FutureWarning to switch to alignment by default
- result = s.str.cat(t, sep=sep)
- assert_series_or_index_equal(result, expected)
+ # Series/Index with unaligned Index -> t.values
+ result = s.str.cat(t.values, sep=sep)
+ assert_series_or_index_equal(result, expected)
# Series/Index with Series having matching Index
- t = Series(t, index=s)
+ t = Series(t.values, index=s)
result = s.str.cat(t, sep=sep)
assert_series_or_index_equal(result, expected)
@@ -468,11 +463,14 @@ def test_str_cat_categorical(self, box, dtype_caller, dtype_target, sep):
assert_series_or_index_equal(result, expected)
# Series/Index with Series having different Index
- t = Series(t.values, index=t)
- with tm.assert_produces_warning(expected_warning=FutureWarning):
- # FutureWarning to switch to alignment by default
- result = s.str.cat(t, sep=sep)
- assert_series_or_index_equal(result, expected)
+ t = Series(t.values, index=t.values)
+ expected = Index(["aa", "aa", "aa", "bb", "bb"])
+ expected = (
+ expected if box == Index else Series(expected, index=expected.str[:1])
+ )
+
+ result = s.str.cat(t, sep=sep)
+ assert_series_or_index_equal(result, expected)
# test integer/float dtypes (inferred by constructor) and mixed
@pytest.mark.parametrize(
@@ -523,55 +521,33 @@ def test_str_cat_mixed_inputs(self, box):
result = s.str.cat([t, s.values])
assert_series_or_index_equal(result, expected)
- # Series/Index with list of list-likes
- with tm.assert_produces_warning(expected_warning=FutureWarning):
- # nested list-likes will be deprecated
- result = s.str.cat([t.values, list(s)])
- assert_series_or_index_equal(result, expected)
-
# Series/Index with list of Series; different indexes
t.index = ["b", "c", "d", "a"]
- with tm.assert_produces_warning(expected_warning=FutureWarning):
- # FutureWarning to switch to alignment by default
- result = s.str.cat([t, s])
- assert_series_or_index_equal(result, expected)
+ expected = box(["aDa", "bAb", "cBc", "dCd"])
+ expected = expected if box == Index else Series(expected.values, index=s.values)
+ result = s.str.cat([t, s])
+ assert_series_or_index_equal(result, expected)
- # Series/Index with mixed list; different indexes
- with tm.assert_produces_warning(expected_warning=FutureWarning):
- # FutureWarning to switch to alignment by default
- result = s.str.cat([t, s.values])
- assert_series_or_index_equal(result, expected)
+ # Series/Index with mixed list; different index
+ result = s.str.cat([t, s.values])
+ assert_series_or_index_equal(result, expected)
# Series/Index with DataFrame; different indexes
d.index = ["b", "c", "d", "a"]
- with tm.assert_produces_warning(expected_warning=FutureWarning):
- # FutureWarning to switch to alignment by default
- result = s.str.cat(d)
- assert_series_or_index_equal(result, expected)
-
- # Series/Index with iterator of list-likes
- with tm.assert_produces_warning(expected_warning=FutureWarning):
- # nested list-likes will be deprecated
- result = s.str.cat(iter([t.values, list(s)]))
- assert_series_or_index_equal(result, expected)
+ expected = box(["aDd", "bAa", "cBb", "dCc"])
+ expected = expected if box == Index else Series(expected.values, index=s.values)
+ result = s.str.cat(d)
+ assert_series_or_index_equal(result, expected)
# errors for incorrect lengths
- rgx = "All arrays must be same length, except those having an index.*"
+ rgx = r"If `others` contains arrays or lists \(or other list-likes.*"
z = Series(["1", "2", "3"])
e = concat([z, z], axis=1)
- # DataFrame
- with pytest.raises(ValueError, match=rgx):
- s.str.cat(e)
-
# two-dimensional ndarray
with pytest.raises(ValueError, match=rgx):
s.str.cat(e.values)
- # list of Series
- with pytest.raises(ValueError, match=rgx):
- s.str.cat([z, s])
-
# list of list-likes
with pytest.raises(ValueError, match=rgx):
s.str.cat([z.values, s.values])
@@ -615,6 +591,10 @@ def test_str_cat_mixed_inputs(self, box):
with pytest.raises(TypeError, match=rgx):
s.str.cat(1)
+ # nested list-likes
+ with pytest.raises(TypeError, match=rgx):
+ s.str.cat(iter([t.values, list(s)]))
+
@pytest.mark.parametrize("join", ["left", "outer", "inner", "right"])
@pytest.mark.parametrize("box", [Series, Index])
def test_str_cat_align_indexed(self, box, join):
@@ -660,10 +640,9 @@ def test_str_cat_align_mixed_inputs(self, join):
result = s.str.cat([t, u], join=join, na_rep="-")
tm.assert_series_equal(result, expected)
- with tm.assert_produces_warning(expected_warning=FutureWarning):
- # nested list-likes will be deprecated
- result = s.str.cat([t, list(u)], join=join, na_rep="-")
- tm.assert_series_equal(result, expected)
+ with pytest.raises(TypeError, match="others must be Series,.*"):
+ # nested lists are forbidden
+ s.str.cat([t, list(u)], join=join)
# errors for incorrect lengths
rgx = r"If `others` contains arrays or lists \(or other list-likes.*"
| - [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
This executes some deprecations from the 0.23/0.24 cycles, namely following the switch from `join=None` to `join='left'` (following #18657 / #20347), as well as the fact that list-likes *within* list-likes were deprecated (following #21950 / #22264).
Since this deprecation closes the loop to my first-ever pandas issue (#18657) and PR (#20347), I'll allow myself a bit of a digression here. After biting off a bit more than I could chew (after finding out just how many cases the method was supposed to support), things got a bit tight with the cut-off for `0.23.0rc`, and @TomAugspurger [quipped](https://github.com/pandas-dev/pandas/pull/20347#issuecomment-385686992):
> > @h-vetinari: Thanks for waiting with the cut-off! [...]
> @TomAugspurger: Right, we'll keep you on the hook for future maintenance of this :)
After #20347 #20790 #20842 #20845 #20923 #21330 #21950 #22264 #22575 #22652 #22721 (-->#23011 #23163 #23167 #23582 ... and several more issues/PRs around `.str`-accessor) #22722 #22725 #23009 (-->#23061 #23065) #23187 #23443 #23723 #23725 #24044 #24045 #26605 #26607 etc., I think I can say that I upheld my end of the bargain. :) | https://api.github.com/repos/pandas-dev/pandas/pulls/27611 | 2019-07-26T14:59:11Z | 2019-07-31T12:32:36Z | 2019-07-31T12:32:35Z | 2019-07-31T13:21:15Z |
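The executed deprecation in practice — with the new `join='left'` default, any indexed `others` is aligned to the caller first (a small sketch, not taken from the PR itself):

```python
import pandas as pd

s = pd.Series(["a", "b", "c"])
t = pd.Series(["d", "e", "f"], index=[2, 1, 0])

# join='left' is now the default: `t` is aligned to `s`'s index
result = s.str.cat(t)
print(list(result))  # ['af', 'be', 'cd']

# the old positional (non-aligning) behaviour remains available via .values
positional = s.str.cat(t.values)
print(list(positional))  # ['ad', 'be', 'cf']
```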
BUG: break reference cycle in Index._engine | diff --git a/asv_bench/benchmarks/index_object.py b/asv_bench/benchmarks/index_object.py
index 6541ddcb0397d..49834ae94cc38 100644
--- a/asv_bench/benchmarks/index_object.py
+++ b/asv_bench/benchmarks/index_object.py
@@ -1,3 +1,4 @@
+import gc
import numpy as np
import pandas.util.testing as tm
from pandas import (
@@ -225,4 +226,21 @@ def time_intersection_both_duplicate(self, N):
self.intv.intersection(self.intv2)
+class GC:
+ params = [1, 2, 5]
+
+ def create_use_drop(self):
+ idx = Index(list(range(1000 * 1000)))
+ idx._engine
+
+ def peakmem_gc_instances(self, N):
+ try:
+ gc.disable()
+
+ for _ in range(N):
+ self.create_use_drop()
+ finally:
+ gc.enable()
+
+
from .pandas_vb_common import setup # noqa: F401
diff --git a/doc/source/whatsnew/v0.25.1.rst b/doc/source/whatsnew/v0.25.1.rst
index b5bd83fd17530..0609694a9640f 100644
--- a/doc/source/whatsnew/v0.25.1.rst
+++ b/doc/source/whatsnew/v0.25.1.rst
@@ -83,6 +83,7 @@ Indexing
^^^^^^^^
- Bug in partial-string indexing returning a NumPy array rather than a ``Series`` when indexing with a scalar like ``.loc['2015']`` (:issue:`27516`)
+- Break reference cycle involving :class:`Index` to allow garbage collection of :class:`Index` objects without running the GC. (:issue:`27585`)
-
-
diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index 5fc477ed3b33c..97bb360cf2726 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -689,7 +689,11 @@ def _cleanup(self):
@cache_readonly
def _engine(self):
# property, for now, slow to look up
- return self._engine_type(lambda: self._ndarray_values, len(self))
+
+ # to avoid a reference cycle, bind `_ndarray_values` to a local variable, so
+ # `self` is not passed into the lambda.
+ _ndarray_values = self._ndarray_values
+ return self._engine_type(lambda: _ndarray_values, len(self))
# --------------------------------------------------------------------
# Array-Like Methods
diff --git a/pandas/tests/indexes/test_base.py b/pandas/tests/indexes/test_base.py
index c40a9bce9385b..34d82525495fc 100644
--- a/pandas/tests/indexes/test_base.py
+++ b/pandas/tests/indexes/test_base.py
@@ -1,5 +1,6 @@
from collections import defaultdict
from datetime import datetime, timedelta
+import gc
from io import StringIO
import math
import operator
@@ -2424,6 +2425,13 @@ def test_deprecated_contains(self):
with tm.assert_produces_warning(FutureWarning):
index.contains(1)
+ def test_engine_reference_cycle(self):
+ # https://github.com/pandas-dev/pandas/issues/27585
+ index = pd.Index([1, 2, 3])
+ nrefs_pre = len(gc.get_referrers(index))
+ index._engine
+ assert len(gc.get_referrers(index)) == nrefs_pre
+
class TestMixedIntIndex(Base):
# Mostly the tests from common.py for which the results differ
| - [X] closes #27585
- [X] tests added / passed
- [X] passes `black pandas`
- [X] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [X] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/27607 | 2019-07-26T06:57:23Z | 2019-08-08T20:43:27Z | 2019-08-08T20:43:26Z | 2019-08-08T20:43:29Z |
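The fix above works because a lambda only keeps alive the names it actually closes over. A minimal standalone sketch of the pattern (the class and method names are illustrative, not pandas API):

```python
import gc


class Cached:
    """Illustrative stand-in for an Index caching an engine."""

    def __init__(self, values):
        self._values = values

    def engine_cyclic(self):
        # BAD: the lambda closes over `self`; caching it on `self` forms a
        # cycle (self -> lambda -> closure cell -> self) that only the
        # cyclic garbage collector can reclaim.
        self._engine = lambda: self._values
        return self._engine

    def engine_acyclic(self):
        # GOOD: bind the data to a local first; the lambda captures only
        # the values, so plain reference counting can free `self`.
        values = self._values
        self._engine = lambda: values
        return self._engine


obj = Cached([1, 2, 3])
nrefs_pre = len(gc.get_referrers(obj))
engine = obj.engine_acyclic()
nrefs_post = len(gc.get_referrers(obj))  # no new referrer to obj
```

This mirrors the `test_engine_reference_cycle` test added in the diff, which checks `gc.get_referrers(index)` before and after touching `index._engine`.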
CLN: one less try/except in Block methods | diff --git a/pandas/core/internals/blocks.py b/pandas/core/internals/blocks.py
index 83849ea41d032..6e5a2aab298c7 100644
--- a/pandas/core/internals/blocks.py
+++ b/pandas/core/internals/blocks.py
@@ -644,7 +644,9 @@ def _astype(self, dtype, copy=False, errors="raise", values=None, **kwargs):
if isinstance(values, np.ndarray):
values = values.reshape(self.shape)
- except Exception: # noqa: E722
+ except Exception:
+ # e.g. astype_nansafe can fail on object-dtype of strings
+ # trying to convert to float
if errors == "raise":
raise
newb = self.copy() if copy else self
@@ -877,9 +879,17 @@ def setitem(self, indexer, value):
# coerce if block dtype can store value
values = self.values
- try:
+ if self._can_hold_element(value):
value = self._try_coerce_args(value)
- except (TypeError, ValueError):
+
+ values = self._coerce_values(values)
+ # can keep its own dtype
+ if hasattr(value, "dtype") and is_dtype_equal(values.dtype, value.dtype):
+ dtype = self.dtype
+ else:
+ dtype = "infer"
+
+ else:
# current dtype cannot store value, coerce to common dtype
find_dtype = False
@@ -902,13 +912,6 @@ def setitem(self, indexer, value):
if not is_dtype_equal(self.dtype, dtype):
b = self.astype(dtype)
return b.setitem(indexer, value)
- else:
- values = self._coerce_values(values)
- # can keep its own dtype
- if hasattr(value, "dtype") and is_dtype_equal(values.dtype, value.dtype):
- dtype = self.dtype
- else:
- dtype = "infer"
# value must be storeable at this moment
arr_value = np.array(value)
@@ -938,7 +941,7 @@ def setitem(self, indexer, value):
elif (
len(arr_value.shape)
and arr_value.shape[0] == values.shape[0]
- and np.prod(arr_value.shape) == np.prod(values.shape)
+ and arr_value.size == values.size
):
values[indexer] = value
try:
@@ -1134,9 +1137,7 @@ def coerce_to_target_dtype(self, other):
try:
return self.astype(dtype)
except (ValueError, TypeError, OverflowError):
- pass
-
- return self.astype(object)
+ return self.astype(object)
def interpolate(
self,
diff --git a/pandas/core/internals/construction.py b/pandas/core/internals/construction.py
index 74b16f0e72883..3126b9d9d3e2e 100644
--- a/pandas/core/internals/construction.py
+++ b/pandas/core/internals/construction.py
@@ -213,7 +213,7 @@ def init_dict(data, index, columns, dtype=None):
arrays = Series(data, index=columns, dtype=object)
data_names = arrays.index
- missing = arrays.isnull()
+ missing = arrays.isna()
if index is None:
# GH10856
# raise ValueError if only scalars in dict
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 59f39758cb3b0..6054f592f7409 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -112,7 +112,7 @@ def remove_na(arr):
"""
warnings.warn(
- "remove_na is deprecated and is a private " "function. Do not use.",
+ "remove_na is deprecated and is a private function. Do not use.",
FutureWarning,
stacklevel=2,
)
@@ -127,7 +127,7 @@ def _coerce_method(converter):
def wrapper(self):
if len(self) == 1:
return converter(self.iloc[0])
- raise TypeError("cannot convert the series to " "{0}".format(str(converter)))
+ raise TypeError("cannot convert the series to {0}".format(str(converter)))
wrapper.__name__ = "__{name}__".format(name=converter.__name__)
return wrapper
@@ -226,7 +226,7 @@ def __init__(
if isinstance(data, MultiIndex):
raise NotImplementedError(
- "initializing a Series from a " "MultiIndex is not supported"
+ "initializing a Series from a MultiIndex is not supported"
)
elif isinstance(data, Index):
if name is None:
@@ -275,7 +275,7 @@ def __init__(
pass
elif isinstance(data, (set, frozenset)):
raise TypeError(
- "{0!r} type is unordered" "".format(data.__class__.__name__)
+ "{0!r} type is unordered".format(data.__class__.__name__)
)
elif isinstance(data, ABCSparseArray):
# handle sparse passed here (and force conversion)
@@ -604,7 +604,7 @@ def asobject(self):
*this is an internal non-public method*
"""
warnings.warn(
- "'asobject' is deprecated. Use 'astype(object)'" " instead",
+ "'asobject' is deprecated. Use 'astype(object)' instead",
FutureWarning,
stacklevel=2,
)
@@ -710,7 +710,7 @@ def put(self, *args, **kwargs):
numpy.ndarray.put
"""
warnings.warn(
- "`put` has been deprecated and will be removed in a" "future version.",
+ "`put` has been deprecated and will be removed in a future version.",
FutureWarning,
stacklevel=2,
)
@@ -955,7 +955,7 @@ def real(self):
.. deprecated 0.25.0
"""
warnings.warn(
- "`real` has be deprecated and will be removed in a " "future verison",
+ "`real` has be deprecated and will be removed in a future version",
FutureWarning,
stacklevel=2,
)
@@ -973,7 +973,7 @@ def imag(self):
.. deprecated 0.25.0
"""
warnings.warn(
- "`imag` has be deprecated and will be removed in a " "future verison",
+ "`imag` has be deprecated and will be removed in a future version",
FutureWarning,
stacklevel=2,
)
@@ -1561,7 +1561,7 @@ def reset_index(self, level=None, drop=False, name=None, inplace=False):
).__finalize__(self)
elif inplace:
raise TypeError(
- "Cannot reset_index inplace on a Series " "to create a DataFrame"
+ "Cannot reset_index inplace on a Series to create a DataFrame"
)
else:
df = self.to_frame(name)
@@ -1813,7 +1813,7 @@ def to_sparse(self, kind="block", fill_value=None):
"""
warnings.warn(
- "Series.to_sparse is deprecated and will be removed " "in a future version",
+ "Series.to_sparse is deprecated and will be removed in a future version",
FutureWarning,
stacklevel=2,
)
@@ -4055,7 +4055,7 @@ def _reduce(
elif isinstance(delegate, np.ndarray):
if numeric_only:
raise NotImplementedError(
- "Series.{0} does not implement " "numeric_only.".format(name)
+ "Series.{0} does not implement numeric_only.".format(name)
)
with np.errstate(all="ignore"):
return op(delegate, skipna=skipna, **kwds)
| small unrelated cleanups | https://api.github.com/repos/pandas-dev/pandas/pulls/27606 | 2019-07-26T03:43:21Z | 2019-07-26T11:44:24Z | 2019-07-26T11:44:24Z | 2019-07-26T15:26:40Z |
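One of the small cleanups in this diff swaps `np.prod(arr_value.shape)` for `arr_value.size`. The two agree for any ndarray, but `size` is a stored attribute rather than a reduction computed over the shape tuple. A quick illustration of the equivalence used by the `setitem` fast path:

```python
import numpy as np

arr_value = np.arange(12).reshape(3, 4)
values = np.zeros((3, 4))

# `size` is the total element count, identical to the product of the shape.
assert arr_value.size == np.prod(arr_value.shape)

# The setitem branch only needs to know the element counts match:
shapes_compatible = (
    len(arr_value.shape)
    and arr_value.shape[0] == values.shape[0]
    and arr_value.size == values.size
)
```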
CLN: simplify indexing code | diff --git a/pandas/core/indexing.py b/pandas/core/indexing.py
index cb33044c4e23f..a1a8619fab892 100755
--- a/pandas/core/indexing.py
+++ b/pandas/core/indexing.py
@@ -164,7 +164,7 @@ def _slice(self, obj, axis: int, kind=None):
def _get_setitem_indexer(self, key):
if self.axis is not None:
- return self._convert_tuple(key, is_setter=True)
+ return self._convert_tuple(key)
ax = self.obj._get_axis(0)
@@ -176,7 +176,7 @@ def _get_setitem_indexer(self, key):
if isinstance(key, tuple):
try:
- return self._convert_tuple(key, is_setter=True)
+ return self._convert_tuple(key)
except IndexingError:
pass
@@ -185,7 +185,7 @@ def _get_setitem_indexer(self, key):
axis = self.axis or 0
try:
- return self._convert_to_indexer(key, axis=axis, is_setter=True)
+ return self._convert_to_indexer(key, axis=axis)
except TypeError as e:
# invalid indexer type vs 'other' indexing errors
@@ -241,22 +241,20 @@ def _is_nested_tuple_indexer(self, tup: Tuple):
return any(is_nested_tuple(tup, ax) for ax in self.obj.axes)
return False
- def _convert_tuple(self, key, is_setter: bool = False):
+ def _convert_tuple(self, key):
keyidx = []
if self.axis is not None:
axis = self.obj._get_axis_number(self.axis)
for i in range(self.ndim):
if i == axis:
- keyidx.append(
- self._convert_to_indexer(key, axis=axis, is_setter=is_setter)
- )
+ keyidx.append(self._convert_to_indexer(key, axis=axis))
else:
keyidx.append(slice(None))
else:
for i, k in enumerate(key):
if i >= self.ndim:
raise IndexingError("Too many indexers")
- idx = self._convert_to_indexer(k, axis=i, is_setter=is_setter)
+ idx = self._convert_to_indexer(k, axis=i)
keyidx.append(idx)
return tuple(keyidx)
@@ -1184,9 +1182,7 @@ def _validate_read_indexer(
if not (ax.is_categorical() or ax.is_interval()):
warnings.warn(_missing_key_warning, FutureWarning, stacklevel=6)
- def _convert_to_indexer(
- self, obj, axis: int, is_setter: bool = False, raise_missing: bool = False
- ):
+ def _convert_to_indexer(self, obj, axis: int, raise_missing: bool = False):
"""
Convert indexing key into something we can use to do actual fancy
indexing on an ndarray
@@ -1210,10 +1206,8 @@ def _convert_to_indexer(
try:
obj = self._convert_scalar_indexer(obj, axis)
except TypeError:
-
# but we will allow setting
- if is_setter:
- pass
+ pass
# see if we are positional in nature
is_int_index = labels.is_integer()
@@ -1224,7 +1218,7 @@ def _convert_to_indexer(
return labels.get_loc(obj)
except LookupError:
if isinstance(obj, tuple) and isinstance(labels, MultiIndex):
- if is_setter and len(obj) == labels.nlevels:
+ if len(obj) == labels.nlevels:
return {"key": obj}
raise
except TypeError:
@@ -1238,17 +1232,14 @@ def _convert_to_indexer(
# if we are setting and its not a valid location
# its an insert which fails by definition
- if is_setter:
+ if self.name == "loc":
# always valid
- if self.name == "loc":
- return {"key": obj}
+ return {"key": obj}
+ if obj >= self.obj.shape[axis] and not isinstance(labels, MultiIndex):
# a positional
- if obj >= self.obj.shape[axis] and not isinstance(labels, MultiIndex):
- raise ValueError(
- "cannot set by positional indexing with enlargement"
- )
+ raise ValueError("cannot set by positional indexing with enlargement")
return obj
@@ -1263,14 +1254,13 @@ def _convert_to_indexer(
return inds
else:
# When setting, missing keys are not allowed, even with .loc:
- kwargs = {"raise_missing": True if is_setter else raise_missing}
- return self._get_listlike_indexer(obj, axis, **kwargs)[1]
+ return self._get_listlike_indexer(obj, axis, raise_missing=True)[1]
else:
try:
return labels.get_loc(obj)
except LookupError:
# allow a not found key only if we are a setter
- if not is_list_like_indexer(obj) and is_setter:
+ if not is_list_like_indexer(obj):
return {"key": obj}
raise
@@ -2127,9 +2117,7 @@ def _getitem_axis(self, key, axis: int):
return self._get_loc(key, axis=axis)
# raise_missing is included for compat with the parent class signature
- def _convert_to_indexer(
- self, obj, axis: int, is_setter: bool = False, raise_missing: bool = False
- ):
+ def _convert_to_indexer(self, obj, axis: int, raise_missing: bool = False):
""" much simpler as we only have to deal with our valid types """
# make need to convert a float key
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 59f39758cb3b0..0929eec9653fa 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -1077,7 +1077,7 @@ def _ixs(self, i: int, axis: int = 0):
else:
return values[i]
- def _slice(self, slobj, axis=0, kind=None):
+ def _slice(self, slobj: slice, axis: int = 0, kind=None):
slobj = self.index._convert_slice_indexer(slobj, kind=kind or "getitem")
return self._get_values(slobj)
diff --git a/pandas/core/sparse/series.py b/pandas/core/sparse/series.py
index 0c9c3a682596f..fc51c06b149fd 100644
--- a/pandas/core/sparse/series.py
+++ b/pandas/core/sparse/series.py
@@ -20,7 +20,6 @@
from pandas.core import generic
from pandas.core.arrays import SparseArray
from pandas.core.arrays.sparse import SparseAccessor
-from pandas.core.index import Index
from pandas.core.internals import SingleBlockManager
import pandas.core.ops as ops
from pandas.core.series import Series
@@ -318,23 +317,23 @@ def _set_subtyp(self, is_all_dates):
# ----------------------------------------------------------------------
# Indexing Methods
- def _ixs(self, i, axis=0):
+ def _ixs(self, i: int, axis: int = 0):
"""
Return the i-th value or values in the SparseSeries by location
Parameters
----------
- i : int, slice, or sequence of integers
+ i : int
+ axis: int
+ default 0, ignored
Returns
-------
value : scalar (int) or Series (slice, sequence)
"""
- label = self.index[i]
- if isinstance(label, Index):
- return self.take(i, axis=axis)
- else:
- return self._get_val_at(i)
+ assert is_integer(i), i
+ # equiv: self._get_val_at(i) since we have an integer
+ return self.values[i]
def _get_val_at(self, loc):
""" forward to the array """
| - [ ] closes #xxxx
- [ ] tests added / passed
- [ ] passes `black pandas`
- [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/27604 | 2019-07-26T02:58:01Z | 2019-07-26T11:45:36Z | 2019-07-26T11:45:36Z | 2019-07-26T15:26:13Z |
Removed get_value benchmarks | diff --git a/asv_bench/benchmarks/indexing.py b/asv_bench/benchmarks/indexing.py
index 720bd0245be41..84604b8196536 100644
--- a/asv_bench/benchmarks/indexing.py
+++ b/asv_bench/benchmarks/indexing.py
@@ -129,10 +129,6 @@ def time_getitem_label_slice(self, index, index_structure):
def time_getitem_pos_slice(self, index, index_structure):
self.s[:80000]
- def time_get_value(self, index, index_structure):
- with warnings.catch_warnings(record=True):
- self.s.get_value(self.lbl)
-
def time_getitem_scalar(self, index, index_structure):
self.s[self.lbl]
@@ -151,10 +147,6 @@ def setup(self):
self.bool_indexer = self.df[self.col_scalar] > 0
self.bool_obj_indexer = self.bool_indexer.astype(object)
- def time_get_value(self):
- with warnings.catch_warnings(record=True):
- self.df.get_value(self.idx_scalar, self.col_scalar)
-
def time_ix(self):
with warnings.catch_warnings(record=True):
self.df.ix[self.idx_scalar, self.col_scalar]
| Follow up to #27377 @jbrockmendel
| https://api.github.com/repos/pandas-dev/pandas/pulls/27603 | 2019-07-26T01:10:11Z | 2019-07-26T11:43:09Z | 2019-07-26T11:43:09Z | 2020-01-16T00:35:07Z |
DEPR: NDFrame.set_axis inplace defaults to false #27525 | diff --git a/doc/source/whatsnew/v1.0.0.rst b/doc/source/whatsnew/v1.0.0.rst
index c352a36bf6de1..cc4bab8b9a923 100644
--- a/doc/source/whatsnew/v1.0.0.rst
+++ b/doc/source/whatsnew/v1.0.0.rst
@@ -61,7 +61,7 @@ Deprecations
Removal of prior version deprecations/changes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
- Removed the previously deprecated :meth:`Series.get_value`, :meth:`Series.set_value`, :meth:`DataFrame.get_value`, :meth:`DataFrame.set_value` (:issue:`17739`)
--
+- Changed the default value of ``inplace`` in :meth:`DataFrame.set_axis` and :meth:`Series.set_axis`. It now defaults to False (:issue:`27600`)
-
.. _whatsnew_1000.performance:
@@ -190,7 +190,6 @@ ExtensionArray
-
-
-
.. _whatsnew_1000.contributors:
Contributors
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index b900e1e82255d..97a0b04146297 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -564,7 +564,7 @@ def _obj_with_exclusions(self):
""" internal compat with SelectionMixin """
return self
- def set_axis(self, labels, axis=0, inplace=None):
+ def set_axis(self, labels, axis=0, inplace=False):
"""
Assign desired index to given axis.
@@ -587,15 +587,9 @@ def set_axis(self, labels, axis=0, inplace=None):
The axis to update. The value 0 identifies the rows, and 1
identifies the columns.
- inplace : bool, default None
+ inplace : bool, default False
Whether to return a new %(klass)s instance.
- .. warning::
-
- ``inplace=None`` currently falls back to to True, but in a
- future version, will default to False. Use inplace=True
- explicitly rather than relying on the default.
-
Returns
-------
renamed : %(klass)s or None
@@ -616,27 +610,19 @@ def set_axis(self, labels, axis=0, inplace=None):
2 3
dtype: int64
- >>> s.set_axis(['a', 'b', 'c'], axis=0, inplace=False)
+ >>> s.set_axis(['a', 'b', 'c'], axis=0)
a 1
b 2
c 3
dtype: int64
- The original object is not modified.
-
- >>> s
- 0 1
- 1 2
- 2 3
- dtype: int64
-
**DataFrame**
>>> df = pd.DataFrame({"A": [1, 2, 3], "B": [4, 5, 6]})
Change the row labels.
- >>> df.set_axis(['a', 'b', 'c'], axis='index', inplace=False)
+ >>> df.set_axis(['a', 'b', 'c'], axis='index')
A B
a 1 4
b 2 5
@@ -644,7 +630,7 @@ def set_axis(self, labels, axis=0, inplace=None):
Change the column labels.
- >>> df.set_axis(['I', 'II'], axis='columns', inplace=False)
+ >>> df.set_axis(['I', 'II'], axis='columns')
I II
0 1 4
1 2 5
@@ -670,15 +656,6 @@ def set_axis(self, labels, axis=0, inplace=None):
)
labels, axis = axis, labels
- if inplace is None:
- warnings.warn(
- "set_axis currently defaults to operating inplace.\nThis "
- "will change in a future version of pandas, use "
- "inplace=True to avoid this warning.",
- FutureWarning,
- stacklevel=2,
- )
- inplace = True
if inplace:
setattr(self, self._get_axis_name(axis), labels)
else:
diff --git a/pandas/tests/frame/test_alter_axes.py b/pandas/tests/frame/test_alter_axes.py
index c57b2a6964f39..6a274c8369328 100644
--- a/pandas/tests/frame/test_alter_axes.py
+++ b/pandas/tests/frame/test_alter_axes.py
@@ -1515,30 +1515,23 @@ def test_set_axis_inplace(self):
expected["columns"] = expected[1]
for axis in expected:
- # inplace=True
- # The FutureWarning comes from the fact that we would like to have
- # inplace default to False some day
- for inplace, warn in (None, FutureWarning), (True, None):
- kwargs = {"inplace": inplace}
-
- result = df.copy()
- with tm.assert_produces_warning(warn):
- result.set_axis(list("abc"), axis=axis, **kwargs)
- tm.assert_frame_equal(result, expected[axis])
+ result = df.copy()
+ result.set_axis(list("abc"), axis=axis, inplace=True)
+ tm.assert_frame_equal(result, expected[axis])
# inplace=False
- result = df.set_axis(list("abc"), axis=axis, inplace=False)
+ result = df.set_axis(list("abc"), axis=axis)
tm.assert_frame_equal(expected[axis], result)
# omitting the "axis" parameter
with tm.assert_produces_warning(None):
- result = df.set_axis(list("abc"), inplace=False)
+ result = df.set_axis(list("abc"))
tm.assert_frame_equal(result, expected[0])
# wrong values for the "axis" parameter
for axis in 3, "foo":
with pytest.raises(ValueError, match="No axis named"):
- df.set_axis(list("abc"), axis=axis, inplace=False)
+ df.set_axis(list("abc"), axis=axis)
def test_set_axis_prior_to_deprecation_signature(self):
df = DataFrame(
diff --git a/pandas/tests/series/test_alter_axes.py b/pandas/tests/series/test_alter_axes.py
index 63baa6af7c02a..f58462c0f3576 100644
--- a/pandas/tests/series/test_alter_axes.py
+++ b/pandas/tests/series/test_alter_axes.py
@@ -277,12 +277,9 @@ def test_set_axis_inplace_axes(self, axis_series):
# inplace=True
# The FutureWarning comes from the fact that we would like to have
# inplace default to False some day
- for inplace, warn in [(None, FutureWarning), (True, None)]:
- result = ser.copy()
- kwargs = {"inplace": inplace}
- with tm.assert_produces_warning(warn):
- result.set_axis(list("abcd"), axis=axis_series, **kwargs)
- tm.assert_series_equal(result, expected)
+ result = ser.copy()
+ result.set_axis(list("abcd"), axis=axis_series, inplace=True)
+ tm.assert_series_equal(result, expected)
def test_set_axis_inplace(self):
# GH14636
| https://github.com/pandas-dev/pandas/blob/76247c142893c710e970c4cf8a25d73121aa5a2b/pandas/core/generic.py#L594-L598
has been there since <del>#20164</del> #16994 (issue was #14636), part of 0.21.0.
With discussion of plans to deprecate similar functionality from set_index in #24046, it's time to make sure `set_axis` conforms to the rest of pandas in this. | https://api.github.com/repos/pandas-dev/pandas/pulls/27600 | 2019-07-25T23:54:42Z | 2019-07-26T11:51:35Z | 2019-07-26T11:51:35Z | 2019-07-27T08:07:19Z |
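With the new default, `set_axis` behaves like most other pandas methods: it returns a relabeled copy and leaves the original untouched unless `inplace=True` is requested explicitly. A small sketch (assumes a pandas version that includes this change, i.e. 1.0 or later):

```python
import pandas as pd

df = pd.DataFrame({"A": [1, 2, 3], "B": [4, 5, 6]})

# Returns a new DataFrame; `df` keeps its default RangeIndex.
relabeled = df.set_axis(["a", "b", "c"], axis="index")
```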
TYPING: some type hints for pandas\io\common.py | diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 5980e3d133374..be870b9fcab1d 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -732,7 +732,6 @@ def to_string(
formatter = fmt.DataFrameFormatter(
self,
- buf=buf,
columns=columns,
col_space=col_space,
na_rep=na_rep,
@@ -750,11 +749,7 @@ def to_string(
decimal=decimal,
line_width=line_width,
)
- formatter.to_string()
-
- if buf is None:
- result = formatter.buf.getvalue()
- return result
+ return formatter.to_string(buf=buf)
# ----------------------------------------------------------------------
@@ -2273,7 +2268,6 @@ def to_html(
formatter = fmt.DataFrameFormatter(
self,
- buf=buf,
columns=columns,
col_space=col_space,
na_rep=na_rep,
@@ -2294,10 +2288,9 @@ def to_html(
render_links=render_links,
)
# TODO: a generic formatter wld b in DataFrameFormatter
- formatter.to_html(classes=classes, notebook=notebook, border=border)
-
- if buf is None:
- return formatter.buf.getvalue()
+ return formatter.to_html(
+ buf=buf, classes=classes, notebook=notebook, border=border
+ )
# ----------------------------------------------------------------------
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 821c35e0cce2f..1d87a6937ca34 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -3018,7 +3018,6 @@ def to_latex(
formatter = DataFrameFormatter(
self,
- buf=buf,
columns=columns,
col_space=col_space,
na_rep=na_rep,
@@ -3032,7 +3031,8 @@ def to_latex(
escape=escape,
decimal=decimal,
)
- formatter.to_latex(
+ return formatter.to_latex(
+ buf=buf,
column_format=column_format,
longtable=longtable,
encoding=encoding,
@@ -3041,9 +3041,6 @@ def to_latex(
multirow=multirow,
)
- if buf is None:
- return formatter.buf.getvalue()
-
def to_csv(
self,
path_or_buf=None,
diff --git a/pandas/io/common.py b/pandas/io/common.py
index 9a9620e2d0663..e01e473047b88 100644
--- a/pandas/io/common.py
+++ b/pandas/io/common.py
@@ -10,6 +10,7 @@
import mmap
import os
import pathlib
+from typing import IO, AnyStr, BinaryIO, Optional, TextIO, Type
from urllib.error import URLError # noqa
from urllib.parse import ( # noqa
urlencode,
@@ -32,6 +33,8 @@
from pandas.core.dtypes.common import is_file_like
+from pandas._typing import FilePathOrBuffer
+
# gh-12665: Alias for now and remove later.
CParserError = ParserError
@@ -68,14 +71,14 @@ class BaseIterator:
Useful only when the object being iterated is non-reusable (e.g. OK for a
parser, not for an in-memory table, yes for its iterator)."""
- def __iter__(self):
+ def __iter__(self) -> "BaseIterator":
return self
def __next__(self):
raise AbstractMethodError(self)
-def _is_url(url):
+def _is_url(url) -> bool:
"""Check to see if a URL has a valid protocol.
Parameters
@@ -93,7 +96,9 @@ def _is_url(url):
return False
-def _expand_user(filepath_or_buffer):
+def _expand_user(
+ filepath_or_buffer: FilePathOrBuffer[AnyStr]
+) -> FilePathOrBuffer[AnyStr]:
"""Return the argument with an initial component of ~ or ~user
replaced by that user's home directory.
@@ -111,7 +116,7 @@ def _expand_user(filepath_or_buffer):
return filepath_or_buffer
-def _validate_header_arg(header):
+def _validate_header_arg(header) -> None:
if isinstance(header, bool):
raise TypeError(
"Passing a bool to header is invalid. "
@@ -121,7 +126,9 @@ def _validate_header_arg(header):
)
-def _stringify_path(filepath_or_buffer):
+def _stringify_path(
+ filepath_or_buffer: FilePathOrBuffer[AnyStr]
+) -> FilePathOrBuffer[AnyStr]:
"""Attempt to convert a path-like object to a string.
Parameters
@@ -144,13 +151,14 @@ def _stringify_path(filepath_or_buffer):
strings, buffers, or anything else that's not even path-like.
"""
if hasattr(filepath_or_buffer, "__fspath__"):
- return filepath_or_buffer.__fspath__()
+ # https://github.com/python/mypy/issues/1424
+ return filepath_or_buffer.__fspath__() # type: ignore
elif isinstance(filepath_or_buffer, pathlib.Path):
return str(filepath_or_buffer)
return _expand_user(filepath_or_buffer)
-def is_s3_url(url):
+def is_s3_url(url) -> bool:
"""Check for an s3, s3n, or s3a url"""
try:
return parse_url(url).scheme in ["s3", "s3n", "s3a"]
@@ -158,7 +166,7 @@ def is_s3_url(url):
return False
-def is_gcs_url(url):
+def is_gcs_url(url) -> bool:
"""Check for a gcs url"""
try:
return parse_url(url).scheme in ["gcs", "gs"]
@@ -167,7 +175,10 @@ def is_gcs_url(url):
def get_filepath_or_buffer(
- filepath_or_buffer, encoding=None, compression=None, mode=None
+ filepath_or_buffer: FilePathOrBuffer,
+ encoding: Optional[str] = None,
+ compression: Optional[str] = None,
+ mode: Optional[str] = None,
):
"""
If the filepath_or_buffer is a url, translate and return the buffer.
@@ -190,7 +201,7 @@ def get_filepath_or_buffer(
"""
filepath_or_buffer = _stringify_path(filepath_or_buffer)
- if _is_url(filepath_or_buffer):
+ if isinstance(filepath_or_buffer, str) and _is_url(filepath_or_buffer):
req = urlopen(filepath_or_buffer)
content_encoding = req.headers.get("Content-Encoding", None)
if content_encoding == "gzip":
@@ -224,7 +235,7 @@ def get_filepath_or_buffer(
return filepath_or_buffer, None, compression, False
-def file_path_to_url(path):
+def file_path_to_url(path: str) -> str:
"""
converts an absolute native path to a FILE URL.
@@ -242,7 +253,9 @@ def file_path_to_url(path):
_compression_to_extension = {"gzip": ".gz", "bz2": ".bz2", "zip": ".zip", "xz": ".xz"}
-def _infer_compression(filepath_or_buffer, compression):
+def _infer_compression(
+ filepath_or_buffer: FilePathOrBuffer, compression: Optional[str]
+) -> Optional[str]:
"""
Get the compression method for filepath_or_buffer. If compression='infer',
the inferred compression method is returned. Otherwise, the input
@@ -435,7 +448,13 @@ class BytesZipFile(zipfile.ZipFile, BytesIO): # type: ignore
"""
# GH 17778
- def __init__(self, file, mode, compression=zipfile.ZIP_DEFLATED, **kwargs):
+ def __init__(
+ self,
+ file: FilePathOrBuffer,
+ mode: str,
+ compression: int = zipfile.ZIP_DEFLATED,
+ **kwargs
+ ):
if mode in ["wb", "rb"]:
mode = mode.replace("b", "")
super().__init__(file, mode, compression, **kwargs)
@@ -461,16 +480,16 @@ class MMapWrapper(BaseIterator):
"""
- def __init__(self, f):
+ def __init__(self, f: IO):
self.mmap = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
- def __getattr__(self, name):
+ def __getattr__(self, name: str):
return getattr(self.mmap, name)
- def __iter__(self):
+ def __iter__(self) -> "MMapWrapper":
return self
- def __next__(self):
+ def __next__(self) -> str:
newline = self.mmap.readline()
# readline returns bytes, not str, but Python's CSV reader
@@ -491,16 +510,16 @@ class UTF8Recoder(BaseIterator):
Iterator that reads an encoded stream and re-encodes the input to UTF-8
"""
- def __init__(self, f, encoding):
+ def __init__(self, f: BinaryIO, encoding: str):
self.reader = codecs.getreader(encoding)(f)
- def read(self, bytes=-1):
+ def read(self, bytes: int = -1) -> bytes:
return self.reader.read(bytes).encode("utf-8")
- def readline(self):
+ def readline(self) -> bytes:
return self.reader.readline().encode("utf-8")
- def next(self):
+ def next(self) -> bytes:
return next(self.reader).encode("utf-8")
@@ -511,5 +530,7 @@ def UnicodeReader(f, dialect=csv.excel, encoding="utf-8", **kwds):
return csv.reader(f, dialect=dialect, **kwds)
-def UnicodeWriter(f, dialect=csv.excel, encoding="utf-8", **kwds):
+def UnicodeWriter(
+ f: TextIO, dialect: Type[csv.Dialect] = csv.excel, encoding: str = "utf-8", **kwds
+):
return csv.writer(f, dialect=dialect, **kwds)
diff --git a/pandas/io/formats/format.py b/pandas/io/formats/format.py
index 980fc4888d625..23c07ea72d40f 100644
--- a/pandas/io/formats/format.py
+++ b/pandas/io/formats/format.py
@@ -2,6 +2,9 @@
Internal module for formatting output data in csv, html,
and latex files. This module also applies to display formatting.
"""
+
+import codecs
+from contextlib import contextmanager
import decimal
from functools import partial
from io import StringIO
@@ -9,6 +12,7 @@
import re
from shutil import get_terminal_size
from typing import (
+ IO,
TYPE_CHECKING,
Any,
Callable,
@@ -16,7 +20,6 @@
Iterable,
List,
Optional,
- TextIO,
Tuple,
Type,
Union,
@@ -34,6 +37,7 @@
from pandas._libs.tslib import format_array_from_datetime
from pandas._libs.tslibs import NaT, Timedelta, Timestamp, iNaT
from pandas._libs.tslibs.nattype import NaTType
+from pandas.errors import AbstractMethodError
from pandas.core.dtypes.common import (
is_categorical_dtype,
@@ -67,7 +71,7 @@
from pandas.core.indexes.datetimes import DatetimeIndex
from pandas.core.indexes.timedeltas import TimedeltaIndex
-from pandas.io.common import _expand_user, _stringify_path
+from pandas.io.common import _stringify_path
from pandas.io.formats.printing import adjoin, justify, pprint_thing
if TYPE_CHECKING:
@@ -161,7 +165,7 @@ class CategoricalFormatter:
def __init__(
self,
categorical: "Categorical",
- buf: Optional[TextIO] = None,
+ buf: Optional[IO[str]] = None,
length: bool = True,
na_rep: str = "NaN",
footer: bool = True,
@@ -224,7 +228,7 @@ class SeriesFormatter:
def __init__(
self,
series: "Series",
- buf: Optional[TextIO] = None,
+ buf: Optional[IO[str]] = None,
length: bool = True,
header: bool = True,
index: bool = True,
@@ -463,6 +467,40 @@ def _get_formatter(self, i: Union[str, int]) -> Optional[Callable]:
i = self.columns[i]
return self.formatters.get(i, None)
+ @contextmanager
+ def get_buffer(
+ self, buf: Optional[FilePathOrBuffer[str]], encoding: Optional[str] = None
+ ):
+ if buf is not None:
+ buf = _stringify_path(buf)
+ else:
+ buf = StringIO()
+
+ if encoding is None:
+ encoding = "utf-8"
+
+ if hasattr(buf, "write"):
+ yield buf
+ elif isinstance(buf, str):
+ with codecs.open(buf, "w", encoding=encoding) as f:
+ yield f
+ else:
+ raise TypeError("buf is not a file name and it has no write method")
+
+ def write_result(self, buf: IO[str]) -> None:
+ raise AbstractMethodError(self)
+
+ def get_result(
+ self,
+ buf: Optional[FilePathOrBuffer[str]] = None,
+ encoding: Optional[str] = None,
+ ) -> Optional[str]:
+ with self.get_buffer(buf, encoding=encoding) as f:
+ self.write_result(buf=f)
+ if buf is None:
+ return f.getvalue()
+ return None
+
class DataFrameFormatter(TableFormatter):
"""
@@ -480,7 +518,6 @@ class DataFrameFormatter(TableFormatter):
def __init__(
self,
frame: "DataFrame",
- buf: Optional[FilePathOrBuffer] = None,
columns: Optional[List[str]] = None,
col_space: Optional[Union[str, int]] = None,
header: Union[bool, List[str]] = True,
@@ -502,10 +539,6 @@ def __init__(
**kwds
):
self.frame = frame
- if buf is not None:
- self.buf = _expand_user(_stringify_path(buf))
- else:
- self.buf = StringIO()
self.show_index_names = index_names
if sparsify is None:
@@ -727,7 +760,7 @@ def _to_str_columns(self) -> List[List[str]]:
strcols[ix].insert(row_num + n_header_rows, dot_str)
return strcols
- def to_string(self) -> None:
+ def write_result(self, buf: IO[str]) -> None:
"""
Render a DataFrame to a console-friendly tabular output.
"""
@@ -782,10 +815,10 @@ def to_string(self) -> None:
self._chk_truncate()
strcols = self._to_str_columns()
text = self.adj.adjoin(1, *strcols)
- self.buf.writelines(text)
+ buf.writelines(text)
if self.should_show_dimensions:
- self.buf.write(
+ buf.write(
"\n\n[{nrows} rows x {ncols} columns]".format(
nrows=len(frame), ncols=len(frame.columns)
)
@@ -828,42 +861,33 @@ def _join_multiline(self, *args) -> str:
st = ed
return "\n\n".join(str_lst)
+ def to_string(self, buf: Optional[FilePathOrBuffer[str]] = None) -> Optional[str]:
+ return self.get_result(buf=buf)
+
def to_latex(
self,
+ buf: Optional[FilePathOrBuffer[str]] = None,
column_format: Optional[str] = None,
longtable: bool = False,
encoding: Optional[str] = None,
multicolumn: bool = False,
multicolumn_format: Optional[str] = None,
multirow: bool = False,
- ) -> None:
+ ) -> Optional[str]:
"""
Render a DataFrame to a LaTeX tabular/longtable environment output.
"""
from pandas.io.formats.latex import LatexFormatter
- latex_renderer = LatexFormatter(
+ return LatexFormatter(
self,
column_format=column_format,
longtable=longtable,
multicolumn=multicolumn,
multicolumn_format=multicolumn_format,
multirow=multirow,
- )
-
- if encoding is None:
- encoding = "utf-8"
-
- if hasattr(self.buf, "write"):
- latex_renderer.write_result(self.buf)
- elif isinstance(self.buf, str):
- import codecs
-
- with codecs.open(self.buf, "w", encoding=encoding) as f:
- latex_renderer.write_result(f)
- else:
- raise TypeError("buf is not a file name and it has no write method")
+ ).get_result(buf=buf, encoding=encoding)
def _format_col(self, i: int) -> List[str]:
frame = self.tr_frame
@@ -880,10 +904,11 @@ def _format_col(self, i: int) -> List[str]:
def to_html(
self,
+ buf: Optional[FilePathOrBuffer[str]] = None,
classes: Optional[Union[str, List, Tuple]] = None,
notebook: bool = False,
border: Optional[int] = None,
- ) -> None:
+ ) -> Optional[str]:
"""
Render a DataFrame to a html table.
@@ -901,14 +926,7 @@ def to_html(
from pandas.io.formats.html import HTMLFormatter, NotebookFormatter
Klass = NotebookFormatter if notebook else HTMLFormatter
- html = Klass(self, classes=classes, border=border).render()
- if hasattr(self.buf, "write"):
- buffer_put_lines(self.buf, html)
- elif isinstance(self.buf, str):
- with open(self.buf, "w") as f:
- buffer_put_lines(f, html)
- else:
- raise TypeError("buf is not a file name and it has no write method")
+ return Klass(self, classes=classes, border=border).get_result(buf=buf)
def _get_formatted_column_labels(self, frame: "DataFrame") -> List[List[str]]:
from pandas.core.index import _sparsify
@@ -1901,7 +1919,7 @@ def get_level_lengths(
return result
-def buffer_put_lines(buf: TextIO, lines: List[str]) -> None:
+def buffer_put_lines(buf: IO[str], lines: List[str]) -> None:
"""
Appends lines to a buffer.
diff --git a/pandas/io/formats/html.py b/pandas/io/formats/html.py
index 19305126f4e5f..4b44893df70ed 100644
--- a/pandas/io/formats/html.py
+++ b/pandas/io/formats/html.py
@@ -4,7 +4,7 @@
from collections import OrderedDict
from textwrap import dedent
-from typing import Any, Dict, Iterable, List, Optional, Tuple, Union, cast
+from typing import IO, Any, Dict, Iterable, List, Optional, Tuple, Union, cast
from pandas._config import get_option
@@ -16,6 +16,7 @@
from pandas.io.formats.format import (
DataFrameFormatter,
TableFormatter,
+ buffer_put_lines,
get_level_lengths,
)
from pandas.io.formats.printing import pprint_thing
@@ -203,6 +204,9 @@ def render(self) -> List[str]:
return self.elements
+ def write_result(self, buf: IO[str]) -> None:
+ buffer_put_lines(buf, self.render())
+
def _write_table(self, indent: int = 0) -> None:
_classes = ["dataframe"] # Default class.
use_mathjax = get_option("display.html.use_mathjax")
diff --git a/pandas/tests/io/formats/test_format.py b/pandas/tests/io/formats/test_format.py
index ad47f714c9550..a048e3bb867bd 100644
--- a/pandas/tests/io/formats/test_format.py
+++ b/pandas/tests/io/formats/test_format.py
@@ -7,6 +7,7 @@
import itertools
from operator import methodcaller
import os
+from pathlib import Path
import re
from shutil import get_terminal_size
import sys
@@ -17,7 +18,7 @@
import pytest
import pytz
-from pandas.compat import is_platform_32bit, is_platform_windows
+from pandas.compat import PY36, is_platform_32bit, is_platform_windows
import pandas as pd
from pandas import (
@@ -42,6 +43,54 @@
use_32bit_repr = is_platform_windows() or is_platform_32bit()
+@pytest.fixture(params=["string", "pathlike", "buffer"])
+def filepath_or_buffer_id(request):
+ """
+ A fixture yielding test ids for filepath_or_buffer testing.
+ """
+ return request.param
+
+
+@pytest.fixture
+def filepath_or_buffer(filepath_or_buffer_id, tmp_path):
+ """
+ A fixture yielding a string representing a filepath, a path-like object
+ and a StringIO buffer. Also checks that buffer is not closed.
+ """
+ if filepath_or_buffer_id == "buffer":
+ buf = StringIO()
+ yield buf
+ assert not buf.closed
+ else:
+ if PY36:
+ assert isinstance(tmp_path, Path)
+ else:
+ assert hasattr(tmp_path, "__fspath__")
+ if filepath_or_buffer_id == "pathlike":
+ yield tmp_path / "foo"
+ else:
+ yield str(tmp_path / "foo")
+
+
+@pytest.fixture
+def assert_filepath_or_buffer_equals(filepath_or_buffer, filepath_or_buffer_id):
+ """
+ Assertion helper for checking filepath_or_buffer.
+ """
+
+ def _assert_filepath_or_buffer_equals(expected):
+ if filepath_or_buffer_id == "string":
+ with open(filepath_or_buffer) as f:
+ result = f.read()
+ elif filepath_or_buffer_id == "pathlike":
+ result = filepath_or_buffer.read_text()
+ elif filepath_or_buffer_id == "buffer":
+ result = filepath_or_buffer.getvalue()
+ assert result == expected
+
+ return _assert_filepath_or_buffer_equals
+
+
def curpath():
pth, _ = os.path.split(os.path.abspath(__file__))
return pth
@@ -3142,3 +3191,21 @@ def test_repr_html_ipython_config(ip):
)
result = ip.run_cell(code)
assert not result.error_in_exec
+
+
+@pytest.mark.parametrize("method", ["to_string", "to_html", "to_latex"])
+def test_filepath_or_buffer_arg(
+ float_frame, method, filepath_or_buffer, assert_filepath_or_buffer_equals
+):
+ df = float_frame
+ expected = getattr(df, method)()
+
+ getattr(df, method)(buf=filepath_or_buffer)
+ assert_filepath_or_buffer_equals(expected)
+
+
+@pytest.mark.parametrize("method", ["to_string", "to_html", "to_latex"])
+def test_filepath_or_buffer_bad_arg_raises(float_frame, method):
+ msg = "buf is not a file name and it has no write method"
+ with pytest.raises(TypeError, match=msg):
+ getattr(float_frame, method)(buf=object())
| https://api.github.com/repos/pandas-dev/pandas/pulls/27598 | 2019-07-25T23:37:11Z | 2019-08-02T17:14:20Z | 2019-08-02T17:14:20Z | 2019-08-05T09:39:05Z | |
Expanded ASVs for to_json | diff --git a/asv_bench/benchmarks/io/json.py b/asv_bench/benchmarks/io/json.py
index 0ce42856fb14a..fc07f2a484102 100644
--- a/asv_bench/benchmarks/io/json.py
+++ b/asv_bench/benchmarks/io/json.py
@@ -63,10 +63,13 @@ def peakmem_read_json_lines_concat(self, index):
class ToJSON(BaseIO):
fname = "__test__.json"
- params = ["split", "columns", "index"]
- param_names = ["orient"]
+ params = [
+ ["split", "columns", "index", "values", "records"],
+ ["df", "df_date_idx", "df_td_int_ts", "df_int_floats", "df_int_float_str"],
+ ]
+ param_names = ["orient", "frame"]
- def setup(self, lines_orient):
+ def setup(self, orient, frame):
N = 10 ** 5
ncols = 5
index = date_range("20000101", periods=N, freq="H")
@@ -111,34 +114,85 @@ def setup(self, lines_orient):
index=index,
)
- def time_floats_with_int_index(self, orient):
- self.df.to_json(self.fname, orient=orient)
+ def time_to_json(self, orient, frame):
+ getattr(self, frame).to_json(self.fname, orient=orient)
- def time_floats_with_dt_index(self, orient):
- self.df_date_idx.to_json(self.fname, orient=orient)
+ def mem_to_json(self, orient, frame):
+ getattr(self, frame).to_json(self.fname, orient=orient)
+
+ def time_to_json_wide(self, orient, frame):
+ base_df = getattr(self, frame).copy()
+ df = concat([base_df.iloc[:100]] * 1000, ignore_index=True, axis=1)
+ df.to_json(self.fname, orient=orient)
+
+ def mem_to_json_wide(self, orient, frame):
+ base_df = getattr(self, frame).copy()
+ df = concat([base_df.iloc[:100]] * 1000, ignore_index=True, axis=1)
+ df.to_json(self.fname, orient=orient)
- def time_delta_int_tstamp(self, orient):
- self.df_td_int_ts.to_json(self.fname, orient=orient)
- def time_float_int(self, orient):
- self.df_int_floats.to_json(self.fname, orient=orient)
+class ToJSONLines(BaseIO):
- def time_float_int_str(self, orient):
- self.df_int_float_str.to_json(self.fname, orient=orient)
+ fname = "__test__.json"
+
+ def setup(self):
+ N = 10 ** 5
+ ncols = 5
+ index = date_range("20000101", periods=N, freq="H")
+ timedeltas = timedelta_range(start=1, periods=N, freq="s")
+ datetimes = date_range(start=1, periods=N, freq="s")
+ ints = np.random.randint(100000000, size=N)
+ floats = np.random.randn(N)
+ strings = tm.makeStringIndex(N)
+ self.df = DataFrame(np.random.randn(N, ncols), index=np.arange(N))
+ self.df_date_idx = DataFrame(np.random.randn(N, ncols), index=index)
+ self.df_td_int_ts = DataFrame(
+ {
+ "td_1": timedeltas,
+ "td_2": timedeltas,
+ "int_1": ints,
+ "int_2": ints,
+ "ts_1": datetimes,
+ "ts_2": datetimes,
+ },
+ index=index,
+ )
+ self.df_int_floats = DataFrame(
+ {
+ "int_1": ints,
+ "int_2": ints,
+ "int_3": ints,
+ "float_1": floats,
+ "float_2": floats,
+ "float_3": floats,
+ },
+ index=index,
+ )
+ self.df_int_float_str = DataFrame(
+ {
+ "int_1": ints,
+ "int_2": ints,
+ "float_1": floats,
+ "float_2": floats,
+ "str_1": strings,
+ "str_2": strings,
+ },
+ index=index,
+ )
- def time_floats_with_int_idex_lines(self, orient):
+ def time_floats_with_int_idex_lines(self):
self.df.to_json(self.fname, orient="records", lines=True)
- def time_floats_with_dt_index_lines(self, orient):
+ def time_floats_with_dt_index_lines(self):
self.df_date_idx.to_json(self.fname, orient="records", lines=True)
- def time_delta_int_tstamp_lines(self, orient):
+ def time_delta_int_tstamp_lines(self):
self.df_td_int_ts.to_json(self.fname, orient="records", lines=True)
- def time_float_int_lines(self, orient):
+ def time_float_int_lines(self):
self.df_int_floats.to_json(self.fname, orient="records", lines=True)
- def time_float_int_str_lines(self, orient):
+ def time_float_int_str_lines(self):
self.df_int_float_str.to_json(self.fname, orient="records", lines=True)
| Instead of one massive PR in #27166, I think it makes sense to break this up into a few chunks. The first step to getting the block code out of that extension is expanded test coverage, which I think I've done here.
After this I'll probably:
- Convert "columns" orient to not use the block manager code
- Convert "values"
- Convert "records" / "index" (and by nature table) and clean up code
There might be a little bloat in the middle, but I think that approach will ultimately be more manageable.
| https://api.github.com/repos/pandas-dev/pandas/pulls/27595 | 2019-07-25T22:38:15Z | 2019-07-26T16:35:29Z | 2019-07-26T16:35:29Z | 2019-07-26T16:35:32Z |
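The benchmark rewrite above leans on asv's convention that every combination of the `params` lists is passed positionally to `setup` and to each `time_*`/`mem_*` method, so one `time_to_json` can dispatch over fixture frames via `getattr(self, frame)` instead of one method per frame. A stand-in sketch of that dispatch (no asv required; the driver loop here only emulates asv's cross-product behavior):

```python
import itertools


class ToJSONSketch:
    # asv-style declaration: every combination of these lists is passed,
    # in order, to setup() and to each benchmark method.
    params = [["split", "columns"], ["df_a", "df_b"]]
    param_names = ["orient", "frame"]

    def setup(self, orient, frame):
        # Attach one attribute per fixture name so benchmarks can
        # look the frame up with getattr(self, frame).
        self.df_a = ["a"] * 3
        self.df_b = ["b"] * 5

    def time_to_json(self, orient, frame):
        # One method covers every (orient, frame) combination.
        return len(getattr(self, frame)), orient


def run_all(bench_cls):
    # Emulate asv's driver: iterate the cross product of params,
    # calling setup before each benchmark invocation.
    results = {}
    bench = bench_cls()
    for combo in itertools.product(*bench_cls.params):
        bench.setup(*combo)
        results[combo] = bench.time_to_json(*combo)
    return results
```

This is why the diff can collapse five near-identical `time_*` methods into one: the frame name becomes a parameter rather than part of the method name.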
DOC: add documentation for read_spss(#27476) | diff --git a/doc/source/reference/io.rst b/doc/source/reference/io.rst
index 666220d390cdc..91f4942d03b0d 100644
--- a/doc/source/reference/io.rst
+++ b/doc/source/reference/io.rst
@@ -105,6 +105,13 @@ SAS
read_sas
+SPSS
+~~~~
+.. autosummary::
+ :toctree: api/
+
+ read_spss
+
SQL
~~~
.. autosummary::
diff --git a/doc/source/user_guide/io.rst b/doc/source/user_guide/io.rst
index ae288ba5bde16..8e5352c337072 100644
--- a/doc/source/user_guide/io.rst
+++ b/doc/source/user_guide/io.rst
@@ -39,6 +39,7 @@ The pandas I/O API is a set of top level ``reader`` functions accessed like
binary;`Msgpack <https://msgpack.org/index.html>`__;:ref:`read_msgpack<io.msgpack>`;:ref:`to_msgpack<io.msgpack>`
binary;`Stata <https://en.wikipedia.org/wiki/Stata>`__;:ref:`read_stata<io.stata_reader>`;:ref:`to_stata<io.stata_writer>`
binary;`SAS <https://en.wikipedia.org/wiki/SAS_(software)>`__;:ref:`read_sas<io.sas_reader>`;
+ binary;`SPSS <https://en.wikipedia.org/wiki/SPSS>`__;:ref:`read_spss<io.spss_reader>`;
binary;`Python Pickle Format <https://docs.python.org/3/library/pickle.html>`__;:ref:`read_pickle<io.pickle>`;:ref:`to_pickle<io.pickle>`
SQL;`SQL <https://en.wikipedia.org/wiki/SQL>`__;:ref:`read_sql<io.sql>`;:ref:`to_sql<io.sql>`
SQL;`Google Big Query <https://en.wikipedia.org/wiki/BigQuery>`__;:ref:`read_gbq<io.bigquery>`;:ref:`to_gbq<io.bigquery>`
@@ -5477,6 +5478,44 @@ web site.
No official documentation is available for the SAS7BDAT format.
+.. _io.spss:
+
+.. _io.spss_reader:
+
+SPSS formats
+------------
+
+.. versionadded:: 0.25.0
+
+The top-level function :func:`read_spss` can read (but not write) SPSS
+`sav` (.sav) and `zsav` (.zsav) format files.
+
+SPSS files contain column names. By default the
+whole file is read, categorical columns are converted into ``pd.Categorical``
+and a ``DataFrame`` with all columns is returned.
+
+Specify ``usecols`` to obtain a subset of columns. Specify ``convert_categoricals=False``
+to avoid converting categorical columns into ``pd.Categorical``.
+
+Read an SPSS file:
+
+.. code-block:: python
+
+ df = pd.read_spss('spss_data.zsav')
+
+Extract a subset of columns with ``usecols`` from an SPSS file and
+avoid converting categorical columns into ``pd.Categorical``:
+
+.. code-block:: python
+
+ df = pd.read_spss('spss_data.zsav', usecols=['foo', 'bar'],
+ convert_categoricals=False)
+
+More info_ about the sav and zsav file format is available from the IBM
+web site.
+
+.. _info: https://www.ibm.com/support/knowledgecenter/en/SSLVMB_22.0.0/com.ibm.spss.statistics.help/spss/base/savedatatypes.htm
+
.. _io.other:
Other file formats
| - [x] closes #27476
- [ ] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
Documentation is added for the new read_spss function.
After generating the HTML in Sphinx, it looks like this:
IO page top:
<img width="766" alt="IO page" src="https://user-images.githubusercontent.com/6498024/61973737-c70bf580-afb2-11e9-95cf-499efc17e807.png">
SPSS description:
<img width="774" alt="SPSS description" src="https://user-images.githubusercontent.com/6498024/61973739-ca9f7c80-afb2-11e9-8909-1638ad7a70d1.png">
API section:
<img width="761" alt="SPSS API" src="https://user-images.githubusercontent.com/6498024/61973743-cd01d680-afb2-11e9-9fca-be0a06128049.png">
| https://api.github.com/repos/pandas-dev/pandas/pulls/27594 | 2019-07-25T20:08:47Z | 2019-07-26T20:58:47Z | 2019-07-26T20:58:46Z | 2019-07-29T13:19:53Z |
Backport PR #27488 on branch 0.25.x (API: Add entrypoint for plotting) | diff --git a/Makefile b/Makefile
index baceefe6d49ff..9e69eb7922925 100644
--- a/Makefile
+++ b/Makefile
@@ -15,7 +15,7 @@ lint-diff:
git diff upstream/master --name-only -- "*.py" | xargs flake8
black:
- black . --exclude '(asv_bench/env|\.egg|\.git|\.hg|\.mypy_cache|\.nox|\.tox|\.venv|_build|buck-out|build|dist)'
+ black . --exclude '(asv_bench/env|\.egg|\.git|\.hg|\.mypy_cache|\.nox|\.tox|\.venv|_build|buck-out|build|dist|setup.py)'
develop: build
python setup.py develop
diff --git a/ci/code_checks.sh b/ci/code_checks.sh
index 96a8440d85694..06d45e38bfcdb 100755
--- a/ci/code_checks.sh
+++ b/ci/code_checks.sh
@@ -56,7 +56,7 @@ if [[ -z "$CHECK" || "$CHECK" == "lint" ]]; then
black --version
MSG='Checking black formatting' ; echo $MSG
- black . --check --exclude '(asv_bench/env|\.egg|\.git|\.hg|\.mypy_cache|\.nox|\.tox|\.venv|_build|buck-out|build|dist)'
+ black . --check --exclude '(asv_bench/env|\.egg|\.git|\.hg|\.mypy_cache|\.nox|\.tox|\.venv|_build|buck-out|build|dist|setup.py)'
RET=$(($RET + $?)) ; echo $MSG "DONE"
# `setup.cfg` contains the list of error codes that are being ignored in flake8
diff --git a/doc/source/development/extending.rst b/doc/source/development/extending.rst
index b492a4edd70a4..e341dcb8318bc 100644
--- a/doc/source/development/extending.rst
+++ b/doc/source/development/extending.rst
@@ -441,5 +441,22 @@ This would be more or less equivalent to:
The backend module can then use other visualization tools (Bokeh, Altair,...)
to generate the plots.
+Libraries implementing the plotting backend should use `entry points <https://setuptools.readthedocs.io/en/latest/setuptools.html#dynamic-discovery-of-services-and-plugins>`__
+to make their backend discoverable to pandas. The key is ``"pandas_plotting_backends"``. For example, pandas
+registers the default "matplotlib" backend as follows.
+
+.. code-block:: python
+
+ # in setup.py
+ setup( # noqa: F821
+ ...,
+ entry_points={
+ "pandas_plotting_backends": [
+ "matplotlib = pandas:plotting._matplotlib",
+ ],
+ },
+ )
+
+
More information on how to implement a third-party plotting backend can be found at
https://github.com/pandas-dev/pandas/blob/master/pandas/plotting/__init__.py#L1.
diff --git a/doc/source/whatsnew/v0.25.1.rst b/doc/source/whatsnew/v0.25.1.rst
index 169968314d70e..eb60272246ebb 100644
--- a/doc/source/whatsnew/v0.25.1.rst
+++ b/doc/source/whatsnew/v0.25.1.rst
@@ -114,7 +114,7 @@ I/O
Plotting
^^^^^^^^
--
+- Added a pandas_plotting_backends entrypoint group for registering plot backends. See :ref:`extending.plotting-backends` for more (:issue:`26747`).
-
-
diff --git a/pandas/plotting/_core.py b/pandas/plotting/_core.py
index 0610780edb28d..a3c1499845c2a 100644
--- a/pandas/plotting/_core.py
+++ b/pandas/plotting/_core.py
@@ -1533,6 +1533,53 @@ def hexbin(self, x, y, C=None, reduce_C_function=None, gridsize=None, **kwargs):
return self(kind="hexbin", x=x, y=y, C=C, **kwargs)
+_backends = {}
+
+
+def _find_backend(backend: str):
+ """
+ Find a pandas plotting backend.
+
+ Parameters
+ ----------
+ backend : str
+ The identifier for the backend. Either an entrypoint item registered
+ with pkg_resources, or a module name.
+
+ Notes
+ -----
+ Modifies _backends with imported backends as a side effect.
+
+ Returns
+ -------
+ types.ModuleType
+ The imported backend.
+ """
+ import pkg_resources # Delay import for performance.
+
+ for entry_point in pkg_resources.iter_entry_points("pandas_plotting_backends"):
+ if entry_point.name == "matplotlib":
+ # matplotlib is an optional dependency. When
+ # missing, this would raise.
+ continue
+ _backends[entry_point.name] = entry_point.load()
+
+ try:
+ return _backends[backend]
+ except KeyError:
+ # Fall back to unregistered module-name approach.
+ try:
+ module = importlib.import_module(backend)
+ except ImportError:
+ # We re-raise later on.
+ pass
+ else:
+ _backends[backend] = module
+ return module
+
+ raise ValueError("No backend {}".format(backend))
+
+
def _get_plot_backend(backend=None):
"""
Return the plotting backend to use (e.g. `pandas.plotting._matplotlib`).
@@ -1546,7 +1593,18 @@ def _get_plot_backend(backend=None):
The backend is imported lazily, as matplotlib is a soft dependency, and
pandas can be used without it being installed.
"""
- backend_str = backend or pandas.get_option("plotting.backend")
- if backend_str == "matplotlib":
- backend_str = "pandas.plotting._matplotlib"
- return importlib.import_module(backend_str)
+ backend = backend or pandas.get_option("plotting.backend")
+
+ if backend == "matplotlib":
+ # Because matplotlib is an optional dependency and first-party backend,
+ # we need to attempt an import here to raise an ImportError if needed.
+ import pandas.plotting._matplotlib as module
+
+ _backends["matplotlib"] = module
+
+ if backend in _backends:
+ return _backends[backend]
+
+ module = _find_backend(backend)
+ _backends[backend] = module
+ return module
diff --git a/pandas/tests/plotting/test_backend.py b/pandas/tests/plotting/test_backend.py
index 51f2abb6cc2f4..e79e7b6239eb3 100644
--- a/pandas/tests/plotting/test_backend.py
+++ b/pandas/tests/plotting/test_backend.py
@@ -1,5 +1,11 @@
+import sys
+import types
+
+import pkg_resources
import pytest
+import pandas.util._test_decorators as td
+
import pandas
@@ -36,3 +42,44 @@ def test_backend_is_correct(monkeypatch):
pandas.set_option("plotting.backend", "matplotlib")
except ImportError:
pass
+
+
+@td.skip_if_no_mpl
+def test_register_entrypoint():
+ mod = types.ModuleType("my_backend")
+ mod.plot = lambda *args, **kwargs: 1
+
+ backends = pkg_resources.get_entry_map("pandas")
+ my_entrypoint = pkg_resources.EntryPoint(
+ "pandas_plotting_backend",
+ mod.__name__,
+ dist=pkg_resources.get_distribution("pandas"),
+ )
+ backends["pandas_plotting_backends"]["my_backend"] = my_entrypoint
+ # TODO: the docs recommend importlib.util.module_from_spec. But this works for now.
+ sys.modules["my_backend"] = mod
+
+ result = pandas.plotting._core._get_plot_backend("my_backend")
+ assert result is mod
+
+ # TODO: https://github.com/pandas-dev/pandas/issues/27517
+ # Remove the td.skip_if_no_mpl
+ with pandas.option_context("plotting.backend", "my_backend"):
+ result = pandas.plotting._core._get_plot_backend()
+
+ assert result is mod
+
+
+def test_register_import():
+ mod = types.ModuleType("my_backend2")
+ mod.plot = lambda *args, **kwargs: 1
+ sys.modules["my_backend2"] = mod
+
+ result = pandas.plotting._core._get_plot_backend("my_backend2")
+ assert result is mod
+
+
+@td.skip_if_mpl
+def test_no_matplotlib_ok():
+ with pytest.raises(ImportError):
+ pandas.plotting._core._get_plot_backend("matplotlib")
diff --git a/setup.py b/setup.py
index 53e12da53cdeb..d2c6b18b892cd 100755
--- a/setup.py
+++ b/setup.py
@@ -830,5 +830,10 @@ def srcpath(name=None, suffix=".pyx", subdir="src"):
"hypothesis>=3.58",
]
},
+ entry_points={
+ "pandas_plotting_backends": [
+ "matplotlib = pandas:plotting._matplotlib",
+ ],
+ },
**setuptools_kwargs
)
| Backport PR #27488: API: Add entrypoint for plotting | https://api.github.com/repos/pandas-dev/pandas/pulls/27590 | 2019-07-25T17:18:22Z | 2019-07-25T18:14:41Z | 2019-07-25T18:14:41Z | 2019-07-25T18:14:41Z |
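The `_get_plot_backend`/`_find_backend` logic in the backport caches resolved backends and, when no entry point matches, falls back to importing the backend as a plain module name. A stripped-down sketch of that cache-then-import fallback (entry-point scanning omitted, so only the module-name path is exercised; `find_backend` and `my_fake_backend` are illustrative names, not pandas API):

```python
import importlib
import sys
import types

_backends = {}  # name -> imported module, filled lazily


def find_backend(name):
    # Consult the cache first, then fall back to importing the backend
    # as a plain module name, mirroring _find_backend's final branch.
    if name in _backends:
        return _backends[name]
    try:
        module = importlib.import_module(name)
    except ImportError:
        raise ValueError("No backend {}".format(name))
    _backends[name] = module
    return module


# Register a throwaway backend the same way the PR's tests do:
# build a module object, give it a plot() callable, and publish it
# under sys.modules so import_module can find it.
mod = types.ModuleType("my_fake_backend")
mod.plot = lambda *args, **kwargs: 1
sys.modules["my_fake_backend"] = mod
```

The caching matters for performance (the real code even delays `import pkg_resources` for that reason), and the `sys.modules` trick is the same one `test_register_import` uses to exercise the fallback without installing anything.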
REF: collect indexing methods | diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 9a84f1ddd87a5..301aa08236ff5 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -2777,81 +2777,7 @@ def _unpickle_matrix_compat(self, state): # pragma: no cover
self._data = dm._data
# ----------------------------------------------------------------------
- # Getting and setting elements
-
- def _get_value(self, index, col, takeable: bool = False):
- """
- Quickly retrieve single value at passed column and index.
-
- Parameters
- ----------
- index : row label
- col : column label
- takeable : interpret the index/col as indexers, default False
-
- Returns
- -------
- scalar
- """
- if takeable:
- series = self._iget_item_cache(col)
- return com.maybe_box_datetimelike(series._values[index])
-
- series = self._get_item_cache(col)
- engine = self.index._engine
-
- try:
- return engine.get_value(series._values, index)
- except KeyError:
- # GH 20629
- if self.index.nlevels > 1:
- # partial indexing forbidden
- raise
- except (TypeError, ValueError):
- pass
-
- # we cannot handle direct indexing
- # use positional
- col = self.columns.get_loc(col)
- index = self.index.get_loc(index)
- return self._get_value(index, col, takeable=True)
-
- def _set_value(self, index, col, value, takeable: bool = False):
- """
- Put single value at passed column and index.
-
- Parameters
- ----------
- index : row label
- col : column label
- value : scalar
- takeable : interpret the index/col as indexers, default False
-
- Returns
- -------
- DataFrame
- If label pair is contained, will be reference to calling DataFrame,
- otherwise a new object.
- """
- try:
- if takeable is True:
- series = self._iget_item_cache(col)
- return series._set_value(index, value, takeable=True)
-
- series = self._get_item_cache(col)
- engine = self.index._engine
- engine.set_value(series._values, index, value)
- return self
- except (KeyError, TypeError):
-
- # set using a non-recursive method & reset the cache
- if takeable:
- self.iloc[index, col] = value
- else:
- self.loc[index, col] = value
- self._item_cache.pop(col, None)
-
- return self
+ # Indexing Methods
def _ixs(self, i: int, axis: int = 0):
"""
@@ -3021,6 +2947,199 @@ def _getitem_frame(self, key):
raise ValueError("Must pass DataFrame with boolean values only")
return self.where(key)
+ def _get_value(self, index, col, takeable: bool = False):
+ """
+ Quickly retrieve single value at passed column and index.
+
+ Parameters
+ ----------
+ index : row label
+ col : column label
+ takeable : interpret the index/col as indexers, default False
+
+ Returns
+ -------
+ scalar
+ """
+ if takeable:
+ series = self._iget_item_cache(col)
+ return com.maybe_box_datetimelike(series._values[index])
+
+ series = self._get_item_cache(col)
+ engine = self.index._engine
+
+ try:
+ return engine.get_value(series._values, index)
+ except KeyError:
+ # GH 20629
+ if self.index.nlevels > 1:
+ # partial indexing forbidden
+ raise
+ except (TypeError, ValueError):
+ pass
+
+ # we cannot handle direct indexing
+ # use positional
+ col = self.columns.get_loc(col)
+ index = self.index.get_loc(index)
+ return self._get_value(index, col, takeable=True)
+
+ def __setitem__(self, key, value):
+ key = com.apply_if_callable(key, self)
+
+ # see if we can slice the rows
+ indexer = convert_to_index_sliceable(self, key)
+ if indexer is not None:
+ return self._setitem_slice(indexer, value)
+
+ if isinstance(key, DataFrame) or getattr(key, "ndim", None) == 2:
+ self._setitem_frame(key, value)
+ elif isinstance(key, (Series, np.ndarray, list, Index)):
+ self._setitem_array(key, value)
+ else:
+ # set column
+ self._set_item(key, value)
+
+ def _setitem_slice(self, key, value):
+ self._check_setitem_copy()
+ self.loc[key] = value
+
+ def _setitem_array(self, key, value):
+ # also raises Exception if object array with NA values
+ if com.is_bool_indexer(key):
+ if len(key) != len(self.index):
+ raise ValueError(
+ "Item wrong length %d instead of %d!" % (len(key), len(self.index))
+ )
+ key = check_bool_indexer(self.index, key)
+ indexer = key.nonzero()[0]
+ self._check_setitem_copy()
+ self.loc._setitem_with_indexer(indexer, value)
+ else:
+ if isinstance(value, DataFrame):
+ if len(value.columns) != len(key):
+ raise ValueError("Columns must be same length as key")
+ for k1, k2 in zip(key, value.columns):
+ self[k1] = value[k2]
+ else:
+ indexer = self.loc._get_listlike_indexer(
+ key, axis=1, raise_missing=False
+ )[1]
+ self._check_setitem_copy()
+ self.loc._setitem_with_indexer((slice(None), indexer), value)
+
+ def _setitem_frame(self, key, value):
+ # support boolean setting with DataFrame input, e.g.
+ # df[df > df2] = 0
+ if isinstance(key, np.ndarray):
+ if key.shape != self.shape:
+ raise ValueError("Array conditional must be same shape as self")
+ key = self._constructor(key, **self._construct_axes_dict())
+
+ if key.values.size and not is_bool_dtype(key.values):
+ raise TypeError(
+ "Must pass DataFrame or 2-d ndarray with boolean values only"
+ )
+
+ self._check_inplace_setting(value)
+ self._check_setitem_copy()
+ self._where(-key, value, inplace=True)
+
+ def _set_item(self, key, value):
+ """
+ Add series to DataFrame in specified column.
+
+ If series is a numpy-array (not a Series/TimeSeries), it must be the
+ same length as the DataFrames index or an error will be thrown.
+
+ Series/TimeSeries will be conformed to the DataFrames index to
+ ensure homogeneity.
+ """
+
+ self._ensure_valid_index(value)
+ value = self._sanitize_column(key, value)
+ NDFrame._set_item(self, key, value)
+
+ # check if we are modifying a copy
+ # try to set first as we want an invalid
+ # value exception to occur first
+ if len(self):
+ self._check_setitem_copy()
+
+ def _set_value(self, index, col, value, takeable: bool = False):
+ """
+ Put single value at passed column and index.
+
+ Parameters
+ ----------
+ index : row label
+ col : column label
+ value : scalar
+ takeable : interpret the index/col as indexers, default False
+
+ Returns
+ -------
+ DataFrame
+ If label pair is contained, will be reference to calling DataFrame,
+ otherwise a new object.
+ """
+ try:
+ if takeable is True:
+ series = self._iget_item_cache(col)
+ return series._set_value(index, value, takeable=True)
+
+ series = self._get_item_cache(col)
+ engine = self.index._engine
+ engine.set_value(series._values, index, value)
+ return self
+ except (KeyError, TypeError):
+
+ # set using a non-recursive method & reset the cache
+ if takeable:
+ self.iloc[index, col] = value
+ else:
+ self.loc[index, col] = value
+ self._item_cache.pop(col, None)
+
+ return self
+
+ def _ensure_valid_index(self, value):
+ """
+ Ensure that if we don't have an index, that we can create one from the
+ passed value.
+ """
+ # GH5632, make sure that we are a Series convertible
+ if not len(self.index) and is_list_like(value):
+ try:
+ value = Series(value)
+ except (ValueError, NotImplementedError, TypeError):
+ raise ValueError(
+ "Cannot set a frame with no defined index "
+ "and a value that cannot be converted to a "
+ "Series"
+ )
+
+ self._data = self._data.reindex_axis(
+ value.index.copy(), axis=1, fill_value=np.nan
+ )
+
+ def _box_item_values(self, key, values):
+ items = self.columns[self.columns.get_loc(key)]
+ if values.ndim == 2:
+ return self._constructor(values.T, columns=items, index=self.index)
+ else:
+ return self._box_col_values(values, items)
+
+ def _box_col_values(self, values, items):
+ """
+ Provide boxed values for a column.
+ """
+ klass = self._constructor_sliced
+ return klass(values, index=self.index, name=items, fastpath=True)
+
+ # ----------------------------------------------------------------------
+ # Unsorted
+
def query(self, expr, inplace=False, **kwargs):
"""
Query the columns of a DataFrame with a boolean expression.
@@ -3392,122 +3511,6 @@ def is_dtype_instance_mapper(idx, dtype):
dtype_indexer = include_these & exclude_these
return self.loc[_get_info_slice(self, dtype_indexer)]
- def _box_item_values(self, key, values):
- items = self.columns[self.columns.get_loc(key)]
- if values.ndim == 2:
- return self._constructor(values.T, columns=items, index=self.index)
- else:
- return self._box_col_values(values, items)
-
- def _box_col_values(self, values, items):
- """
- Provide boxed values for a column.
- """
- klass = self._constructor_sliced
- return klass(values, index=self.index, name=items, fastpath=True)
-
- def __setitem__(self, key, value):
- key = com.apply_if_callable(key, self)
-
- # see if we can slice the rows
- indexer = convert_to_index_sliceable(self, key)
- if indexer is not None:
- return self._setitem_slice(indexer, value)
-
- if isinstance(key, DataFrame) or getattr(key, "ndim", None) == 2:
- self._setitem_frame(key, value)
- elif isinstance(key, (Series, np.ndarray, list, Index)):
- self._setitem_array(key, value)
- else:
- # set column
- self._set_item(key, value)
-
- def _setitem_slice(self, key, value):
- self._check_setitem_copy()
- self.loc[key] = value
-
- def _setitem_array(self, key, value):
- # also raises Exception if object array with NA values
- if com.is_bool_indexer(key):
- if len(key) != len(self.index):
- raise ValueError(
- "Item wrong length %d instead of %d!" % (len(key), len(self.index))
- )
- key = check_bool_indexer(self.index, key)
- indexer = key.nonzero()[0]
- self._check_setitem_copy()
- self.loc._setitem_with_indexer(indexer, value)
- else:
- if isinstance(value, DataFrame):
- if len(value.columns) != len(key):
- raise ValueError("Columns must be same length as key")
- for k1, k2 in zip(key, value.columns):
- self[k1] = value[k2]
- else:
- indexer = self.loc._get_listlike_indexer(
- key, axis=1, raise_missing=False
- )[1]
- self._check_setitem_copy()
- self.loc._setitem_with_indexer((slice(None), indexer), value)
-
- def _setitem_frame(self, key, value):
- # support boolean setting with DataFrame input, e.g.
- # df[df > df2] = 0
- if isinstance(key, np.ndarray):
- if key.shape != self.shape:
- raise ValueError("Array conditional must be same shape as self")
- key = self._constructor(key, **self._construct_axes_dict())
-
- if key.values.size and not is_bool_dtype(key.values):
- raise TypeError(
- "Must pass DataFrame or 2-d ndarray with boolean values only"
- )
-
- self._check_inplace_setting(value)
- self._check_setitem_copy()
- self._where(-key, value, inplace=True)
-
- def _ensure_valid_index(self, value):
- """
- Ensure that if we don't have an index, that we can create one from the
- passed value.
- """
- # GH5632, make sure that we are a Series convertible
- if not len(self.index) and is_list_like(value):
- try:
- value = Series(value)
- except (ValueError, NotImplementedError, TypeError):
- raise ValueError(
- "Cannot set a frame with no defined index "
- "and a value that cannot be converted to a "
- "Series"
- )
-
- self._data = self._data.reindex_axis(
- value.index.copy(), axis=1, fill_value=np.nan
- )
-
- def _set_item(self, key, value):
- """
- Add series to DataFrame in specified column.
-
- If series is a numpy-array (not a Series/TimeSeries), it must be the
- same length as the DataFrames index or an error will be thrown.
-
- Series/TimeSeries will be conformed to the DataFrames index to
- ensure homogeneity.
- """
-
- self._ensure_valid_index(value)
- value = self._sanitize_column(key, value)
- NDFrame._set_item(self, key, value)
-
- # check if we are modifying a copy
- # try to set first as we want an invalid
- # value exception to occur first
- if len(self):
- self._check_setitem_copy()
-
def insert(self, loc, column, value, allow_duplicates=False):
"""
Insert column into DataFrame at specified location.
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 9053edf2d1424..db54c3006a6a2 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -3231,41 +3231,8 @@ def _create_indexer(cls, name, indexer):
_indexer = functools.partial(indexer, name)
setattr(cls, name, property(_indexer, doc=indexer.__doc__))
- def get(self, key, default=None):
- """
- Get item from object for given key (ex: DataFrame column).
-
- Returns default value if not found.
-
- Parameters
- ----------
- key : object
-
- Returns
- -------
- value : same type as items contained in object
- """
- try:
- return self[key]
- except (KeyError, ValueError, IndexError):
- return default
-
- def __getitem__(self, item):
- return self._get_item_cache(item)
-
- def _get_item_cache(self, item):
- """Return the cached item, item represents a label indexer."""
- cache = self._item_cache
- res = cache.get(item)
- if res is None:
- values = self._data.get(item)
- res = self._box_item_values(item, values)
- cache[item] = res
- res._set_as_cached(item, self)
-
- # for a chain
- res._is_copy = self._is_copy
- return res
+ # ----------------------------------------------------------------------
+ # Lookup Caching
def _set_as_cached(self, item, cacher):
"""Set the _cacher attribute on the calling object with a weakref to
@@ -3278,18 +3245,6 @@ def _reset_cacher(self):
if hasattr(self, "_cacher"):
del self._cacher
- def _iget_item_cache(self, item):
- """Return the cached item, item represents a positional indexer."""
- ax = self._info_axis
- if ax.is_unique:
- lower = self._get_item_cache(ax[item])
- else:
- lower = self.take(item, axis=self._info_axis_number)
- return lower
-
- def _box_item_values(self, key, values):
- raise AbstractMethodError(self)
-
def _maybe_cache_changed(self, item, value):
"""The object has called back to us saying maybe it has changed.
"""
@@ -3307,11 +3262,6 @@ def _get_cacher(self):
cacher = cacher[1]()
return cacher
- @property
- def _is_view(self):
- """Return boolean indicating if self is view of another array """
- return self._data.is_view
-
def _maybe_update_cacher(self, clear=False, verify_is_copy=True):
"""
See if we need to update our parent cacher if clear, then clear our
@@ -3352,165 +3302,8 @@ def _clear_item_cache(self, i=None):
else:
self._item_cache.clear()
- def _slice(self, slobj, axis=0, kind=None):
- """
- Construct a slice of this container.
-
- kind parameter is maintained for compatibility with Series slicing.
- """
- axis = self._get_block_manager_axis(axis)
- result = self._constructor(self._data.get_slice(slobj, axis=axis))
- result = result.__finalize__(self)
-
- # this could be a view
- # but only in a single-dtyped view sliceable case
- is_copy = axis != 0 or result._is_view
- result._set_is_copy(self, copy=is_copy)
- return result
-
- def _set_item(self, key, value):
- self._data.set(key, value)
- self._clear_item_cache()
-
- def _set_is_copy(self, ref=None, copy=True):
- if not copy:
- self._is_copy = None
- else:
- if ref is not None:
- self._is_copy = weakref.ref(ref)
- else:
- self._is_copy = None
-
- def _check_is_chained_assignment_possible(self):
- """
- Check if we are a view, have a cacher, and are of mixed type.
- If so, then force a setitem_copy check.
-
- Should be called just near setting a value
-
- Will return a boolean if it we are a view and are cached, but a
- single-dtype meaning that the cacher should be updated following
- setting.
- """
- if self._is_view and self._is_cached:
- ref = self._get_cacher()
- if ref is not None and ref._is_mixed_type:
- self._check_setitem_copy(stacklevel=4, t="referant", force=True)
- return True
- elif self._is_copy:
- self._check_setitem_copy(stacklevel=4, t="referant")
- return False
-
- def _check_setitem_copy(self, stacklevel=4, t="setting", force=False):
- """
-
- Parameters
- ----------
- stacklevel : integer, default 4
- the level to show of the stack when the error is output
- t : string, the type of setting error
- force : boolean, default False
- if True, then force showing an error
-
- validate if we are doing a setitem on a chained copy.
-
- If you call this function, be sure to set the stacklevel such that the
- user will see the error *at the level of setting*
-
- It is technically possible to figure out that we are setting on
- a copy even WITH a multi-dtyped pandas object. In other words, some
- blocks may be views while other are not. Currently _is_view will ALWAYS
- return False for multi-blocks to avoid having to handle this case.
-
- df = DataFrame(np.arange(0,9), columns=['count'])
- df['group'] = 'b'
-
- # This technically need not raise SettingWithCopy if both are view
- # (which is not # generally guaranteed but is usually True. However,
- # this is in general not a good practice and we recommend using .loc.
- df.iloc[0:5]['group'] = 'a'
-
- """
-
- # return early if the check is not needed
- if not (force or self._is_copy):
- return
-
- value = config.get_option("mode.chained_assignment")
- if value is None:
- return
-
- # see if the copy is not actually referred; if so, then dissolve
- # the copy weakref
- if self._is_copy is not None and not isinstance(self._is_copy, str):
- r = self._is_copy()
- if not gc.get_referents(r) or r.shape == self.shape:
- self._is_copy = None
- return
-
- # a custom message
- if isinstance(self._is_copy, str):
- t = self._is_copy
-
- elif t == "referant":
- t = (
- "\n"
- "A value is trying to be set on a copy of a slice from a "
- "DataFrame\n\n"
- "See the caveats in the documentation: "
- "http://pandas.pydata.org/pandas-docs/stable/user_guide/"
- "indexing.html#returning-a-view-versus-a-copy"
- )
-
- else:
- t = (
- "\n"
- "A value is trying to be set on a copy of a slice from a "
- "DataFrame.\n"
- "Try using .loc[row_indexer,col_indexer] = value "
- "instead\n\nSee the caveats in the documentation: "
- "http://pandas.pydata.org/pandas-docs/stable/user_guide/"
- "indexing.html#returning-a-view-versus-a-copy"
- )
-
- if value == "raise":
- raise com.SettingWithCopyError(t)
- elif value == "warn":
- warnings.warn(t, com.SettingWithCopyWarning, stacklevel=stacklevel)
-
- def __delitem__(self, key):
- """
- Delete item
- """
- deleted = False
-
- maybe_shortcut = False
- if self.ndim == 2 and isinstance(self.columns, MultiIndex):
- try:
- maybe_shortcut = key not in self.columns._engine
- except TypeError:
- pass
-
- if maybe_shortcut:
- # Allow shorthand to delete all columns whose first len(key)
- # elements match key:
- if not isinstance(key, tuple):
- key = (key,)
- for col in self.columns:
- if isinstance(col, tuple) and col[: len(key)] == key:
- del self[col]
- deleted = True
- if not deleted:
- # If the above loop ran and didn't delete anything because
- # there was no match, this call should raise the appropriate
- # exception:
- self._data.delete(key)
-
- # delete from the caches
- try:
- del self._item_cache[key]
- except KeyError:
- pass
+ # ----------------------------------------------------------------------
+ # Indexing Methods
def take(self, indices, axis=0, is_copy=True, **kwargs):
"""
@@ -3766,6 +3559,222 @@ class animal locomotion
_xs = xs # type: Callable
+ def get(self, key, default=None):
+ """
+ Get item from object for given key (ex: DataFrame column).
+
+ Returns default value if not found.
+
+ Parameters
+ ----------
+ key : object
+
+ Returns
+ -------
+ value : same type as items contained in object
+ """
+ try:
+ return self[key]
+ except (KeyError, ValueError, IndexError):
+ return default
+
+ def __getitem__(self, item):
+ return self._get_item_cache(item)
+
+ def _get_item_cache(self, item):
+ """Return the cached item, item represents a label indexer."""
+ cache = self._item_cache
+ res = cache.get(item)
+ if res is None:
+ values = self._data.get(item)
+ res = self._box_item_values(item, values)
+ cache[item] = res
+ res._set_as_cached(item, self)
+
+ # for a chain
+ res._is_copy = self._is_copy
+ return res
+
+ def _iget_item_cache(self, item):
+ """Return the cached item, item represents a positional indexer."""
+ ax = self._info_axis
+ if ax.is_unique:
+ lower = self._get_item_cache(ax[item])
+ else:
+ lower = self.take(item, axis=self._info_axis_number)
+ return lower
+
+ def _box_item_values(self, key, values):
+ raise AbstractMethodError(self)
+
+ def _slice(self, slobj, axis=0, kind=None):
+ """
+ Construct a slice of this container.
+
+ kind parameter is maintained for compatibility with Series slicing.
+ """
+ axis = self._get_block_manager_axis(axis)
+ result = self._constructor(self._data.get_slice(slobj, axis=axis))
+ result = result.__finalize__(self)
+
+ # this could be a view
+ # but only in a single-dtyped view sliceable case
+ is_copy = axis != 0 or result._is_view
+ result._set_is_copy(self, copy=is_copy)
+ return result
+
+ def _set_item(self, key, value):
+ self._data.set(key, value)
+ self._clear_item_cache()
+
+ def _set_is_copy(self, ref=None, copy=True):
+ if not copy:
+ self._is_copy = None
+ else:
+ if ref is not None:
+ self._is_copy = weakref.ref(ref)
+ else:
+ self._is_copy = None
+
+ def _check_is_chained_assignment_possible(self):
+ """
+ Check if we are a view, have a cacher, and are of mixed type.
+ If so, then force a setitem_copy check.
+
+ Should be called just near setting a value
+
+ Will return a boolean if it we are a view and are cached, but a
+ single-dtype meaning that the cacher should be updated following
+ setting.
+ """
+ if self._is_view and self._is_cached:
+ ref = self._get_cacher()
+ if ref is not None and ref._is_mixed_type:
+ self._check_setitem_copy(stacklevel=4, t="referant", force=True)
+ return True
+ elif self._is_copy:
+ self._check_setitem_copy(stacklevel=4, t="referant")
+ return False
+
+ def _check_setitem_copy(self, stacklevel=4, t="setting", force=False):
+ """
+
+ Parameters
+ ----------
+ stacklevel : integer, default 4
+ the level to show of the stack when the error is output
+ t : string, the type of setting error
+ force : boolean, default False
+ if True, then force showing an error
+
+ validate if we are doing a setitem on a chained copy.
+
+ If you call this function, be sure to set the stacklevel such that the
+ user will see the error *at the level of setting*
+
+ It is technically possible to figure out that we are setting on
+ a copy even WITH a multi-dtyped pandas object. In other words, some
+ blocks may be views while other are not. Currently _is_view will ALWAYS
+ return False for multi-blocks to avoid having to handle this case.
+
+ df = DataFrame(np.arange(0,9), columns=['count'])
+ df['group'] = 'b'
+
+ # This technically need not raise SettingWithCopy if both are view
+ # (which is not # generally guaranteed but is usually True. However,
+ # this is in general not a good practice and we recommend using .loc.
+ df.iloc[0:5]['group'] = 'a'
+
+ """
+
+ # return early if the check is not needed
+ if not (force or self._is_copy):
+ return
+
+ value = config.get_option("mode.chained_assignment")
+ if value is None:
+ return
+
+ # see if the copy is not actually referred; if so, then dissolve
+ # the copy weakref
+ if self._is_copy is not None and not isinstance(self._is_copy, str):
+ r = self._is_copy()
+ if not gc.get_referents(r) or r.shape == self.shape:
+ self._is_copy = None
+ return
+
+ # a custom message
+ if isinstance(self._is_copy, str):
+ t = self._is_copy
+
+ elif t == "referant":
+ t = (
+ "\n"
+ "A value is trying to be set on a copy of a slice from a "
+ "DataFrame\n\n"
+ "See the caveats in the documentation: "
+ "http://pandas.pydata.org/pandas-docs/stable/user_guide/"
+ "indexing.html#returning-a-view-versus-a-copy"
+ )
+
+ else:
+ t = (
+ "\n"
+ "A value is trying to be set on a copy of a slice from a "
+ "DataFrame.\n"
+ "Try using .loc[row_indexer,col_indexer] = value "
+ "instead\n\nSee the caveats in the documentation: "
+ "http://pandas.pydata.org/pandas-docs/stable/user_guide/"
+ "indexing.html#returning-a-view-versus-a-copy"
+ )
+
+ if value == "raise":
+ raise com.SettingWithCopyError(t)
+ elif value == "warn":
+ warnings.warn(t, com.SettingWithCopyWarning, stacklevel=stacklevel)
+
+ def __delitem__(self, key):
+ """
+ Delete item
+ """
+ deleted = False
+
+ maybe_shortcut = False
+ if self.ndim == 2 and isinstance(self.columns, MultiIndex):
+ try:
+ maybe_shortcut = key not in self.columns._engine
+ except TypeError:
+ pass
+
+ if maybe_shortcut:
+ # Allow shorthand to delete all columns whose first len(key)
+ # elements match key:
+ if not isinstance(key, tuple):
+ key = (key,)
+ for col in self.columns:
+ if isinstance(col, tuple) and col[: len(key)] == key:
+ del self[col]
+ deleted = True
+ if not deleted:
+ # If the above loop ran and didn't delete anything because
+ # there was no match, this call should raise the appropriate
+ # exception:
+ self._data.delete(key)
+
+ # delete from the caches
+ try:
+ del self._item_cache[key]
+ except KeyError:
+ pass
+
+ # ----------------------------------------------------------------------
+ # Unsorted
+
+ @property
+ def _is_view(self):
+ """Return boolean indicating if self is view of another array """
+ return self._data.is_view
+
def reindex_like(self, other, method=None, copy=True, limit=None, tolerance=None):
"""
Return an object with matching indices as other object.
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 418b3fc8c57d0..d7c486ccefe1b 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -1030,6 +1030,36 @@ def axes(self):
"""
return [self.index]
+ # ----------------------------------------------------------------------
+ # Indexing Methods
+
+ @Appender(generic.NDFrame.take.__doc__)
+ def take(self, indices, axis=0, is_copy=False, **kwargs):
+ nv.validate_take(tuple(), kwargs)
+
+ indices = ensure_platform_int(indices)
+ new_index = self.index.take(indices)
+
+ if is_categorical_dtype(self):
+ # https://github.com/pandas-dev/pandas/issues/20664
+ # TODO: remove when the default Categorical.take behavior changes
+ indices = maybe_convert_indices(indices, len(self._get_axis(axis)))
+ kwargs = {"allow_fill": False}
+ else:
+ kwargs = {}
+ new_values = self._values.take(indices, **kwargs)
+
+ result = self._constructor(
+ new_values, index=new_index, fastpath=True
+ ).__finalize__(self)
+
+ # Maybe set copy if we didn't actually change the index.
+ if is_copy:
+ if not result._get_axis(axis).equals(self._get_axis(axis)):
+ result._set_is_copy(self)
+
+ return result
+
def _ixs(self, i: int, axis: int = 0):
"""
Return the i-th value or values in the Series by location.
@@ -1050,10 +1080,6 @@ def _ixs(self, i: int, axis: int = 0):
else:
return values[i]
- @property
- def _is_mixed_type(self):
- return False
-
def _slice(self, slobj, axis=0, kind=None):
slobj = self.index._convert_slice_indexer(slobj, kind=kind or "getitem")
return self._get_values(slobj)
@@ -1178,6 +1204,23 @@ def _get_values(self, indexer):
except Exception:
return self._values[indexer]
+ def _get_value(self, label, takeable: bool = False):
+ """
+ Quickly retrieve single value at passed index label.
+
+ Parameters
+ ----------
+ label : object
+ takeable : interpret the index as indexers, default False
+
+ Returns
+ -------
+ scalar value
+ """
+ if takeable:
+ return com.maybe_box_datetimelike(self._values[label])
+ return self.index.get_value(self._values, label)
+
def __setitem__(self, key, value):
key = com.apply_if_callable(key, self)
@@ -1310,6 +1353,46 @@ def _set_values(self, key, value):
self._data = self._data.setitem(indexer=key, value=value)
self._maybe_update_cacher()
+ def _set_value(self, label, value, takeable: bool = False):
+ """
+ Quickly set single value at passed label.
+
+ If label is not contained, a new object is created with the label
+ placed at the end of the result index.
+
+ Parameters
+ ----------
+ label : object
+ Partial indexing with MultiIndex not allowed
+ value : object
+ Scalar value
+ takeable : interpret the index as indexers, default False
+
+ Returns
+ -------
+ Series
+ If label is contained, will be reference to calling Series,
+ otherwise a new object.
+ """
+ try:
+ if takeable:
+ self._values[label] = value
+ else:
+ self.index._engine.set_value(self._values, label, value)
+ except (KeyError, TypeError):
+
+ # set using a non-recursive method
+ self.loc[label] = value
+
+ return self
+
+ # ----------------------------------------------------------------------
+ # Unsorted
+
+ @property
+ def _is_mixed_type(self):
+ return False
+
def repeat(self, repeats, axis=None):
"""
Repeat elements of a Series.
@@ -1367,56 +1450,6 @@ def repeat(self, repeats, axis=None):
new_values = self._values.repeat(repeats)
return self._constructor(new_values, index=new_index).__finalize__(self)
- def _get_value(self, label, takeable: bool = False):
- """
- Quickly retrieve single value at passed index label.
-
- Parameters
- ----------
- label : object
- takeable : interpret the index as indexers, default False
-
- Returns
- -------
- scalar value
- """
- if takeable:
- return com.maybe_box_datetimelike(self._values[label])
- return self.index.get_value(self._values, label)
-
- def _set_value(self, label, value, takeable: bool = False):
- """
- Quickly set single value at passed label.
-
- If label is not contained, a new object is created with the label
- placed at the end of the result index.
-
- Parameters
- ----------
- label : object
- Partial indexing with MultiIndex not allowed
- value : object
- Scalar value
- takeable : interpret the index as indexers, default False
-
- Returns
- -------
- Series
- If label is contained, will be reference to calling Series,
- otherwise a new object.
- """
- try:
- if takeable:
- self._values[label] = value
- else:
- self.index._engine.set_value(self._values, label, value)
- except (KeyError, TypeError):
-
- # set using a non-recursive method
- self.loc[label] = value
-
- return self
-
def reset_index(self, level=None, drop=False, name=None, inplace=False):
"""
Generate a new DataFrame or Series with the index reset.
@@ -4384,33 +4417,6 @@ def memory_usage(self, index=True, deep=False):
v += self.index.memory_usage(deep=deep)
return v
- @Appender(generic.NDFrame.take.__doc__)
- def take(self, indices, axis=0, is_copy=False, **kwargs):
- nv.validate_take(tuple(), kwargs)
-
- indices = ensure_platform_int(indices)
- new_index = self.index.take(indices)
-
- if is_categorical_dtype(self):
- # https://github.com/pandas-dev/pandas/issues/20664
- # TODO: remove when the default Categorical.take behavior changes
- indices = maybe_convert_indices(indices, len(self._get_axis(axis)))
- kwargs = {"allow_fill": False}
- else:
- kwargs = {}
- new_values = self._values.take(indices, **kwargs)
-
- result = self._constructor(
- new_values, index=new_index, fastpath=True
- ).__finalize__(self)
-
- # Maybe set copy if we didn't actually change the index.
- if is_copy:
- if not result._get_axis(axis).equals(self._get_axis(axis)):
- result._set_is_copy(self)
-
- return result
-
def isin(self, values):
"""
Check whether `values` are contained in Series.
diff --git a/pandas/core/sparse/frame.py b/pandas/core/sparse/frame.py
index 2dbf4807cf144..3e44a7f941a86 100644
--- a/pandas/core/sparse/frame.py
+++ b/pandas/core/sparse/frame.py
@@ -447,6 +447,9 @@ def sp_maker(x, index=None):
# always return a SparseArray!
return clean
+ # ----------------------------------------------------------------------
+ # Indexing Methods
+
def _get_value(self, index, col, takeable=False):
"""
Quickly retrieve single value at passed column and index
@@ -470,34 +473,6 @@ def _get_value(self, index, col, takeable=False):
return series._get_value(index, takeable=takeable)
- def _set_value(self, index, col, value, takeable=False):
- """
- Put single value at passed column and index
-
- Please use .at[] or .iat[] accessors.
-
- Parameters
- ----------
- index : row label
- col : column label
- value : scalar value
- takeable : interpret the index/col as indexers, default False
-
- Notes
- -----
- This method *always* returns a new object. It is currently not
- particularly efficient (and potentially very expensive) but is provided
- for API compatibility with DataFrame
-
- Returns
- -------
- frame : DataFrame
- """
- dense = self.to_dense()._set_value(index, col, value, takeable=takeable)
- return dense.to_sparse(
- kind=self._default_kind, fill_value=self._default_fill_value
- )
-
def _slice(self, slobj, axis=0, kind=None):
if axis == 0:
new_index = self.index[slobj]
@@ -529,6 +504,34 @@ def xs(self, key, axis=0, copy=False):
data = self.take([i])._internal_get_values()[0]
return Series(data, index=self.columns)
+ def _set_value(self, index, col, value, takeable=False):
+ """
+ Put single value at passed column and index
+
+ Please use .at[] or .iat[] accessors.
+
+ Parameters
+ ----------
+ index : row label
+ col : column label
+ value : scalar value
+ takeable : interpret the index/col as indexers, default False
+
+ Notes
+ -----
+ This method *always* returns a new object. It is currently not
+ particularly efficient (and potentially very expensive) but is provided
+ for API compatibility with DataFrame
+
+ Returns
+ -------
+ frame : DataFrame
+ """
+ dense = self.to_dense()._set_value(index, col, value, takeable=takeable)
+ return dense.to_sparse(
+ kind=self._default_kind, fill_value=self._default_fill_value
+ )
+
# ----------------------------------------------------------------------
# Arithmetic-related methods
diff --git a/pandas/core/sparse/series.py b/pandas/core/sparse/series.py
index f5d39c47150a2..1ebee1995bc29 100644
--- a/pandas/core/sparse/series.py
+++ b/pandas/core/sparse/series.py
@@ -310,6 +310,9 @@ def _set_subtyp(self, is_all_dates):
else:
object.__setattr__(self, "_subtyp", "sparse_series")
+ # ----------------------------------------------------------------------
+ # Indexing Methods
+
def _ixs(self, i, axis=0):
"""
Return the i-th value or values in the SparseSeries by location
@@ -340,52 +343,6 @@ def __getitem__(self, key):
else:
return super().__getitem__(key)
- def _get_values(self, indexer):
- try:
- return self._constructor(
- self._data.get_slice(indexer), fastpath=True
- ).__finalize__(self)
- except Exception:
- return self[indexer]
-
- def _set_with_engine(self, key, value):
- return self._set_value(key, value)
-
- def abs(self):
- """
- Return an object with absolute value taken. Only applicable to objects
- that are all numeric
-
- Returns
- -------
- abs: same type as caller
- """
- return self._constructor(np.abs(self.values), index=self.index).__finalize__(
- self
- )
-
- def get(self, label, default=None):
- """
- Returns value occupying requested label, default to specified
- missing value if not present. Analogous to dict.get
-
- Parameters
- ----------
- label : object
- Label value looking for
- default : object, optional
- Value to return if label not in index
-
- Returns
- -------
- y : scalar
- """
- if label in self.index:
- loc = self.index.get_loc(label)
- return self._get_val_at(loc)
- else:
- return default
-
def _get_value(self, label, takeable=False):
"""
Retrieve single value at passed index label
@@ -404,6 +361,17 @@ def _get_value(self, label, takeable=False):
loc = label if takeable is True else self.index.get_loc(label)
return self._get_val_at(loc)
+ def _get_values(self, indexer):
+ try:
+ return self._constructor(
+ self._data.get_slice(indexer), fastpath=True
+ ).__finalize__(self)
+ except Exception:
+ return self[indexer]
+
+ def _set_with_engine(self, key, value):
+ return self._set_value(key, value)
+
def _set_value(self, label, value, takeable=False):
"""
Quickly set single value at passed label. If label is not contained, a
@@ -457,6 +425,44 @@ def _set_values(self, key, value):
values = SparseArray(values, fill_value=self.fill_value, kind=self.kind)
self._data = SingleBlockManager(values, self.index)
+ # ----------------------------------------------------------------------
+ # Unsorted
+
+ def abs(self):
+ """
+ Return an object with absolute value taken. Only applicable to objects
+ that are all numeric
+
+ Returns
+ -------
+ abs: same type as caller
+ """
+ return self._constructor(np.abs(self.values), index=self.index).__finalize__(
+ self
+ )
+
+ def get(self, label, default=None):
+ """
+ Returns value occupying requested label, default to specified
+ missing value if not present. Analogous to dict.get
+
+ Parameters
+ ----------
+ label : object
+ Label value looking for
+ default : object, optional
+ Value to return if label not in index
+
+ Returns
+ -------
+ y : scalar
+ """
+ if label in self.index:
+ loc = self.index.get_loc(label)
+ return self._get_val_at(loc)
+ else:
+ return default
+
def to_dense(self):
"""
Convert SparseSeries to a Series.
| No real changes, just putting indexing methods in one place to make subsequent passes a little easier | https://api.github.com/repos/pandas-dev/pandas/pulls/27588 | 2019-07-25T16:21:27Z | 2019-07-25T17:19:30Z | 2019-07-25T17:19:30Z | 2019-07-25T17:31:47Z |
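The refactor above only moves code, but the indexing machinery it relocates (`_get_item_cache` / `_set_as_cached`) relies on a pattern worth seeing in isolation: each column object is memoised per key, and carries a *weak* reference back to its parent so chained-assignment checks can find the cacher without creating a reference cycle. The sketch below uses hypothetical `Parent`/`Child` names and is a minimal illustration of the pattern, not pandas' actual implementation.

```python
import weakref


class Child:
    def __init__(self, value):
        self.value = value
        self._cacher = None  # (key, weakref-to-parent), set when cached


class Parent:
    """Minimal sketch of the NDFrame item-cache pattern (hypothetical names)."""

    def __init__(self, data):
        self._data = data
        self._item_cache = {}

    def __getitem__(self, key):
        res = self._item_cache.get(key)
        if res is None:
            res = Child(self._data[key])
            # Store a weakref back-pointer so the child can locate its
            # cacher later without keeping the parent alive.
            res._cacher = (key, weakref.ref(self))
            self._item_cache[key] = res
        return res

    def _clear_item_cache(self):
        self._item_cache.clear()


p = Parent({"a": 1})
c1 = p["a"]
print(p["a"] is c1)            # repeated lookups hit the cache
p._clear_item_cache()
print(p["a"] is c1)            # after invalidation a fresh child is built
```

Clearing the cache (as `_set_item` does after every write) is what keeps cached children from going stale after the underlying data changes.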
Backport PR #27511 on branch 0.25.x (BUG: display.precision of negative complex numbers) | diff --git a/doc/source/whatsnew/v0.25.1.rst b/doc/source/whatsnew/v0.25.1.rst
index 9007d1c06f197..169968314d70e 100644
--- a/doc/source/whatsnew/v0.25.1.rst
+++ b/doc/source/whatsnew/v0.25.1.rst
@@ -57,6 +57,7 @@ Timezones
Numeric
^^^^^^^
- Bug in :meth:`Series.interpolate` when using a timezone aware :class:`DatetimeIndex` (:issue:`27548`)
+- Bug when printing negative floating point complex numbers would raise an ``IndexError`` (:issue:`27484`)
-
-
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index c15f4ad8e1900..245e41ed16eb2 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -2601,12 +2601,12 @@ def memory_usage(self, index=True, deep=False):
... for t in dtypes])
>>> df = pd.DataFrame(data)
>>> df.head()
- int64 float64 complex128 object bool
- 0 1 1.0 1.0+0.0j 1 True
- 1 1 1.0 1.0+0.0j 1 True
- 2 1 1.0 1.0+0.0j 1 True
- 3 1 1.0 1.0+0.0j 1 True
- 4 1 1.0 1.0+0.0j 1 True
+ int64 float64 complex128 object bool
+ 0 1 1.0 1.000000+0.000000j 1 True
+ 1 1 1.0 1.000000+0.000000j 1 True
+ 2 1 1.0 1.000000+0.000000j 1 True
+ 3 1 1.0 1.000000+0.000000j 1 True
+ 4 1 1.0 1.000000+0.000000j 1 True
>>> df.memory_usage()
Index 128
diff --git a/pandas/io/formats/format.py b/pandas/io/formats/format.py
index 0e8ed7b25d665..11275973e2bb8 100644
--- a/pandas/io/formats/format.py
+++ b/pandas/io/formats/format.py
@@ -5,6 +5,7 @@
from functools import partial
from io import StringIO
+import re
from shutil import get_terminal_size
from unicodedata import east_asian_width
@@ -1584,17 +1585,10 @@ def _trim_zeros_complex(str_complexes, na_rep="NaN"):
Separates the real and imaginary parts from the complex number, and
executes the _trim_zeros_float method on each of those.
"""
-
- def separate_and_trim(str_complex, na_rep):
- num_arr = str_complex.split("+")
- return (
- _trim_zeros_float([num_arr[0]], na_rep)
- + ["+"]
- + _trim_zeros_float([num_arr[1][:-1]], na_rep)
- + ["j"]
- )
-
- return ["".join(separate_and_trim(x, na_rep)) for x in str_complexes]
+ return [
+ "".join(_trim_zeros_float(re.split(r"([j+-])", x), na_rep))
+ for x in str_complexes
+ ]
def _trim_zeros_float(str_floats, na_rep="NaN"):
diff --git a/pandas/tests/io/formats/test_format.py b/pandas/tests/io/formats/test_format.py
index 818bbc566aca8..ad47f714c9550 100644
--- a/pandas/tests/io/formats/test_format.py
+++ b/pandas/tests/io/formats/test_format.py
@@ -1537,7 +1537,7 @@ def test_to_string_float_index(self):
assert result == expected
def test_to_string_complex_float_formatting(self):
- # GH #25514
+ # GH #25514, 25745
with pd.option_context("display.precision", 5):
df = DataFrame(
{
@@ -1545,6 +1545,7 @@ def test_to_string_complex_float_formatting(self):
(0.4467846931321966 + 0.0715185102060818j),
(0.2739442392974528 + 0.23515228785438969j),
(0.26974928742135185 + 0.3250604054898979j),
+ (-1j),
]
}
)
@@ -1552,7 +1553,8 @@ def test_to_string_complex_float_formatting(self):
expected = (
" x\n0 0.44678+0.07152j\n"
"1 0.27394+0.23515j\n"
- "2 0.26975+0.32506j"
+ "2 0.26975+0.32506j\n"
+ "3 -0.00000-1.00000j"
)
assert result == expected
| Backport PR #27511: BUG: display.precision of negative complex numbers | https://api.github.com/repos/pandas-dev/pandas/pulls/27587 | 2019-07-25T15:58:49Z | 2019-07-25T16:55:03Z | 2019-07-25T16:55:03Z | 2019-07-25T16:55:03Z |
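The core of the backported fix is swapping a naive `str.split("+")` for `re.split` with a capturing group. Splitting on a capturing group makes `re.split` keep the delimiters in the result, so the signs of negative real or imaginary parts survive, whereas `"-1.00000j".split("+")` yields a single element and indexing the "imaginary part" raised the `IndexError` reported in the issue. A small sketch of the split step (the surrounding zero-trimming is omitted):

```python
import re


def split_complex(formatted: str):
    # The capturing group ([j+-]) makes re.split keep each delimiter
    # ("+", "-", "j") as its own token, so signs are never lost.
    return re.split(r"([j+-])", formatted)


print(split_complex("0.44678+0.07152j"))
print(split_complex("-0.00000-1.00000j"))
```

Each numeric token can then be zero-trimmed independently and the pieces re-joined with `"".join`, exactly as the new `_trim_zeros_complex` does.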
CLN: Centralised _check_percentile | diff --git a/pandas/core/algorithms.py b/pandas/core/algorithms.py
index c0ed198e200f1..4124936b910e6 100644
--- a/pandas/core/algorithms.py
+++ b/pandas/core/algorithms.py
@@ -248,7 +248,6 @@ def _get_hashtable_algo(values):
def _get_data_algo(values, func_map):
-
if is_categorical_dtype(values):
values = values._values_for_rank()
@@ -299,7 +298,6 @@ def match(to_match, values, na_sentinel=-1):
result = table.lookup(to_match)
if na_sentinel != -1:
-
# replace but return a numpy array
# use a Series because it handles dtype conversions properly
from pandas import Series
@@ -1165,7 +1163,6 @@ def compute(self, method):
# slow method
if n >= len(self.obj):
-
reverse_it = self.keep == "last" or method == "nlargest"
ascending = method == "nsmallest"
slc = np.s_[::-1] if reverse_it else np.s_[:]
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index f1ed3a125f60c..79f3ca6ffab2c 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -32,7 +32,11 @@
deprecate_kwarg,
rewrite_axis_style_signature,
)
-from pandas.util._validators import validate_axis_style_args, validate_bool_kwarg
+from pandas.util._validators import (
+ validate_axis_style_args,
+ validate_bool_kwarg,
+ validate_percentile,
+)
from pandas.core.dtypes.cast import (
cast_scalar_to_array,
@@ -8225,7 +8229,7 @@ def quantile(self, q=0.5, axis=0, numeric_only=True, interpolation="linear"):
C 1 days 12:00:00
Name: 0.5, dtype: object
"""
- self._check_percentile(q)
+ validate_percentile(q)
data = self._get_numeric_data() if numeric_only else self
axis = self._get_axis_number(axis)
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 68308b2f83b60..bbfbea37b4a71 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -31,7 +31,11 @@
from pandas.compat.numpy import function as nv
from pandas.errors import AbstractMethodError
from pandas.util._decorators import Appender, Substitution, rewrite_axis_style_signature
-from pandas.util._validators import validate_bool_kwarg, validate_fillna_kwargs
+from pandas.util._validators import (
+ validate_bool_kwarg,
+ validate_fillna_kwargs,
+ validate_percentile,
+)
from pandas.core.dtypes.cast import maybe_promote, maybe_upcast_putmask
from pandas.core.dtypes.common import (
@@ -10169,7 +10173,7 @@ def describe(self, percentiles=None, include=None, exclude=None):
percentiles = list(percentiles)
# get them all to be in [0, 1]
- self._check_percentile(percentiles)
+ validate_percentile(percentiles)
# median should always be included
if 0.5 not in percentiles:
@@ -10273,21 +10277,6 @@ def describe_1d(data):
d.columns = data.columns.copy()
return d
- def _check_percentile(self, q):
- """
- Validate percentiles (used by describe and quantile).
- """
-
- msg = "percentiles should all be in the interval [0, 1]. Try {0} instead."
- q = np.asarray(q)
- if q.ndim == 0:
- if not 0 <= q <= 1:
- raise ValueError(msg.format(q / 100.0))
- else:
- if not all(0 <= qs <= 1 for qs in q):
- raise ValueError(msg.format(q / 100.0))
- return q
-
_shared_docs[
"pct_change"
] = """
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 922977bc04d63..4ee05b582003b 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -16,7 +16,7 @@
from pandas.compat import PY36
from pandas.compat.numpy import function as nv
from pandas.util._decorators import Appender, Substitution, deprecate
-from pandas.util._validators import validate_bool_kwarg
+from pandas.util._validators import validate_bool_kwarg, validate_percentile
from pandas.core.dtypes.common import (
_is_unorderable_exception,
@@ -2353,7 +2353,7 @@ def quantile(self, q=0.5, interpolation="linear"):
dtype: float64
"""
- self._check_percentile(q)
+ validate_percentile(q)
# We dispatch to DataFrame so that core.internals only has to worry
# about 2D cases.
diff --git a/pandas/util/_validators.py b/pandas/util/_validators.py
index 8d5f9f7749682..f5a472596f58f 100644
--- a/pandas/util/_validators.py
+++ b/pandas/util/_validators.py
@@ -2,8 +2,11 @@
Module that contains many useful utilities
for validating data or function arguments
"""
+from typing import Iterable, Union
import warnings
+import numpy as np
+
from pandas.core.dtypes.common import is_bool
@@ -370,3 +373,35 @@ def validate_fillna_kwargs(value, method, validate_scalar_dict_value=True):
raise ValueError("Cannot specify both 'value' and 'method'.")
return value, method
+
+
+def validate_percentile(q: Union[float, Iterable[float]]) -> np.ndarray:
+ """
+ Validate percentiles (used by describe and quantile).
+
+ This function checks if the given float or iterable of floats is a valid percentile;
+ otherwise a ValueError is raised.
+
+ Parameters
+ ----------
+ q: float or iterable of floats
+ A single percentile or an iterable of percentiles.
+
+ Returns
+ -------
+ ndarray
+ An ndarray of the percentiles if valid.
+
+ Raises
+ ------
+ ValueError if percentiles are not in the given interval ([0, 1]).
+ """
+ msg = "percentiles should all be in the interval [0, 1]. Try {0} instead."
+ q_arr = np.asarray(q)
+ if q_arr.ndim == 0:
+ if not 0 <= q_arr <= 1:
+ raise ValueError(msg.format(q_arr / 100.0))
+ else:
+ if not all(0 <= qs <= 1 for qs in q_arr):
+ raise ValueError(msg.format(q_arr / 100.0))
+ return q_arr
| - Fixes #27559
- Moved the `_check_percentile` method on `NDFrame` to `pandas/util/_validators.py` as
`validate_percentile`.
- Updated the references to `_check_percentile` in `pandas/core/series.py`,
`pandas/core/frame.py`, and `pandas/core/generic.py`.
- [x] closes #27559
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/27584 | 2019-07-25T10:23:11Z | 2019-10-03T17:11:46Z | 2019-10-03T17:11:45Z | 2020-09-07T11:32:12Z |
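The helper added in the diff above reduces to the following self-contained sketch, which mirrors the moved validation logic (scalar and iterable inputs both pass through `np.asarray`):

```python
import numpy as np

def validate_percentile(q):
    # Accept a single percentile or an iterable of percentiles; raise
    # if any value falls outside the closed interval [0, 1].
    msg = "percentiles should all be in the interval [0, 1]."
    q_arr = np.asarray(q)
    if q_arr.ndim == 0:
        if not 0 <= q_arr <= 1:
            raise ValueError(msg)
    elif not all(0 <= qs <= 1 for qs in q_arr):
        raise ValueError(msg)
    return q_arr

validate_percentile(0.5)           # ok: 0-d array
validate_percentile([0.25, 0.75])  # ok: 1-d array
```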
DOC:Update python version support info | diff --git a/doc/source/install.rst b/doc/source/install.rst
index 352b56ebd3020..fc99b458fa0af 100644
--- a/doc/source/install.rst
+++ b/doc/source/install.rst
@@ -15,35 +15,10 @@ Instructions for installing from source,
`PyPI <https://pypi.org/project/pandas>`__, `ActivePython <https://www.activestate.com/activepython/downloads>`__, various Linux distributions, or a
`development version <http://github.com/pandas-dev/pandas>`__ are also provided.
-.. _install.dropping-27:
-
-Plan for dropping Python 2.7
-----------------------------
-
-The Python core team plans to stop supporting Python 2.7 on January 1st, 2020.
-In line with `NumPy's plans`_, all pandas releases through December 31, 2018
-will support Python 2.
-
-The 0.24.x feature release will be the last release to
-support Python 2. The released package will continue to be available on
-PyPI and through conda.
-
- Starting **January 1, 2019**, all new feature releases (> 0.24) will be Python 3 only.
-
-If there are people interested in continued support for Python 2.7 past December
-31, 2018 (either backporting bug fixes or funding) please reach out to the
-maintainers on the issue tracker.
-
-For more information, see the `Python 3 statement`_ and the `Porting to Python 3 guide`_.
-
-.. _NumPy's plans: https://github.com/numpy/numpy/blob/master/doc/neps/nep-0014-dropping-python2.7-proposal.rst#plan-for-dropping-python-27-support
-.. _Python 3 statement: http://python3statement.org/
-.. _Porting to Python 3 guide: https://docs.python.org/3/howto/pyporting.html
-
Python version support
----------------------
-Officially Python 2.7, 3.5, 3.6, and 3.7.
+Officially Python 3.5.3 and above, 3.6, and 3.7.
Installing pandas
-----------------
diff --git a/doc/source/whatsnew/v0.23.0.rst b/doc/source/whatsnew/v0.23.0.rst
index 62cf977d8c8ac..f4c283ea742f7 100644
--- a/doc/source/whatsnew/v0.23.0.rst
+++ b/doc/source/whatsnew/v0.23.0.rst
@@ -31,7 +31,7 @@ Check the :ref:`API Changes <whatsnew_0230.api_breaking>` and :ref:`deprecations
.. warning::
Starting January 1, 2019, pandas feature releases will support Python 3 only.
- See :ref:`install.dropping-27` for more.
+ See `Dropping Python 2.7 <https://pandas.pydata.org/pandas-docs/version/0.24/install.html#install-dropping-27>`_ for more.
.. contents:: What's new in v0.23.0
:local:
diff --git a/doc/source/whatsnew/v0.23.1.rst b/doc/source/whatsnew/v0.23.1.rst
index d730a57a01a60..03b7d9db6bc63 100644
--- a/doc/source/whatsnew/v0.23.1.rst
+++ b/doc/source/whatsnew/v0.23.1.rst
@@ -12,7 +12,7 @@ and bug fixes. We recommend that all users upgrade to this version.
.. warning::
Starting January 1, 2019, pandas feature releases will support Python 3 only.
- See :ref:`install.dropping-27` for more.
+ See `Dropping Python 2.7 <https://pandas.pydata.org/pandas-docs/version/0.24/install.html#install-dropping-27>`_ for more.
.. contents:: What's new in v0.23.1
:local:
diff --git a/doc/source/whatsnew/v0.23.2.rst b/doc/source/whatsnew/v0.23.2.rst
index df8cc12e3385e..9f24092d1d4ae 100644
--- a/doc/source/whatsnew/v0.23.2.rst
+++ b/doc/source/whatsnew/v0.23.2.rst
@@ -17,7 +17,7 @@ and bug fixes. We recommend that all users upgrade to this version.
.. warning::
Starting January 1, 2019, pandas feature releases will support Python 3 only.
- See :ref:`install.dropping-27` for more.
+ See `Dropping Python 2.7 <https://pandas.pydata.org/pandas-docs/version/0.24/install.html#install-dropping-27>`_ for more.
.. contents:: What's new in v0.23.2
:local:
diff --git a/doc/source/whatsnew/v0.23.4.rst b/doc/source/whatsnew/v0.23.4.rst
index 060d1fc8eba34..eadac6f569926 100644
--- a/doc/source/whatsnew/v0.23.4.rst
+++ b/doc/source/whatsnew/v0.23.4.rst
@@ -12,7 +12,7 @@ and bug fixes. We recommend that all users upgrade to this version.
.. warning::
Starting January 1, 2019, pandas feature releases will support Python 3 only.
- See :ref:`install.dropping-27` for more.
+ See `Dropping Python 2.7 <https://pandas.pydata.org/pandas-docs/version/0.24/install.html#install-dropping-27>`_ for more.
.. contents:: What's new in v0.23.4
:local:
diff --git a/doc/source/whatsnew/v0.24.0.rst b/doc/source/whatsnew/v0.24.0.rst
index a66056f661de3..d9f41d2a75116 100644
--- a/doc/source/whatsnew/v0.24.0.rst
+++ b/doc/source/whatsnew/v0.24.0.rst
@@ -6,7 +6,7 @@ What's new in 0.24.0 (January 25, 2019)
.. warning::
The 0.24.x series of releases will be the last to support Python 2. Future feature
- releases will support Python 3 only. See :ref:`install.dropping-27` for more
+ releases will support Python 3 only. See `Dropping Python 2.7 <https://pandas.pydata.org/pandas-docs/version/0.24/install.html#install-dropping-27>`_ for more
details.
{{ header }}
diff --git a/doc/source/whatsnew/v0.24.1.rst b/doc/source/whatsnew/v0.24.1.rst
index 1b0232cad7476..aead8c48eb9b7 100644
--- a/doc/source/whatsnew/v0.24.1.rst
+++ b/doc/source/whatsnew/v0.24.1.rst
@@ -6,7 +6,7 @@ Whats new in 0.24.1 (February 3, 2019)
.. warning::
The 0.24.x series of releases will be the last to support Python 2. Future feature
- releases will support Python 3 only. See :ref:`install.dropping-27` for more.
+ releases will support Python 3 only. See `Dropping Python 2.7 <https://pandas.pydata.org/pandas-docs/version/0.24/install.html#install-dropping-27>`_ for more.
{{ header }}
diff --git a/doc/source/whatsnew/v0.24.2.rst b/doc/source/whatsnew/v0.24.2.rst
index da8064893e8a8..d1a893f99cff4 100644
--- a/doc/source/whatsnew/v0.24.2.rst
+++ b/doc/source/whatsnew/v0.24.2.rst
@@ -6,7 +6,7 @@ Whats new in 0.24.2 (March 12, 2019)
.. warning::
The 0.24.x series of releases will be the last to support Python 2. Future feature
- releases will support Python 3 only. See :ref:`install.dropping-27` for more.
+ releases will support Python 3 only. See `Dropping Python 2.7 <https://pandas.pydata.org/pandas-docs/version/0.24/install.html#install-dropping-27>`_ for more.
{{ header }}
diff --git a/doc/source/whatsnew/v0.25.0.rst b/doc/source/whatsnew/v0.25.0.rst
index 42e756635e739..5b8f980d27b9d 100644
--- a/doc/source/whatsnew/v0.25.0.rst
+++ b/doc/source/whatsnew/v0.25.0.rst
@@ -6,7 +6,7 @@ What's new in 0.25.0 (July 18, 2019)
.. warning::
Starting with the 0.25.x series of releases, pandas only supports Python 3.5.3 and higher.
- See :ref:`install.dropping-27` for more details.
+ See `Dropping Python 2.7 <https://pandas.pydata.org/pandas-docs/version/0.24/install.html#install-dropping-27>`_ for more details.
.. warning::
diff --git a/doc/source/whatsnew/v1.0.0.rst b/doc/source/whatsnew/v1.0.0.rst
index 7af4795fbc3e8..70c948eb29d65 100644
--- a/doc/source/whatsnew/v1.0.0.rst
+++ b/doc/source/whatsnew/v1.0.0.rst
@@ -6,7 +6,7 @@ What's new in 1.0.0 (??)
.. warning::
Starting with the 0.25.x series of releases, pandas only supports Python 3.5.3 and higher.
- See :ref:`install.dropping-27` for more details.
+ See `Dropping Python 2.7 <https://pandas.pydata.org/pandas-docs/version/0.24/install.html#install-dropping-27>`_ for more details.
.. warning::
| Updated the Python version support info: the minimum supported Python 3.5 release is now documented as 3.5.3, and the Python 2.7 support section has been removed.
closes #27558
| https://api.github.com/repos/pandas-dev/pandas/pulls/27580 | 2019-07-25T06:05:45Z | 2019-08-01T12:13:48Z | 2019-08-01T12:13:48Z | 2019-08-21T21:09:14Z |
TYPING: add type hints to pandas\io\formats\printing.py | diff --git a/pandas/io/formats/printing.py b/pandas/io/formats/printing.py
index 4958d8246610e..4ec9094ce4abe 100644
--- a/pandas/io/formats/printing.py
+++ b/pandas/io/formats/printing.py
@@ -3,13 +3,14 @@
"""
import sys
+from typing import Callable, Dict, Iterable, List, Optional, Sequence, Tuple, Union
from pandas._config import get_option
from pandas.core.dtypes.inference import is_sequence
-def adjoin(space, *lists, **kwargs):
+def adjoin(space: int, *lists: List[str], **kwargs) -> str:
"""
Glues together two sets of strings using the amount of space requested.
The idea is to prettify.
@@ -40,11 +41,11 @@ def adjoin(space, *lists, **kwargs):
newLists.append(nl)
toJoin = zip(*newLists)
for lines in toJoin:
- out_lines.append(_join_unicode(lines))
- return _join_unicode(out_lines, sep="\n")
+ out_lines.append("".join(lines))
+ return "\n".join(out_lines)
-def justify(texts, max_len, mode="right"):
+def justify(texts: Iterable[str], max_len: int, mode: str = "right") -> List[str]:
"""
Perform ljust, center, rjust against string or list-like
"""
@@ -56,14 +57,6 @@ def justify(texts, max_len, mode="right"):
return [x.rjust(max_len) for x in texts]
-def _join_unicode(lines, sep=""):
- try:
- return sep.join(lines)
- except UnicodeDecodeError:
- sep = str(sep)
- return sep.join([x.decode("utf-8") if isinstance(x, str) else x for x in lines])
-
-
# Unicode consolidation
# ---------------------
#
@@ -88,7 +81,9 @@ def _join_unicode(lines, sep=""):
# working with straight ascii.
-def _pprint_seq(seq, _nest_lvl=0, max_seq_items=None, **kwds):
+def _pprint_seq(
+ seq: Sequence, _nest_lvl: int = 0, max_seq_items: Optional[int] = None, **kwds
+) -> str:
"""
internal. pprinter for iterables. you should probably use pprint_thing()
rather then calling this directly.
@@ -121,7 +116,9 @@ def _pprint_seq(seq, _nest_lvl=0, max_seq_items=None, **kwds):
return fmt.format(body=body)
-def _pprint_dict(seq, _nest_lvl=0, max_seq_items=None, **kwds):
+def _pprint_dict(
+ seq: Dict, _nest_lvl: int = 0, max_seq_items: Optional[int] = None, **kwds
+) -> str:
"""
internal. pprinter for iterables. you should probably use pprint_thing()
rather then calling this directly.
@@ -152,12 +149,12 @@ def _pprint_dict(seq, _nest_lvl=0, max_seq_items=None, **kwds):
def pprint_thing(
thing,
- _nest_lvl=0,
- escape_chars=None,
- default_escapes=False,
- quote_strings=False,
- max_seq_items=None,
-):
+ _nest_lvl: int = 0,
+ escape_chars: Optional[Union[Dict[str, str], Iterable[str]]] = None,
+ default_escapes: bool = False,
+ quote_strings: bool = False,
+ max_seq_items: Optional[int] = None,
+) -> str:
"""
This function is the sanctioned way of converting objects
to a unicode representation.
@@ -234,12 +231,14 @@ def as_escaped_unicode(thing, escape_chars=escape_chars):
return str(result) # always unicode
-def pprint_thing_encoded(object, encoding="utf-8", errors="replace", **kwds):
+def pprint_thing_encoded(
+ object, encoding: str = "utf-8", errors: str = "replace"
+) -> bytes:
value = pprint_thing(object) # get unicode representation of object
- return value.encode(encoding, errors, **kwds)
+ return value.encode(encoding, errors)
-def _enable_data_resource_formatter(enable):
+def _enable_data_resource_formatter(enable: bool) -> None:
if "IPython" not in sys.modules:
# definitely not in IPython
return
@@ -279,12 +278,12 @@ class TableSchemaFormatter(BaseFormatter):
def format_object_summary(
obj,
- formatter,
- is_justify=True,
- name=None,
- indent_for_name=True,
- line_break_each_value=False,
-):
+ formatter: Callable,
+ is_justify: bool = True,
+ name: Optional[str] = None,
+ indent_for_name: bool = True,
+ line_break_each_value: bool = False,
+) -> str:
"""
Return the formatted obj as a unicode string
@@ -448,7 +447,9 @@ def best_len(values):
return summary
-def _justify(head, tail):
+def _justify(
+ head: List[Sequence[str]], tail: List[Sequence[str]]
+) -> Tuple[List[Tuple[str, ...]], List[Tuple[str, ...]]]:
"""
Justify items in head and tail, so they are right-aligned when stacked.
@@ -484,10 +485,16 @@ def _justify(head, tail):
tail = [
tuple(x.rjust(max_len) for x, max_len in zip(seq, max_length)) for seq in tail
]
- return head, tail
+ # https://github.com/python/mypy/issues/4975
+ # error: Incompatible return value type (got "Tuple[List[Sequence[str]],
+ # List[Sequence[str]]]", expected "Tuple[List[Tuple[str, ...]],
+ # List[Tuple[str, ...]]]")
+ return head, tail # type: ignore
-def format_object_attrs(obj, include_dtype=True):
+def format_object_attrs(
+ obj: Sequence, include_dtype: bool = True
+) -> List[Tuple[str, Union[str, int]]]:
"""
Return a list of tuples of the (attr, formatted_value)
for common attrs, including dtype, name, length
@@ -501,16 +508,20 @@ def format_object_attrs(obj, include_dtype=True):
Returns
-------
- list
+ list of 2-tuple
"""
- attrs = []
+ attrs = [] # type: List[Tuple[str, Union[str, int]]]
if hasattr(obj, "dtype") and include_dtype:
- attrs.append(("dtype", "'{}'".format(obj.dtype)))
+ # error: "Sequence[Any]" has no attribute "dtype"
+ attrs.append(("dtype", "'{}'".format(obj.dtype))) # type: ignore
if getattr(obj, "name", None) is not None:
- attrs.append(("name", default_pprint(obj.name)))
- elif getattr(obj, "names", None) is not None and any(obj.names):
- attrs.append(("names", default_pprint(obj.names)))
+ # error: "Sequence[Any]" has no attribute "name"
+ attrs.append(("name", default_pprint(obj.name))) # type: ignore
+ # error: "Sequence[Any]" has no attribute "names"
+ elif getattr(obj, "names", None) is not None and any(obj.names): # type: ignore
+ # error: "Sequence[Any]" has no attribute "names"
+ attrs.append(("names", default_pprint(obj.names))) # type: ignore
max_seq_items = get_option("display.max_seq_items") or len(obj)
if len(obj) > max_seq_items:
attrs.append(("length", len(obj)))
| https://api.github.com/repos/pandas-dev/pandas/pulls/27579 | 2019-07-25T03:15:04Z | 2019-07-25T17:38:53Z | 2019-07-25T17:38:53Z | 2019-07-26T03:18:57Z | |
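The newly typed `justify` in the diff above is short enough to restate in full; it pads every string in the input to `max_len` using the requested alignment:

```python
def justify(texts, max_len, mode="right"):
    # Pad each string to max_len via ljust, center, or rjust (default).
    if mode == "left":
        return [x.ljust(max_len) for x in texts]
    elif mode == "center":
        return [x.center(max_len) for x in texts]
    else:
        return [x.rjust(max_len) for x in texts]

print(justify(["a", "bb"], 4))          # ['   a', '  bb']
print(justify(["a", "bb"], 4, "left"))  # ['a   ', 'bb  ']
```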
CLN: pandas\io\formats\format.py | diff --git a/pandas/core/indexes/timedeltas.py b/pandas/core/indexes/timedeltas.py
index 1c2a8c4f0c10c..1fc1684335982 100644
--- a/pandas/core/indexes/timedeltas.py
+++ b/pandas/core/indexes/timedeltas.py
@@ -299,9 +299,11 @@ def _formatter_func(self):
def _format_native_types(self, na_rep="NaT", date_format=None, **kwargs):
from pandas.io.formats.format import Timedelta64Formatter
- return Timedelta64Formatter(
- values=self, nat_rep=na_rep, justify="all"
- ).get_result()
+ return np.asarray(
+ Timedelta64Formatter(
+ values=self, nat_rep=na_rep, justify="all"
+ ).get_result()
+ )
# -------------------------------------------------------------------
# Wrapping TimedeltaArray
diff --git a/pandas/io/formats/format.py b/pandas/io/formats/format.py
index 1aa9b43197b30..c0cbda54ae88e 100644
--- a/pandas/io/formats/format.py
+++ b/pandas/io/formats/format.py
@@ -25,7 +25,6 @@
from dateutil.tz.tz import tzutc
from dateutil.zoneinfo import tzfile
import numpy as np
-from numpy import float64, int32, ndarray
from pandas._config.config import get_option, set_option
@@ -1124,7 +1123,7 @@ def __init__(
self.fixed_width = fixed_width
self.leading_space = leading_space
- def get_result(self) -> Union[ndarray, List[str]]:
+ def get_result(self) -> List[str]:
fmt_values = self._format_strings()
return _make_fixed_width(fmt_values, self.justify)
@@ -1260,7 +1259,7 @@ def formatter(value):
return formatter
- def get_result_as_array(self) -> Union[ndarray, List[str]]:
+ def get_result_as_array(self) -> np.ndarray:
"""
Returns the float values converted into strings using
the parameters given at initialisation, as a numpy array
@@ -1300,9 +1299,10 @@ def format_values_with(float_format):
if self.fixed_width:
if is_complex:
- return _trim_zeros_complex(values, na_rep)
+ result = _trim_zeros_complex(values, na_rep)
else:
- return _trim_zeros_float(values, na_rep)
+ result = _trim_zeros_float(values, na_rep)
+ return np.asarray(result, dtype="object")
return values
@@ -1367,7 +1367,7 @@ def _format_strings(self) -> List[str]:
class Datetime64Formatter(GenericArrayFormatter):
def __init__(
self,
- values: Union[ndarray, "Series", DatetimeIndex, DatetimeArray],
+ values: Union[np.ndarray, "Series", DatetimeIndex, DatetimeArray],
nat_rep: str = "NaT",
date_format: None = None,
**kwargs
@@ -1424,7 +1424,7 @@ def _format_strings(self) -> List[str]:
def format_percentiles(
percentiles: Union[
- ndarray, List[Union[int, float]], List[float], List[Union[str, float]]
+ np.ndarray, List[Union[int, float]], List[float], List[Union[str, float]]
]
) -> List[str]:
"""
@@ -1492,7 +1492,9 @@ def format_percentiles(
return [i + "%" for i in out]
-def _is_dates_only(values: Union[ndarray, DatetimeArray, Index, DatetimeIndex]) -> bool:
+def _is_dates_only(
+ values: Union[np.ndarray, DatetimeArray, Index, DatetimeIndex]
+) -> bool:
# return a boolean if we are only dates (and don't have a timezone)
assert values.ndim == 1
@@ -1556,7 +1558,7 @@ def _get_format_datetime64(
def _get_format_datetime64_from_values(
- values: Union[ndarray, DatetimeArray, DatetimeIndex], date_format: Optional[str]
+ values: Union[np.ndarray, DatetimeArray, DatetimeIndex], date_format: Optional[str]
) -> Optional[str]:
""" given values and a date_format, return a string format """
@@ -1588,7 +1590,7 @@ def _format_strings(self) -> List[str]:
class Timedelta64Formatter(GenericArrayFormatter):
def __init__(
self,
- values: Union[ndarray, TimedeltaIndex],
+ values: Union[np.ndarray, TimedeltaIndex],
nat_rep: str = "NaT",
box: bool = False,
**kwargs
@@ -1597,16 +1599,15 @@ def __init__(
self.nat_rep = nat_rep
self.box = box
- def _format_strings(self) -> ndarray:
+ def _format_strings(self) -> List[str]:
formatter = self.formatter or _get_format_timedelta64(
self.values, nat_rep=self.nat_rep, box=self.box
)
- fmt_values = np.array([formatter(x) for x in self.values])
- return fmt_values
+ return [formatter(x) for x in self.values]
def _get_format_timedelta64(
- values: Union[ndarray, TimedeltaIndex, TimedeltaArray],
+ values: Union[np.ndarray, TimedeltaIndex, TimedeltaArray],
nat_rep: str = "NaT",
box: bool = False,
) -> Callable:
@@ -1651,11 +1652,11 @@ def _formatter(x):
def _make_fixed_width(
- strings: Union[ndarray, List[str]],
+ strings: List[str],
justify: str = "right",
minimum: Optional[int] = None,
adj: Optional[TextAdjustment] = None,
-) -> Union[ndarray, List[str]]:
+) -> List[str]:
if len(strings) == 0 or justify == "all":
return strings
@@ -1683,7 +1684,7 @@ def just(x):
return result
-def _trim_zeros_complex(str_complexes: ndarray, na_rep: str = "NaN") -> List[str]:
+def _trim_zeros_complex(str_complexes: np.ndarray, na_rep: str = "NaN") -> List[str]:
"""
Separates the real and imaginary parts from the complex number, and
executes the _trim_zeros_float method on each of those.
@@ -1702,7 +1703,7 @@ def separate_and_trim(str_complex, na_rep):
def _trim_zeros_float(
- str_floats: Union[ndarray, List[str]], na_rep: str = "NaN"
+ str_floats: Union[np.ndarray, List[str]], na_rep: str = "NaN"
) -> List[str]:
"""
Trims zeros, leaving just one before the decimal points if need be.
@@ -1766,7 +1767,7 @@ def __init__(self, accuracy: Optional[int] = None, use_eng_prefix: bool = False)
self.accuracy = accuracy
self.use_eng_prefix = use_eng_prefix
- def __call__(self, num: Union[float64, int, float]) -> str:
+ def __call__(self, num: Union[int, float]) -> str:
""" Formats a number in engineering notation, appending a letter
representing the power of 1000 of the original number. Some examples:
@@ -1846,7 +1847,7 @@ def set_eng_float_format(accuracy: int = 3, use_eng_prefix: bool = False) -> Non
set_option("display.column_space", max(12, accuracy + 9))
-def _binify(cols: List[int32], line_width: Union[int32, int]) -> List[int]:
+def _binify(cols: List[np.int32], line_width: Union[np.int32, int]) -> List[int]:
adjoin_width = 1
bins = []
curr_width = 0
| xref https://github.com/pandas-dev/pandas/pull/27512#discussion_r306561575 | https://api.github.com/repos/pandas-dev/pandas/pulls/27577 | 2019-07-24T22:46:22Z | 2019-07-25T17:10:23Z | 2019-07-25T17:10:23Z | 2019-07-26T03:15:13Z |
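The `_trim_zeros_float` helper referenced in the diff above is only re-typed here, not shown. As a rough illustration of what such trimming does (the real pandas version also trims uniformly across an array so a column stays aligned), a hypothetical single-string sketch might look like:

```python
def trim_trailing_zeros(s):
    # Hypothetical helper: drop trailing zeros after the decimal point,
    # but keep at least one digit so "2.000" becomes "2.0", not "2.".
    if "." not in s:
        return s
    s = s.rstrip("0")
    return s + "0" if s.endswith(".") else s

print(trim_trailing_zeros("1.500"))  # 1.5
print(trim_trailing_zeros("2.000"))  # 2.0
```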
CLN: Prune unnecessary indexing code | diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 34025d942b377..cdbe0e9d22eb4 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -94,12 +94,9 @@
)
from pandas.core.indexes import base as ibase
from pandas.core.indexes.datetimes import DatetimeIndex
+from pandas.core.indexes.multi import maybe_droplevels
from pandas.core.indexes.period import PeriodIndex
-from pandas.core.indexing import (
- check_bool_indexer,
- convert_to_index_sliceable,
- maybe_droplevels,
-)
+from pandas.core.indexing import check_bool_indexer, convert_to_index_sliceable
from pandas.core.internals import BlockManager
from pandas.core.internals.construction import (
arrays_to_mgr,
@@ -2794,8 +2791,6 @@ def _ixs(self, i: int, axis: int = 0):
if axis == 0:
label = self.index[i]
new_values = self._data.fast_xs(i)
- if is_scalar(new_values):
- return new_values
# if we are a copy, mark as such
copy = isinstance(new_values, np.ndarray) and new_values.base is None
@@ -2906,6 +2901,7 @@ def _getitem_bool_array(self, key):
return self.take(indexer, axis=0)
def _getitem_multilevel(self, key):
+ # self.columns is a MultiIndex
loc = self.columns.get_loc(key)
if isinstance(loc, (slice, Series, np.ndarray, Index)):
new_columns = self.columns[loc]
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index c0d4baef6cebe..b900e1e82255d 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -3296,11 +3296,8 @@ def _maybe_update_cacher(self, clear=False, verify_is_copy=True):
if clear:
self._clear_item_cache()
- def _clear_item_cache(self, i=None):
- if i is not None:
- self._item_cache.pop(i, None)
- else:
- self._item_cache.clear()
+ def _clear_item_cache(self):
+ self._item_cache.clear()
# ----------------------------------------------------------------------
# Indexing Methods
@@ -3559,27 +3556,8 @@ class animal locomotion
_xs = xs # type: Callable
- def get(self, key, default=None):
- """
- Get item from object for given key (ex: DataFrame column).
-
- Returns default value if not found.
-
- Parameters
- ----------
- key : object
-
- Returns
- -------
- value : same type as items contained in object
- """
- try:
- return self[key]
- except (KeyError, ValueError, IndexError):
- return default
-
def __getitem__(self, item):
- return self._get_item_cache(item)
+ raise AbstractMethodError(self)
def _get_item_cache(self, item):
"""Return the cached item, item represents a label indexer."""
@@ -3770,6 +3748,25 @@ def __delitem__(self, key):
# ----------------------------------------------------------------------
# Unsorted
+ def get(self, key, default=None):
+ """
+ Get item from object for given key (ex: DataFrame column).
+
+ Returns default value if not found.
+
+ Parameters
+ ----------
+ key : object
+
+ Returns
+ -------
+ value : same type as items contained in object
+ """
+ try:
+ return self[key]
+ except (KeyError, ValueError, IndexError):
+ return default
+
@property
def _is_view(self):
"""Return boolean indicating if self is view of another array """
diff --git a/pandas/core/indexes/multi.py b/pandas/core/indexes/multi.py
index 0123230bc0f63..a7c3449615299 100644
--- a/pandas/core/indexes/multi.py
+++ b/pandas/core/indexes/multi.py
@@ -1470,9 +1470,6 @@ def dropna(self, how="any"):
return self.copy(codes=new_codes, deep=True)
def get_value(self, series, key):
- # somewhat broken encapsulation
- from pandas.core.indexing import maybe_droplevels
-
# Label-based
s = com.values_from_object(series)
k = com.values_from_object(key)
@@ -2709,7 +2706,7 @@ def _maybe_to_slice(loc):
return _maybe_to_slice(loc) if len(loc) != stop - start else slice(start, stop)
- def get_loc_level(self, key, level=0, drop_level=True):
+ def get_loc_level(self, key, level=0, drop_level: bool = True):
"""
Get both the location for the requested label(s) and the
resulting sliced index.
@@ -2750,7 +2747,8 @@ def get_loc_level(self, key, level=0, drop_level=True):
(1, None)
"""
- def maybe_droplevels(indexer, levels, drop_level):
+ # different name to distinguish from maybe_droplevels
+ def maybe_mi_droplevels(indexer, levels, drop_level: bool):
if not drop_level:
return self[indexer]
# kludgearound
@@ -2780,7 +2778,7 @@ def maybe_droplevels(indexer, levels, drop_level):
result = loc if result is None else result & loc
- return result, maybe_droplevels(result, level, drop_level)
+ return result, maybe_mi_droplevels(result, level, drop_level)
level = self._get_level_number(level)
@@ -2793,7 +2791,7 @@ def maybe_droplevels(indexer, levels, drop_level):
try:
if key in self.levels[0]:
indexer = self._get_level_indexer(key, level=level)
- new_index = maybe_droplevels(indexer, [0], drop_level)
+ new_index = maybe_mi_droplevels(indexer, [0], drop_level)
return indexer, new_index
except TypeError:
pass
@@ -2808,7 +2806,7 @@ def partial_selection(key, indexer=None):
ilevels = [
i for i in range(len(key)) if key[i] != slice(None, None)
]
- return indexer, maybe_droplevels(indexer, ilevels, drop_level)
+ return indexer, maybe_mi_droplevels(indexer, ilevels, drop_level)
if len(key) == self.nlevels and self.is_unique:
# Complete key in unique index -> standard get_loc
@@ -2843,10 +2841,10 @@ def partial_selection(key, indexer=None):
if indexer is None:
indexer = slice(None, None)
ilevels = [i for i in range(len(key)) if key[i] != slice(None, None)]
- return indexer, maybe_droplevels(indexer, ilevels, drop_level)
+ return indexer, maybe_mi_droplevels(indexer, ilevels, drop_level)
else:
indexer = self._get_level_indexer(key, level=level)
- return indexer, maybe_droplevels(indexer, [level], drop_level)
+ return indexer, maybe_mi_droplevels(indexer, [level], drop_level)
def _get_level_indexer(self, key, level=0, indexer=None):
# return an indexer, boolean array or a slice showing where the key is
@@ -3464,3 +3462,34 @@ def _sparsify(label_list, start=0, sentinel=""):
def _get_na_rep(dtype):
return {np.datetime64: "NaT", np.timedelta64: "NaT"}.get(dtype, "NaN")
+
+
+def maybe_droplevels(index, key):
+ """
+ Attempt to drop level or levels from the given index.
+
+ Parameters
+ ----------
+ index: Index
+ key : scalar or tuple
+
+ Returns
+ -------
+ Index
+ """
+ # drop levels
+ original_index = index
+ if isinstance(key, tuple):
+ for _ in key:
+ try:
+ index = index.droplevel(0)
+ except ValueError:
+ # we have dropped too much, so back out
+ return original_index
+ else:
+ try:
+ index = index.droplevel(0)
+ except ValueError:
+ pass
+
+ return index
diff --git a/pandas/core/indexing.py b/pandas/core/indexing.py
index 1d4ea54ef0d70..cb33044c4e23f 100755
--- a/pandas/core/indexing.py
+++ b/pandas/core/indexing.py
@@ -226,7 +226,7 @@ def _validate_key(self, key, axis: int):
def _has_valid_tuple(self, key: Tuple):
""" check the key for valid keys across my indexer """
for i, k in enumerate(key):
- if i >= self.obj.ndim:
+ if i >= self.ndim:
raise IndexingError("Too many indexers")
try:
self._validate_key(k, i)
@@ -254,7 +254,7 @@ def _convert_tuple(self, key, is_setter: bool = False):
keyidx.append(slice(None))
else:
for i, k in enumerate(key):
- if i >= self.obj.ndim:
+ if i >= self.ndim:
raise IndexingError("Too many indexers")
idx = self._convert_to_indexer(k, axis=i, is_setter=is_setter)
keyidx.append(idx)
@@ -286,7 +286,7 @@ def _has_valid_positional_setitem_indexer(self, indexer):
raise IndexError("{0} cannot enlarge its target object".format(self.name))
else:
if not isinstance(indexer, tuple):
- indexer = self._tuplify(indexer)
+ indexer = _tuplify(self.ndim, indexer)
for ax, i in zip(self.obj.axes, indexer):
if isinstance(i, slice):
# should check the stop slice?
@@ -401,7 +401,7 @@ def _setitem_with_indexer(self, indexer, value):
assert info_axis == 1
if not isinstance(indexer, tuple):
- indexer = self._tuplify(indexer)
+ indexer = _tuplify(self.ndim, indexer)
if isinstance(value, ABCSeries):
value = self._align_series(indexer, value)
@@ -670,7 +670,7 @@ def ravel(i):
aligners = [not com.is_null_slice(idx) for idx in indexer]
sum_aligners = sum(aligners)
single_aligner = sum_aligners == 1
- is_frame = self.obj.ndim == 2
+ is_frame = self.ndim == 2
obj = self.obj
# are we a single alignable value on a non-primary
@@ -731,7 +731,7 @@ def ravel(i):
raise ValueError("Incompatible indexer with Series")
def _align_frame(self, indexer, df: ABCDataFrame):
- is_frame = self.obj.ndim == 2
+ is_frame = self.ndim == 2
if isinstance(indexer, tuple):
@@ -867,7 +867,7 @@ def _handle_lowerdim_multi_index_axis0(self, tup: Tuple):
except KeyError as ek:
# raise KeyError if number of indexers match
# else IndexingError will be raised
- if len(tup) <= self.obj.index.nlevels and len(tup) > self.obj.ndim:
+ if len(tup) <= self.obj.index.nlevels and len(tup) > self.ndim:
raise ek
except Exception as e1:
if isinstance(tup[0], (slice, Index)):
@@ -900,7 +900,7 @@ def _getitem_lowerdim(self, tup: Tuple):
if result is not None:
return result
- if len(tup) > self.obj.ndim:
+ if len(tup) > self.ndim:
raise IndexingError("Too many indexers. handle elsewhere")
# to avoid wasted computation
@@ -1274,11 +1274,6 @@ def _convert_to_indexer(
return {"key": obj}
raise
- def _tuplify(self, loc):
- tup = [slice(None, None) for _ in range(self.ndim)]
- tup[0] = loc
- return tuple(tup)
-
def _get_slice_axis(self, slice_obj: slice, axis: int):
# caller is responsible for ensuring non-None axis
obj = self.obj
@@ -2077,6 +2072,8 @@ def _getitem_tuple(self, tup: Tuple):
# if the dim was reduced, then pass a lower-dim the next time
if retval.ndim < self.ndim:
+ # TODO: this is never reached in tests; can we confirm that
+ # it is impossible?
axis -= 1
# try to get for the next axis
@@ -2152,7 +2149,7 @@ def _convert_to_indexer(
)
-class _ScalarAccessIndexer(_NDFrameIndexer):
+class _ScalarAccessIndexer(_NDFrameIndexerBase):
""" access scalars quickly """
def _convert_key(self, key, is_setter: bool = False):
@@ -2178,8 +2175,8 @@ def __setitem__(self, key, value):
key = com.apply_if_callable(key, self.obj)
if not isinstance(key, tuple):
- key = self._tuplify(key)
- if len(key) != self.obj.ndim:
+ key = _tuplify(self.ndim, key)
+ if len(key) != self.ndim:
raise ValueError("Not enough indexers for scalar access (setting)!")
key = list(self._convert_key(key, is_setter=True))
key.append(value)
@@ -2309,9 +2306,6 @@ class _iAtIndexer(_ScalarAccessIndexer):
_takeable = True
- def _has_valid_setitem_indexer(self, indexer):
- self._has_valid_positional_setitem_indexer(indexer)
-
def _convert_key(self, key, is_setter: bool = False):
""" require integer args (and convert to label arguments) """
for a, i in zip(self.obj.axes, key):
@@ -2320,6 +2314,25 @@ def _convert_key(self, key, is_setter: bool = False):
return key
+def _tuplify(ndim: int, loc) -> tuple:
+ """
+ Given an indexer for the first dimension, create an equivalent tuple
+ for indexing over all dimensions.
+
+ Parameters
+ ----------
+ ndim : int
+ loc : object
+
+ Returns
+ -------
+ tuple
+ """
+ tup = [slice(None, None) for _ in range(ndim)]
+ tup[0] = loc
+ return tuple(tup)
+
+
def convert_to_index_sliceable(obj, key):
"""
if we are index sliceable, then return my slicer, otherwise return None
@@ -2469,25 +2482,6 @@ def need_slice(obj):
)
-def maybe_droplevels(index, key):
- # drop levels
- original_index = index
- if isinstance(key, tuple):
- for _ in key:
- try:
- index = index.droplevel(0)
- except ValueError:
- # we have dropped too much, so back out
- return original_index
- else:
- try:
- index = index.droplevel(0)
- except ValueError:
- pass
-
- return index
-
-
def _non_reducing_slice(slice_):
"""
Ensurse that a slice doesn't reduce to a Series or Scalar.
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 592aa658556c7..59f39758cb3b0 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -12,7 +12,7 @@
from pandas._config import get_option
-from pandas._libs import iNaT, index as libindex, lib, reshape, tslibs
+from pandas._libs import index as libindex, lib, reshape, tslibs
from pandas.compat import PY36
from pandas.compat.numpy import function as nv
from pandas.util._decorators import Appender, Substitution, deprecate
@@ -47,7 +47,6 @@
ABCSparseSeries,
)
from pandas.core.dtypes.missing import (
- is_valid_nat_for_dtype,
isna,
na_value_for_dtype,
notna,
@@ -1237,20 +1236,6 @@ def setitem(key, value):
elif key is Ellipsis:
self[:] = value
return
- elif com.is_bool_indexer(key):
- pass
- elif is_timedelta64_dtype(self.dtype):
- # reassign a null value to iNaT
- if is_valid_nat_for_dtype(value, self.dtype):
- # exclude np.datetime64("NaT")
- value = iNaT
-
- try:
- self.index._engine.set_value(self._values, key, value)
- return
- except (TypeError, ValueError):
- # ValueError appears in only some builds in CI
- pass
self.loc[key] = value
return
diff --git a/pandas/tests/indexing/test_indexing.py b/pandas/tests/indexing/test_indexing.py
index 8d1971801cccc..e375bd459e66f 100644
--- a/pandas/tests/indexing/test_indexing.py
+++ b/pandas/tests/indexing/test_indexing.py
@@ -8,6 +8,7 @@
import pytest
from pandas.compat import PY36
+from pandas.errors import AbstractMethodError
from pandas.core.dtypes.common import is_float_dtype, is_integer_dtype
@@ -1168,7 +1169,7 @@ def test_extension_array_cross_section_converts():
@pytest.mark.parametrize(
"idxr, error, error_message",
[
- (lambda x: x, AttributeError, "'numpy.ndarray' object has no attribute 'get'"),
+ (lambda x: x, AbstractMethodError, None),
(
lambda x: x.loc,
AttributeError,
| https://api.github.com/repos/pandas-dev/pandas/pulls/27576 | 2019-07-24T21:52:24Z | 2019-07-25T22:12:49Z | 2019-07-25T22:12:49Z | 2019-08-02T19:04:37Z | |
DEPR: remove .ix from tests/indexing/multiindex/test_setitem.py | diff --git a/pandas/tests/indexing/multiindex/test_setitem.py b/pandas/tests/indexing/multiindex/test_setitem.py
index 261d2e9c04e77..c383c38958692 100644
--- a/pandas/tests/indexing/multiindex/test_setitem.py
+++ b/pandas/tests/indexing/multiindex/test_setitem.py
@@ -1,5 +1,3 @@
-from warnings import catch_warnings, simplefilter
-
import numpy as np
from numpy.random import randn
import pytest
@@ -10,133 +8,114 @@
from pandas.util import testing as tm
-@pytest.mark.filterwarnings("ignore:\\n.ix:FutureWarning")
class TestMultiIndexSetItem:
def test_setitem_multiindex(self):
- with catch_warnings(record=True):
-
- for index_fn in ("ix", "loc"):
-
- def assert_equal(a, b):
- assert a == b
-
- def check(target, indexers, value, compare_fn, expected=None):
- fn = getattr(target, index_fn)
- fn.__setitem__(indexers, value)
- result = fn.__getitem__(indexers)
- if expected is None:
- expected = value
- compare_fn(result, expected)
-
- # GH7190
- index = MultiIndex.from_product(
- [np.arange(0, 100), np.arange(0, 80)], names=["time", "firm"]
- )
- t, n = 0, 2
- df = DataFrame(
- np.nan,
- columns=["A", "w", "l", "a", "x", "X", "d", "profit"],
- index=index,
- )
- check(
- target=df, indexers=((t, n), "X"), value=0, compare_fn=assert_equal
- )
-
- df = DataFrame(
- -999,
- columns=["A", "w", "l", "a", "x", "X", "d", "profit"],
- index=index,
- )
- check(
- target=df, indexers=((t, n), "X"), value=1, compare_fn=assert_equal
- )
-
- df = DataFrame(
- columns=["A", "w", "l", "a", "x", "X", "d", "profit"], index=index
- )
- check(
- target=df, indexers=((t, n), "X"), value=2, compare_fn=assert_equal
- )
-
- # gh-7218: assigning with 0-dim arrays
- df = DataFrame(
- -999,
- columns=["A", "w", "l", "a", "x", "X", "d", "profit"],
- index=index,
- )
- check(
- target=df,
- indexers=((t, n), "X"),
- value=np.array(3),
- compare_fn=assert_equal,
- expected=3,
- )
-
- # GH5206
- df = DataFrame(
- np.arange(25).reshape(5, 5),
- columns="A,B,C,D,E".split(","),
- dtype=float,
- )
- df["F"] = 99
- row_selection = df["A"] % 2 == 0
- col_selection = ["B", "C"]
- with catch_warnings(record=True):
- df.ix[row_selection, col_selection] = df["F"]
- output = DataFrame(99.0, index=[0, 2, 4], columns=["B", "C"])
- with catch_warnings(record=True):
- tm.assert_frame_equal(df.ix[row_selection, col_selection], output)
- check(
- target=df,
- indexers=(row_selection, col_selection),
- value=df["F"],
- compare_fn=tm.assert_frame_equal,
- expected=output,
- )
-
- # GH11372
- idx = MultiIndex.from_product(
- [["A", "B", "C"], date_range("2015-01-01", "2015-04-01", freq="MS")]
- )
- cols = MultiIndex.from_product(
- [["foo", "bar"], date_range("2016-01-01", "2016-02-01", freq="MS")]
- )
-
- df = DataFrame(np.random.random((12, 4)), index=idx, columns=cols)
-
- subidx = MultiIndex.from_tuples(
- [("A", Timestamp("2015-01-01")), ("A", Timestamp("2015-02-01"))]
- )
- subcols = MultiIndex.from_tuples(
- [("foo", Timestamp("2016-01-01")), ("foo", Timestamp("2016-02-01"))]
- )
-
- vals = DataFrame(
- np.random.random((2, 2)), index=subidx, columns=subcols
- )
- check(
- target=df,
- indexers=(subidx, subcols),
- value=vals,
- compare_fn=tm.assert_frame_equal,
- )
- # set all columns
- vals = DataFrame(np.random.random((2, 4)), index=subidx, columns=cols)
- check(
- target=df,
- indexers=(subidx, slice(None, None, None)),
- value=vals,
- compare_fn=tm.assert_frame_equal,
- )
- # identity
- copy = df.copy()
- check(
- target=df,
- indexers=(df.index, df.columns),
- value=df,
- compare_fn=tm.assert_frame_equal,
- expected=copy,
- )
+ for index_fn in ("loc",):
+
+ def assert_equal(a, b):
+ assert a == b
+
+ def check(target, indexers, value, compare_fn, expected=None):
+ fn = getattr(target, index_fn)
+ fn.__setitem__(indexers, value)
+ result = fn.__getitem__(indexers)
+ if expected is None:
+ expected = value
+ compare_fn(result, expected)
+
+ # GH7190
+ index = MultiIndex.from_product(
+ [np.arange(0, 100), np.arange(0, 80)], names=["time", "firm"]
+ )
+ t, n = 0, 2
+ df = DataFrame(
+ np.nan,
+ columns=["A", "w", "l", "a", "x", "X", "d", "profit"],
+ index=index,
+ )
+ check(target=df, indexers=((t, n), "X"), value=0, compare_fn=assert_equal)
+
+ df = DataFrame(
+ -999, columns=["A", "w", "l", "a", "x", "X", "d", "profit"], index=index
+ )
+ check(target=df, indexers=((t, n), "X"), value=1, compare_fn=assert_equal)
+
+ df = DataFrame(
+ columns=["A", "w", "l", "a", "x", "X", "d", "profit"], index=index
+ )
+ check(target=df, indexers=((t, n), "X"), value=2, compare_fn=assert_equal)
+
+ # gh-7218: assigning with 0-dim arrays
+ df = DataFrame(
+ -999, columns=["A", "w", "l", "a", "x", "X", "d", "profit"], index=index
+ )
+ check(
+ target=df,
+ indexers=((t, n), "X"),
+ value=np.array(3),
+ compare_fn=assert_equal,
+ expected=3,
+ )
+
+ # GH5206
+ df = DataFrame(
+ np.arange(25).reshape(5, 5), columns="A,B,C,D,E".split(","), dtype=float
+ )
+ df["F"] = 99
+ row_selection = df["A"] % 2 == 0
+ col_selection = ["B", "C"]
+ df.loc[row_selection, col_selection] = df["F"]
+ output = DataFrame(99.0, index=[0, 2, 4], columns=["B", "C"])
+ tm.assert_frame_equal(df.loc[row_selection, col_selection], output)
+ check(
+ target=df,
+ indexers=(row_selection, col_selection),
+ value=df["F"],
+ compare_fn=tm.assert_frame_equal,
+ expected=output,
+ )
+
+ # GH11372
+ idx = MultiIndex.from_product(
+ [["A", "B", "C"], date_range("2015-01-01", "2015-04-01", freq="MS")]
+ )
+ cols = MultiIndex.from_product(
+ [["foo", "bar"], date_range("2016-01-01", "2016-02-01", freq="MS")]
+ )
+
+ df = DataFrame(np.random.random((12, 4)), index=idx, columns=cols)
+
+ subidx = MultiIndex.from_tuples(
+ [("A", Timestamp("2015-01-01")), ("A", Timestamp("2015-02-01"))]
+ )
+ subcols = MultiIndex.from_tuples(
+ [("foo", Timestamp("2016-01-01")), ("foo", Timestamp("2016-02-01"))]
+ )
+
+ vals = DataFrame(np.random.random((2, 2)), index=subidx, columns=subcols)
+ check(
+ target=df,
+ indexers=(subidx, subcols),
+ value=vals,
+ compare_fn=tm.assert_frame_equal,
+ )
+ # set all columns
+ vals = DataFrame(np.random.random((2, 4)), index=subidx, columns=cols)
+ check(
+ target=df,
+ indexers=(subidx, slice(None, None, None)),
+ value=vals,
+ compare_fn=tm.assert_frame_equal,
+ )
+ # identity
+ copy = df.copy()
+ check(
+ target=df,
+ indexers=(df.index, df.columns),
+ value=df,
+ compare_fn=tm.assert_frame_equal,
+ expected=copy,
+ )
def test_multiindex_setitem(self):
@@ -204,9 +183,8 @@ def test_multiindex_assignment(self):
df["d"] = np.nan
arr = np.array([0.0, 1.0])
- with catch_warnings(record=True):
- df.ix[4, "d"] = arr
- tm.assert_series_equal(df.ix[4, "d"], Series(arr, index=[8, 10], name="d"))
+ df.loc[4, "d"] = arr
+ tm.assert_series_equal(df.loc[4, "d"], Series(arr, index=[8, 10], name="d"))
# single dtype
df = DataFrame(
@@ -215,25 +193,21 @@ def test_multiindex_assignment(self):
index=[[4, 4, 8], [8, 10, 12]],
)
- with catch_warnings(record=True):
- df.ix[4, "c"] = arr
- exp = Series(arr, index=[8, 10], name="c", dtype="float64")
- tm.assert_series_equal(df.ix[4, "c"], exp)
+ df.loc[4, "c"] = arr
+ exp = Series(arr, index=[8, 10], name="c", dtype="float64")
+ tm.assert_series_equal(df.loc[4, "c"], exp)
# scalar ok
- with catch_warnings(record=True):
- df.ix[4, "c"] = 10
- exp = Series(10, index=[8, 10], name="c", dtype="float64")
- tm.assert_series_equal(df.ix[4, "c"], exp)
+ df.loc[4, "c"] = 10
+ exp = Series(10, index=[8, 10], name="c", dtype="float64")
+ tm.assert_series_equal(df.loc[4, "c"], exp)
# invalid assignments
with pytest.raises(ValueError):
- with catch_warnings(record=True):
- df.ix[4, "c"] = [0, 1, 2, 3]
+ df.loc[4, "c"] = [0, 1, 2, 3]
with pytest.raises(ValueError):
- with catch_warnings(record=True):
- df.ix[4, "c"] = [0]
+ df.loc[4, "c"] = [0]
# groupby example
NUM_ROWS = 100
@@ -264,8 +238,7 @@ def f(name, df2):
# but in this case, that's ok
for name, df2 in grp:
new_vals = np.arange(df2.shape[0])
- with catch_warnings(record=True):
- df.ix[name, "new_col"] = new_vals
+ df.loc[name, "new_col"] = new_vals
def test_series_setitem(self, multiindex_year_month_day_dataframe_random_data):
ymd = multiindex_year_month_day_dataframe_random_data
@@ -313,11 +286,6 @@ def test_frame_getitem_setitem_multislice(self):
result = df.loc[:, "value"]
tm.assert_series_equal(df["value"], result)
- with catch_warnings(record=True):
- simplefilter("ignore", FutureWarning)
- result = df.ix[:, "value"]
- tm.assert_series_equal(df["value"], result)
-
result = df.loc[df.index[1:3], "value"]
tm.assert_series_equal(df["value"][1:3], result)
@@ -412,7 +380,7 @@ def test_setitem_change_dtype(self, multiindex_dataframe_random_data):
reindexed = dft.reindex(columns=[("foo", "two")])
tm.assert_series_equal(reindexed["foo", "two"], s > s.median())
- def test_set_column_scalar_with_ix(self, multiindex_dataframe_random_data):
+ def test_set_column_scalar_with_loc(self, multiindex_dataframe_random_data):
frame = multiindex_dataframe_random_data
subset = frame.index[[1, 4, 5]]
| https://api.github.com/repos/pandas-dev/pandas/pulls/27574 | 2019-07-24T18:59:43Z | 2019-07-25T16:56:28Z | 2019-07-25T16:56:28Z | 2019-07-26T03:06:20Z | |
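The migration pattern this PR applies throughout the test file is mechanical: the deprecated positional indexer `.ix` becomes `.loc`. A minimal sketch of the MultiIndex assignment exercised by `test_multiindex_assignment` above (data shapes taken from that test):

```python
import numpy as np
import pandas as pd

# Two rows share the level-0 key 4, one row has key 8.
df = pd.DataFrame(
    np.random.randn(3, 3),
    columns=list("abc"),
    index=[[4, 4, 8], [8, 10, 12]],
)
df["d"] = np.nan
arr = np.array([0.0, 1.0])

# Previously: df.ix[4, "d"] = arr  (removed in this PR)
df.loc[4, "d"] = arr  # assigns element-wise across the level-0 group 4

print(df.loc[4, "d"].tolist())  # [0.0, 1.0]
```

`df.loc[4, "d"]` selects the partial-index group as a Series indexed by the remaining level (`[8, 10]`), exactly as the rewritten test asserts.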
API: Make most arguments for read_html and read_json keyword-ony | diff --git a/doc/source/whatsnew/v1.1.0.rst b/doc/source/whatsnew/v1.1.0.rst
index be48552fb04e9..db39d3616ca60 100644
--- a/doc/source/whatsnew/v1.1.0.rst
+++ b/doc/source/whatsnew/v1.1.0.rst
@@ -248,13 +248,26 @@ Assignment to multiple columns of a :class:`DataFrame` when some of the columns
Deprecations
~~~~~~~~~~~~
+
- Lookups on a :class:`Series` with a single-item list containing a slice (e.g. ``ser[[slice(0, 4)]]``) are deprecated, will raise in a future version. Either convert the list to tuple, or pass the slice directly instead (:issue:`31333`)
+
- :meth:`DataFrame.mean` and :meth:`DataFrame.median` with ``numeric_only=None`` will include datetime64 and datetime64tz columns in a future version (:issue:`29941`)
- Setting values with ``.loc`` using a positional slice is deprecated and will raise in a future version. Use ``.loc`` with labels or ``.iloc`` with positions instead (:issue:`31840`)
- :meth:`DataFrame.to_dict` has deprecated accepting short names for ``orient`` in future versions (:issue:`32515`)
- :meth:`Categorical.to_dense` is deprecated and will be removed in a future version, use ``np.asarray(cat)`` instead (:issue:`32639`)
- The ``fastpath`` keyword in the ``SingleBlockManager`` constructor is deprecated and will be removed in a future version (:issue:`33092`)
+- Passing any arguments but the first one to :func:`read_html` as
+ positional arguments is deprecated since version 1.1. All other
+ arguments should be given as keyword arguments (:issue:`27573`).
+
+- Passing any arguments but `path_or_buf` (the first one) to
+ :func:`read_json` as positional arguments is deprecated since
+ version 1.1. All other arguments should be given as keyword
+ arguments (:issue:`27573`).
+
+-
+
.. ---------------------------------------------------------------------------
diff --git a/pandas/io/html.py b/pandas/io/html.py
index ce6674ffb9588..2d48b40200fa6 100644
--- a/pandas/io/html.py
+++ b/pandas/io/html.py
@@ -11,6 +11,7 @@
from pandas.compat._optional import import_optional_dependency
from pandas.errors import AbstractMethodError, EmptyDataError
+from pandas.util._decorators import deprecate_nonkeyword_arguments
from pandas.core.dtypes.common import is_list_like
@@ -921,6 +922,7 @@ def _parse(flavor, io, match, attrs, encoding, displayed_only, **kwargs):
return ret
+@deprecate_nonkeyword_arguments(version="2.0")
def read_html(
io,
match=".+",
diff --git a/pandas/io/json/_json.py b/pandas/io/json/_json.py
index d6b90ae99973e..3b1164a51ba91 100644
--- a/pandas/io/json/_json.py
+++ b/pandas/io/json/_json.py
@@ -11,7 +11,7 @@
from pandas._libs.tslibs import iNaT
from pandas._typing import JSONSerializable
from pandas.errors import AbstractMethodError
-from pandas.util._decorators import deprecate_kwarg
+from pandas.util._decorators import deprecate_kwarg, deprecate_nonkeyword_arguments
from pandas.core.dtypes.common import ensure_str, is_period_dtype
@@ -345,6 +345,9 @@ def _write(
@deprecate_kwarg(old_arg_name="numpy", new_arg_name=None)
+@deprecate_nonkeyword_arguments(
+ version="2.0", allowed_args=["path_or_buf"], stacklevel=3
+)
def read_json(
path_or_buf=None,
orient=None,
diff --git a/pandas/tests/io/json/test_deprecated_kwargs.py b/pandas/tests/io/json/test_deprecated_kwargs.py
new file mode 100644
index 0000000000000..79245bc9d34a8
--- /dev/null
+++ b/pandas/tests/io/json/test_deprecated_kwargs.py
@@ -0,0 +1,31 @@
+"""
+Tests for the deprecated keyword arguments for `read_json`.
+"""
+
+import pandas as pd
+import pandas._testing as tm
+
+from pandas.io.json import read_json
+
+
+def test_deprecated_kwargs():
+ df = pd.DataFrame({"A": [2, 4, 6], "B": [3, 6, 9]}, index=[0, 1, 2])
+ buf = df.to_json(orient="split")
+ with tm.assert_produces_warning(FutureWarning):
+ tm.assert_frame_equal(df, read_json(buf, "split"))
+ buf = df.to_json(orient="columns")
+ with tm.assert_produces_warning(FutureWarning):
+ tm.assert_frame_equal(df, read_json(buf, "columns"))
+ buf = df.to_json(orient="index")
+ with tm.assert_produces_warning(FutureWarning):
+ tm.assert_frame_equal(df, read_json(buf, "index"))
+
+
+def test_good_kwargs():
+ df = pd.DataFrame({"A": [2, 4, 6], "B": [3, 6, 9]}, index=[0, 1, 2])
+ with tm.assert_produces_warning(None):
+ tm.assert_frame_equal(df, read_json(df.to_json(orient="split"), orient="split"))
+ tm.assert_frame_equal(
+ df, read_json(df.to_json(orient="columns"), orient="columns")
+ )
+ tm.assert_frame_equal(df, read_json(df.to_json(orient="index"), orient="index"))
diff --git a/pandas/tests/io/test_html.py b/pandas/tests/io/test_html.py
index cbaf16d048eda..3d73e983402a7 100644
--- a/pandas/tests/io/test_html.py
+++ b/pandas/tests/io/test_html.py
@@ -72,7 +72,7 @@ def test_invalid_flavor():
msg = r"\{" + flavor + r"\} is not a valid set of flavors"
with pytest.raises(ValueError, match=msg):
- read_html(url, "google", flavor=flavor)
+ read_html(url, match="google", flavor=flavor)
@td.skip_if_no("bs4")
@@ -121,13 +121,26 @@ def test_to_html_compat(self):
res = self.read_html(out, attrs={"class": "dataframe"}, index_col=0)[0]
tm.assert_frame_equal(res, df)
+ @tm.network
+ def test_banklist_url_positional_match(self):
+ url = "http://www.fdic.gov/bank/individual/failed/banklist.html"
+ # Passing match argument as positional should cause a FutureWarning.
+ with tm.assert_produces_warning(FutureWarning):
+ df1 = self.read_html(
+ url, "First Federal Bank of Florida", attrs={"id": "table"}
+ )
+ with tm.assert_produces_warning(FutureWarning):
+ df2 = self.read_html(url, "Metcalf Bank", attrs={"id": "table"})
+
+ assert_framelist_equal(df1, df2)
+
@tm.network
def test_banklist_url(self):
url = "http://www.fdic.gov/bank/individual/failed/banklist.html"
df1 = self.read_html(
- url, "First Federal Bank of Florida", attrs={"id": "table"}
+ url, match="First Federal Bank of Florida", attrs={"id": "table"}
)
- df2 = self.read_html(url, "Metcalf Bank", attrs={"id": "table"})
+ df2 = self.read_html(url, match="Metcalf Bank", attrs={"id": "table"})
assert_framelist_equal(df1, df2)
@@ -137,21 +150,25 @@ def test_spam_url(self):
"https://raw.githubusercontent.com/pandas-dev/pandas/master/"
"pandas/tests/io/data/html/spam.html"
)
- df1 = self.read_html(url, ".*Water.*")
- df2 = self.read_html(url, "Unit")
+ df1 = self.read_html(url, match=".*Water.*")
+ df2 = self.read_html(url, match="Unit")
assert_framelist_equal(df1, df2)
@pytest.mark.slow
def test_banklist(self):
- df1 = self.read_html(self.banklist_data, ".*Florida.*", attrs={"id": "table"})
- df2 = self.read_html(self.banklist_data, "Metcalf Bank", attrs={"id": "table"})
+ df1 = self.read_html(
+ self.banklist_data, match=".*Florida.*", attrs={"id": "table"}
+ )
+ df2 = self.read_html(
+ self.banklist_data, match="Metcalf Bank", attrs={"id": "table"}
+ )
assert_framelist_equal(df1, df2)
def test_spam(self):
- df1 = self.read_html(self.spam_data, ".*Water.*")
- df2 = self.read_html(self.spam_data, "Unit")
+ df1 = self.read_html(self.spam_data, match=".*Water.*")
+ df2 = self.read_html(self.spam_data, match="Unit")
assert_framelist_equal(df1, df2)
assert df1[0].iloc[0, 0] == "Proximates"
@@ -168,81 +185,82 @@ def test_banklist_no_match(self):
assert isinstance(df, DataFrame)
def test_spam_header(self):
- df = self.read_html(self.spam_data, ".*Water.*", header=2)[0]
+ df = self.read_html(self.spam_data, match=".*Water.*", header=2)[0]
assert df.columns[0] == "Proximates"
assert not df.empty
def test_skiprows_int(self):
- df1 = self.read_html(self.spam_data, ".*Water.*", skiprows=1)
- df2 = self.read_html(self.spam_data, "Unit", skiprows=1)
+ df1 = self.read_html(self.spam_data, match=".*Water.*", skiprows=1)
+ df2 = self.read_html(self.spam_data, match="Unit", skiprows=1)
assert_framelist_equal(df1, df2)
def test_skiprows_range(self):
- df1 = self.read_html(self.spam_data, ".*Water.*", skiprows=range(2))[0]
- df2 = self.read_html(self.spam_data, "Unit", skiprows=range(2))[0]
- tm.assert_frame_equal(df1, df2)
+ df1 = self.read_html(self.spam_data, match=".*Water.*", skiprows=range(2))
+ df2 = self.read_html(self.spam_data, match="Unit", skiprows=range(2))
+
+ assert_framelist_equal(df1, df2)
def test_skiprows_list(self):
- df1 = self.read_html(self.spam_data, ".*Water.*", skiprows=[1, 2])
- df2 = self.read_html(self.spam_data, "Unit", skiprows=[2, 1])
+ df1 = self.read_html(self.spam_data, match=".*Water.*", skiprows=[1, 2])
+ df2 = self.read_html(self.spam_data, match="Unit", skiprows=[2, 1])
assert_framelist_equal(df1, df2)
def test_skiprows_set(self):
- df1 = self.read_html(self.spam_data, ".*Water.*", skiprows={1, 2})
- df2 = self.read_html(self.spam_data, "Unit", skiprows={2, 1})
+ df1 = self.read_html(self.spam_data, match=".*Water.*", skiprows={1, 2})
+ df2 = self.read_html(self.spam_data, match="Unit", skiprows={2, 1})
assert_framelist_equal(df1, df2)
def test_skiprows_slice(self):
- df1 = self.read_html(self.spam_data, ".*Water.*", skiprows=1)
- df2 = self.read_html(self.spam_data, "Unit", skiprows=1)
+ df1 = self.read_html(self.spam_data, match=".*Water.*", skiprows=1)
+ df2 = self.read_html(self.spam_data, match="Unit", skiprows=1)
assert_framelist_equal(df1, df2)
def test_skiprows_slice_short(self):
- df1 = self.read_html(self.spam_data, ".*Water.*", skiprows=slice(2))
- df2 = self.read_html(self.spam_data, "Unit", skiprows=slice(2))
+ df1 = self.read_html(self.spam_data, match=".*Water.*", skiprows=slice(2))
+ df2 = self.read_html(self.spam_data, match="Unit", skiprows=slice(2))
assert_framelist_equal(df1, df2)
def test_skiprows_slice_long(self):
- df1 = self.read_html(self.spam_data, ".*Water.*", skiprows=slice(2, 5))
- df2 = self.read_html(self.spam_data, "Unit", skiprows=slice(4, 1, -1))
+ df1 = self.read_html(self.spam_data, match=".*Water.*", skiprows=slice(2, 5))
+ df2 = self.read_html(self.spam_data, match="Unit", skiprows=slice(4, 1, -1))
assert_framelist_equal(df1, df2)
def test_skiprows_ndarray(self):
- df1 = self.read_html(self.spam_data, ".*Water.*", skiprows=np.arange(2))
- df2 = self.read_html(self.spam_data, "Unit", skiprows=np.arange(2))
+ df1 = self.read_html(self.spam_data, match=".*Water.*", skiprows=np.arange(2))
+ df2 = self.read_html(self.spam_data, match="Unit", skiprows=np.arange(2))
assert_framelist_equal(df1, df2)
def test_skiprows_invalid(self):
with pytest.raises(TypeError, match=("is not a valid type for skipping rows")):
- self.read_html(self.spam_data, ".*Water.*", skiprows="asdf")
+ self.read_html(self.spam_data, match=".*Water.*", skiprows="asdf")
def test_index(self):
- df1 = self.read_html(self.spam_data, ".*Water.*", index_col=0)
- df2 = self.read_html(self.spam_data, "Unit", index_col=0)
+ df1 = self.read_html(self.spam_data, match=".*Water.*", index_col=0)
+ df2 = self.read_html(self.spam_data, match="Unit", index_col=0)
assert_framelist_equal(df1, df2)
def test_header_and_index_no_types(self):
- df1 = self.read_html(self.spam_data, ".*Water.*", header=1, index_col=0)
- df2 = self.read_html(self.spam_data, "Unit", header=1, index_col=0)
+ df1 = self.read_html(self.spam_data, match=".*Water.*", header=1, index_col=0)
+ df2 = self.read_html(self.spam_data, match="Unit", header=1, index_col=0)
assert_framelist_equal(df1, df2)
def test_header_and_index_with_types(self):
- df1 = self.read_html(self.spam_data, ".*Water.*", header=1, index_col=0)
- df2 = self.read_html(self.spam_data, "Unit", header=1, index_col=0)
+ df1 = self.read_html(self.spam_data, match=".*Water.*", header=1, index_col=0)
+ df2 = self.read_html(self.spam_data, match="Unit", header=1, index_col=0)
assert_framelist_equal(df1, df2)
def test_infer_types(self):
# 10892 infer_types removed
- df1 = self.read_html(self.spam_data, ".*Water.*", index_col=0)
- df2 = self.read_html(self.spam_data, "Unit", index_col=0)
+ df1 = self.read_html(self.spam_data, match=".*Water.*", index_col=0)
+ df2 = self.read_html(self.spam_data, match="Unit", index_col=0)
assert_framelist_equal(df1, df2)
def test_string_io(self):
@@ -252,25 +270,25 @@ def test_string_io(self):
with open(self.spam_data, **self.spam_data_kwargs) as f:
data2 = StringIO(f.read())
- df1 = self.read_html(data1, ".*Water.*")
- df2 = self.read_html(data2, "Unit")
+ df1 = self.read_html(data1, match=".*Water.*")
+ df2 = self.read_html(data2, match="Unit")
assert_framelist_equal(df1, df2)
def test_string(self):
with open(self.spam_data, **self.spam_data_kwargs) as f:
data = f.read()
- df1 = self.read_html(data, ".*Water.*")
- df2 = self.read_html(data, "Unit")
+ df1 = self.read_html(data, match=".*Water.*")
+ df2 = self.read_html(data, match="Unit")
assert_framelist_equal(df1, df2)
def test_file_like(self):
with open(self.spam_data, **self.spam_data_kwargs) as f:
- df1 = self.read_html(f, ".*Water.*")
+ df1 = self.read_html(f, match=".*Water.*")
with open(self.spam_data, **self.spam_data_kwargs) as f:
- df2 = self.read_html(f, "Unit")
+ df2 = self.read_html(f, match="Unit")
assert_framelist_equal(df1, df2)
@@ -292,7 +310,7 @@ def test_invalid_url(self):
def test_file_url(self):
url = self.banklist_data
dfs = self.read_html(
- file_path_to_url(os.path.abspath(url)), "First", attrs={"id": "table"}
+ file_path_to_url(os.path.abspath(url)), match="First", attrs={"id": "table"}
)
assert isinstance(dfs, list)
for df in dfs:
@@ -308,7 +326,7 @@ def test_invalid_table_attrs(self):
def _bank_data(self, *args, **kwargs):
return self.read_html(
- self.banklist_data, "Metcalf", attrs={"id": "table"}, *args, **kwargs
+ self.banklist_data, match="Metcalf", attrs={"id": "table"}, *args, **kwargs
)
@pytest.mark.slow
@@ -358,7 +376,7 @@ def test_regex_idempotency(self):
def test_negative_skiprows(self):
msg = r"\(you passed a negative value\)"
with pytest.raises(ValueError, match=msg):
- self.read_html(self.spam_data, "Water", skiprows=-1)
+ self.read_html(self.spam_data, match="Water", skiprows=-1)
@tm.network
def test_multiple_matches(self):
@@ -600,7 +618,9 @@ def test_gold_canyon(self):
raw_text = f.read()
assert gc in raw_text
- df = self.read_html(self.banklist_data, "Gold Canyon", attrs={"id": "table"})[0]
+ df = self.read_html(
+ self.banklist_data, match="Gold Canyon", attrs={"id": "table"}
+ )[0]
assert gc in df.to_string()
def test_different_number_of_cols(self):
@@ -855,7 +875,7 @@ def test_wikipedia_states_table(self, datapath):
data = datapath("io", "data", "html", "wikipedia_states.html")
assert os.path.isfile(data), f"{repr(data)} is not a file"
assert os.path.getsize(data), f"{repr(data)} is an empty file"
- result = self.read_html(data, "Arizona", header=1)[0]
+ result = self.read_html(data, match="Arizona", header=1)[0]
assert result.shape == (60, 12)
assert "Unnamed" in result.columns[-1]
assert result["sq mi"].dtype == np.dtype("float64")
@@ -1065,7 +1085,7 @@ def test_works_on_valid_markup(self, datapath):
@pytest.mark.slow
def test_fallback_success(self, datapath):
banklist_data = datapath("io", "data", "html", "banklist.html")
- self.read_html(banklist_data, ".*Water.*", flavor=["lxml", "html5lib"])
+ self.read_html(banklist_data, match=".*Water.*", flavor=["lxml", "html5lib"])
def test_to_html_timestamp(self):
rng = date_range("2000-01-01", periods=10)
diff --git a/pandas/tests/util/test_deprecate_nonkeyword_arguments.py b/pandas/tests/util/test_deprecate_nonkeyword_arguments.py
new file mode 100644
index 0000000000000..05bc617232bdd
--- /dev/null
+++ b/pandas/tests/util/test_deprecate_nonkeyword_arguments.py
@@ -0,0 +1,101 @@
+"""
+Tests for the `deprecate_nonkeyword_arguments` decorator
+"""
+
+import warnings
+
+from pandas.util._decorators import deprecate_nonkeyword_arguments
+
+import pandas._testing as tm
+
+
+@deprecate_nonkeyword_arguments(version="1.1", allowed_args=["a", "b"])
+def f(a, b=0, c=0, d=0):
+ return a + b + c + d
+
+
+def test_one_argument():
+ with tm.assert_produces_warning(None):
+ assert f(19) == 19
+
+
+def test_one_and_one_arguments():
+ with tm.assert_produces_warning(None):
+ assert f(19, d=6) == 25
+
+
+def test_two_arguments():
+ with tm.assert_produces_warning(None):
+ assert f(1, 5) == 6
+
+
+def test_two_and_two_arguments():
+ with tm.assert_produces_warning(None):
+ assert f(1, 3, c=3, d=5) == 12
+
+
+def test_three_arguments():
+ with tm.assert_produces_warning(FutureWarning):
+ assert f(6, 3, 3) == 12
+
+
+def test_four_arguments():
+ with tm.assert_produces_warning(FutureWarning):
+ assert f(1, 2, 3, 4) == 10
+
+
+@deprecate_nonkeyword_arguments(version="1.1")
+def g(a, b=0, c=0, d=0):
+ with tm.assert_produces_warning(None):
+ return a + b + c + d
+
+
+def test_one_and_three_arguments_default_allowed_args():
+ with tm.assert_produces_warning(None):
+ assert g(1, b=3, c=3, d=5) == 12
+
+
+def test_three_arguments_default_allowed_args():
+ with tm.assert_produces_warning(FutureWarning):
+ assert g(6, 3, 3) == 12
+
+
+def test_three_positional_argument_with_warning_message_analysis():
+ with warnings.catch_warnings(record=True) as w:
+ warnings.simplefilter("always")
+ assert g(6, 3, 3) == 12
+ assert len(w) == 1
+ for actual_warning in w:
+ assert actual_warning.category == FutureWarning
+ assert str(actual_warning.message) == (
+ "Starting with Pandas version 1.1 all arguments of g "
+ "except for the argument 'a' will be keyword-only"
+ )
+
+
+@deprecate_nonkeyword_arguments(version="1.1")
+def h(a=0, b=0, c=0, d=0):
+ return a + b + c + d
+
+
+def test_all_keyword_arguments():
+ with tm.assert_produces_warning(None):
+ assert h(a=1, b=2) == 3
+
+
+def test_one_positional_argument():
+ with tm.assert_produces_warning(FutureWarning):
+ assert h(23) == 23
+
+
+def test_one_positional_argument_with_warning_message_analysis():
+ with warnings.catch_warnings(record=True) as w:
+ warnings.simplefilter("always")
+ assert h(19) == 19
+ assert len(w) == 1
+ for actual_warning in w:
+ assert actual_warning.category == FutureWarning
+ assert str(actual_warning.message) == (
+ "Starting with Pandas version 1.1 all arguments "
+ "of h will be keyword-only"
+ )
diff --git a/pandas/util/_decorators.py b/pandas/util/_decorators.py
index 7a804792174c7..71d02db10c7ba 100644
--- a/pandas/util/_decorators.py
+++ b/pandas/util/_decorators.py
@@ -216,6 +216,105 @@ def wrapper(*args, **kwargs) -> Callable[..., Any]:
return _deprecate_kwarg
+def _format_argument_list(allow_args: Union[List[str], int]):
+ """
+ Convert the `allow_args` argument (either a list or an integer) of the
+ `deprecate_nonkeyword_arguments` decorator to a string describing it,
+ to be inserted into the warning message.
+
+ Parameters
+ ----------
+ allow_args : list, tuple or int
+ The `allowed_args` argument of `deprecate_nonkeyword_arguments`;
+ ``None`` is not allowed here.
+
+ Returns
+ -------
+ s : str
+ The substring describing the argument list, ready to be inserted
+ into the warning message.
+
+ Examples
+ --------
+ `_format_argument_list(0)` -> ''
+ `_format_argument_list(1)` -> 'except for the first argument'
+ `_format_argument_list(2)` -> 'except for the first 2 arguments'
+ `_format_argument_list([])` -> ''
+ `_format_argument_list(['a'])` -> "except for the argument 'a'"
+ `_format_argument_list(['a', 'b'])` -> "except for the arguments 'a' and 'b'"
+ `_format_argument_list(['a', 'b', 'c'])` ->
+ "except for the arguments 'a', 'b' and 'c'"
+ """
+ if not allow_args:
+ return ""
+ elif allow_args == 1:
+ return " except for the first argument"
+ elif isinstance(allow_args, int):
+ return " except for the first {num_args} arguments".format(num_args=allow_args)
+ elif len(allow_args) == 1:
+ return " except for the argument '{arg}'".format(arg=allow_args[0])
+ else:
+ last = allow_args[-1]
+ args = ", ".join(["'" + x + "'" for x in allow_args[:-1]])
+ return " except for the arguments {args} and '{last}'".format(
+ args=args, last=last
+ )
+
+
+def deprecate_nonkeyword_arguments(
+ version: str,
+ allowed_args: Optional[Union[List[str], int]] = None,
+ stacklevel: int = 2,
+) -> Callable:
+ """
+ Decorator to deprecate a use of non-keyword arguments of a function.
+
+ Parameters
+ ----------
+ version : str
+ The version in which positional arguments will become
+ keyword-only.
+
+ allowed_args : list or int, optional
+ If a list, the names of the leading arguments of the
+ decorated function that may still be passed
+ positionally. If an integer, the number of leading
+ arguments that will stay positional. If None, defaults
+ to the list of all arguments that do not have a
+ default value.
+
+ stacklevel : int, default=2
+ The stack level for warnings.warn
+ """
+
+ def decorate(func):
+ if allowed_args is not None:
+ allow_args = allowed_args
+ else:
+ spec = inspect.getfullargspec(func)
+ allow_args = spec.args[: -len(spec.defaults)]
+
+ @wraps(func)
+ def wrapper(*args, **kwargs):
+ arguments = _format_argument_list(allow_args)
+ if isinstance(allow_args, (list, tuple)):
+ num_allow_args = len(allow_args)
+ else:
+ num_allow_args = allow_args
+ if len(args) > num_allow_args:
+ msg = (
+ "Starting with Pandas version {version} all arguments of {funcname}"
+ "{except_args} will be keyword-only"
+ ).format(version=version, funcname=func.__name__, except_args=arguments)
+ warnings.warn(msg, FutureWarning, stacklevel=stacklevel)
+ return func(*args, **kwargs)
+
+ return wrapper
+
+ return decorate
+
+
def rewrite_axis_style_signature(
name: str, extra_params: List[Tuple[str, Any]]
) -> Callable[..., Any]:
| As mentioned in #27544, it is better to use keyword-only arguments for functions with too many arguments. A deprecation warning will be displayed if `read_html` gets more than 2 positional arguments (`io` and `match`) or `read_json` gets more than 1 positional argument (`path_or_buf`).
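The pattern can be sketched as a small standalone decorator (a simplified illustration of the approach, not the exact pandas implementation — the real one also formats the list of still-allowed arguments into the message):

```python
import functools
import inspect
import warnings


def deprecate_nonkeyword_arguments(version, allowed_args=None, stacklevel=2):
    """Warn when more than the allowed number of positional arguments is used."""

    def decorate(func):
        if allowed_args is not None:
            allow = allowed_args
        else:
            # default: arguments without a default value stay positional
            spec = inspect.getfullargspec(func)
            allow = spec.args[: -len(spec.defaults)] if spec.defaults else spec.args
        num_allow = len(allow) if isinstance(allow, (list, tuple)) else allow

        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            if len(args) > num_allow:
                warnings.warn(
                    f"Starting with version {version} all arguments of "
                    f"{func.__name__} will be keyword-only",
                    FutureWarning,
                    stacklevel=stacklevel,
                )
            return func(*args, **kwargs)

        return wrapper

    return decorate


@deprecate_nonkeyword_arguments(version="1.1", allowed_args=["a"])
def f(a, b=0, c=0):
    return a + b + c


with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    assert f(1, b=2) == 3   # within the positional allowance: no warning
    assert f(1, 2, 3) == 6  # extra positional arguments: FutureWarning
assert [w.category for w in caught] == [FutureWarning]
```

The decorated function keeps working with positional arguments during the deprecation period; only the warning nudges callers toward keywords.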
- [x] tests added / passed
Three groups of tests are needed:
- [x] Tests of the `deprecate_nonkeyword_arguments` decorator
- [x] Check whether the `read_json` function emits a `FutureWarning` whenever necessary and does not emit it whenever not.
- [x] Check whether the `read_html` function emits a `FutureWarning` whenever necessary and does not emit it whenever not.
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/27573 | 2019-07-24T17:56:02Z | 2020-04-07T00:55:26Z | 2020-04-07T00:55:26Z | 2020-04-07T08:08:42Z |
TST: add regression test for MultiIndex with IntervalIndex level | diff --git a/pandas/tests/indexing/interval/test_interval.py b/pandas/tests/indexing/interval/test_interval.py
index 7ae42782774db..bbce786fc07ba 100644
--- a/pandas/tests/indexing/interval/test_interval.py
+++ b/pandas/tests/indexing/interval/test_interval.py
@@ -112,3 +112,38 @@ def test_loc_getitem_frame(self):
# partial missing
with pytest.raises(KeyError, match="^$"):
df.loc[[10, 4]]
+
+
+class TestIntervalIndexInsideMultiIndex:
+ def test_mi_intervalindex_slicing_with_scalar(self):
+ # GH#27456
+ idx = pd.MultiIndex.from_arrays(
+ [
+ pd.Index(["FC", "FC", "FC", "FC", "OWNER", "OWNER", "OWNER", "OWNER"]),
+ pd.Index(
+ ["RID1", "RID1", "RID2", "RID2", "RID1", "RID1", "RID2", "RID2"]
+ ),
+ pd.IntervalIndex.from_arrays(
+ [0, 1, 10, 11, 0, 1, 10, 11], [1, 2, 11, 12, 1, 2, 11, 12]
+ ),
+ ]
+ )
+
+ idx.names = ["Item", "RID", "MP"]
+ df = pd.DataFrame({"value": [1, 2, 3, 4, 5, 6, 7, 8]})
+ df.index = idx
+ query_df = pd.DataFrame(
+ {
+ "Item": ["FC", "OWNER", "FC", "OWNER", "OWNER"],
+ "RID": ["RID1", "RID1", "RID1", "RID2", "RID2"],
+ "MP": [0.2, 1.5, 1.6, 11.1, 10.9],
+ }
+ )
+
+ query_df = query_df.sort_index()
+
+ idx = pd.MultiIndex.from_arrays([query_df.Item, query_df.RID, query_df.MP])
+ query_df.index = idx
+ result = df.value.loc[query_df.index]
+ expected = pd.Series([1, 6, 2, 8, 7], index=idx, name="value")
+ tm.assert_series_equal(result, expected)
| #27456 makes for a good regression test for the non-overlapping case, where there's no coverage currently. Add it to the tests.
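As a minimal sketch of the behavior this test locks in (illustrative data, not the PR's fixture): a scalar inside a `.loc` tuple is matched by containment against the `IntervalIndex` level.

```python
import pandas as pd

# MultiIndex whose innermost level is a non-overlapping IntervalIndex
mi = pd.MultiIndex.from_arrays(
    [
        ["FC", "FC"],
        pd.IntervalIndex.from_arrays([0, 1], [1, 2]),  # intervals (0, 1] and (1, 2]
    ]
)
s = pd.Series([10, 20], index=mi)

# scalars are looked up by interval containment: 0.5 falls in (0, 1]
assert s.loc[("FC", 0.5)] == 10
assert s.loc[("FC", 1.5)] == 20
```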
| https://api.github.com/repos/pandas-dev/pandas/pulls/27572 | 2019-07-24T17:00:55Z | 2019-07-25T16:57:33Z | 2019-07-25T16:57:33Z | 2019-07-25T18:15:07Z |
CLN: remove block._coerce_values | diff --git a/pandas/core/groupby/generic.py b/pandas/core/groupby/generic.py
index 1fef65349976b..4dc1dfcae0777 100644
--- a/pandas/core/groupby/generic.py
+++ b/pandas/core/groupby/generic.py
@@ -148,8 +148,10 @@ def _cython_agg_blocks(self, how, alt=None, numeric_only=True, min_count=-1):
new_blocks = []
new_items = []
deleted_items = []
+ no_result = object()
for block in data.blocks:
-
+ # Avoid inheriting result from earlier in the loop
+ result = no_result
locs = block.mgr_locs.as_array
try:
result, _ = self.grouper.aggregate(
@@ -174,15 +176,15 @@ def _cython_agg_blocks(self, how, alt=None, numeric_only=True, min_count=-1):
except TypeError:
# we may have an exception in trying to aggregate
# continue and exclude the block
- pass
-
+ deleted_items.append(locs)
+ continue
finally:
+ if result is not no_result:
+ dtype = block.values.dtype
- dtype = block.values.dtype
-
- # see if we can cast the block back to the original dtype
- result = block._try_coerce_and_cast_result(result, dtype=dtype)
- newb = block.make_block(result)
+ # see if we can cast the block back to the original dtype
+ result = block._try_coerce_and_cast_result(result, dtype=dtype)
+ newb = block.make_block(result)
new_items.append(locs)
new_blocks.append(newb)
diff --git a/pandas/core/groupby/groupby.py b/pandas/core/groupby/groupby.py
index 3d4dbd3f8d887..5961a7ff72832 100644
--- a/pandas/core/groupby/groupby.py
+++ b/pandas/core/groupby/groupby.py
@@ -47,6 +47,7 @@ class providing the base-class of operations.
SpecificationError,
)
import pandas.core.common as com
+from pandas.core.construction import extract_array
from pandas.core.frame import DataFrame
from pandas.core.generic import NDFrame
from pandas.core.groupby import base
@@ -803,10 +804,9 @@ def _try_cast(self, result, obj, numeric_only=False):
# Prior results _may_ have been generated in UTC.
# Ensure we localize to UTC first before converting
# to the target timezone
+ arr = extract_array(obj)
try:
- result = obj._values._from_sequence(
- result, dtype="datetime64[ns, UTC]"
- )
+ result = arr._from_sequence(result, dtype="datetime64[ns, UTC]")
result = result.astype(dtype)
except TypeError:
# _try_cast was called at a point where the result
diff --git a/pandas/core/internals/blocks.py b/pandas/core/internals/blocks.py
index 6e5a2aab298c7..4ca867b1088e7 100644
--- a/pandas/core/internals/blocks.py
+++ b/pandas/core/internals/blocks.py
@@ -7,7 +7,7 @@
import numpy as np
-from pandas._libs import NaT, lib, tslib, tslibs
+from pandas._libs import NaT, Timestamp, lib, tslib, tslibs
import pandas._libs.internals as libinternals
from pandas._libs.tslibs import Timedelta, conversion
from pandas._libs.tslibs.timezones import tz_compare
@@ -715,20 +715,6 @@ def _try_cast_result(self, result, dtype=None):
# may need to change the dtype here
return maybe_downcast_to_dtype(result, dtype)
- def _coerce_values(self, values):
- """
- Coerce values (usually derived from self.values) for an operation.
-
- Parameters
- ----------
- values : ndarray or ExtensionArray
-
- Returns
- -------
- ndarray or ExtensionArray
- """
- return values
-
def _try_coerce_args(self, other):
""" provide coercion to our input arguments """
@@ -817,7 +803,7 @@ def replace(
convert=convert,
)
- values = self._coerce_values(self.values)
+ values = self.values
to_replace = self._try_coerce_args(to_replace)
mask = missing.mask_missing(values, to_replace)
@@ -882,7 +868,6 @@ def setitem(self, indexer, value):
if self._can_hold_element(value):
value = self._try_coerce_args(value)
- values = self._coerce_values(values)
# can keep its own dtype
if hasattr(value, "dtype") and is_dtype_equal(values.dtype, value.dtype):
dtype = self.dtype
@@ -1229,7 +1214,6 @@ def _interpolate_with_fill(
return [self.copy()]
values = self.values if inplace else self.values.copy()
- values = self._coerce_values(values)
fill_value = self._try_coerce_args(fill_value)
values = missing.interpolate_2d(
values,
@@ -1444,7 +1428,6 @@ def func(cond, values, other):
else:
# see if we can operate on the entire block, or need item-by-item
# or if we are a single block (ndim == 1)
- values = self._coerce_values(values)
try:
result = func(cond, values, other)
except TypeError:
@@ -1548,14 +1531,13 @@ def quantile(self, qs, interpolation="linear", axis=0):
# We need to operate on i8 values for datetimetz
# but `Block.get_values()` returns an ndarray of objects
# right now. We need an API for "values to do numeric-like ops on"
- values = self.values.asi8
+ values = self.values.view("M8[ns]")
# TODO: NonConsolidatableMixin shape
# Usual shape inconsistencies for ExtensionBlocks
values = values[None, :]
else:
values = self.get_values()
- values = self._coerce_values(values)
is_empty = values.shape[axis] == 0
orig_scalar = not is_list_like(qs)
@@ -1720,7 +1702,6 @@ def putmask(self, mask, new, align=True, inplace=False, axis=0, transpose=False)
# use block's copy logic.
# .values may be an Index which does shallow copy by default
new_values = self.values if inplace else self.copy().values
- new_values = self._coerce_values(new_values)
new = self._try_coerce_args(new)
if isinstance(new, np.ndarray) and len(new) == len(mask):
@@ -1919,12 +1900,6 @@ def _try_cast_result(self, result, dtype=None):
result could also be an EA Array itself, in which case it
is already a 1-D array
"""
- try:
-
- result = self._holder._from_sequence(result.ravel(), dtype=dtype)
- except Exception:
- pass
-
return result
def formatting_values(self):
@@ -2304,8 +2279,8 @@ def _try_coerce_args(self, other):
if is_valid_nat_for_dtype(other, self.dtype):
other = np.datetime64("NaT", "ns")
elif isinstance(other, (datetime, np.datetime64, date)):
- other = self._box_func(other)
- if getattr(other, "tz") is not None:
+ other = Timestamp(other)
+ if other.tz is not None:
raise TypeError("cannot coerce a Timestamp with a tz on a naive Block")
other = other.asm8
elif hasattr(other, "dtype") and is_datetime64_dtype(other):
@@ -2320,18 +2295,11 @@ def _try_coerce_args(self, other):
def _try_coerce_result(self, result):
""" reverse of try_coerce_args """
- if isinstance(result, np.ndarray):
- if result.dtype.kind in ["i", "f"]:
- result = result.astype("M8[ns]")
-
- elif isinstance(result, (np.integer, np.float, np.datetime64)):
- result = self._box_func(result)
+ if isinstance(result, np.ndarray) and result.dtype.kind == "i":
+ # needed for _interpolate_with_ffill
+ result = result.view("M8[ns]")
return result
- @property
- def _box_func(self):
- return tslibs.Timestamp
-
def to_native_types(
self, slicer=None, na_rep=None, date_format=None, quoting=None, **kwargs
):
@@ -2387,6 +2355,7 @@ class DatetimeTZBlock(ExtensionBlock, DatetimeBlock):
is_extension = True
_can_hold_element = DatetimeBlock._can_hold_element
+ fill_value = np.datetime64("NaT", "ns")
@property
def _holder(self):
@@ -2442,7 +2411,7 @@ def get_values(self, dtype=None):
"""
values = self.values
if is_object_dtype(dtype):
- values = values._box_values(values._data)
+ values = values.astype(object)
values = np.asarray(values)
@@ -2468,9 +2437,6 @@ def _slice(self, slicer):
return self.values[loc]
return self.values[slicer]
- def _coerce_values(self, values):
- return _block_shape(values, ndim=self.ndim)
-
def _try_coerce_args(self, other):
"""
localize and return i8 for the values
@@ -2483,17 +2449,7 @@ def _try_coerce_args(self, other):
-------
base-type other
"""
-
- if isinstance(other, ABCSeries):
- other = self._holder(other)
-
- if isinstance(other, bool):
- raise TypeError
- elif is_datetime64_dtype(other):
- # add the tz back
- other = self._holder(other, dtype=self.dtype)
-
- elif is_valid_nat_for_dtype(other, self.dtype):
+ if is_valid_nat_for_dtype(other, self.dtype):
other = np.datetime64("NaT", "ns")
elif isinstance(other, self._holder):
if not tz_compare(other.tz, self.values.tz):
@@ -2513,22 +2469,23 @@ def _try_coerce_args(self, other):
def _try_coerce_result(self, result):
""" reverse of try_coerce_args """
if isinstance(result, np.ndarray):
- if result.dtype.kind in ["i", "f"]:
- result = result.astype("M8[ns]")
+ if result.ndim == 2:
+ # kludge for 2D blocks with 1D EAs
+ result = result[0, :]
+ if result.dtype == np.float64:
+ # needed for post-groupby.median
+ result = self._holder._from_sequence(
+ result.astype(np.int64), freq=None, dtype=self.values.dtype
+ )
+ elif result.dtype == "M8[ns]":
+ # otherwise we get here via quantile and already have M8[ns]
+ result = self._holder._simple_new(
+ result, freq=None, dtype=self.values.dtype
+ )
- elif isinstance(result, (np.integer, np.float, np.datetime64)):
+ elif isinstance(result, np.datetime64):
+ # also for post-quantile
result = self._box_func(result)
-
- if isinstance(result, np.ndarray):
- # allow passing of > 1dim if its trivial
-
- if result.ndim > 1:
- result = result.reshape(np.prod(result.shape))
- # GH#24096 new values invalidates a frequency
- result = self._holder._simple_new(
- result, freq=None, dtype=self.values.dtype
- )
-
return result
@property
@@ -2627,10 +2584,6 @@ def __init__(self, values, placement, ndim=None):
def _holder(self):
return TimedeltaArray
- @property
- def _box_func(self):
- return lambda x: Timedelta(x, unit="ns")
-
def _can_hold_element(self, element):
tipo = maybe_infer_dtype_type(element)
if tipo is not None:
@@ -2688,15 +2641,6 @@ def _try_coerce_args(self, other):
def _try_coerce_result(self, result):
""" reverse of try_coerce_args / try_operate """
- if isinstance(result, np.ndarray):
- mask = isna(result)
- if result.dtype.kind in ["i", "f"]:
- result = result.astype("m8[ns]")
- result[mask] = np.timedelta64("NaT", "ns")
-
- elif isinstance(result, (np.integer, np.float)):
- result = self._box_func(result)
-
return result
def should_store(self, value):
diff --git a/pandas/tests/indexing/test_datetime.py b/pandas/tests/indexing/test_datetime.py
index 31e9cff68445e..fb8f62d7a06c5 100644
--- a/pandas/tests/indexing/test_datetime.py
+++ b/pandas/tests/indexing/test_datetime.py
@@ -51,7 +51,7 @@ def test_indexing_with_datetime_tz(self):
# indexing
result = df.iloc[1]
expected = Series(
- [Timestamp("2013-01-02 00:00:00-0500", tz="US/Eastern"), np.nan, np.nan],
+ [Timestamp("2013-01-02 00:00:00-0500", tz="US/Eastern"), pd.NaT, pd.NaT],
index=list("ABC"),
dtype="object",
name=1,
@@ -59,7 +59,7 @@ def test_indexing_with_datetime_tz(self):
tm.assert_series_equal(result, expected)
result = df.loc[1]
expected = Series(
- [Timestamp("2013-01-02 00:00:00-0500", tz="US/Eastern"), np.nan, np.nan],
+ [Timestamp("2013-01-02 00:00:00-0500", tz="US/Eastern"), pd.NaT, pd.NaT],
index=list("ABC"),
dtype="object",
name=1,
| Viable now that we stopped coercing timedelta and datetime to i8.
I'd like to clean up `DatetimeTZBlock._try_coerce_result` before getting this in. Part of that means pushing the handling down to the function being wrapped. | https://api.github.com/repos/pandas-dev/pandas/pulls/27567 | 2019-07-24T15:48:33Z | 2019-07-26T20:57:02Z | 2019-07-26T20:57:02Z | 2019-07-26T21:44:14Z |
DEPR: remove .ix from tests/indexing/test_partial.py | diff --git a/pandas/tests/indexing/multiindex/test_partial.py b/pandas/tests/indexing/multiindex/test_partial.py
index b1519d82e1aa7..692b57ff98f94 100644
--- a/pandas/tests/indexing/multiindex/test_partial.py
+++ b/pandas/tests/indexing/multiindex/test_partial.py
@@ -1,5 +1,3 @@
-from warnings import catch_warnings, simplefilter
-
import numpy as np
import pytest
@@ -106,11 +104,6 @@ def test_getitem_partial_column_select(self):
expected = df.loc[("a", "y")][[1, 0]]
tm.assert_frame_equal(result, expected)
- with catch_warnings(record=True):
- simplefilter("ignore", FutureWarning)
- result = df.ix[("a", "y"), [1, 0]]
- tm.assert_frame_equal(result, expected)
-
with pytest.raises(KeyError, match=r"\('a', 'foo'\)"):
df.loc[("a", "foo"), :]
| https://api.github.com/repos/pandas-dev/pandas/pulls/27566 | 2019-07-24T15:21:14Z | 2019-07-25T16:58:08Z | 2019-07-25T16:58:08Z | 2019-07-26T03:07:27Z | |
DEPR: remove .ix from tests/indexing/multiindex/test_ix.py | diff --git a/pandas/tests/indexing/multiindex/test_ix.py b/pandas/tests/indexing/multiindex/test_ix.py
index d43115d60c029..2e7a5a08a16f0 100644
--- a/pandas/tests/indexing/multiindex/test_ix.py
+++ b/pandas/tests/indexing/multiindex/test_ix.py
@@ -1,5 +1,3 @@
-from warnings import catch_warnings, simplefilter
-
import numpy as np
import pytest
@@ -9,9 +7,8 @@
from pandas.util import testing as tm
-@pytest.mark.filterwarnings("ignore:\\n.ix:FutureWarning")
-class TestMultiIndexIx:
- def test_frame_setitem_ix(self, multiindex_dataframe_random_data):
+class TestMultiIndex:
+ def test_frame_setitem_loc(self, multiindex_dataframe_random_data):
frame = multiindex_dataframe_random_data
frame.loc[("bar", "two"), "B"] = 5
assert frame.loc[("bar", "two"), "B"] == 5
@@ -22,16 +19,7 @@ def test_frame_setitem_ix(self, multiindex_dataframe_random_data):
df.loc[("bar", "two"), 1] = 7
assert df.loc[("bar", "two"), 1] == 7
- with catch_warnings(record=True):
- simplefilter("ignore", FutureWarning)
- df = frame.copy()
- df.columns = list(range(3))
- df.ix[("bar", "two"), 1] = 7
- assert df.loc[("bar", "two"), 1] == 7
-
- def test_ix_general(self):
-
- # ix general issues
+ def test_loc_general(self):
# GH 2817
data = {
@@ -55,7 +43,7 @@ def test_ix_general(self):
expected = DataFrame({"amount": [222, 333, 444]}, index=index)
tm.assert_frame_equal(res, expected)
- def test_ix_multiindex_missing_label_raises(self):
+ def test_loc_multiindex_missing_label_raises(self):
# GH 21593
df = DataFrame(
np.random.randn(3, 3),
@@ -64,12 +52,12 @@ def test_ix_multiindex_missing_label_raises(self):
)
with pytest.raises(KeyError, match=r"^2$"):
- df.ix[2]
+ df.loc[2]
- def test_series_ix_getitem_fancy(
+ def test_series_loc_getitem_fancy(
self, multiindex_year_month_day_dataframe_random_data
):
s = multiindex_year_month_day_dataframe_random_data["A"]
expected = s.reindex(s.index[49:51])
- result = s.ix[[(2000, 3, 10), (2000, 3, 13)]]
+ result = s.loc[[(2000, 3, 10), (2000, 3, 13)]]
tm.assert_series_equal(result, expected)
| https://api.github.com/repos/pandas-dev/pandas/pulls/27565 | 2019-07-24T15:09:29Z | 2019-07-25T16:58:58Z | 2019-07-25T16:58:58Z | 2019-07-26T03:08:20Z | |
TYPING: some type hints for core.dtypes.common | diff --git a/pandas/core/dtypes/common.py b/pandas/core/dtypes/common.py
index f2571573bd1bc..5c1fe630ecaf5 100644
--- a/pandas/core/dtypes/common.py
+++ b/pandas/core/dtypes/common.py
@@ -1,5 +1,5 @@
""" common type operations """
-from typing import Any, Union
+from typing import Any, Callable, Union
import warnings
import numpy as np
@@ -141,7 +141,7 @@ def ensure_categorical(arr):
return arr
-def ensure_int_or_float(arr: ArrayLike, copy=False) -> np.array:
+def ensure_int_or_float(arr: ArrayLike, copy: bool = False) -> np.array:
"""
Ensure that an dtype array of some integer dtype
has an int64 dtype if possible.
@@ -206,12 +206,12 @@ def ensure_python_int(value: Union[int, np.integer]) -> int:
return new_value
-def classes(*klasses):
+def classes(*klasses) -> Callable:
""" evaluate if the tipo is a subclass of the klasses """
return lambda tipo: issubclass(tipo, klasses)
-def classes_and_not_datetimelike(*klasses):
+def classes_and_not_datetimelike(*klasses) -> Callable:
"""
evaluate if the tipo is a subclass of the klasses
and not a datetimelike
@@ -354,7 +354,7 @@ def is_scipy_sparse(arr):
return _is_scipy_sparse(arr)
-def is_categorical(arr):
+def is_categorical(arr) -> bool:
"""
Check whether an array-like is a Categorical instance.
@@ -675,7 +675,7 @@ def is_interval_dtype(arr_or_dtype):
return IntervalDtype.is_dtype(arr_or_dtype)
-def is_categorical_dtype(arr_or_dtype):
+def is_categorical_dtype(arr_or_dtype) -> bool:
"""
Check whether an array-like or dtype is of the Categorical dtype.
@@ -898,7 +898,7 @@ def is_dtype_equal(source, target):
return False
-def is_any_int_dtype(arr_or_dtype):
+def is_any_int_dtype(arr_or_dtype) -> bool:
"""Check whether the provided array or dtype is of an integer dtype.
In this function, timedelta64 instances are also considered "any-integer"
@@ -1160,7 +1160,7 @@ def is_int64_dtype(arr_or_dtype):
return _is_dtype_type(arr_or_dtype, classes(np.int64))
-def is_datetime64_any_dtype(arr_or_dtype):
+def is_datetime64_any_dtype(arr_or_dtype) -> bool:
"""
Check whether the provided array or dtype is of the datetime64 dtype.
@@ -1320,7 +1320,7 @@ def is_datetime_or_timedelta_dtype(arr_or_dtype):
return _is_dtype_type(arr_or_dtype, classes(np.datetime64, np.timedelta64))
-def _is_unorderable_exception(e):
+def _is_unorderable_exception(e: TypeError) -> bool:
"""
Check if the exception raised is an unorderable exception.
@@ -1616,7 +1616,7 @@ def is_float_dtype(arr_or_dtype):
return _is_dtype_type(arr_or_dtype, classes(np.floating))
-def is_bool_dtype(arr_or_dtype):
+def is_bool_dtype(arr_or_dtype) -> bool:
"""
Check whether the provided array or dtype is of a boolean dtype.
@@ -1789,7 +1789,7 @@ def is_extension_array_dtype(arr_or_dtype):
return isinstance(dtype, ExtensionDtype) or registry.find(dtype) is not None
-def is_complex_dtype(arr_or_dtype):
+def is_complex_dtype(arr_or_dtype) -> bool:
"""
Check whether the provided array or dtype is of a complex dtype.
@@ -1822,7 +1822,7 @@ def is_complex_dtype(arr_or_dtype):
return _is_dtype_type(arr_or_dtype, classes(np.complexfloating))
-def _is_dtype(arr_or_dtype, condition):
+def _is_dtype(arr_or_dtype, condition) -> bool:
"""
Return a boolean if the condition is satisfied for the arr_or_dtype.
@@ -1883,7 +1883,7 @@ def _get_dtype(arr_or_dtype):
return pandas_dtype(arr_or_dtype)
-def _is_dtype_type(arr_or_dtype, condition):
+def _is_dtype_type(arr_or_dtype, condition) -> bool:
"""
Return a boolean if the condition is satisfied for the arr_or_dtype.
@@ -1992,7 +1992,7 @@ def infer_dtype_from_object(dtype):
return infer_dtype_from_object(np.dtype(dtype))
-def _validate_date_like_dtype(dtype):
+def _validate_date_like_dtype(dtype) -> None:
"""
Check whether the dtype is a date-like dtype. Raises an error if invalid.
| https://api.github.com/repos/pandas-dev/pandas/pulls/27564 | 2019-07-24T14:38:07Z | 2019-07-25T17:01:35Z | 2019-07-25T17:01:35Z | 2019-07-26T03:16:32Z | |
DOC: Harmonize column selection to bracket notation | diff --git a/doc/source/getting_started/10min.rst b/doc/source/getting_started/10min.rst
index 510c7ef97aa98..d3ad6f99d5ecd 100644
--- a/doc/source/getting_started/10min.rst
+++ b/doc/source/getting_started/10min.rst
@@ -278,7 +278,7 @@ Using a single column's values to select data.
.. ipython:: python
- df[df.A > 0]
+ df[df['A'] > 0]
Selecting values from a DataFrame where a boolean condition is met.
diff --git a/doc/source/getting_started/basics.rst b/doc/source/getting_started/basics.rst
index 3f6f56376861f..802ffadf2a81e 100644
--- a/doc/source/getting_started/basics.rst
+++ b/doc/source/getting_started/basics.rst
@@ -926,7 +926,7 @@ Single aggregations on a ``Series`` this will return a scalar value:
.. ipython:: python
- tsdf.A.agg('sum')
+ tsdf['A'].agg('sum')
Aggregating with multiple functions
@@ -950,13 +950,13 @@ On a ``Series``, multiple functions return a ``Series``, indexed by the function
.. ipython:: python
- tsdf.A.agg(['sum', 'mean'])
+ tsdf['A'].agg(['sum', 'mean'])
Passing a ``lambda`` function will yield a ``<lambda>`` named row:
.. ipython:: python
- tsdf.A.agg(['sum', lambda x: x.mean()])
+ tsdf['A'].agg(['sum', lambda x: x.mean()])
Passing a named function will yield that name for the row:
@@ -965,7 +965,7 @@ Passing a named function will yield that name for the row:
def mymean(x):
return x.mean()
- tsdf.A.agg(['sum', mymean])
+ tsdf['A'].agg(['sum', mymean])
Aggregating with a dict
+++++++++++++++++++++++
@@ -1065,7 +1065,7 @@ Passing a single function to ``.transform()`` with a ``Series`` will yield a sin
.. ipython:: python
- tsdf.A.transform(np.abs)
+ tsdf['A'].transform(np.abs)
Transform with multiple functions
@@ -1084,7 +1084,7 @@ resulting column names will be the transforming functions.
.. ipython:: python
- tsdf.A.transform([np.abs, lambda x: x + 1])
+ tsdf['A'].transform([np.abs, lambda x: x + 1])
Transforming with a dict
diff --git a/doc/source/getting_started/comparison/comparison_with_r.rst b/doc/source/getting_started/comparison/comparison_with_r.rst
index 444e886bc951d..f67f46fc2b29b 100644
--- a/doc/source/getting_started/comparison/comparison_with_r.rst
+++ b/doc/source/getting_started/comparison/comparison_with_r.rst
@@ -81,7 +81,7 @@ R pandas
=========================================== ===========================================
``select(df, col_one = col1)`` ``df.rename(columns={'col1': 'col_one'})['col_one']``
``rename(df, col_one = col1)`` ``df.rename(columns={'col1': 'col_one'})``
-``mutate(df, c=a-b)`` ``df.assign(c=df.a-df.b)``
+``mutate(df, c=a-b)`` ``df.assign(c=df['a']-df['b'])``
=========================================== ===========================================
@@ -258,8 +258,8 @@ index/slice as well as standard boolean indexing:
df = pd.DataFrame({'a': np.random.randn(10), 'b': np.random.randn(10)})
df.query('a <= b')
- df[df.a <= df.b]
- df.loc[df.a <= df.b]
+ df[df['a'] <= df['b']]
+ df.loc[df['a'] <= df['b']]
For more details and examples see :ref:`the query documentation
<indexing.query>`.
@@ -284,7 +284,7 @@ In ``pandas`` the equivalent expression, using the
df = pd.DataFrame({'a': np.random.randn(10), 'b': np.random.randn(10)})
df.eval('a + b')
- df.a + df.b # same as the previous expression
+ df['a'] + df['b'] # same as the previous expression
In certain cases :meth:`~pandas.DataFrame.eval` will be much faster than
evaluation in pure Python. For more details and examples see :ref:`the eval
diff --git a/doc/source/user_guide/advanced.rst b/doc/source/user_guide/advanced.rst
index 22a9791ffde30..62a9b6396404a 100644
--- a/doc/source/user_guide/advanced.rst
+++ b/doc/source/user_guide/advanced.rst
@@ -738,7 +738,7 @@ and allows efficient indexing and storage of an index with a large number of dup
df['B'] = df['B'].astype(CategoricalDtype(list('cab')))
df
df.dtypes
- df.B.cat.categories
+ df['B'].cat.categories
Setting the index will create a ``CategoricalIndex``.
diff --git a/doc/source/user_guide/cookbook.rst b/doc/source/user_guide/cookbook.rst
index 15af5208a4f1f..c9d3bc3a28c70 100644
--- a/doc/source/user_guide/cookbook.rst
+++ b/doc/source/user_guide/cookbook.rst
@@ -592,8 +592,8 @@ Unlike agg, apply's callable is passed a sub-DataFrame which gives you access to
.. ipython:: python
df = pd.DataFrame([0, 1, 0, 1, 1, 1, 0, 1, 1], columns=['A'])
- df.A.groupby((df.A != df.A.shift()).cumsum()).groups
- df.A.groupby((df.A != df.A.shift()).cumsum()).cumsum()
+ df['A'].groupby((df['A'] != df['A'].shift()).cumsum()).groups
+ df['A'].groupby((df['A'] != df['A'].shift()).cumsum()).cumsum()
Expanding data
**************
@@ -719,7 +719,7 @@ Rolling Apply to multiple columns where function calculates a Series before a Sc
df
def gm(df, const):
- v = ((((df.A + df.B) + 1).cumprod()) - 1) * const
+ v = ((((df['A'] + df['B']) + 1).cumprod()) - 1) * const
return v.iloc[-1]
s = pd.Series({df.index[i]: gm(df.iloc[i:min(i + 51, len(df) - 1)], 5)
diff --git a/doc/source/user_guide/enhancingperf.rst b/doc/source/user_guide/enhancingperf.rst
index b77bfb9778837..40c8c207c847c 100644
--- a/doc/source/user_guide/enhancingperf.rst
+++ b/doc/source/user_guide/enhancingperf.rst
@@ -393,15 +393,15 @@ Consider the following toy example of doubling each observation:
.. code-block:: ipython
# Custom function without numba
- In [5]: %timeit df['col1_doubled'] = df.a.apply(double_every_value_nonumba) # noqa E501
+ In [5]: %timeit df['col1_doubled'] = df['a'].apply(double_every_value_nonumba) # noqa E501
1000 loops, best of 3: 797 us per loop
# Standard implementation (faster than a custom function)
- In [6]: %timeit df['col1_doubled'] = df.a * 2
+ In [6]: %timeit df['col1_doubled'] = df['a'] * 2
1000 loops, best of 3: 233 us per loop
# Custom function with numba
- In [7]: %timeit (df['col1_doubled'] = double_every_value_withnumba(df.a.to_numpy())
+ In [7]: %timeit df['col1_doubled'] = double_every_value_withnumba(df['a'].to_numpy())
1000 loops, best of 3: 145 us per loop
Caveats
@@ -643,8 +643,8 @@ The equivalent in standard Python would be
.. ipython:: python
df = pd.DataFrame(dict(a=range(5), b=range(5, 10)))
- df['c'] = df.a + df.b
- df['d'] = df.a + df.b + df.c
+ df['c'] = df['a'] + df['b']
+ df['d'] = df['a'] + df['b'] + df['c']
df['a'] = 1
df
@@ -688,7 +688,7 @@ name in an expression.
a = np.random.randn()
df.query('@a < a')
- df.loc[a < df.a] # same as the previous expression
+ df.loc[a < df['a']] # same as the previous expression
With :func:`pandas.eval` you cannot use the ``@`` prefix *at all*, because it
isn't defined in that context. ``pandas`` will let you know this if you try to
diff --git a/doc/source/user_guide/indexing.rst b/doc/source/user_guide/indexing.rst
index e3b75afcf945e..cf55ce0c9a6d4 100644
--- a/doc/source/user_guide/indexing.rst
+++ b/doc/source/user_guide/indexing.rst
@@ -210,7 +210,7 @@ as an attribute:
See `here for an explanation of valid identifiers
<https://docs.python.org/3/reference/lexical_analysis.html#identifiers>`__.
- - The attribute will not be available if it conflicts with an existing method name, e.g. ``s.min`` is not allowed.
+ - The attribute will not be available if it conflicts with an existing method name, e.g. ``s.min`` is not allowed, but ``s['min']`` is possible.
- Similarly, the attribute will not be available if it conflicts with any of the following list: ``index``,
``major_axis``, ``minor_axis``, ``items``.
@@ -540,7 +540,7 @@ The ``callable`` must be a function with one argument (the calling Series or Dat
columns=list('ABCD'))
df1
- df1.loc[lambda df: df.A > 0, :]
+ df1.loc[lambda df: df['A'] > 0, :]
df1.loc[:, lambda df: ['A', 'B']]
df1.iloc[:, lambda df: [0, 1]]
@@ -552,7 +552,7 @@ You can use callable indexing in ``Series``.
.. ipython:: python
- df1.A.loc[lambda s: s > 0]
+ df1['A'].loc[lambda s: s > 0]
Using these methods / indexers, you can chain data selection operations
without using a temporary variable.
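A minimal sketch of callable indexing, assuming a frame like the ``df1`` used above; the lambda receives the calling object, so selections can be chained without a temporary variable:

```python
import numpy as np
import pandas as pd

df1 = pd.DataFrame(np.random.randn(6, 4),
                   index=list("abcdef"),
                   columns=list("ABCD"))

# The callable passed to .loc receives the calling DataFrame,
# so the selection is equivalent to boolean indexing on df1 itself.
positive_a = df1.loc[lambda df: df["A"] > 0, :]

assert positive_a.equals(df1[df1["A"] > 0])
```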
@@ -561,7 +561,7 @@ without using a temporary variable.
bb = pd.read_csv('data/baseball.csv', index_col='id')
(bb.groupby(['year', 'team']).sum()
- .loc[lambda df: df.r > 100])
+ .loc[lambda df: df['r'] > 100])
.. _indexing.deprecate_ix:
@@ -871,9 +871,9 @@ Boolean indexing
Another common operation is the use of boolean vectors to filter the data.
The operators are: ``|`` for ``or``, ``&`` for ``and``, and ``~`` for ``not``.
These **must** be grouped by using parentheses, since by default Python will
-evaluate an expression such as ``df.A > 2 & df.B < 3`` as
-``df.A > (2 & df.B) < 3``, while the desired evaluation order is
-``(df.A > 2) & (df.B < 3)``.
+evaluate an expression such as ``df['A'] > 2 & df['B'] < 3`` as
+``df['A'] > (2 & df['B']) < 3``, while the desired evaluation order is
+``(df['A'] > 2) & (df['B'] < 3)``.
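A minimal sketch of why the parentheses matter: without them, Python evaluates ``2 & df['B']`` first and then tries a chained comparison, which is ambiguous for a Series and raises.

```python
import pandas as pd

df = pd.DataFrame({"A": [1, 3, 5], "B": [2, 2, 2]})

# Correct: each comparison grouped explicitly.
mask = (df["A"] > 2) & (df["B"] < 3)
assert list(mask) == [False, True, True]

# Without parentheses, `2 & df['B']` is computed first and the
# resulting chained comparison on a Series is ambiguous.
raised = False
try:
    df["A"] > 2 & df["B"] < 3
except ValueError:
    raised = True  # "The truth value of a Series is ambiguous"
assert raised
```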
Using a boolean vector to index a Series works exactly as in a NumPy ndarray:
@@ -1134,7 +1134,7 @@ between the values of columns ``a`` and ``c``. For example:
df
# pure python
- df[(df.a < df.b) & (df.b < df.c)]
+ df[(df['a'] < df['b']) & (df['b'] < df['c'])]
# query
df.query('(a < b) & (b < c)')
@@ -1241,7 +1241,7 @@ Full numpy-like syntax:
df = pd.DataFrame(np.random.randint(n, size=(n, 3)), columns=list('abc'))
df
df.query('(a < b) & (b < c)')
- df[(df.a < df.b) & (df.b < df.c)]
+ df[(df['a'] < df['b']) & (df['b'] < df['c'])]
Slightly nicer by removing the parentheses (``query`` makes comparison
operators bind tighter than ``&`` and ``|``).
@@ -1279,12 +1279,12 @@ The ``in`` and ``not in`` operators
df.query('a in b')
# How you'd do it in pure Python
- df[df.a.isin(df.b)]
+ df[df['a'].isin(df['b'])]
df.query('a not in b')
# pure Python
- df[~df.a.isin(df.b)]
+ df[~df['a'].isin(df['b'])]
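A small sketch of the equivalence above, using a toy frame (the data here is illustrative, not from the docs):

```python
import pandas as pd

# Toy frame where some values of 'a' also appear in 'b'.
df = pd.DataFrame({"a": list("aabbccddeeff"),
                   "b": list("aaaabbbbcccc")})

via_query = df.query("a in b")
via_isin = df[df["a"].isin(df["b"])]   # pure-Python equivalent
assert via_query.equals(via_isin)

# Negation follows the same pattern.
assert df.query("a not in b").equals(df[~df["a"].isin(df["b"])])
```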
You can combine this with other expressions for very succinct queries:
@@ -1297,7 +1297,7 @@ You can combine this with other expressions for very succinct queries:
df.query('a in b and c < d')
# pure Python
- df[df.b.isin(df.a) & (df.c < df.d)]
+ df[df['b'].isin(df['a']) & (df['c'] < df['d'])]
.. note::
@@ -1326,7 +1326,7 @@ to ``in``/``not in``.
df.query('b == ["a", "b", "c"]')
# pure Python
- df[df.b.isin(["a", "b", "c"])]
+ df[df['b'].isin(["a", "b", "c"])]
df.query('c == [1, 2]')
@@ -1338,7 +1338,7 @@ to ``in``/``not in``.
df.query('[1, 2] not in c')
# pure Python
- df[~df.c.isin([1, 2])]
+ df[~df['c'].isin([1, 2])]
Boolean operators
@@ -1352,7 +1352,7 @@ You can negate boolean expressions with the word ``not`` or the ``~`` operator.
df['bools'] = np.random.rand(len(df)) > 0.5
df.query('~bools')
df.query('not bools')
- df.query('not bools') == df[~df.bools]
+ df.query('not bools') == df[~df['bools']]
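A brief sketch of the negation equivalence shown above, with a self-contained frame (the column values are arbitrary):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame(rng.random((10, 3)), columns=list("abc"))
df["bools"] = rng.random(len(df)) > 0.5

# 'not' and '~' inside query() both select the same rows as
# plain boolean indexing with ~ outside it.
assert df.query("not bools").equals(df[~df["bools"]])
assert df.query("~bools").equals(df[~df["bools"]])
```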
Of course, expressions can be arbitrarily complex too:
@@ -1362,7 +1362,10 @@ Of course, expressions can be arbitrarily complex too:
shorter = df.query('a < b < c and (not bools) or bools > 2')
# equivalent in pure Python
- longer = df[(df.a < df.b) & (df.b < df.c) & (~df.bools) | (df.bools > 2)]
+ longer = df[(df['a'] < df['b'])
+ & (df['b'] < df['c'])
+ & (~df['bools'])
+ | (df['bools'] > 2)]
shorter
longer
@@ -1835,14 +1838,14 @@ chained indexing expression, you can set the :ref:`option <options>`
# This will show the SettingWithCopyWarning
# but the frame values will be set
- dfb['c'][dfb.a.str.startswith('o')] = 42
+ dfb['c'][dfb['a'].str.startswith('o')] = 42
This however is operating on a copy and will not work.
::
>>> pd.set_option('mode.chained_assignment','warn')
- >>> dfb[dfb.a.str.startswith('o')]['c'] = 42
+ >>> dfb[dfb['a'].str.startswith('o')]['c'] = 42
Traceback (most recent call last)
...
SettingWithCopyWarning:
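A hedged sketch of the recommended alternative to the chained assignment shown above: a single ``.loc`` call assigns through one indexing operation, so it never operates on a temporary copy (the frame here is a stand-in for the ``dfb`` in the docs):

```python
import pandas as pd

# Stand-in for the dfb frame used in the docs.
dfb = pd.DataFrame({"a": ["one", "two", "oops"], "c": [0, 0, 0]})

# Instead of dfb['c'][mask] = 42 (chained assignment, which may
# write to a copy), set the values with one .loc call.
mask = dfb["a"].str.startswith("o")
dfb.loc[mask, "c"] = 42

assert list(dfb["c"]) == [42, 0, 42]
```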
diff --git a/doc/source/user_guide/reshaping.rst b/doc/source/user_guide/reshaping.rst
index f118fe84d523a..dd6d3062a8f0a 100644
--- a/doc/source/user_guide/reshaping.rst
+++ b/doc/source/user_guide/reshaping.rst
@@ -469,7 +469,7 @@ If ``crosstab`` receives only two Series, it will provide a frequency table.
'C': [1, 1, np.nan, 1, 1]})
df
- pd.crosstab(df.A, df.B)
+ pd.crosstab(df['A'], df['B'])
Any input passed containing ``Categorical`` data will have **all** of its
categories included in the cross-tabulation, even if the actual data does
@@ -489,13 +489,13 @@ using the ``normalize`` argument:
.. ipython:: python
- pd.crosstab(df.A, df.B, normalize=True)
+ pd.crosstab(df['A'], df['B'], normalize=True)
``normalize`` can also normalize values within each row or within each column:
.. ipython:: python
- pd.crosstab(df.A, df.B, normalize='columns')
+ pd.crosstab(df['A'], df['B'], normalize='columns')
``crosstab`` can also be passed a third ``Series`` and an aggregation function
(``aggfunc``) that will be applied to the values of the third ``Series`` within
@@ -503,7 +503,7 @@ each group defined by the first two ``Series``:
.. ipython:: python
- pd.crosstab(df.A, df.B, values=df.C, aggfunc=np.sum)
+ pd.crosstab(df['A'], df['B'], values=df['C'], aggfunc=np.sum)
Adding margins
~~~~~~~~~~~~~~
@@ -512,7 +512,7 @@ Finally, one can also add margins or normalize this output.
.. ipython:: python
- pd.crosstab(df.A, df.B, values=df.C, aggfunc=np.sum, normalize=True,
+ pd.crosstab(df['A'], df['B'], values=df['C'], aggfunc=np.sum, normalize=True,
margins=True)
.. _reshaping.tile:
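A self-contained sketch of the ``crosstab`` calls touched by this diff; note that newer pandas deprecates passing ``np.sum`` as ``aggfunc``, so the string ``"sum"`` is used here instead:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"A": [1, 2, 2, 2, 2],
                   "B": [3, 3, 4, 4, 4],
                   "C": [1, 1, np.nan, 1, 1]})

# Two Series -> frequency table.
freq = pd.crosstab(df["A"], df["B"])
assert freq.loc[2, 4] == 3

# Third Series aggregated within each (A, B) group; "sum" stands
# in for np.sum, which newer pandas deprecates as an aggfunc.
agg = pd.crosstab(df["A"], df["B"], values=df["C"], aggfunc="sum")
```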
diff --git a/doc/source/user_guide/visualization.rst b/doc/source/user_guide/visualization.rst
index fdceaa5868cec..fa16b2f216610 100644
--- a/doc/source/user_guide/visualization.rst
+++ b/doc/source/user_guide/visualization.rst
@@ -1148,10 +1148,10 @@ To plot data on a secondary y-axis, use the ``secondary_y`` keyword:
.. ipython:: python
- df.A.plot()
+ df['A'].plot()
@savefig series_plot_secondary_y.png
- df.B.plot(secondary_y=True, style='g')
+ df['B'].plot(secondary_y=True, style='g')
.. ipython:: python
:suppress:
@@ -1205,7 +1205,7 @@ Here is the default behavior, notice how the x-axis tick labeling is performed:
plt.figure()
@savefig ser_plot_suppress.png
- df.A.plot()
+ df['A'].plot()
.. ipython:: python
:suppress:
@@ -1219,7 +1219,7 @@ Using the ``x_compat`` parameter, you can suppress this behavior:
plt.figure()
@savefig ser_plot_suppress_parm.png
- df.A.plot(x_compat=True)
+ df['A'].plot(x_compat=True)
.. ipython:: python
:suppress:
@@ -1235,9 +1235,9 @@ in ``pandas.plotting.plot_params`` can be used in a `with statement`:
@savefig ser_plot_suppress_context.png
with pd.plotting.plot_params.use('x_compat', True):
- df.A.plot(color='r')
- df.B.plot(color='g')
- df.C.plot(color='b')
+ df['A'].plot(color='r')
+ df['B'].plot(color='g')
+ df['C'].plot(color='b')
.. ipython:: python
:suppress:
| As [suggested by](https://medium.com/dunder-data/minimally-sufficient-pandas-a8e67f2a2428#46f9) @tdpetrou. If there is agreement about not _presenting_ the dot notation to access columns in the docu, please let me know and I'll then check the rest of the docu.
- [~] closes #xxxx
- [~] tests added / passed
- [~] passes `black pandas`
- [~] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] passes `git diff upstream/master -u -- "*.rst" | flake8-rst --diff`
- [~] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/27562 | 2019-07-24T12:24:19Z | 2019-08-26T16:37:15Z | 2019-08-26T16:37:15Z | 2019-08-26T18:29:14Z |
Backport PR #27556 on branch 0.25.x (BUG: Allow ensure_index to coerce nan to NaT with numpy object array and tz Timestamp) | diff --git a/doc/source/whatsnew/v0.25.1.rst b/doc/source/whatsnew/v0.25.1.rst
index 6234bc0f7bd35..6be3f0fa2ef3d 100644
--- a/doc/source/whatsnew/v0.25.1.rst
+++ b/doc/source/whatsnew/v0.25.1.rst
@@ -50,7 +50,7 @@ Timedelta
Timezones
^^^^^^^^^
--
+- Bug in :class:`Index` where a numpy object array with a timezone aware :class:`Timestamp` and ``np.nan`` would not return a :class:`DatetimeIndex` (:issue:`27011`)
-
-
diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index 33de8e41b2f65..74519391bac2f 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -489,19 +489,15 @@ def __new__(
pass
elif inferred != "string":
if inferred.startswith("datetime"):
- if (
- lib.is_datetime_with_singletz_array(subarr)
- or "tz" in kwargs
- ):
- # only when subarr has the same tz
- from pandas import DatetimeIndex
+ from pandas import DatetimeIndex
- try:
- return DatetimeIndex(
- subarr, copy=copy, name=name, **kwargs
- )
- except OutOfBoundsDatetime:
- pass
+ try:
+ return DatetimeIndex(subarr, copy=copy, name=name, **kwargs)
+ except (ValueError, OutOfBoundsDatetime):
+ # GH 27011
+ # If we have mixed timezones, just send it
+ # down the base constructor
+ pass
elif inferred.startswith("timedelta"):
from pandas import TimedeltaIndex
diff --git a/pandas/tests/arrays/interval/test_interval.py b/pandas/tests/arrays/interval/test_interval.py
index 82409df5b46f7..6a86289b6fcc6 100644
--- a/pandas/tests/arrays/interval/test_interval.py
+++ b/pandas/tests/arrays/interval/test_interval.py
@@ -42,10 +42,9 @@ class TestAttributes:
(0, 1),
(Timedelta("0 days"), Timedelta("1 day")),
(Timestamp("2018-01-01"), Timestamp("2018-01-02")),
- pytest.param(
+ (
Timestamp("2018-01-01", tz="US/Eastern"),
Timestamp("2018-01-02", tz="US/Eastern"),
- marks=pytest.mark.xfail(strict=True, reason="GH 27011"),
),
],
)
diff --git a/pandas/tests/indexes/datetimes/test_construction.py b/pandas/tests/indexes/datetimes/test_construction.py
index 6708feda7dd1e..66a22ae7e9e46 100644
--- a/pandas/tests/indexes/datetimes/test_construction.py
+++ b/pandas/tests/indexes/datetimes/test_construction.py
@@ -822,6 +822,12 @@ def test_constructor_wrong_precision_raises(self):
with pytest.raises(ValueError):
pd.DatetimeIndex(["2000"], dtype="datetime64[us]")
+ def test_index_constructor_with_numpy_object_array_and_timestamp_tz_with_nan(self):
+ # GH 27011
+ result = Index(np.array([Timestamp("2019", tz="UTC"), np.nan], dtype=object))
+ expected = DatetimeIndex([Timestamp("2019", tz="UTC"), pd.NaT])
+ tm.assert_index_equal(result, expected)
+
class TestTimeSeries:
def test_dti_constructor_preserve_dti_freq(self):
| Backport PR #27556: BUG: Allow ensure_index to coerce nan to NaT with numpy object array and tz Timestamp | https://api.github.com/repos/pandas-dev/pandas/pulls/27561 | 2019-07-24T11:43:13Z | 2019-07-24T13:00:01Z | 2019-07-24T13:00:01Z | 2019-07-24T13:00:02Z |
Backport PR #27549 on branch 0.25.x (BUG: Fix interpolate ValueError for datetime64_tz index) | diff --git a/doc/source/whatsnew/v0.25.1.rst b/doc/source/whatsnew/v0.25.1.rst
index 6234bc0f7bd35..a769930c9cd9b 100644
--- a/doc/source/whatsnew/v0.25.1.rst
+++ b/doc/source/whatsnew/v0.25.1.rst
@@ -56,7 +56,7 @@ Timezones
Numeric
^^^^^^^
--
+- Bug in :meth:`Series.interpolate` when using a timezone aware :class:`DatetimeIndex` (:issue:`27548`)
-
-
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index f28f58b070368..19f126c36cde7 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -30,7 +30,6 @@
is_bool,
is_bool_dtype,
is_datetime64_any_dtype,
- is_datetime64_dtype,
is_datetime64tz_dtype,
is_dict_like,
is_extension_array_dtype,
@@ -7035,7 +7034,7 @@ def interpolate(
methods = {"index", "values", "nearest", "time"}
is_numeric_or_datetime = (
is_numeric_dtype(index)
- or is_datetime64_dtype(index)
+ or is_datetime64_any_dtype(index)
or is_timedelta64_dtype(index)
)
if method not in methods and not is_numeric_or_datetime:
diff --git a/pandas/tests/series/test_missing.py b/pandas/tests/series/test_missing.py
index c5fc52b9b0c41..10375719be8d2 100644
--- a/pandas/tests/series/test_missing.py
+++ b/pandas/tests/series/test_missing.py
@@ -1518,10 +1518,16 @@ def test_interp_nonmono_raise(self):
s.interpolate(method="krogh")
@td.skip_if_no_scipy
- def test_interp_datetime64(self):
- df = Series([1, np.nan, 3], index=date_range("1/1/2000", periods=3))
- result = df.interpolate(method="nearest")
- expected = Series([1.0, 1.0, 3.0], index=date_range("1/1/2000", periods=3))
+ @pytest.mark.parametrize("method", ["nearest", "pad"])
+ def test_interp_datetime64(self, method, tz_naive_fixture):
+ df = Series(
+ [1, np.nan, 3], index=date_range("1/1/2000", periods=3, tz=tz_naive_fixture)
+ )
+ result = df.interpolate(method=method)
+ expected = Series(
+ [1.0, 1.0, 3.0],
+ index=date_range("1/1/2000", periods=3, tz=tz_naive_fixture),
+ )
assert_series_equal(result, expected)
def test_interp_limit_no_nans(self):
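A sketch of the behaviour this fix enables: index-based interpolation on a timezone-aware ``DatetimeIndex``, which previously raised because only naive ``datetime64`` indexes were accepted (the exact data below is illustrative):

```python
import numpy as np
import pandas as pd

# Evenly spaced tz-aware index; before this fix, index-based
# interpolation rejected datetime64[ns, tz] indexes.
s = pd.Series([1.0, np.nan, 3.0],
              index=pd.date_range("2000-01-01", periods=3, tz="UTC"))

result = s.interpolate(method="index")
assert result.iloc[1] == 2.0
```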
| Backport PR #27549: BUG: Fix interpolate ValueError for datetime64_tz index | https://api.github.com/repos/pandas-dev/pandas/pulls/27560 | 2019-07-24T11:43:02Z | 2019-07-24T13:00:12Z | 2019-07-24T13:00:12Z | 2019-07-24T13:00:12Z |
BUG: Allow ensure_index to coerce nan to NaT with numpy object array and tz Timestamp | diff --git a/doc/source/whatsnew/v0.25.1.rst b/doc/source/whatsnew/v0.25.1.rst
index 69f82f7f85040..c4dc3cd05d530 100644
--- a/doc/source/whatsnew/v0.25.1.rst
+++ b/doc/source/whatsnew/v0.25.1.rst
@@ -50,7 +50,7 @@ Timedelta
Timezones
^^^^^^^^^
--
+- Bug in :class:`Index` where a numpy object array with a timezone aware :class:`Timestamp` and ``np.nan`` would not return a :class:`DatetimeIndex` (:issue:`27011`)
-
-
diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index b30d262f6304a..8042f71c2754e 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -469,19 +469,15 @@ def __new__(
pass
elif inferred != "string":
if inferred.startswith("datetime"):
- if (
- lib.is_datetime_with_singletz_array(subarr)
- or "tz" in kwargs
- ):
- # only when subarr has the same tz
- from pandas import DatetimeIndex
+ from pandas import DatetimeIndex
- try:
- return DatetimeIndex(
- subarr, copy=copy, name=name, **kwargs
- )
- except OutOfBoundsDatetime:
- pass
+ try:
+ return DatetimeIndex(subarr, copy=copy, name=name, **kwargs)
+ except (ValueError, OutOfBoundsDatetime):
+ # GH 27011
+ # If we have mixed timezones, just send it
+ # down the base constructor
+ pass
elif inferred.startswith("timedelta"):
from pandas import TimedeltaIndex
diff --git a/pandas/tests/arrays/interval/test_interval.py b/pandas/tests/arrays/interval/test_interval.py
index 82409df5b46f7..6a86289b6fcc6 100644
--- a/pandas/tests/arrays/interval/test_interval.py
+++ b/pandas/tests/arrays/interval/test_interval.py
@@ -42,10 +42,9 @@ class TestAttributes:
(0, 1),
(Timedelta("0 days"), Timedelta("1 day")),
(Timestamp("2018-01-01"), Timestamp("2018-01-02")),
- pytest.param(
+ (
Timestamp("2018-01-01", tz="US/Eastern"),
Timestamp("2018-01-02", tz="US/Eastern"),
- marks=pytest.mark.xfail(strict=True, reason="GH 27011"),
),
],
)
diff --git a/pandas/tests/indexes/datetimes/test_construction.py b/pandas/tests/indexes/datetimes/test_construction.py
index 6708feda7dd1e..66a22ae7e9e46 100644
--- a/pandas/tests/indexes/datetimes/test_construction.py
+++ b/pandas/tests/indexes/datetimes/test_construction.py
@@ -822,6 +822,12 @@ def test_constructor_wrong_precision_raises(self):
with pytest.raises(ValueError):
pd.DatetimeIndex(["2000"], dtype="datetime64[us]")
+ def test_index_constructor_with_numpy_object_array_and_timestamp_tz_with_nan(self):
+ # GH 27011
+ result = Index(np.array([Timestamp("2019", tz="UTC"), np.nan], dtype=object))
+ expected = DatetimeIndex([Timestamp("2019", tz="UTC"), pd.NaT])
+ tm.assert_index_equal(result, expected)
+
class TestTimeSeries:
def test_dti_constructor_preserve_dti_freq(self):
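A sketch of the behaviour fixed here, mirroring the new test above: an object array mixing a tz-aware ``Timestamp`` and ``np.nan`` is coerced to a tz-aware ``DatetimeIndex`` with ``NaT`` (verified against current pandas; older releases raised instead):

```python
import numpy as np
import pandas as pd

# GH 27011: tz-aware Timestamp plus np.nan in an object array
# should infer a tz-aware DatetimeIndex, with the nan as NaT.
arr = np.array([pd.Timestamp("2019", tz="UTC"), np.nan], dtype=object)
result = pd.Index(arr)

assert isinstance(result, pd.DatetimeIndex)
assert pd.isna(result[1])
```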
| - [x] closes #27011
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/27556 | 2019-07-24T06:40:29Z | 2019-07-24T11:42:49Z | 2019-07-24T11:42:49Z | 2019-07-24T16:04:24Z |
CLN: more assorted cleanups | diff --git a/pandas/_libs/missing.pyx b/pandas/_libs/missing.pyx
index 1d756115ebd5a..052b081988c9e 100644
--- a/pandas/_libs/missing.pyx
+++ b/pandas/_libs/missing.pyx
@@ -80,11 +80,8 @@ cpdef bint checknull_old(object val):
cdef inline bint _check_none_nan_inf_neginf(object val):
- try:
- return val is None or (isinstance(val, float) and
- (val != val or val == INF or val == NEGINF))
- except ValueError:
- return False
+ return val is None or (isinstance(val, float) and
+ (val != val or val == INF or val == NEGINF))
@cython.wraparound(False)
diff --git a/pandas/core/arrays/categorical.py b/pandas/core/arrays/categorical.py
index 6200cd14663f8..dd71daab9d4c5 100644
--- a/pandas/core/arrays/categorical.py
+++ b/pandas/core/arrays/categorical.py
@@ -38,12 +38,7 @@
is_timedelta64_dtype,
)
from pandas.core.dtypes.dtypes import CategoricalDtype
-from pandas.core.dtypes.generic import (
- ABCCategoricalIndex,
- ABCDataFrame,
- ABCIndexClass,
- ABCSeries,
-)
+from pandas.core.dtypes.generic import ABCDataFrame, ABCIndexClass, ABCSeries
from pandas.core.dtypes.inference import is_hashable
from pandas.core.dtypes.missing import isna, notna
@@ -166,19 +161,6 @@ def f(self, other):
return f
-def _maybe_to_categorical(array):
- """
- Coerce to a categorical if a series is given.
-
- Internal use ONLY.
- """
- if isinstance(array, (ABCSeries, ABCCategoricalIndex)):
- return array._values
- elif isinstance(array, np.ndarray):
- return Categorical(array)
- return array
-
-
def contains(cat, key, container):
"""
Helper for membership check for ``key`` in ``cat``.
@@ -1988,23 +1970,6 @@ def take_nd(self, indexer, allow_fill=None, fill_value=None):
take = take_nd
- def _slice(self, slicer):
- """
- Return a slice of myself.
-
- For internal compatibility with numpy arrays.
- """
-
- # only allow 1 dimensional slicing, but can
- # in a 2-d case be passd (slice(None),....)
- if isinstance(slicer, tuple) and len(slicer) == 2:
- if not com.is_null_slice(slicer[0]):
- raise AssertionError("invalid slicing for a 1-ndim " "categorical")
- slicer = slicer[1]
-
- codes = self._codes[slicer]
- return self._constructor(values=codes, dtype=self.dtype, fastpath=True)
-
def __len__(self):
"""
The length of this Categorical.
diff --git a/pandas/core/arrays/sparse.py b/pandas/core/arrays/sparse.py
index 9376b49112f6f..ee3652a211e31 100644
--- a/pandas/core/arrays/sparse.py
+++ b/pandas/core/arrays/sparse.py
@@ -601,10 +601,6 @@ def __init__(
dtype=None,
copy=False,
):
- from pandas.core.internals import SingleBlockManager
-
- if isinstance(data, SingleBlockManager):
- data = data.internal_values()
if fill_value is None and isinstance(dtype, SparseDtype):
fill_value = dtype.fill_value
@@ -1859,15 +1855,6 @@ def _formatter(self, boxed=False):
SparseArray._add_unary_ops()
-def _maybe_to_dense(obj):
- """
- try to convert to dense
- """
- if hasattr(obj, "to_dense"):
- return obj.to_dense()
- return obj
-
-
def make_sparse(arr, kind="block", fill_value=None, dtype=None, copy=False):
"""
Convert ndarray to sparse format
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 9053edf2d1424..1a854af52c20e 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -4892,12 +4892,12 @@ def sample(
if weights is not None:
# If a series, align with frame
- if isinstance(weights, pd.Series):
+ if isinstance(weights, ABCSeries):
weights = weights.reindex(self.axes[axis])
# Strings acceptable if a dataframe and axis = 0
if isinstance(weights, str):
- if isinstance(self, pd.DataFrame):
+ if isinstance(self, ABCDataFrame):
if axis == 0:
try:
weights = self[weights]
@@ -6628,7 +6628,7 @@ def replace(
to_replace = [to_replace]
if isinstance(to_replace, (tuple, list)):
- if isinstance(self, pd.DataFrame):
+ if isinstance(self, ABCDataFrame):
return self.apply(
_single_replace, args=(to_replace, method, inplace, limit)
)
@@ -7421,7 +7421,7 @@ def _clip_with_one_bound(self, threshold, method, axis, inplace):
# be transformed to NDFrame from other array like structure.
if (not isinstance(threshold, ABCSeries)) and is_list_like(threshold):
if isinstance(self, ABCSeries):
- threshold = pd.Series(threshold, index=self.index)
+ threshold = self._constructor(threshold, index=self.index)
else:
threshold = _align_method_FRAME(self, threshold, axis)
return self.where(subset, threshold, axis=axis, inplace=inplace)
@@ -7510,9 +7510,9 @@ def clip(self, lower=None, upper=None, axis=None, inplace=False, *args, **kwargs
# so ignore
# GH 19992
# numpy doesn't drop a list-like bound containing NaN
- if not is_list_like(lower) and np.any(pd.isnull(lower)):
+ if not is_list_like(lower) and np.any(isna(lower)):
lower = None
- if not is_list_like(upper) and np.any(pd.isnull(upper)):
+ if not is_list_like(upper) and np.any(isna(upper)):
upper = None
# GH 2747 (arguments were reversed)
@@ -8985,7 +8985,7 @@ def _where(
msg = "Boolean array expected for the condition, not {dtype}"
- if not isinstance(cond, pd.DataFrame):
+ if not isinstance(cond, ABCDataFrame):
# This is a single-dimensional object.
if not is_bool_dtype(cond):
raise ValueError(msg.format(dtype=cond.dtype))
diff --git a/pandas/core/groupby/generic.py b/pandas/core/groupby/generic.py
index 5b9cec6903749..b886b7e305ed0 100644
--- a/pandas/core/groupby/generic.py
+++ b/pandas/core/groupby/generic.py
@@ -35,7 +35,7 @@
is_object_dtype,
is_scalar,
)
-from pandas.core.dtypes.missing import isna, notna
+from pandas.core.dtypes.missing import _isna_ndarraylike, isna, notna
from pandas._typing import FrameOrSeries
import pandas.core.algorithms as algorithms
@@ -44,8 +44,13 @@
from pandas.core.frame import DataFrame
from pandas.core.generic import ABCDataFrame, ABCSeries, NDFrame, _shared_docs
from pandas.core.groupby import base
-from pandas.core.groupby.groupby import GroupBy, _apply_docs, _transform_template
-from pandas.core.index import Index, MultiIndex
+from pandas.core.groupby.groupby import (
+ GroupBy,
+ _apply_docs,
+ _transform_template,
+ groupby,
+)
+from pandas.core.index import Index, MultiIndex, _all_indexes_same
import pandas.core.indexes.base as ibase
from pandas.core.internals import BlockManager, make_block
from pandas.core.series import Series
@@ -162,8 +167,6 @@ def _cython_agg_blocks(self, how, alt=None, numeric_only=True, min_count=-1):
continue
# call our grouper again with only this block
- from pandas.core.groupby.groupby import groupby
-
obj = self.obj[data.items[locs]]
s = groupby(obj, self.grouper)
try:
@@ -348,8 +351,6 @@ def _decide_output_index(self, output, labels):
return output_keys
def _wrap_applied_output(self, keys, values, not_indexed_same=False):
- from pandas.core.index import _all_indexes_same
-
if len(keys) == 0:
return DataFrame(index=keys)
@@ -1590,13 +1591,14 @@ def count(self):
DataFrame
Count of values within each group.
"""
- from pandas.core.dtypes.missing import _isna_ndarraylike as _isna
-
data, _ = self._get_data_to_aggregate()
ids, _, ngroups = self.grouper.group_info
mask = ids != -1
- val = ((mask & ~_isna(np.atleast_2d(blk.get_values()))) for blk in data.blocks)
+ val = (
+ (mask & ~_isna_ndarraylike(np.atleast_2d(blk.get_values())))
+ for blk in data.blocks
+ )
loc = (blk.mgr_locs for blk in data.blocks)
counter = partial(lib.count_level_2d, labels=ids, max_bin=ngroups, axis=1)
diff --git a/pandas/core/internals/blocks.py b/pandas/core/internals/blocks.py
index 549920e230e8a..75dbe74c897f4 100644
--- a/pandas/core/internals/blocks.py
+++ b/pandas/core/internals/blocks.py
@@ -3074,10 +3074,10 @@ class CategoricalBlock(ExtensionBlock):
_concatenator = staticmethod(concat_categorical)
def __init__(self, values, placement, ndim=None):
- from pandas.core.arrays.categorical import _maybe_to_categorical
-
# coerce to categorical if we can
- super().__init__(_maybe_to_categorical(values), placement=placement, ndim=ndim)
+ values = extract_array(values)
+ assert isinstance(values, Categorical), type(values)
+ super().__init__(values, placement=placement, ndim=ndim)
@property
def _holder(self):
diff --git a/pandas/core/reshape/pivot.py b/pandas/core/reshape/pivot.py
index 2bdef766a3434..79716520f6654 100644
--- a/pandas/core/reshape/pivot.py
+++ b/pandas/core/reshape/pivot.py
@@ -127,8 +127,6 @@ def pivot_table(
table = agged.unstack(to_unstack)
if not dropna:
- from pandas import MultiIndex
-
if table.index.nlevels > 1:
m = MultiIndex.from_arrays(
cartesian_product(table.index.levels), names=table.index.names
diff --git a/pandas/core/sorting.py b/pandas/core/sorting.py
index 5f3ed87424d0e..1ab6c792c6402 100644
--- a/pandas/core/sorting.py
+++ b/pandas/core/sorting.py
@@ -202,19 +202,19 @@ def lexsort_indexer(keys, orders=None, na_position="last"):
# we are already a Categorical
if is_categorical_dtype(key):
- c = key
+ cat = key
# create the Categorical
else:
- c = Categorical(key, ordered=True)
+ cat = Categorical(key, ordered=True)
if na_position not in ["last", "first"]:
raise ValueError("invalid na_position: {!r}".format(na_position))
- n = len(c.categories)
- codes = c.codes.copy()
+ n = len(cat.categories)
+ codes = cat.codes.copy()
- mask = c.codes == -1
+ mask = cat.codes == -1
if order: # ascending
if na_position == "last":
codes = np.where(mask, n, codes)
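A short sketch of what the rename above operates on: missing values in a ``Categorical`` are encoded as ``-1`` in ``.codes``, which is why the sorting helper remaps them explicitly for ``na_position``:

```python
import numpy as np
import pandas as pd

# Missing values get code -1; categories are sorted, so 'a' -> 0, 'b' -> 1.
cat = pd.Categorical(["b", None, "a"], ordered=True)
assert list(cat.codes) == [1, -1, 0]

# na_position="last": push missing codes past the largest real code.
n = len(cat.categories)
codes = np.where(cat.codes == -1, n, cat.codes)
assert list(codes) == [1, 2, 0]
```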
diff --git a/pandas/core/sparse/series.py b/pandas/core/sparse/series.py
index f5d39c47150a2..0bd20e75d17ec 100644
--- a/pandas/core/sparse/series.py
+++ b/pandas/core/sparse/series.py
@@ -114,6 +114,11 @@ def __init__(
elif is_scalar(data) and index is not None:
data = np.full(len(index), fill_value=data)
+ if isinstance(data, SingleBlockManager):
+ # SparseArray doesn't accept SingleBlockManager
+ index = data.index
+ data = data.blocks[0].values
+
super().__init__(
SparseArray(
data,
diff --git a/pandas/core/window.py b/pandas/core/window.py
index 5098ab3c7220f..14d109ccf6a9c 100644
--- a/pandas/core/window.py
+++ b/pandas/core/window.py
@@ -40,6 +40,7 @@
import pandas.core.common as com
from pandas.core.generic import _shared_docs
from pandas.core.groupby.base import GroupByMixin
+from pandas.core.index import Index, MultiIndex, ensure_index
_shared_docs = dict(**_shared_docs)
_doc_template = """
@@ -281,7 +282,6 @@ def _wrap_results(self, results, blocks, obj, exclude=None) -> FrameOrSeries:
"""
from pandas import Series, concat
- from pandas.core.index import ensure_index
final = []
for result, block in zip(results, blocks):
@@ -1691,8 +1691,6 @@ def _on(self):
if self.on is None:
return self.obj.index
elif isinstance(self.obj, ABCDataFrame) and self.on in self.obj.columns:
- from pandas import Index
-
return Index(self.obj[self.on])
else:
raise ValueError(
@@ -2670,7 +2668,7 @@ def dataframe_from_int_dict(data, frame_template):
*_prep_binary(arg1.iloc[:, i], arg2.iloc[:, j])
)
- from pandas import MultiIndex, concat
+ from pandas import concat
result_index = arg1.index.union(arg2.index)
if len(result_index):
| Continuing with the theme of avoiding runtime imports | https://api.github.com/repos/pandas-dev/pandas/pulls/27555 | 2019-07-24T03:40:53Z | 2019-07-25T17:12:05Z | 2019-07-25T17:12:05Z | 2019-07-25T17:32:09Z |
REF: implement module for shared constructor functions | diff --git a/pandas/core/algorithms.py b/pandas/core/algorithms.py
index c7230dd7385c2..21d12d02c9008 100644
--- a/pandas/core/algorithms.py
+++ b/pandas/core/algorithms.py
@@ -50,6 +50,7 @@
from pandas.core.dtypes.missing import isna, na_value_for_dtype
from pandas.core import common as com
+from pandas.core.construction import array
from pandas.core.indexers import validate_indices
_shared_docs = {} # type: Dict[str, str]
@@ -1855,8 +1856,6 @@ def searchsorted(arr, value, side="left", sorter=None):
and is_integer_dtype(arr)
and (is_integer(value) or is_integer_dtype(value))
):
- from .arrays.array_ import array
-
# if `arr` and `value` have different dtypes, `arr` would be
# recast by numpy, causing a slow search.
# Before searching below, we therefore try to give `value` the
diff --git a/pandas/core/api.py b/pandas/core/api.py
index f3ea0976a2869..73323d93b8215 100644
--- a/pandas/core/api.py
+++ b/pandas/core/api.py
@@ -20,7 +20,8 @@
IntervalDtype,
DatetimeTZDtype,
)
-from pandas.core.arrays import Categorical, array
+from pandas.core.arrays import Categorical
+from pandas.core.construction import array
from pandas.core.groupby import Grouper, NamedAgg
from pandas.io.formats.format import set_eng_float_format
from pandas.core.index import (
diff --git a/pandas/core/arrays/__init__.py b/pandas/core/arrays/__init__.py
index dab29e9ce71d3..5c83ed8cf5e24 100644
--- a/pandas/core/arrays/__init__.py
+++ b/pandas/core/arrays/__init__.py
@@ -1,4 +1,3 @@
-from .array_ import array # noqa: F401
from .base import ( # noqa: F401
ExtensionArray,
ExtensionOpsMixin,
diff --git a/pandas/core/arrays/categorical.py b/pandas/core/arrays/categorical.py
index 6200cd14663f8..8b0325748a353 100644
--- a/pandas/core/arrays/categorical.py
+++ b/pandas/core/arrays/categorical.py
@@ -60,6 +60,7 @@
)
from pandas.core.base import NoNewAttributesMixin, PandasObject, _shared_docs
import pandas.core.common as com
+from pandas.core.construction import extract_array, sanitize_array
from pandas.core.missing import interpolate_2d
from pandas.core.sorting import nargsort
@@ -374,7 +375,6 @@ def __init__(
values = maybe_infer_to_datetimelike(values, convert_dates=True)
if not isinstance(values, np.ndarray):
values = _convert_to_list_like(values)
- from pandas.core.internals.construction import sanitize_array
# By convention, empty lists result in object dtype:
if len(values) == 0:
@@ -2162,8 +2162,6 @@ def __setitem__(self, key, value):
If (one or more) Value is not in categories or if a assigned
`Categorical` does not have the same categories
"""
- from pandas.core.internals.arrays import extract_array
-
value = extract_array(value, extract_numpy=True)
# require identical categories set
@@ -2526,8 +2524,6 @@ def isin(self, values):
>>> s.isin(['lama'])
array([ True, False, True, False, True, False])
"""
- from pandas.core.internals.construction import sanitize_array
-
if not is_list_like(values):
raise TypeError(
"only list-like objects are allowed to be passed"
diff --git a/pandas/core/arrays/numpy_.py b/pandas/core/arrays/numpy_.py
index 77c9a3bc98690..1c35298fcc6b8 100644
--- a/pandas/core/arrays/numpy_.py
+++ b/pandas/core/arrays/numpy_.py
@@ -16,6 +16,7 @@
from pandas import compat
from pandas.core import nanops
from pandas.core.algorithms import searchsorted, take, unique
+from pandas.core.construction import extract_array
from pandas.core.missing import backfill_1d, pad_1d
from .base import ExtensionArray, ExtensionOpsMixin
@@ -222,8 +223,6 @@ def __getitem__(self, item):
return result
def __setitem__(self, key, value):
- from pandas.core.internals.arrays import extract_array
-
value = extract_array(value, extract_numpy=True)
if not lib.is_scalar(key) and is_list_like(key):
diff --git a/pandas/core/arrays/sparse.py b/pandas/core/arrays/sparse.py
index 9376b49112f6f..b8f41b140b245 100644
--- a/pandas/core/arrays/sparse.py
+++ b/pandas/core/arrays/sparse.py
@@ -52,6 +52,7 @@
from pandas.core.arrays import ExtensionArray, ExtensionOpsMixin
from pandas.core.base import PandasObject
import pandas.core.common as com
+from pandas.core.construction import sanitize_array
from pandas.core.missing import interpolate_2d
import pandas.core.ops as ops
@@ -664,7 +665,6 @@ def __init__(
if not is_array_like(data):
try:
# probably shared code in sanitize_series
- from pandas.core.internals.construction import sanitize_array
data = sanitize_array(data, index=None)
except ValueError:
diff --git a/pandas/core/arrays/array_.py b/pandas/core/construction.py
similarity index 50%
rename from pandas/core/arrays/array_.py
rename to pandas/core/construction.py
index 314144db57712..9528723a6dc0f 100644
--- a/pandas/core/arrays/array_.py
+++ b/pandas/core/construction.py
@@ -1,16 +1,51 @@
+"""
+Constructor functions intended to be shared by pd.array, Series.__init__,
+and Index.__new__.
+
+These should not depend on core.internals.
+"""
from typing import Optional, Sequence, Union, cast
import numpy as np
+import numpy.ma as ma
from pandas._libs import lib, tslibs
-
+from pandas._libs.tslibs import IncompatibleFrequency, OutOfBoundsDatetime
+
+from pandas.core.dtypes.cast import (
+ construct_1d_arraylike_from_scalar,
+ construct_1d_ndarray_preserving_na,
+ construct_1d_object_array_from_listlike,
+ infer_dtype_from_scalar,
+ maybe_cast_to_datetime,
+ maybe_cast_to_integer_array,
+ maybe_castable,
+ maybe_convert_platform,
+ maybe_upcast,
+)
from pandas.core.dtypes.common import (
+ is_categorical_dtype,
is_datetime64_ns_dtype,
is_extension_array_dtype,
+ is_extension_type,
+ is_float_dtype,
+ is_integer_dtype,
+ is_iterator,
+ is_list_like,
+ is_object_dtype,
is_timedelta64_ns_dtype,
+ pandas_dtype,
)
from pandas.core.dtypes.dtypes import ExtensionDtype, registry
-from pandas.core.dtypes.generic import ABCExtensionArray
+from pandas.core.dtypes.generic import (
+ ABCExtensionArray,
+ ABCIndexClass,
+ ABCPandasArray,
+ ABCSeries,
+)
+from pandas.core.dtypes.missing import isna
+
+import pandas.core.common as com
def array(
@@ -217,7 +252,6 @@ def array(
DatetimeArray,
TimedeltaArray,
)
- from pandas.core.internals.arrays import extract_array
if lib.is_scalar(data):
msg = "Cannot pass scalar '{}' to 'pandas.array'."
@@ -278,3 +312,241 @@ def array(
result = PandasArray._from_sequence(data, dtype=dtype, copy=copy)
return result
+
+
+def extract_array(obj, extract_numpy=False):
+ """
+ Extract the ndarray or ExtensionArray from a Series or Index.
+
+ For all other types, `obj` is just returned as is.
+
+ Parameters
+ ----------
+ obj : object
+ For Series / Index, the underlying ExtensionArray is unboxed.
+ For Numpy-backed ExtensionArrays, the ndarray is extracted.
+
+ extract_numpy : bool, default False
+ Whether to extract the ndarray from a PandasArray
+
+ Returns
+ -------
+ arr : object
+
+ Examples
+ --------
+ >>> extract_array(pd.Series(['a', 'b', 'c'], dtype='category'))
+ [a, b, c]
+ Categories (3, object): [a, b, c]
+
+ Other objects like lists, arrays, and DataFrames are just passed through.
+
+ >>> extract_array([1, 2, 3])
+ [1, 2, 3]
+
+ For an ndarray-backed Series / Index a PandasArray is returned.
+
+ >>> extract_array(pd.Series([1, 2, 3]))
+ <PandasArray>
+ [1, 2, 3]
+ Length: 3, dtype: int64
+
+ To extract all the way down to the ndarray, pass ``extract_numpy=True``.
+
+ >>> extract_array(pd.Series([1, 2, 3]), extract_numpy=True)
+ array([1, 2, 3])
+ """
+ if isinstance(obj, (ABCIndexClass, ABCSeries)):
+ obj = obj.array
+
+ if extract_numpy and isinstance(obj, ABCPandasArray):
+ obj = obj.to_numpy()
+
+ return obj
+
+
+def sanitize_array(data, index, dtype=None, copy=False, raise_cast_failure=False):
+ """
+ Sanitize input data to an ndarray, copy if specified, coerce to the
+ dtype if specified.
+ """
+ if dtype is not None:
+ dtype = pandas_dtype(dtype)
+
+ if isinstance(data, ma.MaskedArray):
+ mask = ma.getmaskarray(data)
+ if mask.any():
+ data, fill_value = maybe_upcast(data, copy=True)
+ data.soften_mask() # set hardmask False if it was True
+ data[mask] = fill_value
+ else:
+ data = data.copy()
+
+ # extract ndarray or ExtensionArray, ensure we have no PandasArray
+ data = extract_array(data, extract_numpy=True)
+
+ # GH#846
+ if isinstance(data, np.ndarray):
+
+ if dtype is not None and is_float_dtype(data.dtype) and is_integer_dtype(dtype):
+ # possibility of nan -> garbage
+ try:
+ subarr = _try_cast(data, dtype, copy, True)
+ except ValueError:
+ if copy:
+ subarr = data.copy()
+ else:
+ subarr = np.array(data, copy=False)
+ else:
+ # we will try to copy be-definition here
+ subarr = _try_cast(data, dtype, copy, raise_cast_failure)
+
+ elif isinstance(data, ABCExtensionArray):
+ # it is already ensured above this is not a PandasArray
+ subarr = data
+
+ if dtype is not None:
+ subarr = subarr.astype(dtype, copy=copy)
+ elif copy:
+ subarr = subarr.copy()
+ return subarr
+
+ elif isinstance(data, (list, tuple)) and len(data) > 0:
+ if dtype is not None:
+ try:
+ subarr = _try_cast(data, dtype, copy, raise_cast_failure)
+ except Exception:
+ if raise_cast_failure: # pragma: no cover
+ raise
+ subarr = np.array(data, dtype=object, copy=copy)
+ subarr = lib.maybe_convert_objects(subarr)
+
+ else:
+ subarr = maybe_convert_platform(data)
+
+ subarr = maybe_cast_to_datetime(subarr, dtype)
+
+ elif isinstance(data, range):
+ # GH#16804
+ arr = np.arange(data.start, data.stop, data.step, dtype="int64")
+ subarr = _try_cast(arr, dtype, copy, raise_cast_failure)
+ else:
+ subarr = _try_cast(data, dtype, copy, raise_cast_failure)
+
+ # scalar like, GH
+ if getattr(subarr, "ndim", 0) == 0:
+ if isinstance(data, list): # pragma: no cover
+ subarr = np.array(data, dtype=object)
+ elif index is not None:
+ value = data
+
+ # figure out the dtype from the value (upcast if necessary)
+ if dtype is None:
+ dtype, value = infer_dtype_from_scalar(value)
+ else:
+ # need to possibly convert the value here
+ value = maybe_cast_to_datetime(value, dtype)
+
+ subarr = construct_1d_arraylike_from_scalar(value, len(index), dtype)
+
+ else:
+ return subarr.item()
+
+ # the result that we want
+ elif subarr.ndim == 1:
+ if index is not None:
+
+ # a 1-element ndarray
+ if len(subarr) != len(index) and len(subarr) == 1:
+ subarr = construct_1d_arraylike_from_scalar(
+ subarr[0], len(index), subarr.dtype
+ )
+
+ elif subarr.ndim > 1:
+ if isinstance(data, np.ndarray):
+ raise Exception("Data must be 1-dimensional")
+ else:
+ subarr = com.asarray_tuplesafe(data, dtype=dtype)
+
+ # This is to prevent mixed-type Series getting all casted to
+ # NumPy string type, e.g. NaN --> '-1#IND'.
+ if issubclass(subarr.dtype.type, str):
+ # GH#16605
+ # If not empty convert the data to dtype
+ # GH#19853: If data is a scalar, subarr has already the result
+ if not lib.is_scalar(data):
+ if not np.all(isna(data)):
+ data = np.array(data, dtype=dtype, copy=False)
+ subarr = np.array(data, dtype=object, copy=copy)
+
+ if (
+ not (is_extension_array_dtype(subarr.dtype) or is_extension_array_dtype(dtype))
+ and is_object_dtype(subarr.dtype)
+ and not is_object_dtype(dtype)
+ ):
+ inferred = lib.infer_dtype(subarr, skipna=False)
+ if inferred == "period":
+ from pandas.core.arrays import period_array
+
+ try:
+ subarr = period_array(subarr)
+ except IncompatibleFrequency:
+ pass
+
+ return subarr
+
+
+def _try_cast(arr, dtype, copy, raise_cast_failure):
+ """
+ Convert input to numpy ndarray and optionally cast to a given dtype.
+
+ Parameters
+ ----------
+ arr : array-like
+ dtype : np.dtype, ExtensionDtype or None
+ copy : bool
+ If False, don't copy the data if not needed.
+ raise_cast_failure : bool
+ If True, and if a dtype is specified, raise errors during casting.
+ Otherwise an object array is returned.
+ """
+ # perf shortcut as this is the most common case
+ if isinstance(arr, np.ndarray):
+ if maybe_castable(arr) and not copy and dtype is None:
+ return arr
+
+ try:
+ # GH#15832: Check if we are requesting a numeric dype and
+ # that we can convert the data to the requested dtype.
+ if is_integer_dtype(dtype):
+ subarr = maybe_cast_to_integer_array(arr, dtype)
+
+ subarr = maybe_cast_to_datetime(arr, dtype)
+ # Take care in creating object arrays (but iterators are not
+ # supported):
+ if is_object_dtype(dtype) and (
+ is_list_like(subarr)
+ and not (is_iterator(subarr) or isinstance(subarr, np.ndarray))
+ ):
+ subarr = construct_1d_object_array_from_listlike(subarr)
+ elif not is_extension_type(subarr):
+ subarr = construct_1d_ndarray_preserving_na(subarr, dtype, copy=copy)
+ except OutOfBoundsDatetime:
+ # in case of out of bound datetime64 -> always raise
+ raise
+ except (ValueError, TypeError):
+ if is_categorical_dtype(dtype):
+ # We *do* allow casting to categorical, since we know
+ # that Categorical is the only array type for 'category'.
+ subarr = dtype.construct_array_type()(
+ arr, dtype.categories, ordered=dtype._ordered
+ )
+ elif is_extension_array_dtype(dtype):
+ # create an extension array from its dtype
+ array_type = dtype.construct_array_type()._from_sequence
+ subarr = array_type(arr, dtype=dtype, copy=copy)
+ elif dtype is not None and raise_cast_failure:
+ raise
+ else:
+ subarr = np.array(arr, dtype=object, copy=copy)
+ return subarr
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 9053edf2d1424..42e636ed2204f 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -6,7 +6,7 @@
import operator
import pickle
from textwrap import dedent
-from typing import Callable, FrozenSet, List, Optional, Set
+from typing import Callable, Dict, FrozenSet, List, Optional, Set
import warnings
import weakref
@@ -73,7 +73,7 @@
# goal is to be able to define the docs close to function, while still being
# able to share
-_shared_docs = dict()
+_shared_docs = dict() # type: Dict[str, str]
_shared_doc_kwargs = dict(
axes="keywords for axes",
klass="Series/DataFrame",
diff --git a/pandas/core/internals/arrays.py b/pandas/core/internals/arrays.py
deleted file mode 100644
index 18af328bfa77f..0000000000000
--- a/pandas/core/internals/arrays.py
+++ /dev/null
@@ -1,55 +0,0 @@
-"""
-Methods for cleaning, validating, and unboxing arrays.
-"""
-from pandas.core.dtypes.generic import ABCIndexClass, ABCPandasArray, ABCSeries
-
-
-def extract_array(obj, extract_numpy=False):
- """
- Extract the ndarray or ExtensionArray from a Series or Index.
-
- For all other types, `obj` is just returned as is.
-
- Parameters
- ----------
- obj : object
- For Series / Index, the underlying ExtensionArray is unboxed.
- For Numpy-backed ExtensionArrays, the ndarray is extracted.
-
- extract_numpy : bool, default False
- Whether to extract the ndarray from a PandasArray
-
- Returns
- -------
- arr : object
-
- Examples
- --------
- >>> extract_array(pd.Series(['a', 'b', 'c'], dtype='category'))
- [a, b, c]
- Categories (3, object): [a, b, c]
-
- Other objects like lists, arrays, and DataFrames are just passed through.
-
- >>> extract_array([1, 2, 3])
- [1, 2, 3]
-
- For an ndarray-backed Series / Index a PandasArray is returned.
-
- >>> extract_array(pd.Series([1, 2, 3]))
- <PandasArray>
- [1, 2, 3]
- Length: 3, dtype: int64
-
- To extract all the way down to the ndarray, pass ``extract_numpy=True``.
-
- >>> extract_array(pd.Series([1, 2, 3]), extract_numpy=True)
- array([1, 2, 3])
- """
- if isinstance(obj, (ABCIndexClass, ABCSeries)):
- obj = obj.array
-
- if extract_numpy and isinstance(obj, ABCPandasArray):
- obj = obj.to_numpy()
-
- return obj
diff --git a/pandas/core/internals/blocks.py b/pandas/core/internals/blocks.py
index 549920e230e8a..aa22896783b86 100644
--- a/pandas/core/internals/blocks.py
+++ b/pandas/core/internals/blocks.py
@@ -77,12 +77,12 @@
)
from pandas.core.base import PandasObject
import pandas.core.common as com
+from pandas.core.construction import extract_array
from pandas.core.indexers import (
check_setitem_lengths,
is_empty_indexer,
is_scalar_indexer,
)
-from pandas.core.internals.arrays import extract_array
import pandas.core.missing as missing
from pandas.core.nanops import nanpercentile
diff --git a/pandas/core/internals/construction.py b/pandas/core/internals/construction.py
index c437f686bd17b..74b16f0e72883 100644
--- a/pandas/core/internals/construction.py
+++ b/pandas/core/internals/construction.py
@@ -8,18 +8,12 @@
import numpy.ma as ma
from pandas._libs import lib
-from pandas._libs.tslibs import IncompatibleFrequency, OutOfBoundsDatetime
import pandas.compat as compat
from pandas.compat import PY36, raise_with_traceback
from pandas.core.dtypes.cast import (
construct_1d_arraylike_from_scalar,
- construct_1d_ndarray_preserving_na,
- construct_1d_object_array_from_listlike,
- infer_dtype_from_scalar,
maybe_cast_to_datetime,
- maybe_cast_to_integer_array,
- maybe_castable,
maybe_convert_platform,
maybe_infer_to_datetimelike,
maybe_upcast,
@@ -29,13 +23,9 @@
is_datetime64tz_dtype,
is_dtype_equal,
is_extension_array_dtype,
- is_extension_type,
- is_float_dtype,
is_integer_dtype,
- is_iterator,
is_list_like,
is_object_dtype,
- pandas_dtype,
)
from pandas.core.dtypes.generic import (
ABCDataFrame,
@@ -45,10 +35,10 @@
ABCSeries,
ABCTimedeltaIndex,
)
-from pandas.core.dtypes.missing import isna
from pandas.core import algorithms, common as com
-from pandas.core.arrays import Categorical, ExtensionArray, period_array
+from pandas.core.arrays import Categorical
+from pandas.core.construction import sanitize_array
from pandas.core.index import (
Index,
_get_objs_combined_axis,
@@ -60,7 +50,6 @@
create_block_manager_from_arrays,
create_block_manager_from_blocks,
)
-from pandas.core.internals.arrays import extract_array
# ---------------------------------------------------------------------
# BlockManager Interface
@@ -625,186 +614,3 @@ def sanitize_index(data, index, copy=False):
data = sanitize_array(data, index, copy=copy)
return data
-
-
-def sanitize_array(data, index, dtype=None, copy=False, raise_cast_failure=False):
- """
- Sanitize input data to an ndarray, copy if specified, coerce to the
- dtype if specified.
- """
- if dtype is not None:
- dtype = pandas_dtype(dtype)
-
- if isinstance(data, ma.MaskedArray):
- mask = ma.getmaskarray(data)
- if mask.any():
- data, fill_value = maybe_upcast(data, copy=True)
- data.soften_mask() # set hardmask False if it was True
- data[mask] = fill_value
- else:
- data = data.copy()
-
- # extract ndarray or ExtensionArray, ensure we have no PandasArray
- data = extract_array(data, extract_numpy=True)
-
- # GH#846
- if isinstance(data, np.ndarray):
-
- if dtype is not None and is_float_dtype(data.dtype) and is_integer_dtype(dtype):
- # possibility of nan -> garbage
- try:
- subarr = _try_cast(data, dtype, copy, True)
- except ValueError:
- if copy:
- subarr = data.copy()
- else:
- subarr = np.array(data, copy=False)
- else:
- # we will try to copy be-definition here
- subarr = _try_cast(data, dtype, copy, raise_cast_failure)
-
- elif isinstance(data, ExtensionArray):
- # it is already ensured above this is not a PandasArray
- subarr = data
-
- if dtype is not None:
- subarr = subarr.astype(dtype, copy=copy)
- elif copy:
- subarr = subarr.copy()
- return subarr
-
- elif isinstance(data, (list, tuple)) and len(data) > 0:
- if dtype is not None:
- try:
- subarr = _try_cast(data, dtype, copy, raise_cast_failure)
- except Exception:
- if raise_cast_failure: # pragma: no cover
- raise
- subarr = np.array(data, dtype=object, copy=copy)
- subarr = lib.maybe_convert_objects(subarr)
-
- else:
- subarr = maybe_convert_platform(data)
-
- subarr = maybe_cast_to_datetime(subarr, dtype)
-
- elif isinstance(data, range):
- # GH#16804
- arr = np.arange(data.start, data.stop, data.step, dtype="int64")
- subarr = _try_cast(arr, dtype, copy, raise_cast_failure)
- else:
- subarr = _try_cast(data, dtype, copy, raise_cast_failure)
-
- # scalar like, GH
- if getattr(subarr, "ndim", 0) == 0:
- if isinstance(data, list): # pragma: no cover
- subarr = np.array(data, dtype=object)
- elif index is not None:
- value = data
-
- # figure out the dtype from the value (upcast if necessary)
- if dtype is None:
- dtype, value = infer_dtype_from_scalar(value)
- else:
- # need to possibly convert the value here
- value = maybe_cast_to_datetime(value, dtype)
-
- subarr = construct_1d_arraylike_from_scalar(value, len(index), dtype)
-
- else:
- return subarr.item()
-
- # the result that we want
- elif subarr.ndim == 1:
- if index is not None:
-
- # a 1-element ndarray
- if len(subarr) != len(index) and len(subarr) == 1:
- subarr = construct_1d_arraylike_from_scalar(
- subarr[0], len(index), subarr.dtype
- )
-
- elif subarr.ndim > 1:
- if isinstance(data, np.ndarray):
- raise Exception("Data must be 1-dimensional")
- else:
- subarr = com.asarray_tuplesafe(data, dtype=dtype)
-
- # This is to prevent mixed-type Series getting all casted to
- # NumPy string type, e.g. NaN --> '-1#IND'.
- if issubclass(subarr.dtype.type, str):
- # GH#16605
- # If not empty convert the data to dtype
- # GH#19853: If data is a scalar, subarr has already the result
- if not lib.is_scalar(data):
- if not np.all(isna(data)):
- data = np.array(data, dtype=dtype, copy=False)
- subarr = np.array(data, dtype=object, copy=copy)
-
- if (
- not (is_extension_array_dtype(subarr.dtype) or is_extension_array_dtype(dtype))
- and is_object_dtype(subarr.dtype)
- and not is_object_dtype(dtype)
- ):
- inferred = lib.infer_dtype(subarr, skipna=False)
- if inferred == "period":
- try:
- subarr = period_array(subarr)
- except IncompatibleFrequency:
- pass
-
- return subarr
-
-
-def _try_cast(arr, dtype, copy, raise_cast_failure):
- """
- Convert input to numpy ndarray and optionally cast to a given dtype.
-
- Parameters
- ----------
- arr : array-like
- dtype : np.dtype, ExtensionDtype or None
- copy : bool
- If False, don't copy the data if not needed.
- raise_cast_failure : bool
- If True, and if a dtype is specified, raise errors during casting.
- Otherwise an object array is returned.
- """
- # perf shortcut as this is the most common case
- if isinstance(arr, np.ndarray):
- if maybe_castable(arr) and not copy and dtype is None:
- return arr
-
- try:
- # GH#15832: Check if we are requesting a numeric dype and
- # that we can convert the data to the requested dtype.
- if is_integer_dtype(dtype):
- subarr = maybe_cast_to_integer_array(arr, dtype)
-
- subarr = maybe_cast_to_datetime(arr, dtype)
- # Take care in creating object arrays (but iterators are not
- # supported):
- if is_object_dtype(dtype) and (
- is_list_like(subarr)
- and not (is_iterator(subarr) or isinstance(subarr, np.ndarray))
- ):
- subarr = construct_1d_object_array_from_listlike(subarr)
- elif not is_extension_type(subarr):
- subarr = construct_1d_ndarray_preserving_na(subarr, dtype, copy=copy)
- except OutOfBoundsDatetime:
- # in case of out of bound datetime64 -> always raise
- raise
- except (ValueError, TypeError):
- if is_categorical_dtype(dtype):
- # We *do* allow casting to categorical, since we know
- # that Categorical is the only array type for 'category'.
- subarr = Categorical(arr, dtype.categories, ordered=dtype._ordered)
- elif is_extension_array_dtype(dtype):
- # create an extension array from its dtype
- array_type = dtype.construct_array_type()._from_sequence
- subarr = array_type(arr, dtype=dtype, copy=copy)
- elif dtype is not None and raise_cast_failure:
- raise
- else:
- subarr = np.array(arr, dtype=object, copy=copy)
- return subarr
diff --git a/pandas/core/reshape/reshape.py b/pandas/core/reshape/reshape.py
index 8d5b521ef7799..1f519d4c0867d 100644
--- a/pandas/core/reshape/reshape.py
+++ b/pandas/core/reshape/reshape.py
@@ -22,9 +22,9 @@
import pandas.core.algorithms as algos
from pandas.core.arrays import SparseArray
from pandas.core.arrays.categorical import _factorize_from_iterable
+from pandas.core.construction import extract_array
from pandas.core.frame import DataFrame
from pandas.core.index import Index, MultiIndex
-from pandas.core.internals.arrays import extract_array
from pandas.core.series import Series
from pandas.core.sorting import (
compress_group_index,
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 418b3fc8c57d0..dcf9dd4def9e5 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -61,6 +61,7 @@
from pandas.core.arrays.categorical import Categorical, CategoricalAccessor
from pandas.core.arrays.sparse import SparseAccessor
import pandas.core.common as com
+from pandas.core.construction import extract_array, sanitize_array
from pandas.core.index import (
Float64Index,
Index,
@@ -76,7 +77,6 @@
from pandas.core.indexes.timedeltas import TimedeltaIndex
from pandas.core.indexing import check_bool_indexer
from pandas.core.internals import SingleBlockManager
-from pandas.core.internals.construction import sanitize_array
from pandas.core.strings import StringMethods
from pandas.core.tools.datetimes import to_datetime
@@ -801,8 +801,6 @@ def __array_ufunc__(
self, ufunc: Callable, method: str, *inputs: Any, **kwargs: Any
):
# TODO: handle DataFrame
- from pandas.core.internals.construction import extract_array
-
cls = type(self)
# for binary ops, use our custom dunder methods
diff --git a/pandas/core/sorting.py b/pandas/core/sorting.py
index 5f3ed87424d0e..454156fd97344 100644
--- a/pandas/core/sorting.py
+++ b/pandas/core/sorting.py
@@ -15,6 +15,7 @@
from pandas.core.dtypes.missing import isna
import pandas.core.algorithms as algorithms
+from pandas.core.construction import extract_array
_INT64_MAX = np.iinfo(np.int64).max
@@ -240,8 +241,6 @@ def nargsort(items, kind="quicksort", ascending=True, na_position="last"):
handles NaNs. It adds ascending and na_position parameters.
GH #6399, #5231
"""
- from pandas.core.internals.arrays import extract_array
-
items = extract_array(items)
mask = np.asarray(isna(items))
`core.construction` is intended for code to be shared between `pd.array`, `Series.__init__`, and `Index.__new__`. The module should not need to be internals-aware.
This only moves `array` and `extract_array` to keep the diff contained. The next step would be to move `internals.construction.sanitize_array`, which is used in a number of non-internals places and satisfies the not-internals-aware condition.
Several functions from `core.dtypes.cast` would also make more sense here than in their current locations.
<b>Update:</b> moved `sanitize_array` too, and updated imports. | https://api.github.com/repos/pandas-dev/pandas/pulls/27551 | 2019-07-24T01:39:36Z | 2019-07-25T17:14:33Z | 2019-07-25T17:14:33Z | 2019-07-25T17:33:28Z |
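As a minimal sketch of the relocated helper's behavior (mirroring the docstring examples in the diff; assumes a pandas version ≥ 0.25 where `pandas.core.construction` exists):

```python
import numpy as np
import pandas as pd

# new public location after this PR; previously it lived under core.internals
from pandas.core.construction import extract_array

# a Series is unboxed; extract_numpy=True goes all the way down to the ndarray
arr = extract_array(pd.Series([1, 2, 3]), extract_numpy=True)

# other list-likes are passed through unchanged
same = extract_array([1, 2, 3])
```

Note that the exact return type for the default `extract_numpy=False` case has shifted across pandas versions, so the sketch only relies on the `extract_numpy=True` path and the pass-through behavior.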
Revert CI changes from 27542 | diff --git a/ci/deps/azure-36-locale.yaml b/ci/deps/azure-36-locale.yaml
index 7e78e3bf6f373..8f8273f57c3fe 100644
--- a/ci/deps/azure-36-locale.yaml
+++ b/ci/deps/azure-36-locale.yaml
@@ -7,6 +7,7 @@ dependencies:
- bottleneck=1.2.*
- cython=0.28.2
- lxml
+ - matplotlib=2.2.2
- numpy=1.14.*
- openpyxl=2.4.8
- python-dateutil
diff --git a/ci/deps/travis-36-cov.yaml b/ci/deps/travis-36-cov.yaml
index b0b2d6ae77db8..a3f6d5b30f3e1 100644
--- a/ci/deps/travis-36-cov.yaml
+++ b/ci/deps/travis-36-cov.yaml
@@ -11,6 +11,7 @@ dependencies:
- gcsfs
- geopandas
- html5lib
+ - matplotlib
- moto
- nomkl
- numexpr
@@ -38,8 +39,8 @@ dependencies:
- xlsxwriter
- xlwt
# universal
- - pytest>=4.0.2,<5.0.0
- - pytest-xdist==1.28.0
+ - pytest
+ - pytest-xdist
- pytest-cov
- pytest-mock
- hypothesis>=3.58.0
diff --git a/ci/deps/travis-36-locale.yaml b/ci/deps/travis-36-locale.yaml
index 52664c30a98d9..0d9a760914dab 100644
--- a/ci/deps/travis-36-locale.yaml
+++ b/ci/deps/travis-36-locale.yaml
@@ -35,7 +35,7 @@ dependencies:
- xlwt
# universal
- pytest>=4.0.2
- - pytest-xdist>=1.29.0
+ - pytest-xdist
- pytest-mock
- pip
- pip:
| - [X] closes #27546
- [ ] tests added / passed
- [ ] passes `black pandas`
- [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
I think this was all a perfect storm of temporary issues, so let's see what reverting yields.
Note: the `sudo` tag removal was deliberately not reverted. | https://api.github.com/repos/pandas-dev/pandas/pulls/27550 | 2019-07-23T23:03:08Z | 2019-07-24T11:14:58Z | 2019-07-24T11:14:58Z | 2020-01-16T00:35:06Z |
BUG: Fix interpolate ValueError for datetime64_tz index | diff --git a/doc/source/whatsnew/v0.25.1.rst b/doc/source/whatsnew/v0.25.1.rst
index 69f82f7f85040..ae2ff33f3f219 100644
--- a/doc/source/whatsnew/v0.25.1.rst
+++ b/doc/source/whatsnew/v0.25.1.rst
@@ -56,7 +56,7 @@ Timezones
Numeric
^^^^^^^
--
+- Bug in :meth:`Series.interpolate` when using a timezone aware :class:`DatetimeIndex` (:issue:`27548`)
-
-
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 0afd42e406c1f..9053edf2d1424 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -30,7 +30,6 @@
is_bool,
is_bool_dtype,
is_datetime64_any_dtype,
- is_datetime64_dtype,
is_datetime64tz_dtype,
is_dict_like,
is_extension_array_dtype,
@@ -7023,7 +7022,7 @@ def interpolate(
methods = {"index", "values", "nearest", "time"}
is_numeric_or_datetime = (
is_numeric_dtype(index)
- or is_datetime64_dtype(index)
+ or is_datetime64_any_dtype(index)
or is_timedelta64_dtype(index)
)
if method not in methods and not is_numeric_or_datetime:
diff --git a/pandas/tests/series/test_missing.py b/pandas/tests/series/test_missing.py
index 8398591f1ff1f..8f4c89ee72ae1 100644
--- a/pandas/tests/series/test_missing.py
+++ b/pandas/tests/series/test_missing.py
@@ -1521,10 +1521,16 @@ def test_interp_nonmono_raise(self):
s.interpolate(method="krogh")
@td.skip_if_no_scipy
- def test_interp_datetime64(self):
- df = Series([1, np.nan, 3], index=date_range("1/1/2000", periods=3))
- result = df.interpolate(method="nearest")
- expected = Series([1.0, 1.0, 3.0], index=date_range("1/1/2000", periods=3))
+ @pytest.mark.parametrize("method", ["nearest", "pad"])
+ def test_interp_datetime64(self, method, tz_naive_fixture):
+ df = Series(
+ [1, np.nan, 3], index=date_range("1/1/2000", periods=3, tz=tz_naive_fixture)
+ )
+ result = df.interpolate(method=method)
+ expected = Series(
+ [1.0, 1.0, 3.0],
+ index=date_range("1/1/2000", periods=3, tz=tz_naive_fixture),
+ )
assert_series_equal(result, expected)
def test_interp_limit_no_nans(self):
|
- [x] closes #27548
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/27549 | 2019-07-23T21:33:15Z | 2019-07-24T11:42:11Z | 2019-07-24T11:42:10Z | 2019-07-24T11:42:16Z |
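A minimal reproduction of the behavior this PR fixes, following the test parametrization added in the diff (`"pad"` is used here since, unlike `"nearest"`, it does not require scipy; on recent pandas versions `interpolate(method="pad")` may emit a deprecation warning):

```python
import numpy as np
import pandas as pd

# a tz-aware DatetimeIndex previously failed interpolate()'s index-dtype check,
# because is_datetime64_dtype() rejects datetime64tz
idx = pd.date_range("1/1/2000", periods=3, tz="UTC")
s = pd.Series([1, np.nan, 3], index=idx)

# "pad" is outside the always-allowed method set {"index", "values", "nearest",
# "time"}, so it exercised the check; before the fix this raised ValueError
result = s.interpolate(method="pad")
```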
CI: troubleshoot failures that necessited #27536 | diff --git a/pandas/_libs/index.pyx b/pandas/_libs/index.pyx
index 13812663dd907..f704ceffa662e 100644
--- a/pandas/_libs/index.pyx
+++ b/pandas/_libs/index.pyx
@@ -545,7 +545,7 @@ cpdef convert_scalar(ndarray arr, object value):
elif arr.descr.type_num == NPY_TIMEDELTA:
if util.is_array(value):
pass
- elif isinstance(value, timedelta):
+ elif isinstance(value, timedelta) or util.is_timedelta64_object(value):
return Timedelta(value).value
elif util.is_datetime64_object(value):
# exclude np.datetime64("NaT") which would otherwise be picked up
diff --git a/pandas/tests/series/indexing/test_indexing.py b/pandas/tests/series/indexing/test_indexing.py
index 3083b655821f8..2d36bfdb93a17 100644
--- a/pandas/tests/series/indexing/test_indexing.py
+++ b/pandas/tests/series/indexing/test_indexing.py
@@ -690,13 +690,7 @@ def test_dt64_series_assign_nat(nat_val, should_cast, tz):
"nat_val,should_cast",
[
(pd.NaT, True),
- pytest.param(
- np.timedelta64("NaT", "ns"),
- True,
- marks=pytest.mark.xfail(
- reason="Platform-specific failures, unknown cause", strict=False
- ),
- ),
+ (np.timedelta64("NaT", "ns"), True),
(np.datetime64("NaT", "ns"), False),
],
)
| https://api.github.com/repos/pandas-dev/pandas/pulls/27545 | 2019-07-23T17:28:29Z | 2019-07-23T22:33:09Z | 2019-07-23T22:33:09Z | 2019-07-23T22:59:06Z | |
CI debug | diff --git a/.travis.yml b/.travis.yml
index 8335a6ee92bef..9be4291d10874 100644
--- a/.travis.yml
+++ b/.travis.yml
@@ -1,4 +1,3 @@
-sudo: false
language: python
python: 3.5
diff --git a/ci/deps/azure-36-locale.yaml b/ci/deps/azure-36-locale.yaml
index 8f8273f57c3fe..7e78e3bf6f373 100644
--- a/ci/deps/azure-36-locale.yaml
+++ b/ci/deps/azure-36-locale.yaml
@@ -7,7 +7,6 @@ dependencies:
- bottleneck=1.2.*
- cython=0.28.2
- lxml
- - matplotlib=2.2.2
- numpy=1.14.*
- openpyxl=2.4.8
- python-dateutil
diff --git a/ci/deps/travis-36-cov.yaml b/ci/deps/travis-36-cov.yaml
index 6f85c32b9a915..b0b2d6ae77db8 100644
--- a/ci/deps/travis-36-cov.yaml
+++ b/ci/deps/travis-36-cov.yaml
@@ -11,7 +11,6 @@ dependencies:
- gcsfs
- geopandas
- html5lib
- - matplotlib
- moto
- nomkl
- numexpr
@@ -39,8 +38,8 @@ dependencies:
- xlsxwriter
- xlwt
# universal
- - pytest>=4.0.2
- - pytest-xdist
+ - pytest>=4.0.2,<5.0.0
+ - pytest-xdist==1.28.0
- pytest-cov
- pytest-mock
- hypothesis>=3.58.0
diff --git a/ci/deps/travis-36-locale.yaml b/ci/deps/travis-36-locale.yaml
index 0d9a760914dab..52664c30a98d9 100644
--- a/ci/deps/travis-36-locale.yaml
+++ b/ci/deps/travis-36-locale.yaml
@@ -35,7 +35,7 @@ dependencies:
- xlwt
# universal
- pytest>=4.0.2
- - pytest-xdist
+ - pytest-xdist>=1.29.0
- pytest-mock
- pip
- pip:
| Closes https://github.com/pandas-dev/pandas/issues/27541 | https://api.github.com/repos/pandas-dev/pandas/pulls/27542 | 2019-07-23T13:53:36Z | 2019-07-23T19:01:01Z | 2019-07-23T19:01:00Z | 2019-07-23T22:02:44Z |
COMPAT: remove Categorical pickle compat with Pandas < 0.16 | diff --git a/doc/source/whatsnew/v1.0.0.rst b/doc/source/whatsnew/v1.0.0.rst
index 8b2b3a09f8c87..59f69d5e656c1 100644
--- a/doc/source/whatsnew/v1.0.0.rst
+++ b/doc/source/whatsnew/v1.0.0.rst
@@ -99,6 +99,8 @@ Removal of prior version deprecations/changes
- :meth:`pandas.Series.str.cat` does not accept list-likes *within* list-likes anymore (:issue:`27611`)
- Removed the previously deprecated :meth:`ExtensionArray._formatting_values`. Use :attr:`ExtensionArray._formatter` instead. (:issue:`23601`)
- Removed the previously deprecated ``IntervalIndex.from_intervals`` in favor of the :class:`IntervalIndex` constructor (:issue:`19263`)
+- Ability to read pickles containing :class:`Categorical` instances created with pre-0.16 version of pandas has been removed (:issue:`27538`)
+-
.. _whatsnew_1000.performance:
diff --git a/pandas/core/arrays/categorical.py b/pandas/core/arrays/categorical.py
index 5929a8d51fe43..c81bcd491ff5d 100644
--- a/pandas/core/arrays/categorical.py
+++ b/pandas/core/arrays/categorical.py
@@ -1350,24 +1350,7 @@ def __setstate__(self, state):
if not isinstance(state, dict):
raise Exception("invalid pickle state")
- # Provide compatibility with pre-0.15.0 Categoricals.
- if "_categories" not in state and "_levels" in state:
- state["_categories"] = self.dtype.validate_categories(state.pop("_levels"))
- if "_codes" not in state and "labels" in state:
- state["_codes"] = coerce_indexer_dtype(
- state.pop("labels"), state["_categories"]
- )
-
- # 0.16.0 ordered change
- if "_ordered" not in state:
-
- # >=15.0 < 0.16.0
- if "ordered" in state:
- state["_ordered"] = state.pop("ordered")
- else:
- state["_ordered"] = False
-
- # 0.21.0 CategoricalDtype change
+ # compat with pre 0.21.0 CategoricalDtype change
if "_dtype" not in state:
state["_dtype"] = CategoricalDtype(state["_categories"], state["_ordered"])
diff --git a/pandas/tests/io/data/categorical.0.25.0.pickle b/pandas/tests/io/data/categorical.0.25.0.pickle
new file mode 100644
index 0000000000000..b756060c83d94
Binary files /dev/null and b/pandas/tests/io/data/categorical.0.25.0.pickle differ
diff --git a/pandas/tests/io/data/categorical_0_14_1.pickle b/pandas/tests/io/data/categorical_0_14_1.pickle
deleted file mode 100644
index 94f882b2f3027..0000000000000
--- a/pandas/tests/io/data/categorical_0_14_1.pickle
+++ /dev/null
@@ -1,94 +0,0 @@
-ccopy_reg
-_reconstructor
-p0
-(cpandas.core.categorical
-Categorical
-p1
-c__builtin__
-object
-p2
-Ntp3
-Rp4
-(dp5
-S'_levels'
-p6
-cnumpy.core.multiarray
-_reconstruct
-p7
-(cpandas.core.index
-Index
-p8
-(I0
-tp9
-S'b'
-p10
-tp11
-Rp12
-((I1
-(I4
-tp13
-cnumpy
-dtype
-p14
-(S'O8'
-p15
-I0
-I1
-tp16
-Rp17
-(I3
-S'|'
-p18
-NNNI-1
-I-1
-I63
-tp19
-bI00
-(lp20
-S'a'
-p21
-ag10
-aS'c'
-p22
-aS'd'
-p23
-atp24
-(Ntp25
-tp26
-bsS'labels'
-p27
-g7
-(cnumpy
-ndarray
-p28
-(I0
-tp29
-g10
-tp30
-Rp31
-(I1
-(I3
-tp32
-g14
-(S'i8'
-p33
-I0
-I1
-tp34
-Rp35
-(I3
-S'<'
-p36
-NNNI-1
-I-1
-I0
-tp37
-bI00
-S'\x00\x00\x00\x00\x00\x00\x00\x00\x01\x00\x00\x00\x00\x00\x00\x00\x02\x00\x00\x00\x00\x00\x00\x00'
-p38
-tp39
-bsS'name'
-p40
-S'foobar'
-p41
-sb.
\ No newline at end of file
diff --git a/pandas/tests/io/data/categorical_0_15_2.pickle b/pandas/tests/io/data/categorical_0_15_2.pickle
deleted file mode 100644
index 25cd862976cab..0000000000000
Binary files a/pandas/tests/io/data/categorical_0_15_2.pickle and /dev/null differ
diff --git a/pandas/tests/io/test_common.py b/pandas/tests/io/test_common.py
index 8e09e96fbd471..655fd9d01c1c0 100644
--- a/pandas/tests/io/test_common.py
+++ b/pandas/tests/io/test_common.py
@@ -222,7 +222,7 @@ def test_read_expands_user_home_dir(
(pd.read_sas, "os", ("io", "sas", "data", "test1.sas7bdat")),
(pd.read_json, "os", ("io", "json", "data", "tsframe_v012.json")),
(pd.read_msgpack, "os", ("io", "msgpack", "data", "frame.mp")),
- (pd.read_pickle, "os", ("io", "data", "categorical_0_14_1.pickle")),
+ (pd.read_pickle, "os", ("io", "data", "categorical.0.25.0.pickle")),
],
)
def test_read_fspath_all(self, reader, module, path, datapath):
diff --git a/pandas/tests/io/test_pickle.py b/pandas/tests/io/test_pickle.py
index 30555508f0998..9fbb4dbcb581e 100644
--- a/pandas/tests/io/test_pickle.py
+++ b/pandas/tests/io/test_pickle.py
@@ -194,38 +194,6 @@ def python_unpickler(path):
compare_element(result, expected, typ)
-def test_pickle_v0_14_1(datapath):
-
- cat = pd.Categorical(
- values=["a", "b", "c"], ordered=False, categories=["a", "b", "c", "d"]
- )
- pickle_path = datapath("io", "data", "categorical_0_14_1.pickle")
- # This code was executed once on v0.14.1 to generate the pickle:
- #
- # cat = Categorical(labels=np.arange(3), levels=['a', 'b', 'c', 'd'],
- # name='foobar')
- # with open(pickle_path, 'wb') as f: pickle.dump(cat, f)
- #
- tm.assert_categorical_equal(cat, pd.read_pickle(pickle_path))
-
-
-def test_pickle_v0_15_2(datapath):
- # ordered -> _ordered
- # GH 9347
-
- cat = pd.Categorical(
- values=["a", "b", "c"], ordered=False, categories=["a", "b", "c", "d"]
- )
- pickle_path = datapath("io", "data", "categorical_0_15_2.pickle")
- # This code was executed once on v0.15.2 to generate the pickle:
- #
- # cat = Categorical(labels=np.arange(3), levels=['a', 'b', 'c', 'd'],
- # name='foobar')
- # with open(pickle_path, 'wb') as f: pickle.dump(cat, f)
- #
- tm.assert_categorical_equal(cat, pd.read_pickle(pickle_path))
-
-
def test_pickle_path_pathlib():
df = tm.makeDataFrame()
result = tm.round_trip_pathlib(df.to_pickle, pd.read_pickle)
| - [ ] closes #xxxx
- [ ] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
Removes Categorical pickle compat with Pandas < 0.16.
I've added a ``categorical.pickle`` file to replace the deprecated file. It was created like this:
```python
>>> cat = pd.Categorical(['a', 'b', 'c', 'd'])
>>> pickle.dump(cat, open(<filepath>, 'wb'))
```
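The creation snippet above can be sketched as a full round-trip; this is a hedged, self-contained version using a temporary file rather than the repository's test-data path (the real fixture lives under `pandas/tests/io/data`):

```python
import pickle
import tempfile

import pandas as pd

# Recreate a Categorical pickle the same way the replacement fixture was
# generated, then read it back with pd.read_pickle.
cat = pd.Categorical(["a", "b", "c", "d"])
with tempfile.NamedTemporaryFile(suffix=".pickle", delete=False) as f:
    pickle.dump(cat, f)
    path = f.name

roundtripped = pd.read_pickle(path)
print(list(roundtripped.categories))  # ['a', 'b', 'c', 'd']
```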
| https://api.github.com/repos/pandas-dev/pandas/pulls/27538 | 2019-07-23T07:12:56Z | 2019-09-19T16:06:40Z | 2019-09-19T16:06:40Z | 2019-09-19T19:57:00Z |
TST: label-based indexing fails with certain list indexers in case of… | diff --git a/pandas/tests/indexing/test_loc.py b/pandas/tests/indexing/test_loc.py
index 19c288a4b63ae..abe0cd86c90d7 100644
--- a/pandas/tests/indexing/test_loc.py
+++ b/pandas/tests/indexing/test_loc.py
@@ -1081,3 +1081,21 @@ def test_series_loc_getitem_label_list_missing_values():
with tm.assert_produces_warning(FutureWarning, check_stacklevel=False):
result = s.loc[key]
tm.assert_series_equal(result, expected)
+
+
+@pytest.mark.parametrize(
+ "columns, column_key, expected_columns, check_column_type",
+ [
+ ([2011, 2012, 2013], [2011, 2012], [0, 1], True),
+ ([2011, 2012, "All"], [2011, 2012], [0, 1], False),
+ ([2011, 2012, "All"], [2011, "All"], [0, 2], True),
+ ],
+)
+def test_loc_getitem_label_list_integer_labels(
+ columns, column_key, expected_columns, check_column_type
+):
+ # gh-14836
+ df = DataFrame(np.random.rand(3, 3), columns=columns, index=list("ABC"))
+ expected = df.iloc[:, expected_columns]
+ result = df.loc[["A", "B", "C"], column_key]
+ tm.assert_frame_equal(result, expected, check_column_type=check_column_type)
… mixed integer/string column names
- [ ] closes #14836
- [ ] tests added / passed
- [ ] passes `black pandas`
- [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [n/a] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/27537 | 2019-07-23T04:17:32Z | 2019-07-23T21:16:16Z | 2019-07-23T21:16:16Z | 2019-07-24T11:54:51Z |
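The behaviour pinned down by the new parametrized test can be sketched in isolation: label-based column selection should work when column labels mix integers and strings (the data values here are illustrative, not from the test):

```python
import numpy as np
import pandas as pd

# Column labels mix integers (years) with the string "All", as in gh-14836.
df = pd.DataFrame(
    np.arange(9).reshape(3, 3), columns=[2011, 2012, "All"], index=list("ABC")
)
# Selecting a subset of the mixed labels by name must not raise.
result = df.loc[["A", "B", "C"], [2011, "All"]]
print(list(result.columns))  # [2011, 'All']
```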
xfail to fix CI | diff --git a/pandas/tests/series/indexing/test_indexing.py b/pandas/tests/series/indexing/test_indexing.py
index 2d36bfdb93a17..3083b655821f8 100644
--- a/pandas/tests/series/indexing/test_indexing.py
+++ b/pandas/tests/series/indexing/test_indexing.py
@@ -690,7 +690,13 @@ def test_dt64_series_assign_nat(nat_val, should_cast, tz):
"nat_val,should_cast",
[
(pd.NaT, True),
- (np.timedelta64("NaT", "ns"), True),
+ pytest.param(
+ np.timedelta64("NaT", "ns"),
+ True,
+ marks=pytest.mark.xfail(
+ reason="Platform-specific failures, unknown cause", strict=False
+ ),
+ ),
(np.datetime64("NaT", "ns"), False),
],
)
| to keep the wheels running while I troubleshoot this | https://api.github.com/repos/pandas-dev/pandas/pulls/27536 | 2019-07-23T03:06:51Z | 2019-07-23T11:42:41Z | 2019-07-23T11:42:41Z | 2019-07-23T14:56:07Z |
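The pattern in the diff above — marking a single parametrized case as a non-strict xfail — looks like this on its own (the test name and reason string are placeholders, not from the PR):

```python
import numpy as np
import pytest

# Only the middle case is expected to fail; strict=False means an unexpected
# pass is reported as XPASS instead of failing the suite.
@pytest.mark.parametrize(
    "nat_val,should_cast",
    [
        (None, True),
        pytest.param(
            np.timedelta64("NaT", "ns"),
            True,
            marks=pytest.mark.xfail(reason="platform-specific", strict=False),
        ),
    ],
)
def test_cast(nat_val, should_cast):
    assert should_cast
```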
DEPR: remove .ix from tests/indexing/test_indexing.py | diff --git a/pandas/tests/indexing/test_indexing.py b/pandas/tests/indexing/test_indexing.py
index 58c054fa27d76..8d1971801cccc 100644
--- a/pandas/tests/indexing/test_indexing.py
+++ b/pandas/tests/indexing/test_indexing.py
@@ -2,7 +2,6 @@
from datetime import datetime
import re
-from warnings import catch_warnings, simplefilter
import weakref
import numpy as np
@@ -20,8 +19,6 @@
from pandas.tests.indexing.common import Base, _mklbl
import pandas.util.testing as tm
-ignore_ix = pytest.mark.filterwarnings("ignore:\\n.ix:FutureWarning")
-
# ------------------------------------------------------------------------
# Indexing test cases
@@ -75,7 +72,6 @@ def test_setitem_ndarray_1d(self):
(lambda x: x, "getitem"),
(lambda x: x.loc, "loc"),
(lambda x: x.iloc, "iloc"),
- pytest.param(lambda x: x.ix, "ix", marks=ignore_ix),
],
)
def test_getitem_ndarray_3d(self, index, obj, idxr, idxr_id):
@@ -141,7 +137,6 @@ def test_getitem_ndarray_3d(self, index, obj, idxr, idxr_id):
(lambda x: x, "setitem"),
(lambda x: x.loc, "loc"),
(lambda x: x.iloc, "iloc"),
- pytest.param(lambda x: x.ix, "ix", marks=ignore_ix),
],
)
def test_setitem_ndarray_3d(self, index, obj, idxr, idxr_id):
@@ -163,27 +158,20 @@ def test_setitem_ndarray_3d(self, index, obj, idxr, idxr_id):
r"^\[\[\[" # pandas.core.indexing.IndexingError
)
- if (
- (idxr_id == "iloc")
- or (
- (
- isinstance(obj, Series)
- and idxr_id == "setitem"
- and index.inferred_type
- in [
- "floating",
- "string",
- "datetime64",
- "period",
- "timedelta64",
- "boolean",
- "categorical",
- ]
- )
- )
- or (
- idxr_id == "ix"
- and index.inferred_type in ["string", "datetime64", "period", "boolean"]
+ if (idxr_id == "iloc") or (
+ (
+ isinstance(obj, Series)
+ and idxr_id == "setitem"
+ and index.inferred_type
+ in [
+ "floating",
+ "string",
+ "datetime64",
+ "period",
+ "timedelta64",
+ "boolean",
+ "categorical",
+ ]
)
):
idxr[nd3] = 0
@@ -427,10 +415,6 @@ def test_indexing_mixed_frame_bug(self):
df.loc[idx, "test"] = temp
assert df.iloc[0, 2] == "-----"
- # if I look at df, then element [0,2] equals '_'. If instead I type
- # df.ix[idx,'test'], I get '-----', finally by typing df.iloc[0,2] I
- # get '_'.
-
def test_multitype_list_index_access(self):
# GH 10610
df = DataFrame(np.random.random((10, 5)), columns=["a"] + [20, 21, 22, 23])
@@ -592,21 +576,17 @@ def test_multi_assign(self):
def test_setitem_list(self):
# GH 6043
- # ix with a list
+ # iloc with a list
df = DataFrame(index=[0, 1], columns=[0])
- with catch_warnings(record=True):
- simplefilter("ignore")
- df.ix[1, 0] = [1, 2, 3]
- df.ix[1, 0] = [1, 2]
+ df.iloc[1, 0] = [1, 2, 3]
+ df.iloc[1, 0] = [1, 2]
result = DataFrame(index=[0, 1], columns=[0])
- with catch_warnings(record=True):
- simplefilter("ignore")
- result.ix[1, 0] = [1, 2]
+ result.iloc[1, 0] = [1, 2]
tm.assert_frame_equal(result, df)
- # ix with an object
+ # iloc with an object
class TO:
def __init__(self, value):
self.value = value
@@ -623,24 +603,18 @@ def view(self):
return self
df = DataFrame(index=[0, 1], columns=[0])
- with catch_warnings(record=True):
- simplefilter("ignore")
- df.ix[1, 0] = TO(1)
- df.ix[1, 0] = TO(2)
+ df.iloc[1, 0] = TO(1)
+ df.iloc[1, 0] = TO(2)
result = DataFrame(index=[0, 1], columns=[0])
- with catch_warnings(record=True):
- simplefilter("ignore")
- result.ix[1, 0] = TO(2)
+ result.iloc[1, 0] = TO(2)
tm.assert_frame_equal(result, df)
# remains object dtype even after setting it back
df = DataFrame(index=[0, 1], columns=[0])
- with catch_warnings(record=True):
- simplefilter("ignore")
- df.ix[1, 0] = TO(1)
- df.ix[1, 0] = np.nan
+ df.iloc[1, 0] = TO(1)
+ df.iloc[1, 0] = np.nan
result = DataFrame(index=[0, 1], columns=[0])
tm.assert_frame_equal(result, df)
@@ -777,55 +751,52 @@ def test_contains_with_float_index(self):
def test_index_type_coercion(self):
- with catch_warnings(record=True):
- simplefilter("ignore")
-
- # GH 11836
- # if we have an index type and set it with something that looks
- # to numpy like the same, but is actually, not
- # (e.g. setting with a float or string '0')
- # then we need to coerce to object
+ # GH 11836
+ # if we have an index type and set it with something that looks
+ # to numpy like the same, but is actually, not
+ # (e.g. setting with a float or string '0')
+ # then we need to coerce to object
- # integer indexes
- for s in [Series(range(5)), Series(range(5), index=range(1, 6))]:
+ # integer indexes
+ for s in [Series(range(5)), Series(range(5), index=range(1, 6))]:
- assert s.index.is_integer()
+ assert s.index.is_integer()
- for indexer in [lambda x: x.ix, lambda x: x.loc, lambda x: x]:
- s2 = s.copy()
- indexer(s2)[0.1] = 0
- assert s2.index.is_floating()
- assert indexer(s2)[0.1] == 0
+ for indexer in [lambda x: x.loc, lambda x: x]:
+ s2 = s.copy()
+ indexer(s2)[0.1] = 0
+ assert s2.index.is_floating()
+ assert indexer(s2)[0.1] == 0
- s2 = s.copy()
- indexer(s2)[0.0] = 0
- exp = s.index
- if 0 not in s:
- exp = Index(s.index.tolist() + [0])
- tm.assert_index_equal(s2.index, exp)
+ s2 = s.copy()
+ indexer(s2)[0.0] = 0
+ exp = s.index
+ if 0 not in s:
+ exp = Index(s.index.tolist() + [0])
+ tm.assert_index_equal(s2.index, exp)
- s2 = s.copy()
- indexer(s2)["0"] = 0
- assert s2.index.is_object()
+ s2 = s.copy()
+ indexer(s2)["0"] = 0
+ assert s2.index.is_object()
- for s in [Series(range(5), index=np.arange(5.0))]:
+ for s in [Series(range(5), index=np.arange(5.0))]:
- assert s.index.is_floating()
+ assert s.index.is_floating()
- for idxr in [lambda x: x.ix, lambda x: x.loc, lambda x: x]:
+ for idxr in [lambda x: x.loc, lambda x: x]:
- s2 = s.copy()
- idxr(s2)[0.1] = 0
- assert s2.index.is_floating()
- assert idxr(s2)[0.1] == 0
+ s2 = s.copy()
+ idxr(s2)[0.1] = 0
+ assert s2.index.is_floating()
+ assert idxr(s2)[0.1] == 0
- s2 = s.copy()
- idxr(s2)[0.0] = 0
- tm.assert_index_equal(s2.index, s.index)
+ s2 = s.copy()
+ idxr(s2)[0.0] = 0
+ tm.assert_index_equal(s2.index, s.index)
- s2 = s.copy()
- idxr(s2)["0"] = 0
- assert s2.index.is_object()
+ s2 = s.copy()
+ idxr(s2)["0"] = 0
+ assert s2.index.is_object()
class TestMisc(Base):
@@ -887,22 +858,7 @@ def run_tests(df, rhs, right):
tm.assert_frame_equal(left, right)
left = df.copy()
- with catch_warnings(record=True):
- # XXX: finer-filter here.
- simplefilter("ignore")
- left.ix[slice_one, slice_two] = rhs
- tm.assert_frame_equal(left, right)
-
- left = df.copy()
- with catch_warnings(record=True):
- simplefilter("ignore")
- left.ix[idx_one, idx_two] = rhs
- tm.assert_frame_equal(left, right)
-
- left = df.copy()
- with catch_warnings(record=True):
- simplefilter("ignore")
- left.ix[lbl_one, lbl_two] = rhs
+ left.iloc[slice_one, slice_two] = rhs
tm.assert_frame_equal(left, right)
xs = np.arange(20).reshape(5, 4)
@@ -933,7 +889,7 @@ def assert_slices_equivalent(l_slc, i_slc):
tm.assert_series_equal(s.loc[l_slc], s.iloc[i_slc])
if not idx.is_integer:
- # For integer indices, ix and plain getitem are position-based.
+ # For integer indices, .loc and plain getitem are position-based.
tm.assert_series_equal(s[l_slc], s.iloc[i_slc])
tm.assert_series_equal(s.loc[l_slc], s.iloc[i_slc])
@@ -951,10 +907,6 @@ def test_slice_with_zero_step_raises(self):
s[::0]
with pytest.raises(ValueError, match="slice step cannot be zero"):
s.loc[::0]
- with catch_warnings(record=True):
- simplefilter("ignore")
- with pytest.raises(ValueError, match="slice step cannot be zero"):
- s.ix[::0]
def test_indexing_assignment_dict_already_exists(self):
df = DataFrame({"x": [1, 2, 6], "y": [2, 2, 8], "z": [-5, 0, 5]}).set_index("z")
@@ -965,17 +917,12 @@ def test_indexing_assignment_dict_already_exists(self):
tm.assert_frame_equal(df, expected)
def test_indexing_dtypes_on_empty(self):
- # Check that .iloc and .ix return correct dtypes GH9983
+ # Check that .iloc returns correct dtypes GH9983
df = DataFrame({"a": [1, 2, 3], "b": ["b", "b2", "b3"]})
- with catch_warnings(record=True):
- simplefilter("ignore")
- df2 = df.ix[[], :]
+ df2 = df.iloc[[], :]
assert df2.loc[:, "a"].dtype == np.int64
tm.assert_series_equal(df2.loc[:, "a"], df2.iloc[:, 0])
- with catch_warnings(record=True):
- simplefilter("ignore")
- tm.assert_series_equal(df2.loc[:, "a"], df2.ix[:, 0])
def test_range_in_series_indexing(self):
# range can cause an indexing error
@@ -1048,9 +995,6 @@ def test_no_reference_cycle(self):
df = DataFrame({"a": [0, 1], "b": [2, 3]})
for name in ("loc", "iloc", "at", "iat"):
getattr(df, name)
- with catch_warnings(record=True):
- simplefilter("ignore")
- getattr(df, "ix")
wr = weakref.ref(df)
del df
assert wr() is None
@@ -1235,12 +1179,6 @@ def test_extension_array_cross_section_converts():
AttributeError,
"type object 'NDFrame' has no attribute '_AXIS_ALIASES'",
),
- pytest.param(
- lambda x: x.ix,
- ValueError,
- "NDFrameIndexer does not support NDFrame objects with ndim > 2",
- marks=ignore_ix,
- ),
],
)
def test_ndframe_indexing_raises(idxr, error, error_message):
| https://api.github.com/repos/pandas-dev/pandas/pulls/27535 | 2019-07-23T02:48:23Z | 2019-07-24T13:01:33Z | 2019-07-24T13:01:33Z | 2019-07-24T14:00:07Z | |
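For reference, the long-deprecated `.ix` accessor guessed between label- and position-based lookup; the rewritten tests above use the explicit accessors instead. A minimal sketch of the modern equivalents:

```python
import pandas as pd

# .loc is always label-based, .iloc always position-based -- no guessing.
s = pd.Series([10, 20, 30], index=["a", "b", "c"])
print(s.loc["b"])  # 20, by label (previously s.ix["b"])
print(s.iloc[1])   # 20, by position (previously s.ix[1] on a non-integer index)
```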
DEPR: remove .ix tests from tests/indexing/test_floats.py | diff --git a/pandas/tests/indexing/test_floats.py b/pandas/tests/indexing/test_floats.py
index 78ff6580bb1e1..56a78081bc624 100644
--- a/pandas/tests/indexing/test_floats.py
+++ b/pandas/tests/indexing/test_floats.py
@@ -1,5 +1,3 @@
-from warnings import catch_warnings
-
import numpy as np
import pytest
@@ -7,8 +5,6 @@
import pandas.util.testing as tm
from pandas.util.testing import assert_almost_equal, assert_series_equal
-ignore_ix = pytest.mark.filterwarnings("ignore:\\n.ix:FutureWarning")
-
class TestFloatIndexers:
def check(self, result, original, indexer, getitem):
@@ -62,7 +58,6 @@ def test_scalar_error(self):
with pytest.raises(TypeError, match=msg):
s.iloc[3.0] = 0
- @ignore_ix
def test_scalar_non_numeric(self):
# GH 4892
@@ -86,11 +81,7 @@ def test_scalar_non_numeric(self):
]:
# getting
- for idxr, getitem in [
- (lambda x: x.ix, False),
- (lambda x: x.iloc, False),
- (lambda x: x, True),
- ]:
+ for idxr, getitem in [(lambda x: x.iloc, False), (lambda x: x, True)]:
# gettitem on a DataFrame is a KeyError as it is indexing
# via labels on the columns
@@ -106,9 +97,8 @@ def test_scalar_non_numeric(self):
"Cannot index by location index with a"
" non-integer key".format(klass=type(i), kind=str(float))
)
- with catch_warnings(record=True):
- with pytest.raises(error, match=msg):
- idxr(s)[3.0]
+ with pytest.raises(error, match=msg):
+ idxr(s)[3.0]
# label based can be a TypeError or KeyError
if s.index.inferred_type in ["string", "unicode", "mixed"]:
@@ -158,10 +148,9 @@ def test_scalar_non_numeric(self):
s2.loc[3.0] = 10
assert s2.index.is_object()
- for idxr in [lambda x: x.ix, lambda x: x]:
+ for idxr in [lambda x: x]:
s2 = s.copy()
- with catch_warnings(record=True):
- idxr(s2)[3.0] = 0
+ idxr(s2)[3.0] = 0
assert s2.index.is_object()
# fallsback to position selection, series only
@@ -175,7 +164,6 @@ def test_scalar_non_numeric(self):
with pytest.raises(TypeError, match=msg):
s[3.0]
- @ignore_ix
def test_scalar_with_mixed(self):
s2 = Series([1, 2, 3], index=["a", "b", "c"])
@@ -183,7 +171,7 @@ def test_scalar_with_mixed(self):
# lookup in a pure stringstr
# with an invalid indexer
- for idxr in [lambda x: x.ix, lambda x: x, lambda x: x.iloc]:
+ for idxr in [lambda x: x, lambda x: x.iloc]:
msg = (
r"cannot do label indexing"
@@ -193,9 +181,8 @@ def test_scalar_with_mixed(self):
klass=str(Index), kind=str(float)
)
)
- with catch_warnings(record=True):
- with pytest.raises(TypeError, match=msg):
- idxr(s2)[1.0]
+ with pytest.raises(TypeError, match=msg):
+ idxr(s2)[1.0]
with pytest.raises(KeyError, match=r"^1$"):
s2.loc[1.0]
@@ -220,23 +207,6 @@ def test_scalar_with_mixed(self):
expected = 2
assert result == expected
- # mixed index so we have label
- # indexing
- for idxr in [lambda x: x.ix]:
- with catch_warnings(record=True):
-
- msg = (
- r"cannot do label indexing"
- r" on {klass} with these indexers \[1\.0\] of"
- r" {kind}".format(klass=str(Index), kind=str(float))
- )
- with pytest.raises(TypeError, match=msg):
- idxr(s3)[1.0]
-
- result = idxr(s3)[1]
- expected = 2
- assert result == expected
-
msg = "Cannot index by location index with a non-integer key"
with pytest.raises(TypeError, match=msg):
s3.iloc[1.0]
@@ -247,7 +217,6 @@ def test_scalar_with_mixed(self):
expected = 3
assert result == expected
- @ignore_ix
def test_scalar_integer(self):
# test how scalar float indexers work on int indexes
@@ -261,22 +230,13 @@ def test_scalar_integer(self):
]:
# coerce to equal int
- for idxr, getitem in [
- (lambda x: x.ix, False),
- (lambda x: x.loc, False),
- (lambda x: x, True),
- ]:
+ for idxr, getitem in [(lambda x: x.loc, False), (lambda x: x, True)]:
- with catch_warnings(record=True):
- result = idxr(s)[3.0]
+ result = idxr(s)[3.0]
self.check(result, s, 3, getitem)
# coerce to equal int
- for idxr, getitem in [
- (lambda x: x.ix, False),
- (lambda x: x.loc, False),
- (lambda x: x, True),
- ]:
+ for idxr, getitem in [(lambda x: x.loc, False), (lambda x: x, True)]:
if isinstance(s, Series):
@@ -292,20 +252,18 @@ def compare(x, y):
expected = Series(100.0, index=range(len(s)), name=3)
s2 = s.copy()
- with catch_warnings(record=True):
- idxr(s2)[3.0] = 100
+ idxr(s2)[3.0] = 100
- result = idxr(s2)[3.0]
- compare(result, expected)
+ result = idxr(s2)[3.0]
+ compare(result, expected)
- result = idxr(s2)[3]
- compare(result, expected)
+ result = idxr(s2)[3]
+ compare(result, expected)
# contains
# coerce to equal int
assert 3.0 in s
- @ignore_ix
def test_scalar_float(self):
# scalar float indexers work on a float index
@@ -319,11 +277,7 @@ def test_scalar_float(self):
# assert all operations except for iloc are ok
indexer = index[3]
- for idxr, getitem in [
- (lambda x: x.ix, False),
- (lambda x: x.loc, False),
- (lambda x: x, True),
- ]:
+ for idxr, getitem in [(lambda x: x.loc, False), (lambda x: x, True)]:
# getting
result = idxr(s)[indexer]
@@ -332,14 +286,12 @@ def test_scalar_float(self):
# setting
s2 = s.copy()
- with catch_warnings(record=True):
- result = idxr(s2)[indexer]
+ result = idxr(s2)[indexer]
self.check(result, s, 3, getitem)
# random integer is a KeyError
- with catch_warnings(record=True):
- with pytest.raises(KeyError, match=r"^3\.5$"):
- idxr(s)[3.5]
+ with pytest.raises(KeyError, match=r"^3\.5$"):
+ idxr(s)[3.5]
# contains
assert 3.0 in s
@@ -365,7 +317,6 @@ def test_scalar_float(self):
with pytest.raises(TypeError, match=msg):
s2.iloc[3.0] = 0
- @ignore_ix
def test_slice_non_numeric(self):
# GH 4892
@@ -397,12 +348,7 @@ def test_slice_non_numeric(self):
with pytest.raises(TypeError, match=msg):
s.iloc[l]
- for idxr in [
- lambda x: x.ix,
- lambda x: x.loc,
- lambda x: x.iloc,
- lambda x: x,
- ]:
+ for idxr in [lambda x: x.loc, lambda x: x.iloc, lambda x: x]:
msg = (
"cannot do slice indexing"
@@ -414,9 +360,8 @@ def test_slice_non_numeric(self):
kind_int=str(int),
)
)
- with catch_warnings(record=True):
- with pytest.raises(TypeError, match=msg):
- idxr(s)[l]
+ with pytest.raises(TypeError, match=msg):
+ idxr(s)[l]
# setitem
for l in [slice(3.0, 4), slice(3, 4.0), slice(3.0, 4.0)]:
@@ -429,12 +374,7 @@ def test_slice_non_numeric(self):
with pytest.raises(TypeError, match=msg):
s.iloc[l] = 0
- for idxr in [
- lambda x: x.ix,
- lambda x: x.loc,
- lambda x: x.iloc,
- lambda x: x,
- ]:
+ for idxr in [lambda x: x.loc, lambda x: x.iloc, lambda x: x]:
msg = (
"cannot do slice indexing"
r" on {klass} with these indexers"
@@ -445,11 +385,9 @@ def test_slice_non_numeric(self):
kind_int=str(int),
)
)
- with catch_warnings(record=True):
- with pytest.raises(TypeError, match=msg):
- idxr(s)[l] = 0
+ with pytest.raises(TypeError, match=msg):
+ idxr(s)[l] = 0
- @ignore_ix
def test_slice_integer(self):
# same as above, but for Integer based indexes
@@ -468,10 +406,9 @@ def test_slice_integer(self):
# getitem
for l in [slice(3.0, 4), slice(3, 4.0), slice(3.0, 4.0)]:
- for idxr in [lambda x: x.loc, lambda x: x.ix]:
+ for idxr in [lambda x: x.loc]:
- with catch_warnings(record=True):
- result = idxr(s)[l]
+ result = idxr(s)[l]
# these are all label indexing
# except getitem which is positional
@@ -494,9 +431,8 @@ def test_slice_integer(self):
# getitem out-of-bounds
for l in [slice(-6, 6), slice(-6.0, 6.0)]:
- for idxr in [lambda x: x.loc, lambda x: x.ix]:
- with catch_warnings(record=True):
- result = idxr(s)[l]
+ for idxr in [lambda x: x.loc]:
+ result = idxr(s)[l]
# these are all label indexing
# except getitem which is positional
@@ -523,10 +459,9 @@ def test_slice_integer(self):
(slice(2.5, 3.5), slice(3, 4)),
]:
- for idxr in [lambda x: x.loc, lambda x: x.ix]:
+ for idxr in [lambda x: x.loc]:
- with catch_warnings(record=True):
- result = idxr(s)[l]
+ result = idxr(s)[l]
if oob:
res = slice(0, 0)
else:
@@ -546,11 +481,10 @@ def test_slice_integer(self):
# setitem
for l in [slice(3.0, 4), slice(3, 4.0), slice(3.0, 4.0)]:
- for idxr in [lambda x: x.loc, lambda x: x.ix]:
+ for idxr in [lambda x: x.loc]:
sc = s.copy()
- with catch_warnings(record=True):
- idxr(sc)[l] = 0
- result = idxr(sc)[l].values.ravel()
+ idxr(sc)[l] = 0
+ result = idxr(sc)[l].values.ravel()
assert (result == 0).all()
# positional indexing
@@ -585,7 +519,6 @@ def test_integer_positional_indexing(self):
with pytest.raises(TypeError, match=msg):
idxr(s)[l]
- @ignore_ix
def test_slice_integer_frame_getitem(self):
# similar to above, but on the getitem dim (of a DataFrame)
@@ -663,10 +596,7 @@ def f(idxr):
s[l] = 0
f(lambda x: x.loc)
- with catch_warnings(record=True):
- f(lambda x: x.ix)
- @ignore_ix
def test_slice_float(self):
# same as above, but for floats
@@ -679,20 +609,18 @@ def test_slice_float(self):
for l in [slice(3.0, 4), slice(3, 4.0), slice(3.0, 4.0)]:
expected = s.iloc[3:4]
- for idxr in [lambda x: x.ix, lambda x: x.loc, lambda x: x]:
+ for idxr in [lambda x: x.loc, lambda x: x]:
# getitem
- with catch_warnings(record=True):
- result = idxr(s)[l]
+ result = idxr(s)[l]
if isinstance(s, Series):
tm.assert_series_equal(result, expected)
else:
tm.assert_frame_equal(result, expected)
# setitem
s2 = s.copy()
- with catch_warnings(record=True):
- idxr(s2)[l] = 0
- result = idxr(s2)[l].values.ravel()
+ idxr(s2)[l] = 0
+ result = idxr(s2)[l].values.ravel()
assert (result == 0).all()
def test_floating_index_doc_example(self):
| https://api.github.com/repos/pandas-dev/pandas/pulls/27533 | 2019-07-23T01:42:23Z | 2019-07-23T21:57:21Z | 2019-07-23T21:57:21Z | 2019-07-24T11:55:49Z | |
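One rule these float-indexing tests pin down can be sketched directly: positional `.iloc` rejects float keys even when they are integral-valued:

```python
import pandas as pd

s = pd.Series(range(5))
try:
    s.iloc[3.0]  # non-integer positional key
    raised = False
except (TypeError, IndexError):
    raised = True
print(raised)  # True
```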
CLN: simplify join take call | diff --git a/pandas/_libs/join.pyx b/pandas/_libs/join.pyx
index f9e1ebb11116b..238bfd0be0aa7 100644
--- a/pandas/_libs/join.pyx
+++ b/pandas/_libs/join.pyx
@@ -8,8 +8,9 @@ from numpy cimport (ndarray,
uint32_t, uint64_t, float32_t, float64_t)
cnp.import_array()
-from pandas._libs.algos import groupsort_indexer, ensure_platform_int
-from pandas.core.algorithms import take_nd
+from pandas._libs.algos import (
+ groupsort_indexer, ensure_platform_int, take_1d_int64_int64
+)
def inner_join(const int64_t[:] left, const int64_t[:] right,
@@ -67,8 +68,8 @@ def left_outer_join(const int64_t[:] left, const int64_t[:] right,
Py_ssize_t max_groups, sort=True):
cdef:
Py_ssize_t i, j, k, count = 0
- ndarray[int64_t] left_count, right_count
- ndarray left_sorter, right_sorter, rev
+ ndarray[int64_t] left_count, right_count, left_sorter, right_sorter
+ ndarray rev
ndarray[int64_t] left_indexer, right_indexer
int64_t lc, rc
@@ -124,10 +125,8 @@ def left_outer_join(const int64_t[:] left, const int64_t[:] right,
# no multiple matches for any row on the left
# this is a short-cut to avoid groupsort_indexer
# otherwise, the `else` path also works in this case
- left_sorter = ensure_platform_int(left_sorter)
-
rev = np.empty(len(left), dtype=np.intp)
- rev.put(left_sorter, np.arange(len(left)))
+ rev.put(ensure_platform_int(left_sorter), np.arange(len(left)))
else:
rev, _ = groupsort_indexer(left_indexer, len(left))
@@ -201,9 +200,12 @@ def full_outer_join(const int64_t[:] left, const int64_t[:] right,
_get_result_indexer(right_sorter, right_indexer))
-def _get_result_indexer(sorter, indexer):
+cdef _get_result_indexer(ndarray[int64_t] sorter, ndarray[int64_t] indexer):
if len(sorter) > 0:
- res = take_nd(sorter, indexer, fill_value=-1)
+ # cython-only equivalent to
+ # `res = algos.take_nd(sorter, indexer, fill_value=-1)`
+ res = np.empty(len(indexer), dtype=np.int64)
+ take_1d_int64_int64(sorter, indexer, res, -1)
else:
# length-0 case
res = np.empty(len(indexer), dtype=np.int64)
| We can track down the dtypes that get passed to `_get_result_indexer`, and since it is always int64/int64, we can reduce the call to `algos.take_nd` down to a direct cython call.
This removes the dependency of this module on non-cython parts of the code | https://api.github.com/repos/pandas-dev/pandas/pulls/27531 | 2019-07-23T00:29:55Z | 2019-07-24T11:56:26Z | 2019-07-24T11:56:26Z | 2019-07-24T13:21:25Z |
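The operation `_get_result_indexer` performs — whether via `algos.take_nd` or the direct `take_1d_int64_int64` call — can be sketched in plain NumPy as a "take with fill value", where `-1` entries in the indexer map to the fill value instead of wrapping to the last element (a hedged sketch, not the cython implementation):

```python
import numpy as np

def take_with_fill(values, indexer, fill_value=-1):
    # Plain take first (note: -1 wraps to the last element)...
    out = values.take(indexer)
    # ...then overwrite the wrapped positions with the fill value.
    out[indexer == -1] = fill_value
    return out

sorter = np.array([2, 0, 1], dtype=np.int64)
indexer = np.array([0, -1, 2], dtype=np.int64)
print(take_with_fill(sorter, indexer))  # [ 2 -1  1]
```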
CLN: remove arrmap, closes #27251 | diff --git a/pandas/_libs/algos.pyx b/pandas/_libs/algos.pyx
index 0dbe525f7506e..038447ad252fe 100644
--- a/pandas/_libs/algos.pyx
+++ b/pandas/_libs/algos.pyx
@@ -674,31 +674,6 @@ def backfill_2d_inplace(algos_t[:, :] values,
val = values[j, i]
-@cython.wraparound(False)
-@cython.boundscheck(False)
-def arrmap(algos_t[:] index, object func):
- cdef:
- Py_ssize_t length = index.shape[0]
- Py_ssize_t i = 0
- ndarray[object] result = np.empty(length, dtype=np.object_)
-
- from pandas._libs.lib import maybe_convert_objects
-
- for i in range(length):
- result[i] = func(index[i])
-
- return maybe_convert_objects(result)
-
-
-arrmap_float64 = arrmap["float64_t"]
-arrmap_float32 = arrmap["float32_t"]
-arrmap_object = arrmap["object"]
-arrmap_int64 = arrmap["int64_t"]
-arrmap_int32 = arrmap["int32_t"]
-arrmap_uint64 = arrmap["uint64_t"]
-arrmap_bool = arrmap["uint8_t"]
-
-
@cython.boundscheck(False)
@cython.wraparound(False)
def is_monotonic(ndarray[algos_t, ndim=1] arr, bint timelike):
diff --git a/pandas/core/algorithms.py b/pandas/core/algorithms.py
index 111f6bf04b523..c7230dd7385c2 100644
--- a/pandas/core/algorithms.py
+++ b/pandas/core/algorithms.py
@@ -1100,7 +1100,9 @@ def _get_score(at):
return _get_score(q)
else:
q = np.asarray(q, np.float64)
- return algos.arrmap_float64(q, _get_score)
+ result = [_get_score(x) for x in q]
+ result = np.array(result, dtype=np.float64)
+ return result
# --------------- #
diff --git a/pandas/tests/test_algos.py b/pandas/tests/test_algos.py
index c0d73821020b5..d81ee79418e9c 100644
--- a/pandas/tests/test_algos.py
+++ b/pandas/tests/test_algos.py
@@ -1685,12 +1685,6 @@ def test_pad_backfill_object_segfault():
tm.assert_numpy_array_equal(result, expected)
-def test_arrmap():
- values = np.array(["foo", "foo", "bar", "bar", "baz", "qux"], dtype="O")
- result = libalgos.arrmap_object(values, lambda x: x in ["foo", "bar"])
- assert result.dtype == np.bool_
-
-
class TestTseriesUtil:
def test_combineFunc(self):
pass
| https://api.github.com/repos/pandas-dev/pandas/pulls/27530 | 2019-07-23T00:25:28Z | 2019-07-23T21:14:23Z | 2019-07-23T21:14:23Z | 2019-07-23T21:39:50Z | |
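The replacement for the removed `arrmap` helper, as the quantile diff shows, is plain Python: apply the function element-wise in a list comprehension and rebuild a float64 array. A sketch with a stand-in scorer (`_get_score` here is hypothetical, not pandas' internal one):

```python
import numpy as np

def _get_score(at):
    # Hypothetical stand-in for the per-quantile scoring function.
    return at * 100.0

q = np.asarray([0.25, 0.5, 0.75], np.float64)
result = np.array([_get_score(x) for x in q], dtype=np.float64)
print(result)  # [25. 50. 75.]
```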
BUG: Fix fields functions with readonly data, vaex#357 | diff --git a/doc/source/whatsnew/v1.0.0.rst b/doc/source/whatsnew/v1.0.0.rst
index eed48f9e46897..ee3521ae31949 100644
--- a/doc/source/whatsnew/v1.0.0.rst
+++ b/doc/source/whatsnew/v1.0.0.rst
@@ -89,7 +89,7 @@ Categorical
Datetimelike
^^^^^^^^^^^^
- Bug in :meth:`Series.__setitem__` incorrectly casting ``np.timedelta64("NaT")`` to ``np.datetime64("NaT")`` when inserting into a :class:`Series` with datetime64 dtype (:issue:`27311`)
--
+- Bug in :meth:`Series.dt` property lookups when the underlying data is read-only (:issue:`27529`)
-
diff --git a/pandas/_libs/tslibs/fields.pyx b/pandas/_libs/tslibs/fields.pyx
index 2a41b5ff2339c..2ed85595f7e3a 100644
--- a/pandas/_libs/tslibs/fields.pyx
+++ b/pandas/_libs/tslibs/fields.pyx
@@ -45,7 +45,7 @@ def get_time_micros(ndarray[int64_t] dtindex):
@cython.wraparound(False)
@cython.boundscheck(False)
-def build_field_sarray(int64_t[:] dtindex):
+def build_field_sarray(const int64_t[:] dtindex):
"""
Datetime as int64 representation to a structured array of fields
"""
@@ -87,7 +87,7 @@ def build_field_sarray(int64_t[:] dtindex):
@cython.wraparound(False)
@cython.boundscheck(False)
-def get_date_name_field(int64_t[:] dtindex, object field, object locale=None):
+def get_date_name_field(const int64_t[:] dtindex, object field, object locale=None):
"""
Given a int64-based datetime index, return array of strings of date
name based on requested field (e.g. weekday_name)
@@ -137,7 +137,7 @@ def get_date_name_field(int64_t[:] dtindex, object field, object locale=None):
@cython.wraparound(False)
@cython.boundscheck(False)
-def get_start_end_field(int64_t[:] dtindex, object field,
+def get_start_end_field(const int64_t[:] dtindex, object field,
object freqstr=None, int month_kw=12):
"""
Given an int64-based datetime index return array of indicators
@@ -380,7 +380,7 @@ def get_start_end_field(int64_t[:] dtindex, object field,
@cython.wraparound(False)
@cython.boundscheck(False)
-def get_date_field(int64_t[:] dtindex, object field):
+def get_date_field(const int64_t[:] dtindex, object field):
"""
Given a int64-based datetime index, extract the year, month, etc.,
field and return an array of these values.
@@ -542,7 +542,7 @@ def get_date_field(int64_t[:] dtindex, object field):
@cython.wraparound(False)
@cython.boundscheck(False)
-def get_timedelta_field(int64_t[:] tdindex, object field):
+def get_timedelta_field(const int64_t[:] tdindex, object field):
"""
Given a int64-based timedelta index, extract the days, hrs, sec.,
field and return an array of these values.
diff --git a/pandas/tests/tslibs/test_fields.py b/pandas/tests/tslibs/test_fields.py
new file mode 100644
index 0000000000000..cd729956a027c
--- /dev/null
+++ b/pandas/tests/tslibs/test_fields.py
@@ -0,0 +1,31 @@
+import numpy as np
+
+from pandas._libs.tslibs import fields
+
+import pandas.util.testing as tm
+
+
+def test_fields_readonly():
+ # https://github.com/vaexio/vaex/issues/357
+ # fields functions shouldn't raise when we pass read-only data
+ dtindex = np.arange(5, dtype=np.int64) * 10 ** 9 * 3600 * 24 * 32
+ dtindex.flags.writeable = False
+
+ result = fields.get_date_name_field(dtindex, "month_name")
+ expected = np.array(
+ ["January", "February", "March", "April", "May"], dtype=np.object
+ )
+ tm.assert_numpy_array_equal(result, expected)
+
+ result = fields.get_date_field(dtindex, "Y")
+ expected = np.array([1970, 1970, 1970, 1970, 1970], dtype=np.int32)
+ tm.assert_numpy_array_equal(result, expected)
+
+ result = fields.get_start_end_field(dtindex, "is_month_start", None)
+ expected = np.array([True, False, False, False, False], dtype=np.bool_)
+ tm.assert_numpy_array_equal(result, expected)
+
+ # treat dtindex as timedeltas for this next one
+ result = fields.get_timedelta_field(dtindex, "days")
+ expected = np.arange(5, dtype=np.int32) * 32
+ tm.assert_numpy_array_equal(result, expected)
| - [ ] closes #xxxx
- [ ] tests added / passed
- [ ] passes `black pandas`
- [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/27529 | 2019-07-22T23:49:50Z | 2019-07-24T03:47:55Z | 2019-07-24T03:47:55Z | 2019-07-24T13:22:20Z |
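The read-only setup that this test guards against can be reproduced with plain NumPy (an illustrative aside, not part of the diff): setting `flags.writeable = False` still allows reads, but any write raises, which is why the Cython signatures above are changed to take `const int64_t[:]` memoryviews.

```python
import numpy as np

# mirror the test setup: a read-only int64 array
dtindex = np.arange(5, dtype=np.int64)
dtindex.flags.writeable = False

# reading still works, e.g. summing 0 + 1 + 2 + 3 + 4
total = int(dtindex.sum())

# writing raises ValueError; a non-const typed memoryview over this
# buffer would fail in the same way, hence the const in the diff
try:
    dtindex[0] = 99
    raised = False
except ValueError:
    raised = True
```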
Backport PR #27510 on branch 0.25.x (BUG: Retain tz transformation in groupby.transform) | diff --git a/doc/source/whatsnew/v0.25.1.rst b/doc/source/whatsnew/v0.25.1.rst
index 6234bc0f7bd35..69f82f7f85040 100644
--- a/doc/source/whatsnew/v0.25.1.rst
+++ b/doc/source/whatsnew/v0.25.1.rst
@@ -120,7 +120,7 @@ Plotting
Groupby/resample/rolling
^^^^^^^^^^^^^^^^^^^^^^^^
--
+- Bug in :meth:`pandas.core.groupby.DataFrameGroupBy.transform` where applying a timezone conversion lambda function would drop timezone information (:issue:`27496`)
-
-
diff --git a/pandas/core/groupby/generic.py b/pandas/core/groupby/generic.py
index 7fd0ca94e7997..5b9cec6903749 100644
--- a/pandas/core/groupby/generic.py
+++ b/pandas/core/groupby/generic.py
@@ -42,7 +42,7 @@
from pandas.core.base import DataError, SpecificationError
import pandas.core.common as com
from pandas.core.frame import DataFrame
-from pandas.core.generic import NDFrame, _shared_docs
+from pandas.core.generic import ABCDataFrame, ABCSeries, NDFrame, _shared_docs
from pandas.core.groupby import base
from pandas.core.groupby.groupby import GroupBy, _apply_docs, _transform_template
from pandas.core.index import Index, MultiIndex
@@ -1025,8 +1025,8 @@ def transform(self, func, *args, **kwargs):
object.__setattr__(group, "name", name)
res = wrapper(group)
- if hasattr(res, "values"):
- res = res.values
+ if isinstance(res, (ABCDataFrame, ABCSeries)):
+ res = res._values
indexer = self._get_index(name)
s = klass(res, indexer)
diff --git a/pandas/tests/groupby/test_transform.py b/pandas/tests/groupby/test_transform.py
index 1eab3ba253f4d..9a8b7cf18f2c0 100644
--- a/pandas/tests/groupby/test_transform.py
+++ b/pandas/tests/groupby/test_transform.py
@@ -1001,3 +1001,27 @@ def test_ffill_not_in_axis(func, key, val):
expected = df
assert_frame_equal(result, expected)
+
+
+def test_transform_lambda_with_datetimetz():
+ # GH 27496
+ df = DataFrame(
+ {
+ "time": [
+ Timestamp("2010-07-15 03:14:45"),
+ Timestamp("2010-11-19 18:47:06"),
+ ],
+ "timezone": ["Etc/GMT+4", "US/Eastern"],
+ }
+ )
+ result = df.groupby(["timezone"])["time"].transform(
+ lambda x: x.dt.tz_localize(x.name)
+ )
+ expected = Series(
+ [
+ Timestamp("2010-07-15 03:14:45", tz="Etc/GMT+4"),
+ Timestamp("2010-11-19 18:47:06", tz="US/Eastern"),
+ ],
+ name="time",
+ )
+ assert_series_equal(result, expected)
| Backport PR #27510: BUG: Retain tz transformation in groupby.transform | https://api.github.com/repos/pandas-dev/pandas/pulls/27521 | 2019-07-22T15:28:18Z | 2019-07-24T11:56:58Z | 2019-07-24T11:56:58Z | 2019-07-24T11:56:59Z |
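An aside on the import swap in this diff: `ABCSeries` and `ABCDataFrame` are lightweight abstract classes that support `isinstance` checks without importing the concrete classes (avoiding circular imports inside pandas). The fix uses them in place of the duck-typed `hasattr(res, "values")` check. A minimal sketch, assuming current pandas internals:

```python
import numpy as np
import pandas as pd
# the ABC* classes are defined in pandas.core.dtypes.generic; the diff
# imports them via a re-export in pandas.core.generic
from pandas.core.dtypes.generic import ABCDataFrame, ABCSeries

# explicit isinstance checks, as used by the fixed transform() code
is_series = isinstance(pd.Series([1]), ABCSeries)
is_frame = isinstance(pd.DataFrame({"a": [1]}), ABCDataFrame)

# a plain ndarray is not matched, unlike a loose hasattr-style check
is_ndarray_series = isinstance(np.array([1]), ABCSeries)
```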
DOC:Remove DataFrame.append from the 10min intro | diff --git a/doc/source/getting_started/10min.rst b/doc/source/getting_started/10min.rst
index 510c7ef97aa98..9045e5b32c29f 100644
--- a/doc/source/getting_started/10min.rst
+++ b/doc/source/getting_started/10min.rst
@@ -468,6 +468,13 @@ Concatenating pandas objects together with :func:`concat`:
pd.concat(pieces)
+.. note::
+ Adding a column to a ``DataFrame`` is relatively fast. However, adding
+ a row requires a copy, and may be expensive. We recommend passing a
+ pre-built list of records to the ``DataFrame`` constructor instead
+ of building a ``DataFrame`` by iteratively appending records to it.
+ See :ref:`Appending to dataframe <merging.concatenation>` for more.
+
Join
~~~~
@@ -491,21 +498,6 @@ Another example that can be given is:
right
pd.merge(left, right, on='key')
-
-Append
-~~~~~~
-
-Append rows to a dataframe. See the :ref:`Appending <merging.concatenation>`
-section.
-
-.. ipython:: python
-
- df = pd.DataFrame(np.random.randn(8, 4), columns=['A', 'B', 'C', 'D'])
- df
- s = df.iloc[3]
- df.append(s, ignore_index=True)
-
-
Grouping
--------
| Remove the `append` section from the 10 minute intro doc, since its complexity is very different from that of `list.append`.
- [x] closes #27518
| https://api.github.com/repos/pandas-dev/pandas/pulls/27520 | 2019-07-22T14:44:09Z | 2019-08-02T12:43:47Z | 2019-08-02T12:43:47Z | 2019-08-02T16:10:02Z |
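The recommendation in the added note can be sketched as follows (illustrative only; the column names are made up):

```python
import pandas as pd

# preferred: collect the records first, then construct the frame once
records = [{"A": i, "B": i ** 2} for i in range(3)]
df = pd.DataFrame(records)

# the anti-pattern the note warns about: each append copies the whole
# frame, so building row-by-row is quadratic in the number of rows
# slow = pd.DataFrame(columns=["A", "B"])
# for i in range(3):
#     slow = slow.append({"A": i, "B": i ** 2}, ignore_index=True)
```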
more type hints for io/formats/format.py | diff --git a/.gitignore b/.gitignore
index 56828fa1d9331..e85da9c9b976b 100644
--- a/.gitignore
+++ b/.gitignore
@@ -66,6 +66,9 @@ coverage_html_report
# hypothesis test database
.hypothesis/
__pycache__
+# pytest-monkeytype
+monkeytype.sqlite3
+
# OS generated files #
######################
diff --git a/pandas/io/formats/format.py b/pandas/io/formats/format.py
index 11b69064723c5..c5b0b7b222e27 100644
--- a/pandas/io/formats/format.py
+++ b/pandas/io/formats/format.py
@@ -6,16 +6,33 @@
from functools import partial
from io import StringIO
from shutil import get_terminal_size
-from typing import TYPE_CHECKING, List, Optional, TextIO, Tuple, Union, cast
+from typing import (
+ TYPE_CHECKING,
+ Any,
+ Callable,
+ Dict,
+ Iterable,
+ List,
+ Optional,
+ TextIO,
+ Tuple,
+ Type,
+ Union,
+ cast,
+)
from unicodedata import east_asian_width
+from dateutil.tz.tz import tzutc
+from dateutil.zoneinfo import tzfile
import numpy as np
+from numpy import float64, int32, ndarray
from pandas._config.config import get_option, set_option
from pandas._libs import lib
from pandas._libs.tslib import format_array_from_datetime
from pandas._libs.tslibs import NaT, Timedelta, Timestamp, iNaT
+from pandas._libs.tslibs.nattype import NaTType
from pandas.core.dtypes.common import (
is_categorical_dtype,
@@ -40,10 +57,14 @@
)
from pandas.core.dtypes.missing import isna, notna
+from pandas._typing import FilePathOrBuffer
+from pandas.core.arrays.datetimes import DatetimeArray
+from pandas.core.arrays.timedeltas import TimedeltaArray
from pandas.core.base import PandasObject
import pandas.core.common as com
from pandas.core.index import Index, ensure_index
from pandas.core.indexes.datetimes import DatetimeIndex
+from pandas.core.indexes.timedeltas import TimedeltaIndex
from pandas.io.common import _expand_user, _stringify_path
from pandas.io.formats.printing import adjoin, justify, pprint_thing
@@ -51,6 +72,11 @@
if TYPE_CHECKING:
from pandas import Series, DataFrame, Categorical
+formatters_type = Union[
+ List[Callable], Tuple[Callable, ...], Dict[Union[str, int], Callable]
+]
+float_format_type = Union[str, Callable, "EngFormatter"]
+
common_docstring = """
Parameters
----------
@@ -66,11 +92,11 @@
Whether to print index (row) labels.
na_rep : str, optional, default 'NaN'
String representation of NAN to use.
- formatters : list or dict of one-param. functions, optional
+ formatters : list, tuple or dict of one-param. functions, optional
Formatter functions to apply to columns' elements by position or
name.
The result of each function must be a unicode string.
- List must be of length equal to the number of columns.
+ List/tuple must be of length equal to the number of columns.
float_format : one-parameter function, optional, default None
Formatter function to apply to columns' elements if they are
floats. The result of this function must be a unicode string.
@@ -354,13 +380,13 @@ class TextAdjustment:
def __init__(self):
self.encoding = get_option("display.encoding")
- def len(self, text):
+ def len(self, text: str) -> int:
return len(text)
- def justify(self, texts, max_len, mode="right"):
+ def justify(self, texts: Any, max_len: int, mode: str = "right") -> List[str]:
return justify(texts, max_len, mode=mode)
- def adjoin(self, space, *lists, **kwargs):
+ def adjoin(self, space: int, *lists, **kwargs) -> str:
return adjoin(space, *lists, strlen=self.len, justfunc=self.justify, **kwargs)
@@ -377,7 +403,7 @@ def __init__(self):
# Ambiguous width can be changed by option
self._EAW_MAP = {"Na": 1, "N": 1, "W": 2, "F": 2, "H": 1}
- def len(self, text):
+ def len(self, text: str) -> int:
"""
Calculate display width considering unicode East Asian Width
"""
@@ -388,7 +414,9 @@ def len(self, text):
self._EAW_MAP.get(east_asian_width(c), self.ambiguous_width) for c in text
)
- def justify(self, texts, max_len, mode="right"):
+ def justify(
+ self, texts: Iterable[str], max_len: int, mode: str = "right"
+ ) -> List[str]:
# re-calculate padding space per str considering East Asian Width
def _get_pad(t):
return max_len - self.len(t) + len(t)
@@ -401,7 +429,7 @@ def _get_pad(t):
return [x.rjust(_get_pad(x)) for x in texts]
-def _get_adjustment():
+def _get_adjustment() -> TextAdjustment:
use_east_asian_width = get_option("display.unicode.east_asian_width")
if use_east_asian_width:
return EastAsianTextAdjustment()
@@ -411,17 +439,21 @@ def _get_adjustment():
class TableFormatter:
- show_dimensions = None
+ show_dimensions = None # type: bool
+ is_truncated = None # type: bool
+ formatters = None # type: formatters_type
+ columns = None # type: Index
@property
- def should_show_dimensions(self):
+ def should_show_dimensions(self) -> Optional[bool]:
return self.show_dimensions is True or (
self.show_dimensions == "truncate" and self.is_truncated
)
- def _get_formatter(self, i):
+ def _get_formatter(self, i: Union[str, int]) -> Optional[Callable]:
if isinstance(self.formatters, (list, tuple)):
if is_integer(i):
+ i = cast(int, i)
return self.formatters[i]
else:
return None
@@ -446,26 +478,26 @@ class DataFrameFormatter(TableFormatter):
def __init__(
self,
- frame,
- buf=None,
- columns=None,
- col_space=None,
- header=True,
- index=True,
- na_rep="NaN",
- formatters=None,
- justify=None,
- float_format=None,
- sparsify=None,
- index_names=True,
- line_width=None,
- max_rows=None,
- min_rows=None,
- max_cols=None,
- show_dimensions=False,
- decimal=".",
- table_id=None,
- render_links=False,
+ frame: "DataFrame",
+ buf: Optional[FilePathOrBuffer] = None,
+ columns: Optional[List[str]] = None,
+ col_space: Optional[Union[str, int]] = None,
+ header: Union[bool, List[str]] = True,
+ index: bool = True,
+ na_rep: str = "NaN",
+ formatters: Optional[formatters_type] = None,
+ justify: Optional[str] = None,
+ float_format: Optional[float_format_type] = None,
+ sparsify: Optional[bool] = None,
+ index_names: bool = True,
+ line_width: Optional[int] = None,
+ max_rows: Optional[int] = None,
+ min_rows: Optional[int] = None,
+ max_cols: Optional[int] = None,
+ show_dimensions: bool = False,
+ decimal: str = ".",
+ table_id: Optional[str] = None,
+ render_links: bool = False,
**kwds
):
self.frame = frame
@@ -532,9 +564,12 @@ def _chk_truncate(self) -> None:
prompt_row = 1
if self.show_dimensions:
show_dimension_rows = 3
+ # assume we only get here if self.header is boolean.
+ # i.e. not to_latex() where self.header may be List[str]
+ self.header = cast(bool, self.header)
n_add_rows = self.header + dot_row + show_dimension_rows + prompt_row
# rows available to fill with actual data
- max_rows_adj = self.h - n_add_rows
+ max_rows_adj = self.h - n_add_rows # type: Optional[int]
self.max_rows_adj = max_rows_adj
# Format only rows and columns that could potentially fit the
@@ -561,9 +596,12 @@ def _chk_truncate(self) -> None:
frame = self.frame
if truncate_h:
+ # cast here since if truncate_h is True, max_cols_adj is not None
+ max_cols_adj = cast(int, max_cols_adj)
if max_cols_adj == 0:
col_num = len(frame.columns)
elif max_cols_adj == 1:
+ max_cols = cast(int, max_cols)
frame = frame.iloc[:, :max_cols]
col_num = max_cols
else:
@@ -573,6 +611,8 @@ def _chk_truncate(self) -> None:
)
self.tr_col_num = col_num
if truncate_v:
+ # cast here since if truncate_v is True, max_rows_adj is not None
+ max_rows_adj = cast(int, max_rows_adj)
if max_rows_adj == 1:
row_num = max_rows
frame = frame.iloc[:max_rows, :]
@@ -586,12 +626,16 @@ def _chk_truncate(self) -> None:
self.tr_frame = frame
self.truncate_h = truncate_h
self.truncate_v = truncate_v
- self.is_truncated = self.truncate_h or self.truncate_v
+ self.is_truncated = bool(self.truncate_h or self.truncate_v)
def _to_str_columns(self) -> List[List[str]]:
"""
Render a DataFrame to a list of columns (as lists of strings).
"""
+ # this method is not used by to_html where self.col_space
+ # could be a string so safe to cast
+ self.col_space = cast(int, self.col_space)
+
frame = self.tr_frame
# may include levels names also
@@ -610,6 +654,8 @@ def _to_str_columns(self) -> List[List[str]]:
stringified.append(fmt_values)
else:
if is_list_like(self.header):
+ # cast here since can't be bool if is_list_like
+ self.header = cast(List[str], self.header)
if len(self.header) != len(self.columns):
raise ValueError(
(
@@ -656,6 +702,8 @@ def _to_str_columns(self) -> List[List[str]]:
if truncate_v:
n_header_rows = len(str_index) - len(frame)
row_num = self.tr_row_num
+ # cast here since if truncate_v is True, self.tr_row_num is not None
+ row_num = cast(int, row_num)
for ix, col in enumerate(strcols):
# infer from above row
cwidth = self.adj.len(strcols[ix][row_num])
@@ -704,8 +752,8 @@ def to_string(self) -> None:
): # need to wrap around
text = self._join_multiline(*strcols)
else: # max_cols == 0. Try to fit frame to terminal
- text = self.adj.adjoin(1, *strcols).split("\n")
- max_len = Series(text).str.len().max()
+ lines = self.adj.adjoin(1, *strcols).split("\n")
+ max_len = Series(lines).str.len().max()
# plus truncate dot col
dif = max_len - self.w
# '+ 1' to avoid too wide repr (GH PR #17023)
@@ -742,10 +790,10 @@ def to_string(self) -> None:
)
)
- def _join_multiline(self, *strcols):
+ def _join_multiline(self, *args) -> str:
lwidth = self.line_width
adjoin_width = 1
- strcols = list(strcols)
+ strcols = list(args)
if self.index:
idx = strcols.pop(0)
lwidth -= np.array([self.adj.len(x) for x in idx]).max() + adjoin_width
@@ -758,6 +806,8 @@ def _join_multiline(self, *strcols):
nbins = len(col_bins)
if self.truncate_v:
+ # cast here since if truncate_v is True, max_rows_adj is not None
+ self.max_rows_adj = cast(int, self.max_rows_adj)
nrows = self.max_rows_adj + 1
else:
nrows = len(self.frame)
@@ -779,13 +829,13 @@ def _join_multiline(self, *strcols):
def to_latex(
self,
- column_format=None,
- longtable=False,
- encoding=None,
- multicolumn=False,
- multicolumn_format=None,
- multirow=False,
- ):
+ column_format: Optional[str] = None,
+ longtable: bool = False,
+ encoding: Optional[str] = None,
+ multicolumn: bool = False,
+ multicolumn_format: Optional[str] = None,
+ multirow: bool = False,
+ ) -> None:
"""
Render a DataFrame to a LaTeX tabular/longtable environment output.
"""
@@ -920,7 +970,8 @@ def show_col_idx_names(self) -> bool:
def _get_formatted_index(self, frame: "DataFrame") -> List[str]:
# Note: this is only used by to_string() and to_latex(), not by
- # to_html().
+ # to_html(). so safe to cast col_space here.
+ self.col_space = cast(int, self.col_space)
index = frame.index
columns = frame.columns
fmt = self._get_formatter("__index__")
@@ -972,16 +1023,16 @@ def _get_column_name_list(self) -> List[str]:
def format_array(
- values,
- formatter,
- float_format=None,
- na_rep="NaN",
- digits=None,
- space=None,
- justify="right",
- decimal=".",
- leading_space=None,
-):
+ values: Any,
+ formatter: Optional[Callable],
+ float_format: Optional[float_format_type] = None,
+ na_rep: str = "NaN",
+ digits: Optional[int] = None,
+ space: Optional[Union[str, int]] = None,
+ justify: str = "right",
+ decimal: str = ".",
+ leading_space: Optional[bool] = None,
+) -> List[str]:
"""
Format an array for printing.
@@ -1010,7 +1061,7 @@ def format_array(
"""
if is_datetime64_dtype(values.dtype):
- fmt_klass = Datetime64Formatter
+ fmt_klass = Datetime64Formatter # type: Type[GenericArrayFormatter]
elif is_datetime64tz_dtype(values):
fmt_klass = Datetime64TZFormatter
elif is_timedelta64_dtype(values.dtype):
@@ -1051,17 +1102,17 @@ def format_array(
class GenericArrayFormatter:
def __init__(
self,
- values,
- digits=7,
- formatter=None,
- na_rep="NaN",
- space=12,
- float_format=None,
- justify="right",
- decimal=".",
- quoting=None,
- fixed_width=True,
- leading_space=None,
+ values: Any,
+ digits: int = 7,
+ formatter: Optional[Callable] = None,
+ na_rep: str = "NaN",
+ space: Union[str, int] = 12,
+ float_format: Optional[float_format_type] = None,
+ justify: str = "right",
+ decimal: str = ".",
+ quoting: Optional[int] = None,
+ fixed_width: bool = True,
+ leading_space: Optional[bool] = None,
):
self.values = values
self.digits = digits
@@ -1075,11 +1126,11 @@ def __init__(
self.fixed_width = fixed_width
self.leading_space = leading_space
- def get_result(self):
+ def get_result(self) -> Union[ndarray, List[str]]:
fmt_values = self._format_strings()
return _make_fixed_width(fmt_values, self.justify)
- def _format_strings(self):
+ def _format_strings(self) -> List[str]:
if self.float_format is None:
float_format = get_option("display.float_format")
if float_format is None:
@@ -1161,7 +1212,11 @@ def __init__(self, *args, **kwargs):
self.formatter = self.float_format
self.float_format = None
- def _value_formatter(self, float_format=None, threshold=None):
+ def _value_formatter(
+ self,
+ float_format: Optional[float_format_type] = None,
+ threshold: Optional[Union[float, int]] = None,
+ ) -> Callable:
"""Returns a function to be applied on each value to format it
"""
@@ -1207,7 +1262,7 @@ def formatter(value):
return formatter
- def get_result_as_array(self):
+ def get_result_as_array(self) -> Union[ndarray, List[str]]:
"""
Returns the float values converted into strings using
the parameters given at initialisation, as a numpy array
@@ -1259,7 +1314,7 @@ def format_values_with(float_format):
if self.fixed_width:
float_format = partial(
"{value: .{digits:d}f}".format, digits=self.digits
- )
+ ) # type: Optional[float_format_type]
else:
float_format = self.float_format
else:
@@ -1296,7 +1351,7 @@ def format_values_with(float_format):
return formatted_values
- def _format_strings(self):
+ def _format_strings(self) -> List[str]:
# shortcut
if self.formatter is not None:
return [self.formatter(x) for x in self.values]
@@ -1305,19 +1360,25 @@ def _format_strings(self):
class IntArrayFormatter(GenericArrayFormatter):
- def _format_strings(self):
+ def _format_strings(self) -> List[str]:
formatter = self.formatter or (lambda x: "{x: d}".format(x=x))
fmt_values = [formatter(x) for x in self.values]
return fmt_values
class Datetime64Formatter(GenericArrayFormatter):
- def __init__(self, values, nat_rep="NaT", date_format=None, **kwargs):
+ def __init__(
+ self,
+ values: Union[ndarray, "Series", DatetimeIndex, DatetimeArray],
+ nat_rep: str = "NaT",
+ date_format: None = None,
+ **kwargs
+ ):
super().__init__(values, **kwargs)
self.nat_rep = nat_rep
self.date_format = date_format
- def _format_strings(self):
+ def _format_strings(self) -> List[str]:
""" we by definition have DO NOT have a TZ """
values = self.values
@@ -1337,7 +1398,7 @@ def _format_strings(self):
class ExtensionArrayFormatter(GenericArrayFormatter):
- def _format_strings(self):
+ def _format_strings(self) -> List[str]:
values = self.values
if isinstance(values, (ABCIndexClass, ABCSeries)):
values = values._values
@@ -1363,7 +1424,11 @@ def _format_strings(self):
return fmt_values
-def format_percentiles(percentiles):
+def format_percentiles(
+ percentiles: Union[
+ ndarray, List[Union[int, float]], List[float], List[Union[str, float]]
+ ]
+) -> List[str]:
"""
Outputs rounded and formatted percentiles.
@@ -1429,7 +1494,7 @@ def format_percentiles(percentiles):
return [i + "%" for i in out]
-def _is_dates_only(values):
+def _is_dates_only(values: Union[ndarray, DatetimeArray, Index, DatetimeIndex]) -> bool:
# return a boolean if we are only dates (and don't have a timezone)
assert values.ndim == 1
@@ -1448,7 +1513,11 @@ def _is_dates_only(values):
return False
-def _format_datetime64(x, tz=None, nat_rep="NaT"):
+def _format_datetime64(
+ x: Union[NaTType, Timestamp],
+ tz: Optional[Union[tzfile, tzutc]] = None,
+ nat_rep: str = "NaT",
+) -> str:
if x is None or (is_scalar(x) and isna(x)):
return nat_rep
@@ -1461,7 +1530,9 @@ def _format_datetime64(x, tz=None, nat_rep="NaT"):
return str(x)
-def _format_datetime64_dateonly(x, nat_rep="NaT", date_format=None):
+def _format_datetime64_dateonly(
+ x: Union[NaTType, Timestamp], nat_rep: str = "NaT", date_format: None = None
+) -> str:
if x is None or (is_scalar(x) and isna(x)):
return nat_rep
@@ -1474,7 +1545,9 @@ def _format_datetime64_dateonly(x, nat_rep="NaT", date_format=None):
return x._date_repr
-def _get_format_datetime64(is_dates_only, nat_rep="NaT", date_format=None):
+def _get_format_datetime64(
+ is_dates_only: bool, nat_rep: str = "NaT", date_format: None = None
+) -> Callable:
if is_dates_only:
return lambda x, tz=None: _format_datetime64_dateonly(
@@ -1484,7 +1557,9 @@ def _get_format_datetime64(is_dates_only, nat_rep="NaT", date_format=None):
return lambda x, tz=None: _format_datetime64(x, tz=tz, nat_rep=nat_rep)
-def _get_format_datetime64_from_values(values, date_format):
+def _get_format_datetime64_from_values(
+ values: Union[ndarray, DatetimeArray, DatetimeIndex], date_format: Optional[str]
+) -> Optional[str]:
""" given values and a date_format, return a string format """
if isinstance(values, np.ndarray) and values.ndim > 1:
@@ -1499,7 +1574,7 @@ def _get_format_datetime64_from_values(values, date_format):
class Datetime64TZFormatter(Datetime64Formatter):
- def _format_strings(self):
+ def _format_strings(self) -> List[str]:
""" we by definition have a TZ """
values = self.values.astype(object)
@@ -1513,12 +1588,18 @@ def _format_strings(self):
class Timedelta64Formatter(GenericArrayFormatter):
- def __init__(self, values, nat_rep="NaT", box=False, **kwargs):
+ def __init__(
+ self,
+ values: Union[ndarray, TimedeltaIndex],
+ nat_rep: str = "NaT",
+ box: bool = False,
+ **kwargs
+ ):
super().__init__(values, **kwargs)
self.nat_rep = nat_rep
self.box = box
- def _format_strings(self):
+ def _format_strings(self) -> ndarray:
formatter = self.formatter or _get_format_timedelta64(
self.values, nat_rep=self.nat_rep, box=self.box
)
@@ -1526,7 +1607,11 @@ def _format_strings(self):
return fmt_values
-def _get_format_timedelta64(values, nat_rep="NaT", box=False):
+def _get_format_timedelta64(
+ values: Union[ndarray, TimedeltaIndex, TimedeltaArray],
+ nat_rep: str = "NaT",
+ box: bool = False,
+) -> Callable:
"""
Return a formatter function for a range of timedeltas.
These will all have the same format argument
@@ -1567,7 +1652,12 @@ def _formatter(x):
return _formatter
-def _make_fixed_width(strings, justify="right", minimum=None, adj=None):
+def _make_fixed_width(
+ strings: Union[ndarray, List[str]],
+ justify: str = "right",
+ minimum: Optional[int] = None,
+ adj: Optional[TextAdjustment] = None,
+) -> Union[ndarray, List[str]]:
if len(strings) == 0 or justify == "all":
return strings
@@ -1595,7 +1685,7 @@ def just(x):
return result
-def _trim_zeros_complex(str_complexes, na_rep="NaN"):
+def _trim_zeros_complex(str_complexes: ndarray, na_rep: str = "NaN") -> List[str]:
"""
Separates the real and imaginary parts from the complex number, and
executes the _trim_zeros_float method on each of those.
@@ -1613,7 +1703,9 @@ def separate_and_trim(str_complex, na_rep):
return ["".join(separate_and_trim(x, na_rep)) for x in str_complexes]
-def _trim_zeros_float(str_floats, na_rep="NaN"):
+def _trim_zeros_float(
+ str_floats: Union[ndarray, List[str]], na_rep: str = "NaN"
+) -> List[str]:
"""
Trims zeros, leaving just one before the decimal points if need be.
"""
@@ -1637,7 +1729,7 @@ def _cond(values):
return [x + "0" if x.endswith(".") and _is_number(x) else x for x in trimmed]
-def _has_names(index):
+def _has_names(index: Index) -> bool:
if isinstance(index, ABCMultiIndex):
return com._any_not_none(*index.names)
else:
@@ -1672,11 +1764,11 @@ class EngFormatter:
24: "Y",
}
- def __init__(self, accuracy=None, use_eng_prefix=False):
+ def __init__(self, accuracy: Optional[int] = None, use_eng_prefix: bool = False):
self.accuracy = accuracy
self.use_eng_prefix = use_eng_prefix
- def __call__(self, num):
+ def __call__(self, num: Union[float64, int, float]) -> str:
""" Formats a number in engineering notation, appending a letter
representing the power of 1000 of the original number. Some examples:
@@ -1743,7 +1835,7 @@ def __call__(self, num):
return formatted
-def set_eng_float_format(accuracy=3, use_eng_prefix=False):
+def set_eng_float_format(accuracy: int = 3, use_eng_prefix: bool = False) -> None:
"""
Alter default behavior on how float is formatted in DataFrame.
Format float in engineering format. By accuracy, we mean the number of
@@ -1756,7 +1848,7 @@ def set_eng_float_format(accuracy=3, use_eng_prefix=False):
set_option("display.column_space", max(12, accuracy + 9))
-def _binify(cols, line_width):
+def _binify(cols: List[int32], line_width: Union[int32, int]) -> List[int]:
adjoin_width = 1
bins = []
curr_width = 0
@@ -1776,7 +1868,9 @@ def _binify(cols, line_width):
return bins
-def get_level_lengths(levels, sentinel=""):
+def get_level_lengths(
+ levels: Any, sentinel: Union[bool, object, str] = ""
+) -> List[Dict[int, int]]:
"""For each index in each level the function returns lengths of indexes.
Parameters
@@ -1816,7 +1910,7 @@ def get_level_lengths(levels, sentinel=""):
return result
-def buffer_put_lines(buf, lines):
+def buffer_put_lines(buf: TextIO, lines: List[str]) -> None:
"""
Appends lines to a buffer.
diff --git a/pandas/io/formats/html.py b/pandas/io/formats/html.py
index 91e90a78d87a7..19305126f4e5f 100644
--- a/pandas/io/formats/html.py
+++ b/pandas/io/formats/html.py
@@ -4,7 +4,7 @@
from collections import OrderedDict
from textwrap import dedent
-from typing import Any, Dict, Iterable, List, Optional, Tuple, Union
+from typing import Any, Dict, Iterable, List, Optional, Tuple, Union, cast
from pandas._config import get_option
@@ -82,8 +82,9 @@ def row_levels(self) -> int:
def _get_columns_formatted_values(self) -> Iterable:
return self.columns
+ # https://github.com/python/mypy/issues/1237
@property
- def is_truncated(self) -> bool:
+ def is_truncated(self) -> bool: # type: ignore
return self.fmt.is_truncated
@property
@@ -458,6 +459,8 @@ def _write_hierarchical_rows(
# Insert ... row and adjust idx_values and
# level_lengths to take this into account.
ins_row = self.fmt.tr_row_num
+ # cast here since if truncate_v is True, self.fmt.tr_row_num is not None
+ ins_row = cast(int, ins_row)
inserted = False
for lnum, records in enumerate(level_lengths):
rec_new = {}
| Another pass of type annotations, following on from #27418. | https://api.github.com/repos/pandas-dev/pandas/pulls/27512 | 2019-07-22T01:02:12Z | 2019-07-24T11:58:30Z | 2019-07-24T11:58:30Z | 2019-07-24T23:33:10Z |
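The `cast` pattern used throughout this diff (narrowing an `Optional` after a runtime check) can be shown in isolation; the function below is hypothetical, standard library only:

```python
from typing import Optional, cast

def clip(text: str, max_len: Optional[int]) -> str:
    if max_len is None:
        return text
    # after the None check, cast tells the type checker that max_len
    # is an int; it has no runtime effect and performs no conversion
    n = cast(int, max_len)
    return text[:n]
```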
BUG: display.precision of negative complex numbers | diff --git a/doc/source/whatsnew/v0.25.1.rst b/doc/source/whatsnew/v0.25.1.rst
index 9007d1c06f197..169968314d70e 100644
--- a/doc/source/whatsnew/v0.25.1.rst
+++ b/doc/source/whatsnew/v0.25.1.rst
@@ -57,6 +57,7 @@ Timezones
Numeric
^^^^^^^
- Bug in :meth:`Series.interpolate` when using a timezone aware :class:`DatetimeIndex` (:issue:`27548`)
+- Bug when printing negative floating point complex numbers would raise an ``IndexError`` (:issue:`27484`)
-
-
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 9a84f1ddd87a5..a8c3252f39966 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -2593,12 +2593,12 @@ def memory_usage(self, index=True, deep=False):
... for t in dtypes])
>>> df = pd.DataFrame(data)
>>> df.head()
- int64 float64 complex128 object bool
- 0 1 1.0 1.0+0.0j 1 True
- 1 1 1.0 1.0+0.0j 1 True
- 2 1 1.0 1.0+0.0j 1 True
- 3 1 1.0 1.0+0.0j 1 True
- 4 1 1.0 1.0+0.0j 1 True
+ int64 float64 complex128 object bool
+ 0 1 1.0 1.000000+0.000000j 1 True
+ 1 1 1.0 1.000000+0.000000j 1 True
+ 2 1 1.0 1.000000+0.000000j 1 True
+ 3 1 1.0 1.000000+0.000000j 1 True
+ 4 1 1.0 1.000000+0.000000j 1 True
>>> df.memory_usage()
Index 128
diff --git a/pandas/io/formats/format.py b/pandas/io/formats/format.py
index 1aa9b43197b30..b47ab63b6d41f 100644
--- a/pandas/io/formats/format.py
+++ b/pandas/io/formats/format.py
@@ -5,6 +5,7 @@
from functools import partial
from io import StringIO
+import re
from shutil import get_terminal_size
from typing import (
TYPE_CHECKING,
@@ -1688,17 +1689,10 @@ def _trim_zeros_complex(str_complexes: ndarray, na_rep: str = "NaN") -> List[str
Separates the real and imaginary parts from the complex number, and
executes the _trim_zeros_float method on each of those.
"""
-
- def separate_and_trim(str_complex, na_rep):
- num_arr = str_complex.split("+")
- return (
- _trim_zeros_float([num_arr[0]], na_rep)
- + ["+"]
- + _trim_zeros_float([num_arr[1][:-1]], na_rep)
- + ["j"]
- )
-
- return ["".join(separate_and_trim(x, na_rep)) for x in str_complexes]
+ return [
+ "".join(_trim_zeros_float(re.split(r"([j+-])", x), na_rep))
+ for x in str_complexes
+ ]
def _trim_zeros_float(
diff --git a/pandas/tests/io/formats/test_format.py b/pandas/tests/io/formats/test_format.py
index 818bbc566aca8..ad47f714c9550 100644
--- a/pandas/tests/io/formats/test_format.py
+++ b/pandas/tests/io/formats/test_format.py
@@ -1537,7 +1537,7 @@ def test_to_string_float_index(self):
assert result == expected
def test_to_string_complex_float_formatting(self):
- # GH #25514
+ # GH #25514, 25745
with pd.option_context("display.precision", 5):
df = DataFrame(
{
@@ -1545,6 +1545,7 @@ def test_to_string_complex_float_formatting(self):
(0.4467846931321966 + 0.0715185102060818j),
(0.2739442392974528 + 0.23515228785438969j),
(0.26974928742135185 + 0.3250604054898979j),
+ (-1j),
]
}
)
@@ -1552,7 +1553,8 @@ def test_to_string_complex_float_formatting(self):
expected = (
" x\n0 0.44678+0.07152j\n"
"1 0.27394+0.23515j\n"
- "2 0.26975+0.32506j"
+ "2 0.26975+0.32506j\n"
+ "3 -0.00000-1.00000j"
)
assert result == expected
| - [x] closes #27484
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/27511 | 2019-07-22T00:58:27Z | 2019-07-25T15:58:18Z | 2019-07-25T15:58:18Z | 2019-07-25T20:06:59Z |
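The key trick in the fix is `re.split` with a capturing group: the sign and `j` delimiters are kept in the result, so a leading or interior minus is no longer lost the way it was with the old `str.split("+")` approach. A standalone sketch:

```python
import re

# splitting on a capturing group keeps the delimiters ('+', '-', 'j')
parts = re.split(r"([j+-])", "-0.00000-1.00000j")
# parts == ['', '-', '0.00000', '-', '1.00000', 'j', '']
```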
BUG: Retain tz transformation in groupby.transform | diff --git a/doc/source/whatsnew/v0.25.1.rst b/doc/source/whatsnew/v0.25.1.rst
index 6234bc0f7bd35..69f82f7f85040 100644
--- a/doc/source/whatsnew/v0.25.1.rst
+++ b/doc/source/whatsnew/v0.25.1.rst
@@ -120,7 +120,7 @@ Plotting
Groupby/resample/rolling
^^^^^^^^^^^^^^^^^^^^^^^^
--
+- Bug in :meth:`pandas.core.groupby.DataFrameGroupBy.transform` where applying a timezone conversion lambda function would drop timezone information (:issue:`27496`)
-
-
diff --git a/pandas/core/groupby/generic.py b/pandas/core/groupby/generic.py
index 7fd0ca94e7997..5b9cec6903749 100644
--- a/pandas/core/groupby/generic.py
+++ b/pandas/core/groupby/generic.py
@@ -42,7 +42,7 @@
from pandas.core.base import DataError, SpecificationError
import pandas.core.common as com
from pandas.core.frame import DataFrame
-from pandas.core.generic import NDFrame, _shared_docs
+from pandas.core.generic import ABCDataFrame, ABCSeries, NDFrame, _shared_docs
from pandas.core.groupby import base
from pandas.core.groupby.groupby import GroupBy, _apply_docs, _transform_template
from pandas.core.index import Index, MultiIndex
@@ -1025,8 +1025,8 @@ def transform(self, func, *args, **kwargs):
object.__setattr__(group, "name", name)
res = wrapper(group)
- if hasattr(res, "values"):
- res = res.values
+ if isinstance(res, (ABCDataFrame, ABCSeries)):
+ res = res._values
indexer = self._get_index(name)
s = klass(res, indexer)
diff --git a/pandas/tests/groupby/test_transform.py b/pandas/tests/groupby/test_transform.py
index 1eab3ba253f4d..9a8b7cf18f2c0 100644
--- a/pandas/tests/groupby/test_transform.py
+++ b/pandas/tests/groupby/test_transform.py
@@ -1001,3 +1001,27 @@ def test_ffill_not_in_axis(func, key, val):
expected = df
assert_frame_equal(result, expected)
+
+
+def test_transform_lambda_with_datetimetz():
+ # GH 27496
+ df = DataFrame(
+ {
+ "time": [
+ Timestamp("2010-07-15 03:14:45"),
+ Timestamp("2010-11-19 18:47:06"),
+ ],
+ "timezone": ["Etc/GMT+4", "US/Eastern"],
+ }
+ )
+ result = df.groupby(["timezone"])["time"].transform(
+ lambda x: x.dt.tz_localize(x.name)
+ )
+ expected = Series(
+ [
+ Timestamp("2010-07-15 03:14:45", tz="Etc/GMT+4"),
+ Timestamp("2010-11-19 18:47:06", tz="US/Eastern"),
+ ],
+ name="time",
+ )
+ assert_series_equal(result, expected)
| - [x] closes #27496
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/27510 | 2019-07-21T21:00:49Z | 2019-07-22T15:27:40Z | 2019-07-22T15:27:40Z | 2019-07-22T16:47:29Z |
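The heart of the PR above is swapping a duck-typed `hasattr(res, "values")` check for explicit `isinstance(res, (ABCDataFrame, ABCSeries))` plus `._values`. A minimal, stdlib-only sketch of why the broad check lost timezone information — `TzSeries` here is a made-up stand-in for a tz-aware `Series` (where `.values` coerces to a bare numpy representation while `._values` keeps the tz-aware array), not real pandas code:

```python
class TzSeries:
    """Stand-in for a tz-aware Series: .values loses the tz, ._values keeps it."""

    def __init__(self, data, tz):
        self._data = data
        self.tz = tz

    @property
    def values(self):
        # analogous to Series.values on datetime64[ns, tz] data: tz is dropped
        return [(v, None) for v in self._data]

    @property
    def _values(self):
        # analogous to Series._values: tz-aware representation is preserved
        return [(v, self.tz) for v in self._data]


def unwrap_old(res):
    # old behavior: duck typing, anything with a .values attribute is unwrapped
    if hasattr(res, "values"):
        res = res.values
    return res


def unwrap_new(res):
    # new behavior: only unwrap known container types, and go through ._values
    if isinstance(res, TzSeries):  # stands in for (ABCDataFrame, ABCSeries)
        res = res._values
    return res


s = TzSeries(["2010-07-15 03:14:45"], tz="US/Eastern")
print(unwrap_old(s))  # [('2010-07-15 03:14:45', None)] -- tz lost
print(unwrap_new(s))  # [('2010-07-15 03:14:45', 'US/Eastern')] -- tz retained
```

The design point mirrored from the diff: `hasattr` matched more objects than intended and always routed through the lossy `.values` path, whereas the explicit `isinstance` check scopes the unwrapping to pandas containers and uses the metadata-preserving `._values`.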
remove references to v0.19 in docs and code | diff --git a/doc/source/getting_started/basics.rst b/doc/source/getting_started/basics.rst
index 2edd242d8cad9..3f6f56376861f 100644
--- a/doc/source/getting_started/basics.rst
+++ b/doc/source/getting_started/basics.rst
@@ -2061,8 +2061,6 @@ Convert a subset of columns to a specified type using :meth:`~DataFrame.astype`.
dft
dft.dtypes
-.. versionadded:: 0.19.0
-
Convert certain columns to a specific dtype by passing a dict to :meth:`~DataFrame.astype`.
.. ipython:: python
diff --git a/doc/source/user_guide/categorical.rst b/doc/source/user_guide/categorical.rst
index 7dca34385c1ee..8ca96ba0daa5e 100644
--- a/doc/source/user_guide/categorical.rst
+++ b/doc/source/user_guide/categorical.rst
@@ -834,8 +834,6 @@ See also the section on :ref:`merge dtypes<merging.dtypes>` for notes about pres
Unioning
~~~~~~~~
-.. versionadded:: 0.19.0
-
If you want to combine categoricals that do not necessarily have the same
categories, the :func:`~pandas.api.types.union_categoricals` function will
combine a list-like of categoricals. The new categories will be the union of
diff --git a/doc/source/user_guide/computation.rst b/doc/source/user_guide/computation.rst
index 4f44fcaab63d4..22017a2aa8b42 100644
--- a/doc/source/user_guide/computation.rst
+++ b/doc/source/user_guide/computation.rst
@@ -408,9 +408,7 @@ For some windowing functions, additional parameters must be specified:
Time-aware rolling
~~~~~~~~~~~~~~~~~~
-.. versionadded:: 0.19.0
-
-New in version 0.19.0 are the ability to pass an offset (or convertible) to a ``.rolling()`` method and have it produce
+It is possible to pass an offset (or convertible) to a ``.rolling()`` method and have it produce
variable sized windows based on the passed time window. For each time point, this includes all preceding values occurring
within the indicated time delta.
diff --git a/doc/source/user_guide/io.rst b/doc/source/user_guide/io.rst
index 8b21e4f29d994..135a6b89a9e55 100644
--- a/doc/source/user_guide/io.rst
+++ b/doc/source/user_guide/io.rst
@@ -454,8 +454,6 @@ worth trying.
Specifying categorical dtype
''''''''''''''''''''''''''''
-.. versionadded:: 0.19.0
-
``Categorical`` columns can be parsed directly by specifying ``dtype='category'`` or
``dtype=CategoricalDtype(categories, ordered)``.
@@ -2193,8 +2191,6 @@ With max_level=1 the following snippet normalizes until 1st nesting level of the
Line delimited json
'''''''''''''''''''
-.. versionadded:: 0.19.0
-
pandas is able to read and write line-delimited json files that are common in data processing pipelines
using Hadoop or Spark.
@@ -2492,16 +2488,12 @@ Specify values that should be converted to NaN:
dfs = pd.read_html(url, na_values=['No Acquirer'])
-.. versionadded:: 0.19
-
Specify whether to keep the default set of NaN values:
.. code-block:: python
dfs = pd.read_html(url, keep_default_na=False)
-.. versionadded:: 0.19
-
Specify converters for columns. This is useful for numerical text data that has
leading zeros. By default columns that are numerical are cast to numeric
types and the leading zeros are lost. To avoid this, we can convert these
@@ -2513,8 +2505,6 @@ columns to strings.
dfs = pd.read_html(url_mcc, match='Telekom Albania', header=0,
converters={'MNC': str})
-.. versionadded:: 0.19
-
Use some combination of the above:
.. code-block:: python
diff --git a/doc/source/user_guide/merging.rst b/doc/source/user_guide/merging.rst
index 6e63e672bb968..4c0d3b75a4f79 100644
--- a/doc/source/user_guide/merging.rst
+++ b/doc/source/user_guide/merging.rst
@@ -819,8 +819,6 @@ The ``indicator`` argument will also accept string arguments, in which case the
Merge dtypes
~~~~~~~~~~~~
-.. versionadded:: 0.19.0
-
Merging will preserve the dtype of the join keys.
.. ipython:: python
@@ -1386,8 +1384,6 @@ fill/interpolate missing data:
Merging asof
~~~~~~~~~~~~
-.. versionadded:: 0.19.0
-
A :func:`merge_asof` is similar to an ordered left-join except that we match on
nearest key rather than equal keys. For each row in the ``left`` ``DataFrame``,
we select the last row in the ``right`` ``DataFrame`` whose ``on`` key is less
diff --git a/doc/source/user_guide/text.rst b/doc/source/user_guide/text.rst
index 3255202d09a48..f9ef0f4984995 100644
--- a/doc/source/user_guide/text.rst
+++ b/doc/source/user_guide/text.rst
@@ -509,8 +509,6 @@ then ``extractall(pat).xs(0, level='match')`` gives the same result as
``Index`` also supports ``.str.extractall``. It returns a ``DataFrame`` which has the
same result as a ``Series.str.extractall`` with a default index (starts from 0).
-.. versionadded:: 0.19.0
-
.. ipython:: python
pd.Index(["a1a2", "b1", "c1"]).str.extractall(two_groups)
diff --git a/doc/source/user_guide/timeseries.rst b/doc/source/user_guide/timeseries.rst
index 5bdff441d9e1f..edfad9c15758c 100644
--- a/doc/source/user_guide/timeseries.rst
+++ b/doc/source/user_guide/timeseries.rst
@@ -1928,8 +1928,6 @@ objects:
Period dtypes
~~~~~~~~~~~~~
-.. versionadded:: 0.19.0
-
``PeriodIndex`` has a custom ``period`` dtype. This is a pandas extension
dtype similar to the :ref:`timezone aware dtype <timeseries.timezone_series>` (``datetime64[ns, tz]``).
diff --git a/doc/source/user_guide/visualization.rst b/doc/source/user_guide/visualization.rst
index 6589900c8491c..96a896cae29a6 100644
--- a/doc/source/user_guide/visualization.rst
+++ b/doc/source/user_guide/visualization.rst
@@ -438,10 +438,6 @@ columns:
.. _visualization.box.return:
-.. warning::
-
- The default changed from ``'dict'`` to ``'axes'`` in version 0.19.0.
-
In ``boxplot``, the return type can be controlled by the ``return_type``, keyword. The valid choices are ``{"axes", "dict", "both", None}``.
Faceting, created by ``DataFrame.boxplot`` with the ``by``
keyword, will affect the output type as well:
diff --git a/pandas/core/algorithms.py b/pandas/core/algorithms.py
index 2c38e071d3d44..6e24e6a38b5ab 100644
--- a/pandas/core/algorithms.py
+++ b/pandas/core/algorithms.py
@@ -816,8 +816,6 @@ def duplicated(values, keep="first"):
"""
Return boolean ndarray denoting duplicate values.
- .. versionadded:: 0.19.0
-
Parameters
----------
values : ndarray-like
diff --git a/pandas/core/arrays/categorical.py b/pandas/core/arrays/categorical.py
index df5cd12a479f0..fbcd6d55b1444 100644
--- a/pandas/core/arrays/categorical.py
+++ b/pandas/core/arrays/categorical.py
@@ -514,9 +514,6 @@ def astype(self, dtype, copy=True):
By default, astype always returns a newly allocated object.
If copy is set to False and dtype is categorical, the original
object is returned.
-
- .. versionadded:: 0.19.0
-
"""
if is_categorical_dtype(dtype):
# GH 10696/18593
diff --git a/pandas/core/base.py b/pandas/core/base.py
index 9480e2e425f79..3f7a36764049e 100644
--- a/pandas/core/base.py
+++ b/pandas/core/base.py
@@ -1463,8 +1463,6 @@ def is_monotonic(self):
Return boolean if values in the object are
monotonic_increasing.
- .. versionadded:: 0.19.0
-
Returns
-------
bool
@@ -1481,8 +1479,6 @@ def is_monotonic_decreasing(self):
Return boolean if values in the object are
monotonic_decreasing.
- .. versionadded:: 0.19.0
-
Returns
-------
bool
diff --git a/pandas/core/computation/ops.py b/pandas/core/computation/ops.py
index 9e6928372808e..59ed7143e6cd2 100644
--- a/pandas/core/computation/ops.py
+++ b/pandas/core/computation/ops.py
@@ -306,8 +306,6 @@ def _cast_inplace(terms, acceptable_dtypes, dtype):
acceptable_dtypes : list of acceptable numpy.dtype
Will not cast if term's dtype in this list.
- .. versionadded:: 0.19.0
-
dtype : str or numpy.dtype
The dtype to cast to.
"""
diff --git a/pandas/core/dtypes/concat.py b/pandas/core/dtypes/concat.py
index ac74ad5726a99..ef1f1ccdb4e16 100644
--- a/pandas/core/dtypes/concat.py
+++ b/pandas/core/dtypes/concat.py
@@ -225,8 +225,6 @@ def union_categoricals(to_union, sort_categories=False, ignore_order=False):
Combine list-like of Categorical-like, unioning categories. All
categories must have the same dtype.
- .. versionadded:: 0.19.0
-
Parameters
----------
to_union : list-like of Categorical, CategoricalIndex,
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 8fdfc960603c6..967815c47511c 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -2043,9 +2043,6 @@ def to_stata(
variable_labels : dict
Dictionary containing columns as keys and variable labels as
values. Each label must be 80 characters or smaller.
-
- .. versionadded:: 0.19.0
-
version : {114, 117}, default 114
Version to use in the output dta file. Version 114 can be used
read by Stata 10 and later. Version 117 can be read by Stata 13
@@ -2074,8 +2071,6 @@ def to_stata(
* Column listed in convert_dates is not in DataFrame
* Categorical label contains more than 32,000 characters
- .. versionadded:: 0.19.0
-
See Also
--------
read_stata : Import Stata data files.
@@ -2265,9 +2260,6 @@ def to_html(
border : int
A ``border=border`` attribute is included in the opening
`<table>` tag. Default ``pd.options.display.html.border``.
-
- .. versionadded:: 0.19.0
-
table_id : str, optional
A css id is included in the opening `<table>` tag if specified.
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index ecc421df3fbb1..d6bbc19084e10 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -2328,8 +2328,6 @@ def to_json(
throw ValueError if incorrect 'orient' since others are not list
like.
- .. versionadded:: 0.19.0
-
compression : {'infer', 'gzip', 'bz2', 'zip', 'xz', None}
A string representing the compression to use in the output file,
@@ -7088,8 +7086,6 @@ def asof(self, where, subset=None):
In case of a :class:`~pandas.DataFrame`, the last row without NaN
considering only the subset of columns (if not `None`)
- .. versionadded:: 0.19.0 For DataFrame
-
If there is no good value, NaN is returned for a Series or
a Series of NaN values for a DataFrame
@@ -8207,14 +8203,10 @@ def resample(
For a DataFrame, column to use instead of index for resampling.
Column must be datetime-like.
- .. versionadded:: 0.19.0
-
level : str or int, optional
For a MultiIndex, level (name or number) to use for
resampling. `level` must be datetime-like.
- .. versionadded:: 0.19.0
-
Returns
-------
Resampler object
diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index bf94bf7ab6489..3f9b38e84c066 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -788,8 +788,6 @@ def view(self, cls=None):
satisfied, the original data is used to create a new Index
or the original Index is returned.
- .. versionadded:: 0.19.0
-
Returns
-------
Index
@@ -3993,8 +3991,6 @@ def memory_usage(self, deep=False):
entries are from self where cond is True and otherwise are from
other.
- .. versionadded:: 0.19.0
-
Parameters
----------
cond : boolean array-like with the same length as self
diff --git a/pandas/core/reshape/merge.py b/pandas/core/reshape/merge.py
index c1a07c129f7cd..39cdd450e17c6 100644
--- a/pandas/core/reshape/merge.py
+++ b/pandas/core/reshape/merge.py
@@ -217,8 +217,6 @@ def merge_ordered(
* outer: use union of keys from both frames (SQL: full outer join)
* inner: use intersection of keys from both frames (SQL: inner join)
- .. versionadded:: 0.19.0
-
Returns
-------
merged : DataFrame
@@ -328,8 +326,6 @@ def merge_asof(
Optionally match on equivalent keys with 'by' before searching with 'on'.
- .. versionadded:: 0.19.0
-
Parameters
----------
left : DataFrame
@@ -345,26 +341,14 @@ def merge_asof(
Field name to join on in right DataFrame.
left_index : boolean
Use the index of the left DataFrame as the join key.
-
- .. versionadded:: 0.19.2
-
right_index : boolean
Use the index of the right DataFrame as the join key.
-
- .. versionadded:: 0.19.2
-
by : column name or list of column names
Match on these columns before performing merge operation.
left_by : column name
Field names to match on in the left DataFrame.
-
- .. versionadded:: 0.19.2
-
right_by : column name
Field names to match on in the right DataFrame.
-
- .. versionadded:: 0.19.2
-
suffixes : 2-length sequence (tuple, list, ...)
Suffix to apply to overlapping column names in the left and right
side, respectively.
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 1f63d8d92f1c0..90f069542d332 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -2700,9 +2700,6 @@ def append(self, to_append, ignore_index=False, verify_integrity=False):
Series to append with self.
ignore_index : bool, default False
If True, do not use the index labels.
-
- .. versionadded:: 0.19.0
-
verify_integrity : bool, default False
If True, raise Exception on creating index with duplicates.
diff --git a/pandas/core/sorting.py b/pandas/core/sorting.py
index 523c4dc5e867b..5f3ed87424d0e 100644
--- a/pandas/core/sorting.py
+++ b/pandas/core/sorting.py
@@ -399,8 +399,6 @@ def safe_sort(values, labels=None, na_sentinel=-1, assume_unique=False, verify=T
``values`` should be unique if ``labels`` is not None.
Safe for use with mixed types (int, str), orders ints before strs.
- .. versionadded:: 0.19.0
-
Parameters
----------
values : list-like
diff --git a/pandas/core/tools/numeric.py b/pandas/core/tools/numeric.py
index e1a976b874c25..a0e2c8d9cab65 100644
--- a/pandas/core/tools/numeric.py
+++ b/pandas/core/tools/numeric.py
@@ -59,8 +59,6 @@ def to_numeric(arg, errors="raise", downcast=None):
checked satisfy that specification, no downcasting will be
performed on the data.
- .. versionadded:: 0.19.0
-
Returns
-------
ret : numeric if parsing succeeded.
diff --git a/pandas/core/util/hashing.py b/pandas/core/util/hashing.py
index f5ab81ad9089e..73e126cf230a5 100644
--- a/pandas/core/util/hashing.py
+++ b/pandas/core/util/hashing.py
@@ -60,8 +60,6 @@ def hash_pandas_object(
"""
Return a data hash of the Index/Series/DataFrame
- .. versionadded:: 0.19.2
-
Parameters
----------
index : boolean, default True
@@ -245,8 +243,6 @@ def hash_array(vals, encoding="utf8", hash_key=None, categorize=True):
"""
Given a 1d array, return an array of deterministic integers.
- .. versionadded:: 0.19.2
-
Parameters
----------
vals : ndarray, Categorical
diff --git a/pandas/core/window.py b/pandas/core/window.py
index 20d5453cc43a2..7f4ec3e96c75b 100644
--- a/pandas/core/window.py
+++ b/pandas/core/window.py
@@ -485,8 +485,7 @@ class Window(_Window):
If its an offset then this will be the time period of each window. Each
window will be a variable sized based on the observations included in
- the time-period. This is only valid for datetimelike indexes. This is
- new in 0.19.0
+ the time-period. This is only valid for datetimelike indexes.
min_periods : int, default None
Minimum number of observations in window required to have a value
(otherwise result is NA). For a window that is specified by an offset,
diff --git a/pandas/io/excel/_base.py b/pandas/io/excel/_base.py
index 763b12949ba0a..7afc234446a71 100644
--- a/pandas/io/excel/_base.py
+++ b/pandas/io/excel/_base.py
@@ -120,14 +120,8 @@
content.
true_values : list, default None
Values to consider as True.
-
- .. versionadded:: 0.19.0
-
false_values : list, default None
Values to consider as False.
-
- .. versionadded:: 0.19.0
-
skiprows : list-like
Rows to skip at the beginning (0-indexed).
nrows : int, default None
diff --git a/pandas/io/formats/format.py b/pandas/io/formats/format.py
index 11b69064723c5..1589e339bc884 100644
--- a/pandas/io/formats/format.py
+++ b/pandas/io/formats/format.py
@@ -846,8 +846,6 @@ def to_html(
border : int
A ``border=border`` attribute is included in the opening
``<table>`` tag. Default ``pd.options.display.html.border``.
-
- .. versionadded:: 0.19.0
"""
from pandas.io.formats.html import HTMLFormatter, NotebookFormatter
diff --git a/pandas/io/html.py b/pandas/io/html.py
index 12c8ec4214b38..9d2647f226f00 100644
--- a/pandas/io/html.py
+++ b/pandas/io/html.py
@@ -1011,32 +1011,22 @@ def read_html(
Character to recognize as decimal point (e.g. use ',' for European
data).
- .. versionadded:: 0.19.0
-
converters : dict, default None
Dict of functions for converting values in certain columns. Keys can
either be integers or column labels, values are functions that take one
input argument, the cell (not column) content, and return the
transformed content.
- .. versionadded:: 0.19.0
-
na_values : iterable, default None
Custom NA values
- .. versionadded:: 0.19.0
-
keep_default_na : bool, default True
If na_values are specified and keep_default_na is False the default NaN
values are overridden, otherwise they're appended to
- .. versionadded:: 0.19.0
-
displayed_only : bool, default True
Whether elements with "display: none" should be parsed
- .. versionadded:: 0.23.0
-
Returns
-------
dfs : list of DataFrames
diff --git a/pandas/io/json/_json.py b/pandas/io/json/_json.py
index ada7e6f43125d..4af362d8343f2 100644
--- a/pandas/io/json/_json.py
+++ b/pandas/io/json/_json.py
@@ -458,13 +458,9 @@ def read_json(
encoding : str, default is 'utf-8'
The encoding to use to decode py3 bytes.
- .. versionadded:: 0.19.0
-
lines : bool, default False
Read the file as a json object per line.
- .. versionadded:: 0.19.0
-
chunksize : int, optional
Return JsonReader object for iteration.
See the `line-delimited json docs
diff --git a/pandas/io/pytables.py b/pandas/io/pytables.py
index 3433d25609255..83e4aac30f2a1 100644
--- a/pandas/io/pytables.py
+++ b/pandas/io/pytables.py
@@ -303,7 +303,6 @@ def read_hdf(path_or_buf, key=None, mode="r", **kwargs):
such as a file handler (e.g. via builtin ``open`` function)
or ``StringIO``.
- .. versionadded:: 0.19.0 support for pathlib, py.path.
.. versionadded:: 0.21.0 support for __fspath__ protocol.
key : object, optional
diff --git a/pandas/io/stata.py b/pandas/io/stata.py
index 8dbcee829ee1e..32122a9daa1db 100644
--- a/pandas/io/stata.py
+++ b/pandas/io/stata.py
@@ -2096,8 +2096,6 @@ class StataWriter(StataParser):
Dictionary containing columns as keys and variable labels as values.
Each label must be 80 characters or smaller.
- .. versionadded:: 0.19.0
-
Returns
-------
writer : StataWriter instance
diff --git a/pandas/tseries/offsets.py b/pandas/tseries/offsets.py
index 07b9598f15902..2edc6b99db721 100644
--- a/pandas/tseries/offsets.py
+++ b/pandas/tseries/offsets.py
@@ -1396,8 +1396,6 @@ class SemiMonthEnd(SemiMonthOffset):
Two DateOffset's per month repeating on the last
day of the month and day_of_month.
- .. versionadded:: 0.19.0
-
Parameters
----------
n : int
@@ -1457,8 +1455,6 @@ class SemiMonthBegin(SemiMonthOffset):
Two DateOffset's per month repeating on the first
day of the month and day_of_month.
- .. versionadded:: 0.19.0
-
Parameters
----------
n : int
| - [x] xref #27507
| https://api.github.com/repos/pandas-dev/pandas/pulls/27508 | 2019-07-21T18:41:29Z | 2019-07-22T12:08:40Z | 2019-07-22T12:08:39Z | 2019-07-23T09:11:06Z |
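The PR above removes dozens of stale `.. versionadded:: 0.19` directives by hand. A hedged sketch of how such directives could be located mechanically — the regex follows the Sphinx directive syntax visible in the diff, but `find_stale_directives` is an illustrative helper, not part of pandas' tooling:

```python
import re

# Matches Sphinx ".. versionadded:: 0.19" / ".. versionchanged:: 0.19.2" lines,
# with optional leading indentation, as seen throughout the diff above.
STALE_RE = re.compile(r"^\s*\.\.\s+version(added|changed)::\s+0\.19(\.\d+)?\s*$")


def find_stale_directives(text):
    """Return (line_number, line) pairs for versionadded/changed 0.19.x markers."""
    return [
        (lineno, line)
        for lineno, line in enumerate(text.splitlines(), start=1)
        if STALE_RE.match(line)
    ]


doc = """\
Specifying categorical dtype
''''''''''''''''''''''''''''

.. versionadded:: 0.19.0

``Categorical`` columns can be parsed directly.
"""
print(find_stale_directives(doc))  # [(4, '.. versionadded:: 0.19.0')]
```

Directives embedded in prose (e.g. ``.. versionadded:: 0.19.0 support for pathlib, py.path.`` in ``pytables.py``) carry trailing text and would need a looser pattern plus manual review, which is presumably why the cleanup was done by hand.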
remove last references to v0.18 in docs and code | diff --git a/doc/source/user_guide/advanced.rst b/doc/source/user_guide/advanced.rst
index a42ab4f0255bd..22a9791ffde30 100644
--- a/doc/source/user_guide/advanced.rst
+++ b/doc/source/user_guide/advanced.rst
@@ -810,15 +810,10 @@ values **not** in the categories, similarly to how you can reindex **any** panda
Int64Index and RangeIndex
~~~~~~~~~~~~~~~~~~~~~~~~~
-.. warning::
-
- Indexing on an integer-based Index with floats has been clarified in 0.18.0, for a summary of the changes, see :ref:`here <whatsnew_0180.float_indexers>`.
-
-:class:`Int64Index` is a fundamental basic index in pandas.
-This is an immutable array implementing an ordered, sliceable set.
-Prior to 0.18.0, the ``Int64Index`` would provide the default index for all ``NDFrame`` objects.
+:class:`Int64Index` is a fundamental basic index in pandas. This is an immutable array
+implementing an ordered, sliceable set.
-:class:`RangeIndex` is a sub-class of ``Int64Index`` added in version 0.18.0, now providing the default index for all ``NDFrame`` objects.
+:class:`RangeIndex` is a sub-class of ``Int64Index`` that provides the default index for all ``NDFrame`` objects.
``RangeIndex`` is an optimized version of ``Int64Index`` that can represent a monotonic ordered set. These are analogous to Python `range types <https://docs.python.org/3/library/stdtypes.html#typesseq-range>`__.
.. _indexing.float64index:
@@ -880,16 +875,6 @@ In non-float indexes, slicing using floats will raise a ``TypeError``.
In [1]: pd.Series(range(5))[3.5:4.5]
TypeError: the slice start [3.5] is not a proper indexer for this index type (Int64Index)
-.. warning::
-
- Using a scalar float indexer for ``.iloc`` has been removed in 0.18.0, so the following will raise a ``TypeError``:
-
- .. code-block:: ipython
-
- In [3]: pd.Series(range(5)).iloc[3.0]
- TypeError: cannot do positional indexing on <class 'pandas.indexes.range.RangeIndex'> with these indexers [3.0] of <type 'float'>
-
-
Here is a typical use-case for using this type of indexing. Imagine that you have a somewhat
irregular timedelta-like indexing scheme, but the data is recorded as floats. This could, for
example, be millisecond offsets.
diff --git a/doc/source/user_guide/computation.rst b/doc/source/user_guide/computation.rst
index 4f44fcaab63d4..8ab075c0d6cac 100644
--- a/doc/source/user_guide/computation.rst
+++ b/doc/source/user_guide/computation.rst
@@ -893,10 +893,9 @@ Therefore, there is an assumption that :math:`x_0` is not an ordinary value
but rather an exponentially weighted moment of the infinite series up to that
point.
-One must have :math:`0 < \alpha \leq 1`, and while since version 0.18.0
-it has been possible to pass :math:`\alpha` directly, it's often easier
-to think about either the **span**, **center of mass (com)** or **half-life**
-of an EW moment:
+One must have :math:`0 < \alpha \leq 1`, and while it is possible to pass
+:math:`\alpha` directly, it's often easier to think about either the
+**span**, **center of mass (com)** or **half-life** of an EW moment:
.. math::
diff --git a/doc/source/user_guide/enhancingperf.rst b/doc/source/user_guide/enhancingperf.rst
index b0d08f23dbc82..b77bfb9778837 100644
--- a/doc/source/user_guide/enhancingperf.rst
+++ b/doc/source/user_guide/enhancingperf.rst
@@ -628,8 +628,6 @@ new or modified columns is returned and the original frame is unchanged.
df.eval('e = a - c', inplace=False)
df
-.. versionadded:: 0.18.0
-
As a convenience, multiple assignments can be performed by using a
multi-line string.
diff --git a/doc/source/user_guide/indexing.rst b/doc/source/user_guide/indexing.rst
index 2c27ec12f6923..e3b75afcf945e 100644
--- a/doc/source/user_guide/indexing.rst
+++ b/doc/source/user_guide/indexing.rst
@@ -36,10 +36,6 @@ this area.
should be avoided. See :ref:`Returning a View versus Copy
<indexing.view_versus_copy>`.
-.. warning::
-
- Indexing on an integer-based Index with floats has been clarified in 0.18.0, for a summary of the changes, see :ref:`here <whatsnew_0180.float_indexers>`.
-
See the :ref:`MultiIndex / Advanced Indexing <advanced>` for ``MultiIndex`` and more advanced indexing documentation.
See the :ref:`cookbook<cookbook.selection>` for some advanced strategies.
@@ -83,8 +79,6 @@ of multi-axis indexing.
* A ``callable`` function with one argument (the calling Series or DataFrame) and
that returns valid output for indexing (one of the above).
- .. versionadded:: 0.18.1
-
See more at :ref:`Selection by Position <indexing.integer>`,
:ref:`Advanced Indexing <advanced>` and :ref:`Advanced
Hierarchical <advanced.advanced_hierarchical>`.
@@ -1101,9 +1095,7 @@ This is equivalent to (but faster than) the following.
df2 = df.copy()
df.apply(lambda x, y: x.where(x > 0, y), y=df['A'])
-.. versionadded:: 0.18.1
-
-Where can accept a callable as condition and ``other`` arguments. The function must
+``where`` can accept a callable as condition and ``other`` arguments. The function must
be with one argument (the calling Series or DataFrame) and that returns valid output
as condition and ``other`` argument.
diff --git a/doc/source/user_guide/io.rst b/doc/source/user_guide/io.rst
index 8b21e4f29d994..c6911d92f7965 100644
--- a/doc/source/user_guide/io.rst
+++ b/doc/source/user_guide/io.rst
@@ -296,7 +296,6 @@ compression : {``'infer'``, ``'gzip'``, ``'bz2'``, ``'zip'``, ``'xz'``, ``None``
the ZIP file must contain only one data file to be read in.
Set to ``None`` for no decompression.
- .. versionadded:: 0.18.1 support for 'zip' and 'xz' compression.
.. versionchanged:: 0.24.0 'infer' option added and set to default.
thousands : str, default ``None``
Thousands separator.
diff --git a/doc/source/user_guide/reshaping.rst b/doc/source/user_guide/reshaping.rst
index 06ad8632712c2..f118fe84d523a 100644
--- a/doc/source/user_guide/reshaping.rst
+++ b/doc/source/user_guide/reshaping.rst
@@ -484,8 +484,6 @@ not contain any instances of a particular category.
Normalization
~~~~~~~~~~~~~
-.. versionadded:: 0.18.1
-
Frequency tables can also be normalized to show percentages rather than counts
using the ``normalize`` argument:
diff --git a/doc/source/user_guide/style.ipynb b/doc/source/user_guide/style.ipynb
index 8aa1f63ecf22a..006f928c037bd 100644
--- a/doc/source/user_guide/style.ipynb
+++ b/doc/source/user_guide/style.ipynb
@@ -6,10 +6,6 @@
"source": [
"# Styling\n",
"\n",
- "*New in version 0.17.1*\n",
- "\n",
- "<span style=\"color: red\">*Provisional: This is a new feature and still under development. We'll be adding features and possibly making breaking changes in future releases. We'd love to hear your feedback.*</span>\n",
- "\n",
"This document is written as a Jupyter Notebook, and can be viewed or downloaded [here](http://nbviewer.ipython.org/github/pandas-dev/pandas/blob/master/doc/source/style.ipynb).\n",
"\n",
"You can apply **conditional formatting**, the visual styling of a DataFrame\n",
diff --git a/doc/source/user_guide/text.rst b/doc/source/user_guide/text.rst
index 3255202d09a48..32a5343773507 100644
--- a/doc/source/user_guide/text.rst
+++ b/doc/source/user_guide/text.rst
@@ -366,13 +366,12 @@ Extract first match in each subject (extract)
.. warning::
- In version 0.18.0, ``extract`` gained the ``expand`` argument. When
- ``expand=False`` it returns a ``Series``, ``Index``, or
+ Before version 0.23, argument ``expand`` of the ``extract`` method defaulted to
+ ``False``. When ``expand=False``, ``expand`` returns a ``Series``, ``Index``, or
``DataFrame``, depending on the subject and regular expression
- pattern (same behavior as pre-0.18.0). When ``expand=True`` it
- always returns a ``DataFrame``, which is more consistent and less
- confusing from the perspective of a user. ``expand=True`` is the
- default since version 0.23.0.
+ pattern. When ``expand=True``, it always returns a ``DataFrame``,
+ which is more consistent and less confusing from the perspective of a user.
+ ``expand=True`` has been the default since version 0.23.0.
The ``extract`` method accepts a `regular expression
<https://docs.python.org/3/library/re.html>`__ with at least one
@@ -468,8 +467,6 @@ Extract all matches in each subject (extractall)
.. _text.extractall:
-.. versionadded:: 0.18.0
-
Unlike ``extract`` (which returns only the first match),
.. ipython:: python
diff --git a/doc/source/user_guide/timeseries.rst b/doc/source/user_guide/timeseries.rst
index 5bdff441d9e1f..e1ca5c0621c99 100644
--- a/doc/source/user_guide/timeseries.rst
+++ b/doc/source/user_guide/timeseries.rst
@@ -607,8 +607,6 @@ We are stopping on the included end-point as it is part of the index:
dft['2013-1-15':'2013-1-15 12:30:00']
-.. versionadded:: 0.18.0
-
``DatetimeIndex`` partial string indexing also works on a ``DataFrame`` with a ``MultiIndex``:
.. ipython:: python
@@ -1514,11 +1512,6 @@ Converting to Python datetimes
Resampling
----------
-.. warning::
-
- The interface to ``.resample`` has changed in 0.18.0 to be more groupby-like and hence more flexible.
- See the :ref:`whatsnew docs <whatsnew_0180.breaking.resample>` for a comparison with prior versions.
-
Pandas has a simple, powerful, and efficient functionality for performing
resampling operations during frequency conversion (e.g., converting secondly
data into 5-minutely data). This is extremely common in, but not limited to,
@@ -1528,8 +1521,8 @@ financial applications.
on each of its groups. See some :ref:`cookbook examples <cookbook.resample>` for
some advanced strategies.
-Starting in version 0.18.1, the ``resample()`` function can be used directly from
-``DataFrameGroupBy`` objects, see the :ref:`groupby docs <groupby.transform.window_resample>`.
+The ``resample()`` method can be used directly from ``DataFrameGroupBy`` objects,
+see the :ref:`groupby docs <groupby.transform.window_resample>`.
.. note::
diff --git a/doc/source/user_guide/visualization.rst b/doc/source/user_guide/visualization.rst
index 6589900c8491c..c71bf62fd4024 100644
--- a/doc/source/user_guide/visualization.rst
+++ b/doc/source/user_guide/visualization.rst
@@ -1632,18 +1632,3 @@ when plotting a large number of points.
:suppress:
plt.close('all')
-
-
-.. _rplot:
-
-
-Trellis plotting interface
---------------------------
-
-.. warning::
-
- The ``rplot`` trellis plotting interface has been **removed**. Please use
- external packages like `seaborn <https://github.com/mwaskom/seaborn>`_ for
- similar but more refined functionality and refer to our 0.18.1 documentation
- `here <http://pandas.pydata.org/pandas-docs/version/0.18.1/visualization.html>`__
- for how to convert to using it.
diff --git a/doc/source/whatsnew/v0.16.0.rst b/doc/source/whatsnew/v0.16.0.rst
index b903c4dae4c5a..42b3b9332ca98 100644
--- a/doc/source/whatsnew/v0.16.0.rst
+++ b/doc/source/whatsnew/v0.16.0.rst
@@ -530,7 +530,7 @@ Deprecations
`seaborn <http://stanford.edu/~mwaskom/software/seaborn/>`_ for similar
but more refined functionality (:issue:`3445`).
The documentation includes some examples how to convert your existing code
- using ``rplot`` to seaborn: :ref:`rplot docs <rplot>`.
+ from ``rplot`` to seaborn `here <http://pandas.pydata.org/pandas-docs/version/0.18.1/visualization.html#trellis-plotting-interface>`__.
- The ``pandas.sandbox.qtpandas`` interface is deprecated and will be removed in a future version.
We refer users to the external package `pandas-qt <https://github.com/datalyze-solutions/pandas-qt>`_. (:issue:`9615`)
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 8fdfc960603c6..59f8fbc403ea8 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -3204,8 +3204,6 @@ def eval(self, expr, inplace=False, **kwargs):
If the expression contains an assignment, whether to perform the
operation inplace and mutate the existing DataFrame. Otherwise,
a new DataFrame is returned.
-
- .. versionadded:: 0.18.0.
kwargs : dict
See the documentation for :func:`eval` for complete details
on the keyword arguments accepted by
@@ -6317,8 +6315,6 @@ def unstack(self, level=-1, fill_value=None):
fill_value : replace NaN with this value if the unstack produces
missing values
- .. versionadded:: 0.18.0
-
Returns
-------
Series or DataFrame
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index ecc421df3fbb1..0c0531bb8678d 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -2986,8 +2986,6 @@ def to_latex(
defaults to 'utf-8'.
decimal : str, default '.'
Character recognized as decimal separator, e.g. ',' in Europe.
-
- .. versionadded:: 0.18.0
multicolumn : bool, default True
Use \multicolumn to enhance MultiIndex columns.
The default will be read from the config module.
@@ -6819,14 +6817,6 @@ def replace(
`scipy.interpolate.BPoly.from_derivatives` which
replaces 'piecewise_polynomial' interpolation method in
scipy 0.18.
-
- .. versionadded:: 0.18.1
-
- Added support for the 'akima' method.
- Added interpolate method 'from_derivatives' which replaces
- 'piecewise_polynomial' in SciPy 0.18; backwards-compatible with
- SciPy < 0.18
-
axis : {0 or 'index', 1 or 'columns', None}, default None
Axis to interpolate along.
limit : int, optional
@@ -9146,10 +9136,6 @@ def _where(
If other is callable, it is computed on the %(klass)s and
should return scalar or %(klass)s. The callable must not
change input %(klass)s (though pandas doesn't check it).
-
- .. versionadded:: 0.18.1
- A callable can be used as other.
-
inplace : bool, default False
Whether to perform the operation in place on the data.
axis : int, default None
diff --git a/pandas/core/resample.py b/pandas/core/resample.py
index 8850cddc45b0d..9109410f45e42 100644
--- a/pandas/core/resample.py
+++ b/pandas/core/resample.py
@@ -1139,8 +1139,6 @@ def _wrap_result(self, result):
class DatetimeIndexResamplerGroupby(_GroupByMixin, DatetimeIndexResampler):
"""
Provides a resample of a groupby implementation
-
- .. versionadded:: 0.18.1
"""
@property
@@ -1285,8 +1283,6 @@ def _adjust_binner_for_upsample(self, binner):
class TimedeltaIndexResamplerGroupby(_GroupByMixin, TimedeltaIndexResampler):
"""
Provides a resample of a groupby implementation.
-
- .. versionadded:: 0.18.1
"""
@property
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 1f63d8d92f1c0..7e8b26be98a29 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -3593,22 +3593,21 @@ def nsmallest(self, n=5, keep="first"):
def swaplevel(self, i=-2, j=-1, copy=True):
"""
- Swap levels i and j in a MultiIndex.
+ Swap levels i and j in a :class:`MultiIndex`.
+
+ Default is to swap the two innermost levels of the index.
Parameters
----------
i, j : int, str (can be mixed)
Level of index to be swapped. Can pass level name as string.
+ copy : bool, default True
+ Whether to copy underlying data.
Returns
-------
Series
Series with levels swapped in MultiIndex.
-
- .. versionchanged:: 0.18.1
-
- The indexes ``i`` and ``j`` are now optional, and default to
- the two innermost levels of the index.
"""
new_index = self.index.swaplevel(i, j)
return self._constructor(self._values, index=new_index, copy=copy).__finalize__(
@@ -4459,10 +4458,6 @@ def isin(self, values):
raise a ``TypeError``. Instead, turn a single string into a
list of one element.
- .. versionadded:: 0.18.1
-
- Support for values as a set.
-
Returns
-------
Series
diff --git a/pandas/core/strings.py b/pandas/core/strings.py
index b188bd23cdf32..68c769e389529 100644
--- a/pandas/core/strings.py
+++ b/pandas/core/strings.py
@@ -914,8 +914,6 @@ def str_extractall(arr, pat, flags=0):
Series has exactly one match, extractall(pat).xs(0, level='match')
is the same as extract(pat).
- .. versionadded:: 0.18.0
-
Parameters
----------
pat : str
diff --git a/pandas/core/window.py b/pandas/core/window.py
index 20d5453cc43a2..730afc401edca 100644
--- a/pandas/core/window.py
+++ b/pandas/core/window.py
@@ -1949,9 +1949,6 @@ def corr(self, other=None, pairwise=None, **kwargs):
class RollingGroupby(_GroupByMixin, Rolling):
"""
Provide a rolling groupby implementation.
-
- .. versionadded:: 0.18.1
-
"""
@property
@@ -2224,9 +2221,6 @@ def corr(self, other=None, pairwise=None, **kwargs):
class ExpandingGroupby(_GroupByMixin, Expanding):
"""
Provide a expanding groupby implementation.
-
- .. versionadded:: 0.18.1
-
"""
@property
@@ -2281,9 +2275,6 @@ class EWM(_Rolling):
alpha : float, optional
Specify smoothing factor :math:`\alpha` directly,
:math:`0 < \alpha \leq 1`.
-
- .. versionadded:: 0.18.0
-
min_periods : int, default 0
Minimum number of observations in window required to have a value
(otherwise result is NA).
diff --git a/pandas/io/parsers.py b/pandas/io/parsers.py
index 20aafb26fb7ea..3e5b200c4643b 100755
--- a/pandas/io/parsers.py
+++ b/pandas/io/parsers.py
@@ -326,9 +326,6 @@
used as the sep. Equivalent to setting ``sep='\\s+'``. If this option
is set to True, nothing should be passed in for the ``delimiter``
parameter.
-
- .. versionadded:: 0.18.1 support for the Python parser.
-
low_memory : bool, default True
Internally process the file in chunks, resulting in lower memory use
while parsing, but possibly mixed type inference. To ensure no mixed
| - [x] xref #27463
Somehow all the changes didn't get added to #27463, so here's a follow-up.
I've also removed a reference to 0.17 in style.ipynb, which makes the Styler API non-provisional.
REF: de-privatize dtypes.concat functions | diff --git a/pandas/core/arrays/categorical.py b/pandas/core/arrays/categorical.py
index df5cd12a479f0..7337d3a7c8f0b 100644
--- a/pandas/core/arrays/categorical.py
+++ b/pandas/core/arrays/categorical.py
@@ -2481,9 +2481,9 @@ def _can_hold_na(self):
@classmethod
def _concat_same_type(self, to_concat):
- from pandas.core.dtypes.concat import _concat_categorical
+ from pandas.core.dtypes.concat import concat_categorical
- return _concat_categorical(to_concat)
+ return concat_categorical(to_concat)
def isin(self, values):
"""
diff --git a/pandas/core/dtypes/concat.py b/pandas/core/dtypes/concat.py
index ac74ad5726a99..b37a0b6fae674 100644
--- a/pandas/core/dtypes/concat.py
+++ b/pandas/core/dtypes/concat.py
@@ -25,7 +25,6 @@
ABCIndexClass,
ABCPeriodIndex,
ABCRangeIndex,
- ABCSparseDataFrame,
ABCTimedeltaIndex,
)
@@ -71,41 +70,7 @@ def get_dtype_kinds(l):
return typs
-def _get_series_result_type(result, objs=None):
- """
- return appropriate class of Series concat
- input is either dict or array-like
- """
- from pandas import SparseSeries, SparseDataFrame, DataFrame
-
- # concat Series with axis 1
- if isinstance(result, dict):
- # concat Series with axis 1
- if all(isinstance(c, (SparseSeries, SparseDataFrame)) for c in result.values()):
- return SparseDataFrame
- else:
- return DataFrame
-
- # otherwise it is a SingleBlockManager (axis = 0)
- return objs[0]._constructor
-
-
-def _get_frame_result_type(result, objs):
- """
- return appropriate class of DataFrame-like concat
- if all blocks are sparse, return SparseDataFrame
- otherwise, return 1st obj
- """
-
- if result.blocks and (any(isinstance(obj, ABCSparseDataFrame) for obj in objs)):
- from pandas.core.sparse.api import SparseDataFrame
-
- return SparseDataFrame
- else:
- return next(obj for obj in objs if not isinstance(obj, ABCSparseDataFrame))
-
-
-def _concat_compat(to_concat, axis=0):
+def concat_compat(to_concat, axis=0):
"""
provide concatenation of an array of arrays each of which is a single
'normalized' dtypes (in that for example, if it's object, then it is a
@@ -142,12 +107,12 @@ def is_nonempty(x):
_contains_period = any(typ.startswith("period") for typ in typs)
if "category" in typs:
- # this must be prior to _concat_datetime,
+ # this must be prior to concat_datetime,
# to support Categorical + datetime-like
- return _concat_categorical(to_concat, axis=axis)
+ return concat_categorical(to_concat, axis=axis)
elif _contains_datetime or "timedelta" in typs or _contains_period:
- return _concat_datetime(to_concat, axis=axis, typs=typs)
+ return concat_datetime(to_concat, axis=axis, typs=typs)
# these are mandated to handle empties as well
elif "sparse" in typs:
@@ -174,7 +139,7 @@ def is_nonempty(x):
return np.concatenate(to_concat, axis=axis)
-def _concat_categorical(to_concat, axis=0):
+def concat_categorical(to_concat, axis=0):
"""Concatenate an object/categorical array of arrays, each of which is a
single dtype
@@ -214,7 +179,7 @@ def _concat_categorical(to_concat, axis=0):
else np.asarray(x.astype(object))
for x in to_concat
]
- result = _concat_compat(to_concat)
+ result = concat_compat(to_concat)
if axis == 1:
result = result.reshape(1, len(result))
return result
@@ -404,7 +369,7 @@ def _concatenate_2d(to_concat, axis):
return np.concatenate(to_concat, axis=axis)
-def _concat_datetime(to_concat, axis=0, typs=None):
+def concat_datetime(to_concat, axis=0, typs=None):
"""
provide concatenation of an datetimelike array of arrays each of which is a
single M8[ns], datetimet64[ns, tz] or m8[ns] dtype
diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index bf94bf7ab6489..cded70813b87d 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -15,6 +15,7 @@
from pandas.compat.numpy import function as nv
from pandas.util._decorators import Appender, Substitution, cache_readonly
+from pandas.core.dtypes import concat as _concat
from pandas.core.dtypes.cast import maybe_cast_to_integer_array
from pandas.core.dtypes.common import (
ensure_categorical,
@@ -45,7 +46,7 @@
is_unsigned_integer_dtype,
pandas_dtype,
)
-import pandas.core.dtypes.concat as _concat
+from pandas.core.dtypes.concat import concat_compat
from pandas.core.dtypes.generic import (
ABCDataFrame,
ABCDateOffset,
@@ -2542,7 +2543,7 @@ def _union(self, other, sort):
if len(indexer) > 0:
other_diff = algos.take_nd(rvals, indexer, allow_fill=False)
- result = _concat._concat_compat((lvals, other_diff))
+ result = concat_compat((lvals, other_diff))
else:
result = lvals
@@ -2788,7 +2789,7 @@ def symmetric_difference(self, other, result_name=None, sort=None):
right_indexer = (indexer == -1).nonzero()[0]
right_diff = other.values.take(right_indexer)
- the_diff = _concat._concat_compat([left_diff, right_diff])
+ the_diff = concat_compat([left_diff, right_diff])
if sort is None:
try:
the_diff = sorting.safe_sort(the_diff)
diff --git a/pandas/core/indexes/datetimes.py b/pandas/core/indexes/datetimes.py
index 5024eebe03bb4..e9296eea2b8a3 100644
--- a/pandas/core/indexes/datetimes.py
+++ b/pandas/core/indexes/datetimes.py
@@ -18,7 +18,7 @@
is_scalar,
is_string_like,
)
-import pandas.core.dtypes.concat as _concat
+from pandas.core.dtypes.concat import concat_compat
from pandas.core.dtypes.dtypes import DatetimeTZDtype
from pandas.core.dtypes.missing import isna
@@ -608,7 +608,7 @@ def _fast_union(self, other, sort=None):
left_start = left[0]
loc = right.searchsorted(left_start, side="left")
right_chunk = right.values[:loc]
- dates = _concat._concat_compat((left.values, right_chunk))
+ dates = concat_compat((left.values, right_chunk))
return self._shallow_copy(dates)
# DTIs are not in the "correct" order and we want
# to sort
@@ -624,7 +624,7 @@ def _fast_union(self, other, sort=None):
if left_end < right_end:
loc = right.searchsorted(left_end, side="right")
right_chunk = right.values[loc:]
- dates = _concat._concat_compat((left.values, right_chunk))
+ dates = concat_compat((left.values, right_chunk))
return self._shallow_copy(dates)
else:
return left
diff --git a/pandas/core/indexes/timedeltas.py b/pandas/core/indexes/timedeltas.py
index 19d0d2341dac1..1c2a8c4f0c10c 100644
--- a/pandas/core/indexes/timedeltas.py
+++ b/pandas/core/indexes/timedeltas.py
@@ -18,7 +18,7 @@
is_timedelta64_ns_dtype,
pandas_dtype,
)
-import pandas.core.dtypes.concat as _concat
+from pandas.core.dtypes.concat import concat_compat
from pandas.core.dtypes.missing import isna
from pandas.core.accessor import delegate_names
@@ -462,7 +462,7 @@ def _fast_union(self, other):
if left_end < right_end:
loc = right.searchsorted(left_end, side="right")
right_chunk = right.values[loc:]
- dates = _concat._concat_compat((left.values, right_chunk))
+ dates = concat_compat((left.values, right_chunk))
return self._shallow_copy(dates)
else:
return left
diff --git a/pandas/core/indexing.py b/pandas/core/indexing.py
index 5aee37bc3b833..fb6974110d80b 100755
--- a/pandas/core/indexing.py
+++ b/pandas/core/indexing.py
@@ -21,7 +21,7 @@
is_sequence,
is_sparse,
)
-from pandas.core.dtypes.concat import _concat_compat
+from pandas.core.dtypes.concat import concat_compat
from pandas.core.dtypes.generic import ABCDataFrame, ABCSeries
from pandas.core.dtypes.missing import _infer_fill_value, isna
@@ -607,7 +607,7 @@ def _setitem_with_indexer_missing(self, indexer, value):
if len(self.obj._values):
# GH#22717 handle casting compatibility that np.concatenate
# does incorrectly
- new_values = _concat_compat([self.obj._values, new_values])
+ new_values = concat_compat([self.obj._values, new_values])
self.obj._data = self.obj._constructor(
new_values, index=new_index, name=self.obj.name
)._data
diff --git a/pandas/core/internals/blocks.py b/pandas/core/internals/blocks.py
index ace57938f948c..16c93e6267da2 100644
--- a/pandas/core/internals/blocks.py
+++ b/pandas/core/internals/blocks.py
@@ -49,7 +49,7 @@
is_timedelta64_dtype,
pandas_dtype,
)
-import pandas.core.dtypes.concat as _concat
+from pandas.core.dtypes.concat import concat_categorical, concat_datetime
from pandas.core.dtypes.dtypes import CategoricalDtype, ExtensionDtype
from pandas.core.dtypes.generic import (
ABCDataFrame,
@@ -2563,7 +2563,7 @@ def concat_same_type(self, to_concat, placement=None):
# Instead of placing the condition here, it could also go into the
# is_uniform_join_units check, but I'm not sure what is better.
if len({x.dtype for x in to_concat}) > 1:
- values = _concat._concat_datetime([x.values for x in to_concat])
+ values = concat_datetime([x.values for x in to_concat])
placement = placement or slice(0, len(values), 1)
if self.ndim > 1:
@@ -3088,7 +3088,7 @@ class CategoricalBlock(ExtensionBlock):
is_categorical = True
_verify_integrity = True
_can_hold_na = True
- _concatenator = staticmethod(_concat._concat_categorical)
+ _concatenator = staticmethod(concat_categorical)
def __init__(self, values, placement, ndim=None):
from pandas.core.arrays.categorical import _maybe_to_categorical
diff --git a/pandas/core/internals/concat.py b/pandas/core/internals/concat.py
index 9ccd4b80869a0..121c61d8d3623 100644
--- a/pandas/core/internals/concat.py
+++ b/pandas/core/internals/concat.py
@@ -19,7 +19,7 @@
is_sparse,
is_timedelta64_dtype,
)
-import pandas.core.dtypes.concat as _concat
+from pandas.core.dtypes.concat import concat_compat
from pandas.core.dtypes.missing import isna
import pandas.core.algorithms as algos
@@ -211,7 +211,7 @@ def get_reindexed_values(self, empty_dtype, upcasted_na):
if not self.indexers:
if not self.block._can_consolidate:
- # preserve these for validation in _concat_compat
+ # preserve these for validation in concat_compat
return self.block.values
if self.block.is_bool and not self.block.is_categorical:
@@ -265,7 +265,7 @@ def concatenate_join_units(join_units, concat_axis, copy):
else:
concat_values = concat_values.copy()
else:
- concat_values = _concat._concat_compat(to_concat, axis=concat_axis)
+ concat_values = concat_compat(to_concat, axis=concat_axis)
return concat_values
@@ -380,7 +380,7 @@ def is_uniform_join_units(join_units):
"""
Check if the join units consist of blocks of uniform type that can
be concatenated using Block.concat_same_type instead of the generic
- concatenate_join_units (which uses `_concat._concat_compat`).
+ concatenate_join_units (which uses `concat_compat`).
"""
return (
diff --git a/pandas/core/internals/managers.py b/pandas/core/internals/managers.py
index 394c0773409f2..344d41ed26943 100644
--- a/pandas/core/internals/managers.py
+++ b/pandas/core/internals/managers.py
@@ -26,7 +26,7 @@
is_scalar,
is_sparse,
)
-import pandas.core.dtypes.concat as _concat
+from pandas.core.dtypes.concat import concat_compat
from pandas.core.dtypes.dtypes import ExtensionDtype
from pandas.core.dtypes.generic import ABCExtensionArray, ABCSeries
from pandas.core.dtypes.missing import isna
@@ -532,7 +532,7 @@ def get_axe(block, qs, axes):
return self.__class__(blocks, new_axes)
# single block, i.e. ndim == {1}
- values = _concat._concat_compat([b.values for b in blocks])
+ values = concat_compat([b.values for b in blocks])
# compute the orderings of our original data
if len(self.blocks) > 1:
@@ -1647,11 +1647,11 @@ def concat(self, to_concat, new_axis):
new_block = blocks[0].concat_same_type(blocks)
else:
values = [x.values for x in blocks]
- values = _concat._concat_compat(values)
+ values = concat_compat(values)
new_block = make_block(values, placement=slice(0, len(values), 1))
else:
values = [x._block.values for x in to_concat]
- values = _concat._concat_compat(values)
+ values = concat_compat(values)
new_block = make_block(values, placement=slice(0, len(values), 1))
mgr = SingleBlockManager(new_block, new_axis)
diff --git a/pandas/core/reshape/concat.py b/pandas/core/reshape/concat.py
index 5a476dceca1f3..ca4175e4a474a 100644
--- a/pandas/core/reshape/concat.py
+++ b/pandas/core/reshape/concat.py
@@ -6,7 +6,7 @@
import numpy as np
-import pandas.core.dtypes.concat as _concat
+from pandas.core.dtypes.generic import ABCSparseDataFrame
from pandas import DataFrame, Index, MultiIndex, Series
from pandas.core import common as com
@@ -439,13 +439,13 @@ def get_result(self):
mgr = self.objs[0]._data.concat(
[x._data for x in self.objs], self.new_axes
)
- cons = _concat._get_series_result_type(mgr, self.objs)
+ cons = _get_series_result_type(mgr, self.objs)
return cons(mgr, name=name).__finalize__(self, method="concat")
# combine as columns in a frame
else:
data = dict(zip(range(len(self.objs)), self.objs))
- cons = _concat._get_series_result_type(data)
+ cons = _get_series_result_type(data)
index, columns = self.new_axes
df = cons(data, index=index)
@@ -475,7 +475,7 @@ def get_result(self):
if not self.copy:
new_data._consolidate_inplace()
- cons = _concat._get_frame_result_type(new_data, self.objs)
+ cons = _get_frame_result_type(new_data, self.objs)
return cons._from_axes(new_data, self.new_axes).__finalize__(
self, method="concat"
)
@@ -708,3 +708,37 @@ def _make_concat_multiindex(indexes, keys, levels=None, names=None):
return MultiIndex(
levels=new_levels, codes=new_codes, names=new_names, verify_integrity=False
)
+
+
+def _get_series_result_type(result, objs=None):
+ """
+ return appropriate class of Series concat
+ input is either dict or array-like
+ """
+ from pandas import SparseSeries, SparseDataFrame, DataFrame
+
+ # concat Series with axis 1
+ if isinstance(result, dict):
+ # concat Series with axis 1
+ if all(isinstance(c, (SparseSeries, SparseDataFrame)) for c in result.values()):
+ return SparseDataFrame
+ else:
+ return DataFrame
+
+ # otherwise it is a SingleBlockManager (axis = 0)
+ return objs[0]._constructor
+
+
+def _get_frame_result_type(result, objs):
+ """
+ return appropriate class of DataFrame-like concat
+ if all blocks are sparse, return SparseDataFrame
+ otherwise, return 1st obj
+ """
+
+ if result.blocks and (any(isinstance(obj, ABCSparseDataFrame) for obj in objs)):
+ from pandas.core.sparse.api import SparseDataFrame
+
+ return SparseDataFrame
+ else:
+ return next(obj for obj in objs if not isinstance(obj, ABCSparseDataFrame))
diff --git a/pandas/core/reshape/melt.py b/pandas/core/reshape/melt.py
index 9a69942a70e01..187a1913c3e15 100644
--- a/pandas/core/reshape/melt.py
+++ b/pandas/core/reshape/melt.py
@@ -171,9 +171,9 @@ def lreshape(data, groups, dropna=True, label=None):
for target, names in zip(keys, values):
to_concat = [data[col].values for col in names]
- import pandas.core.dtypes.concat as _concat
+ from pandas.core.dtypes.concat import concat_compat
- mdata[target] = _concat._concat_compat(to_concat)
+ mdata[target] = concat_compat(to_concat)
pivot_cols.append(target)
for col in id_cols:
| Preliminary to possibly finding new homes for some of these functions that don't really fit in core.dtypes (discussed briefly at the sprint). xref #25273 | https://api.github.com/repos/pandas-dev/pandas/pulls/27499 | 2019-07-21T04:09:01Z | 2019-07-22T21:14:54Z | 2019-07-22T21:14:54Z | 2019-07-22T21:30:50Z |
PERF: speed up MultiIndex.is_monotonic by 50x | diff --git a/doc/source/whatsnew/v1.0.0.rst b/doc/source/whatsnew/v1.0.0.rst
index 9caf127553e05..49885e4426feb 100644
--- a/doc/source/whatsnew/v1.0.0.rst
+++ b/doc/source/whatsnew/v1.0.0.rst
@@ -70,7 +70,8 @@ Performance improvements
~~~~~~~~~~~~~~~~~~~~~~~~
- Performance improvement in indexing with a non-unique :class:`IntervalIndex` (:issue:`27489`)
--
+- Performance improvement in `MultiIndex.is_monotonic` (:issue:`27495`)
+
.. _whatsnew_1000.bug_fixes:
diff --git a/pandas/core/indexes/multi.py b/pandas/core/indexes/multi.py
index 1a7d98b6f9c61..0123230bc0f63 100644
--- a/pandas/core/indexes/multi.py
+++ b/pandas/core/indexes/multi.py
@@ -1359,6 +1359,12 @@ def is_monotonic_increasing(self):
increasing) values.
"""
+ if all(x.is_monotonic for x in self.levels):
+ # If each level is sorted, we can operate on the codes directly. GH27495
+ return libalgos.is_lexsorted(
+ [x.astype("int64", copy=False) for x in self.codes]
+ )
+
# reversed() because lexsort() wants the most significant key last.
values = [
self._get_level_values(i).values for i in reversed(range(len(self.levels)))
The current logic for `MultiIndex.is_monotonic` relies on `np.lexsort()` over `MultiIndex._values`. While the result is cached, computing it is slow: it triggers the creation of `._values`, performs an `O(n log n)` sort, and populates the hashmap of a transient `Index`.
This PR significantly speeds up the check by operating directly on `.codes` when the `.levels` are individually sorted. This lets us leverage `libalgos.is_lexsorted()`, which is `O(n)` (with the downside of needing an `int64` cast, since `MultiIndex` compacts the code arrays into smaller dtypes).
```
before after ratio
[5bd57f90] [3320dded]
- 31.9±0.9ms 627±50μs 0.02 index_cached_properties.IndexCache.time_is_monotonic_decreasing('MultiIndex')
- 29.6±0.7ms 528±80μs 0.02 index_cached_properties.IndexCache.time_is_monotonic('MultiIndex')
- 29.9±1ms 524±90μs 0.02 index_cached_properties.IndexCache.time_is_monotonic_increasing('MultiIndex')
```
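The fast path can be illustrated with a minimal plain-NumPy sketch (the helper name and the row-by-row comparison are stand-ins for the Cython `libalgos.is_lexsorted`, not pandas internals): when every level is individually sorted, the index is monotonic exactly when the integer code arrays are in lexicographic order, which a single O(n) scan can verify.

```python
import numpy as np

def codes_are_lexsorted(codes):
    # Stand-in for libalgos.is_lexsorted: a single O(n) pass over the
    # stacked code arrays, most significant level first, instead of an
    # O(n log n) lexsort over materialized index values.
    rows = np.stack(codes, axis=1)  # shape: (n_rows, n_levels)
    return all(tuple(a) <= tuple(b) for a, b in zip(rows[:-1], rows[1:]))

# codes for a sorted MultiIndex [(0, 0), (0, 1), (1, 0), (1, 1)]
codes_are_lexsorted([np.array([0, 0, 1, 1]), np.array([0, 1, 0, 1])])  # True
# an out-of-order row breaks lexicographic order
codes_are_lexsorted([np.array([0, 1, 0]), np.array([0, 0, 0])])  # False
```

On the real `MultiIndex`, the per-level code arrays are cast to `int64` before the check, exactly as in the diff above.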
- [ ] closes #27099
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/27495 | 2019-07-20T23:54:04Z | 2019-07-23T22:04:24Z | 2019-07-23T22:04:24Z | 2019-07-23T22:04:28Z |
DOC: setup 1.0.0 docs | diff --git a/doc/source/whatsnew/index.rst b/doc/source/whatsnew/index.rst
index 592b4748126c1..b7555ed94a1ed 100644
--- a/doc/source/whatsnew/index.rst
+++ b/doc/source/whatsnew/index.rst
@@ -10,6 +10,14 @@ This is the list of changes to pandas between each release. For full details,
see the commit logs at http://github.com/pandas-dev/pandas. For install and
upgrade instructions, see :ref:`install`.
+Version 1.0
+-----------
+
+.. toctree::
+ :maxdepth: 2
+
+ v1.0.0
+
Version 0.25
------------
diff --git a/doc/source/whatsnew/v1.0.0.rst b/doc/source/whatsnew/v1.0.0.rst
new file mode 100644
index 0000000000000..9caf127553e05
--- /dev/null
+++ b/doc/source/whatsnew/v1.0.0.rst
@@ -0,0 +1,200 @@
+.. _whatsnew_1000:
+
+What's new in 1.0.0 (??)
+------------------------
+
+.. warning::
+
+ Starting with the 0.25.x series of releases, pandas only supports Python 3.5.3 and higher.
+ See :ref:`install.dropping-27` for more details.
+
+.. warning::
+
+ The minimum supported Python version will be bumped to 3.6 in a future release.
+
+{{ header }}
+
+These are the changes in pandas 1.0.0. See :ref:`release` for a full changelog
+including other versions of pandas.
+
+
+Enhancements
+~~~~~~~~~~~~
+
+.. _whatsnew_1000.enhancements.other:
+
+-
+-
+
+Other enhancements
+^^^^^^^^^^^^^^^^^^
+
+.. _whatsnew_1000.api_breaking:
+
+-
+-
+
+Backwards incompatible API changes
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+.. _whatsnew_1000.api.other:
+
+-
+-
+
+Other API changes
+^^^^^^^^^^^^^^^^^
+
+-
+-
+
+.. _whatsnew_1000.deprecations:
+
+Deprecations
+~~~~~~~~~~~~
+
+-
+-
+
+.. _whatsnew_1000.prior_deprecations:
+
+Removal of prior version deprecations/changes
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+-
+-
+
+.. _whatsnew_1000.performance:
+
+Performance improvements
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+- Performance improvement in indexing with a non-unique :class:`IntervalIndex` (:issue:`27489`)
+-
+
+.. _whatsnew_1000.bug_fixes:
+
+Bug fixes
+~~~~~~~~~
+
+
+Categorical
+^^^^^^^^^^^
+
+-
+-
+
+
+Datetimelike
+^^^^^^^^^^^^
+
+-
+-
+
+
+Timedelta
+^^^^^^^^^
+
+-
+-
+
+Timezones
+^^^^^^^^^
+
+-
+-
+
+
+Numeric
+^^^^^^^
+
+-
+-
+
+Conversion
+^^^^^^^^^^
+
+-
+-
+
+Strings
+^^^^^^^
+
+-
+-
+
+
+Interval
+^^^^^^^^
+
+-
+-
+
+Indexing
+^^^^^^^^
+
+-
+-
+
+Missing
+^^^^^^^
+
+-
+-
+
+MultiIndex
+^^^^^^^^^^
+
+-
+-
+
+I/O
+^^^
+
+-
+-
+
+Plotting
+^^^^^^^^
+
+-
+-
+
+Groupby/resample/rolling
+^^^^^^^^^^^^^^^^^^^^^^^^
+
+-
+-
+
+Reshaping
+^^^^^^^^^
+
+-
+-
+
+Sparse
+^^^^^^
+
+-
+-
+
+
+Build Changes
+^^^^^^^^^^^^^
+
+
+ExtensionArray
+^^^^^^^^^^^^^^
+
+-
+-
+
+
+Other
+^^^^^
+
+
+.. _whatsnew_1000.contributors:
+
+Contributors
+~~~~~~~~~~~~
| https://api.github.com/repos/pandas-dev/pandas/pulls/27491 | 2019-07-20T16:57:06Z | 2019-07-20T20:07:00Z | 2019-07-20T20:07:00Z | 2019-07-20T20:07:00Z | |
PERF: speed up IntervalIndex._intersection_non_unique by ~50x | diff --git a/pandas/core/indexes/interval.py b/pandas/core/indexes/interval.py
index 561cf436c9af4..7372dada3b48a 100644
--- a/pandas/core/indexes/interval.py
+++ b/pandas/core/indexes/interval.py
@@ -1250,15 +1250,9 @@ def _intersection_non_unique(self, other: "IntervalIndex") -> "IntervalIndex":
first_nan_loc = np.arange(len(self))[self.isna()][0]
mask[first_nan_loc] = True
- lmiss = other.left.get_indexer_non_unique(self.left)[1]
- lmatch = np.setdiff1d(np.arange(len(self)), lmiss)
-
- for i in lmatch:
- potential = other.left.get_loc(self.left[i])
- if is_scalar(potential):
- if self.right[i] == other.right[potential]:
- mask[i] = True
- elif self.right[i] in other.right[potential]:
+ other_tups = set(zip(other.left, other.right))
+ for i, tup in enumerate(zip(self.left, self.right)):
+ if tup in other_tups:
mask[i] = True
return self[mask]
| I've been backfilling `asv` data and noticed the following regression in `IntervalIndexMethod.time_intersection_both_duplicate` (see [here](https://qwhelan.github.io/pandas/#index_object.IntervalIndexMethod.time_intersection_both_duplicate?machine=T470-W10&os=Linux%204.4.0-17134-Microsoft&ram=20822880&p-param1=100000&commits=83eb2428-cdc07fde)):

This regression was missed as the benchmark was added in #26711, which was after introduction in #26225.
This PR both simplifies the `IntervalIndex._intersection_non_unique` logic (now equivalent to `MultiIndex._intersection_non_unique`) and provides a `~50x` speedup:
```
before after ratio
[9bab81e0] [2848036e]
<interval_non_unique_intersection~1> <interval_non_unique_intersection>
- 12.6±0.1ms 725±30μs 0.06 index_object.IntervalIndexMethod.time_intersection_both_duplicate(1000)
- 4.96±0s 96.7±6ms 0.02 index_object.IntervalIndexMethod.time_intersection_both_duplicate(100000)
```
The new numbers are about `10x` faster than the old baseline.
- [ ] closes #xxxx
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/27489 | 2019-07-20T09:39:05Z | 2019-07-20T16:41:02Z | 2019-07-20T16:41:02Z | 2019-07-20T17:00:50Z |
API: Add entrypoint for plotting | diff --git a/Makefile b/Makefile
index baceefe6d49ff..9e69eb7922925 100644
--- a/Makefile
+++ b/Makefile
@@ -15,7 +15,7 @@ lint-diff:
git diff upstream/master --name-only -- "*.py" | xargs flake8
black:
- black . --exclude '(asv_bench/env|\.egg|\.git|\.hg|\.mypy_cache|\.nox|\.tox|\.venv|_build|buck-out|build|dist)'
+ black . --exclude '(asv_bench/env|\.egg|\.git|\.hg|\.mypy_cache|\.nox|\.tox|\.venv|_build|buck-out|build|dist|setup.py)'
develop: build
python setup.py develop
diff --git a/ci/code_checks.sh b/ci/code_checks.sh
index 96a8440d85694..06d45e38bfcdb 100755
--- a/ci/code_checks.sh
+++ b/ci/code_checks.sh
@@ -56,7 +56,7 @@ if [[ -z "$CHECK" || "$CHECK" == "lint" ]]; then
black --version
MSG='Checking black formatting' ; echo $MSG
- black . --check --exclude '(asv_bench/env|\.egg|\.git|\.hg|\.mypy_cache|\.nox|\.tox|\.venv|_build|buck-out|build|dist)'
+ black . --check --exclude '(asv_bench/env|\.egg|\.git|\.hg|\.mypy_cache|\.nox|\.tox|\.venv|_build|buck-out|build|dist|setup.py)'
RET=$(($RET + $?)) ; echo $MSG "DONE"
# `setup.cfg` contains the list of error codes that are being ignored in flake8
diff --git a/doc/source/development/extending.rst b/doc/source/development/extending.rst
index b492a4edd70a4..e341dcb8318bc 100644
--- a/doc/source/development/extending.rst
+++ b/doc/source/development/extending.rst
@@ -441,5 +441,22 @@ This would be more or less equivalent to:
The backend module can then use other visualization tools (Bokeh, Altair,...)
to generate the plots.
+Libraries implementing the plotting backend should use `entry points <https://setuptools.readthedocs.io/en/latest/setuptools.html#dynamic-discovery-of-services-and-plugins>`__
+to make their backend discoverable to pandas. The key is ``"pandas_plotting_backends"``. For example, pandas
+registers the default "matplotlib" backend as follows.
+
+.. code-block:: python
+
+ # in setup.py
+ setup( # noqa: F821
+ ...,
+ entry_points={
+ "pandas_plotting_backends": [
+ "matplotlib = pandas:plotting._matplotlib",
+ ],
+ },
+ )
+
+
More information on how to implement a third-party plotting backend can be found at
https://github.com/pandas-dev/pandas/blob/master/pandas/plotting/__init__.py#L1.
diff --git a/doc/source/whatsnew/v0.25.1.rst b/doc/source/whatsnew/v0.25.1.rst
index 9007d1c06f197..2cadd921cc386 100644
--- a/doc/source/whatsnew/v0.25.1.rst
+++ b/doc/source/whatsnew/v0.25.1.rst
@@ -113,7 +113,7 @@ I/O
Plotting
^^^^^^^^
--
+- Added a pandas_plotting_backends entrypoint group for registering plot backends. See :ref:`extending.plotting-backends` for more (:issue:`26747`).
-
-
diff --git a/pandas/plotting/_core.py b/pandas/plotting/_core.py
index 0610780edb28d..a3c1499845c2a 100644
--- a/pandas/plotting/_core.py
+++ b/pandas/plotting/_core.py
@@ -1533,6 +1533,53 @@ def hexbin(self, x, y, C=None, reduce_C_function=None, gridsize=None, **kwargs):
return self(kind="hexbin", x=x, y=y, C=C, **kwargs)
+_backends = {}
+
+
+def _find_backend(backend: str):
+ """
+ Find a pandas plotting backend>
+
+ Parameters
+ ----------
+ backend : str
+ The identifier for the backend. Either an entrypoint item registered
+ with pkg_resources, or a module name.
+
+ Notes
+ -----
+ Modifies _backends with imported backends as a side effect.
+
+ Returns
+ -------
+ types.ModuleType
+ The imported backend.
+ """
+ import pkg_resources # Delay import for performance.
+
+ for entry_point in pkg_resources.iter_entry_points("pandas_plotting_backends"):
+ if entry_point.name == "matplotlib":
+ # matplotlib is an optional dependency. When
+ # missing, this would raise.
+ continue
+ _backends[entry_point.name] = entry_point.load()
+
+ try:
+ return _backends[backend]
+ except KeyError:
+ # Fall back to unregisted, module name approach.
+ try:
+ module = importlib.import_module(backend)
+ except ImportError:
+ # We re-raise later on.
+ pass
+ else:
+ _backends[backend] = module
+ return module
+
+ raise ValueError("No backend {}".format(backend))
+
+
def _get_plot_backend(backend=None):
"""
Return the plotting backend to use (e.g. `pandas.plotting._matplotlib`).
@@ -1546,7 +1593,18 @@ def _get_plot_backend(backend=None):
The backend is imported lazily, as matplotlib is a soft dependency, and
pandas can be used without it being installed.
"""
- backend_str = backend or pandas.get_option("plotting.backend")
- if backend_str == "matplotlib":
- backend_str = "pandas.plotting._matplotlib"
- return importlib.import_module(backend_str)
+ backend = backend or pandas.get_option("plotting.backend")
+
+ if backend == "matplotlib":
+ # Because matplotlib is an optional dependency and first-party backend,
+ # we need to attempt an import here to raise an ImportError if needed.
+ import pandas.plotting._matplotlib as module
+
+ _backends["matplotlib"] = module
+
+ if backend in _backends:
+ return _backends[backend]
+
+ module = _find_backend(backend)
+ _backends[backend] = module
+ return module
diff --git a/pandas/tests/plotting/test_backend.py b/pandas/tests/plotting/test_backend.py
index 51f2abb6cc2f4..e79e7b6239eb3 100644
--- a/pandas/tests/plotting/test_backend.py
+++ b/pandas/tests/plotting/test_backend.py
@@ -1,5 +1,11 @@
+import sys
+import types
+
+import pkg_resources
import pytest
+import pandas.util._test_decorators as td
+
import pandas
@@ -36,3 +42,44 @@ def test_backend_is_correct(monkeypatch):
pandas.set_option("plotting.backend", "matplotlib")
except ImportError:
pass
+
+
+@td.skip_if_no_mpl
+def test_register_entrypoint():
+ mod = types.ModuleType("my_backend")
+ mod.plot = lambda *args, **kwargs: 1
+
+ backends = pkg_resources.get_entry_map("pandas")
+ my_entrypoint = pkg_resources.EntryPoint(
+ "pandas_plotting_backend",
+ mod.__name__,
+ dist=pkg_resources.get_distribution("pandas"),
+ )
+ backends["pandas_plotting_backends"]["my_backend"] = my_entrypoint
+ # TODO: the docs recommend importlib.util.module_from_spec. But this works for now.
+ sys.modules["my_backend"] = mod
+
+ result = pandas.plotting._core._get_plot_backend("my_backend")
+ assert result is mod
+
+ # TODO: https://github.com/pandas-dev/pandas/issues/27517
+ # Remove the td.skip_if_no_mpl
+ with pandas.option_context("plotting.backend", "my_backend"):
+ result = pandas.plotting._core._get_plot_backend()
+
+ assert result is mod
+
+
+def test_register_import():
+ mod = types.ModuleType("my_backend2")
+ mod.plot = lambda *args, **kwargs: 1
+ sys.modules["my_backend2"] = mod
+
+ result = pandas.plotting._core._get_plot_backend("my_backend2")
+ assert result is mod
+
+
+@td.skip_if_mpl
+def test_no_matplotlib_ok():
+ with pytest.raises(ImportError):
+ pandas.plotting._core._get_plot_backend("matplotlib")
diff --git a/setup.py b/setup.py
index 53e12da53cdeb..d2c6b18b892cd 100755
--- a/setup.py
+++ b/setup.py
@@ -830,5 +830,10 @@ def srcpath(name=None, suffix=".pyx", subdir="src"):
"hypothesis>=3.58",
]
},
+ entry_points={
+ "pandas_plotting_backends": [
+ "matplotlib = pandas:plotting._matplotlib",
+ ],
+ },
**setuptools_kwargs
)
| Libraries, including pandas, register backends via entrypoints.
xref #26747
cc @datapythonista @jakevdp @philippjfr | https://api.github.com/repos/pandas-dev/pandas/pulls/27488 | 2019-07-20T03:36:53Z | 2019-07-25T17:18:13Z | 2019-07-25T17:18:13Z | 2019-07-25T20:21:50Z |
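The module-name fallback in `_find_backend` above can be sketched in isolation. This is a simplified stand-in (the `pkg_resources` entry-point scan is omitted, and `demo_backend` is an invented module, not a real package):

```python
import importlib
import sys
import types

_backends = {}  # resolved-backend cache, mirroring the patch above


def find_backend(name):
    # Return a cached backend, else fall back to importing `name` as a
    # plain module (the entry-point scan from the patch is omitted here).
    if name in _backends:
        return _backends[name]
    try:
        module = importlib.import_module(name)
    except ImportError:
        raise ValueError("No backend {}".format(name))
    _backends[name] = module
    return module


# A backend is just an importable module exposing plot hooks.
demo = types.ModuleType("demo_backend")
demo.plot = lambda *args, **kwargs: "plotted"
sys.modules["demo_backend"] = demo

backend = find_backend("demo_backend")
```

A real third-party backend would instead ship a `pandas_plotting_backends` entry point, as the `setup.py` hunk above does for matplotlib.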
Correctly re-instate Matplotlib converters | diff --git a/doc/source/whatsnew/v0.25.1.rst b/doc/source/whatsnew/v0.25.1.rst
index 4e1bfac77fae2..b39dd30003d50 100644
--- a/doc/source/whatsnew/v0.25.1.rst
+++ b/doc/source/whatsnew/v0.25.1.rst
@@ -110,6 +110,9 @@ Plotting
^^^^^^^^
- Added a pandas_plotting_backends entrypoint group for registering plot backends. See :ref:`extending.plotting-backends` for more (:issue:`26747`).
+- Fixed the re-instatement of Matplotlib datetime converters after calling
+ `pandas.plotting.deregister_matplotlib_converters()` (:issue:`27481`).
+-
- Fix compatibility issue with matplotlib when passing a pandas ``Index`` to a plot call (:issue:`27775`).
-
diff --git a/pandas/plotting/_matplotlib/converter.py b/pandas/plotting/_matplotlib/converter.py
index 15648d59c8f98..893854ab26e37 100644
--- a/pandas/plotting/_matplotlib/converter.py
+++ b/pandas/plotting/_matplotlib/converter.py
@@ -64,11 +64,12 @@ def register(explicit=True):
pairs = get_pairs()
for type_, cls in pairs:
- converter = cls()
- if type_ in units.registry:
+ # Cache previous converter if present
+ if type_ in units.registry and not isinstance(units.registry[type_], cls):
previous = units.registry[type_]
_mpl_units[type_] = previous
- units.registry[type_] = converter
+ # Replace with pandas converter
+ units.registry[type_] = cls()
def deregister():
diff --git a/pandas/tests/plotting/test_converter.py b/pandas/tests/plotting/test_converter.py
index 35d12706f0590..7001264c41c05 100644
--- a/pandas/tests/plotting/test_converter.py
+++ b/pandas/tests/plotting/test_converter.py
@@ -40,6 +40,21 @@ def test_initial_warning():
assert "Using an implicitly" in out
+def test_registry_mpl_resets():
+ # Check that Matplotlib converters are properly reset (see issue #27481)
+ code = (
+ "import matplotlib.units as units; "
+ "import matplotlib.dates as mdates; "
+ "n_conv = len(units.registry); "
+ "import pandas as pd; "
+ "pd.plotting.register_matplotlib_converters(); "
+ "pd.plotting.deregister_matplotlib_converters(); "
+ "assert len(units.registry) == n_conv"
+ )
+ call = [sys.executable, "-c", code]
+ subprocess.check_output(call)
+
+
def test_timtetonum_accepts_unicode():
assert converter.time2num("00:01") == converter.time2num("00:01")
| Fixes https://github.com/pandas-dev/pandas/issues/27479. Converters should only be cached to be re-instated if they are original, and not pandas converters.
I'm struggling to write a test for this; I think it requires a clean state with no pandas imports, so one can check what was in the units registry before pandas was imported. I think this means a new test file with no pandas imports at the top, but maybe someone has a better idea? | https://api.github.com/repos/pandas-dev/pandas/pulls/27481 | 2019-07-19T18:06:00Z | 2019-08-20T14:26:50Z | 2019-08-20T14:26:49Z | 2019-08-21T16:34:46Z |
Add a Roadmap | diff --git a/doc/source/development/index.rst b/doc/source/development/index.rst
index a149f31118ed5..c7710ff19f078 100644
--- a/doc/source/development/index.rst
+++ b/doc/source/development/index.rst
@@ -16,3 +16,4 @@ Development
internals
extending
developer
+ roadmap
diff --git a/doc/source/development/roadmap.rst b/doc/source/development/roadmap.rst
new file mode 100644
index 0000000000000..88e0a18e6b81a
--- /dev/null
+++ b/doc/source/development/roadmap.rst
@@ -0,0 +1,193 @@
+.. _roadmap:
+
+=======
+Roadmap
+=======
+
+This page provides an overview of the major themes in pandas' development. Each of
+these items requires a relatively large amount of effort to implement. These may
+be achieved more quickly with dedicated funding or interest from contributors.
+
+An item being on the roadmap does not mean that it will *necessarily* happen, even
+with unlimited funding. During the implementation period we may discover issues
+preventing the adoption of the feature.
+
+Additionally, an item *not* being on the roadmap does not exclude it from inclusion
+in pandas. The roadmap is intended for larger, fundamental changes to the project that
+are likely to take months or years of developer time. Smaller-scoped items will continue
+to be tracked on our `issue tracker <https://github.com/pandas-dev/pandas/issues>`__.
+
+See :ref:`roadmap.evolution` for proposing changes to this document.
+
+Extensibility
+-------------
+
+Pandas :ref:`extending.extension-types` allow for extending NumPy types with custom
+data types and array storage. Pandas uses extension types internally, and provides
+an interface for 3rd-party libraries to define their own custom data types.
+
+Many parts of pandas still unintentionally convert data to a NumPy array.
+These problems are especially pronounced for nested data.
+
+We'd like to improve the handling of extension arrays throughout the library,
+making their behavior more consistent with the handling of NumPy arrays. We'll do this
+by cleaning up pandas' internals and adding new methods to the extension array interface.
+
+String data type
+----------------
+
+Currently, pandas stores text data in an ``object`` -dtype NumPy array.
+The current implementation has two primary drawbacks: First, ``object`` -dtype
+is not specific to strings: any Python object can be stored in an ``object`` -dtype
+array, not just strings. Second: this is not efficient. The NumPy memory model
+isn't especially well-suited to variable width text data.
+
+To solve the first issue, we propose a new extension type for string data. This
+will initially be opt-in, with users explicitly requesting ``dtype="string"``.
+The array backing this string dtype may initially be the current implementation:
+an ``object`` -dtype NumPy array of Python strings.
+
+To solve the second issue (performance), we'll explore alternative in-memory
+array libraries (for example, Apache Arrow). As part of the work, we may
+need to implement certain operations expected by pandas users (for example
+the algorithm used in, ``Series.str.upper``). That work may be done outside of
+pandas.
+
+Apache Arrow interoperability
+-----------------------------
+
+`Apache Arrow <https://arrow.apache.org>`__ is a cross-language development
+platform for in-memory data. The Arrow logical types are closely aligned with
+typical pandas use cases.
+
+We'd like to provide better-integrated support for Arrow memory and data types
+within pandas. This will let us take advantage of its I/O capabilities and
+provide for better interoperability with other languages and libraries
+using Arrow.
+
+Block manager rewrite
+---------------------
+
+We'd like to replace pandas' current internal data structures (a collection of
+1 or 2-D arrays) with a simpler collection of 1-D arrays.
+
+Pandas' internal data model is quite complex. A DataFrame is made up of
+one or more 2-dimensional "blocks", with one or more blocks per dtype. This
+collection of 2-D arrays is managed by the BlockManager.
+
+The primary benefit of the BlockManager is improved performance on certain
+operations (construction from a 2D array, binary operations, reductions across the columns),
+especially for wide DataFrames. However, the BlockManager substantially increases the
+complexity and maintenance burden of pandas.
+
+By replacing the BlockManager we hope to achieve
+
+* Substantially simpler code
+* Easier extensibility with new logical types
+* Better user control over memory use and layout
+* Improved micro-performance
+* Option to provide a C / Cython API to pandas' internals
+
+See `these design documents <https://dev.pandas.io/pandas2/internal-architecture.html#removal-of-blockmanager-new-dataframe-internals>`__
+for more.
+
+Decoupling of indexing and internals
+------------------------------------
+
+The code for getting and setting values in pandas' data structures needs refactoring.
+In particular, we must clearly separate code that converts keys (e.g., the argument
+to ``DataFrame.loc``) to positions from code that uses these positions to get
+or set values. This is related to the proposed BlockManager rewrite. Currently, the
+BlockManager sometimes uses label-based, rather than position-based, indexing.
+We propose that it should only work with positional indexing, and the translation of keys
+to positions should be entirely done at a higher level.
+
+Indexing is a complicated API with many subtleties. This refactor will require care
+and attention. More details are discussed at
+https://github.com/pandas-dev/pandas/wiki/(Tentative)-rules-for-restructuring-indexing-code
+
+Numba-accelerated operations
+----------------------------
+
+`Numba <https://numba.pydata.org>`__ is a JIT compiler for Python code. We'd like to provide
+ways for users to apply their own Numba-jitted functions where pandas accepts user-defined functions
+(for example, :meth:`Series.apply`, :meth:`DataFrame.apply`, :meth:`DataFrame.applymap`,
+and in groupby and window contexts). This will improve the performance of
+user-defined-functions in these operations by staying within compiled code.
+
+
+Documentation improvements
+--------------------------
+
+We'd like to improve the content, structure, and presentation of the pandas documentation.
+Some specific goals include
+
+* Overhaul the HTML theme with a modern, responsive design (:issue:`15556`)
+* Improve the "Getting Started" documentation, designing and writing learning paths
+  for users of different backgrounds (e.g. brand new to programming, familiar with
+ other languages like R, already familiar with Python).
+* Improve the overall organization of the documentation and specific subsections
+ of the documentation to make navigation and finding content easier.
+
+Package docstring validation
+----------------------------
+
+To improve the quality and consistency of pandas docstrings, we've developed
+tooling to check docstrings in a variety of ways.
+https://github.com/pandas-dev/pandas/blob/master/scripts/validate_docstrings.py
+contains the checks.
+
+Like many other projects, pandas uses the
+`numpydoc <https://numpydoc.readthedocs.io/en/latest/>`__ style for writing
+docstrings. With the collaboration of the numpydoc maintainers, we'd like to
+move the checks to a package other than pandas so that other projects can easily
+use them as well.
+
+Performance monitoring
+----------------------
+
+Pandas uses `airspeed velocity <https://asv.readthedocs.io/en/stable/>`__ to
+monitor for performance regressions. ASV itself is a fabulous tool, but requires
+some additional work to be integrated into an open source project's workflow.
+
+The `asv-runner <https://github.com/asv-runner>`__ organization, currently made up
+of pandas maintainers, provides tools built on top of ASV. We have a physical
+machine for running a number of project's benchmarks, and tools managing the
+benchmark runs and reporting on results.
+
+We'd like to fund improvements and maintenance of these tools to
+
+* Be more stable. Currently, they're maintained on the nights and weekends when
+ a maintainer has free time.
+* Tune the system for benchmarks to improve stability, following
+ https://pyperf.readthedocs.io/en/latest/system.html
+* Build a GitHub bot to request ASV runs *before* a PR is merged. Currently, the
+ benchmarks are only run nightly.
+
+.. _roadmap.evolution:
+
+Roadmap Evolution
+-----------------
+
+Pandas continues to evolve. The direction is primarily determined by community
+interest. Everyone is welcome to review existing items on the roadmap and
+to propose a new item.
+
+Each item on the roadmap should be a short summary of a larger design proposal.
+The proposal should include
+
+1. Short summary of the changes, which would be appropriate for inclusion in
+ the roadmap if accepted.
+2. Motivation for the changes.
+3. An explanation of why the change is in scope for pandas.
+4. Detailed design: Preferably with example-usage (even if not implemented yet)
+ and API documentation
+5. API Change: Any API changes that may result from the proposal.
+
+That proposal may then be submitted as a GitHub issue, where the pandas maintainers
+can review and comment on the design. The `pandas mailing list <https://mail.python.org/mailman/listinfo/pandas-dev>`__
+should be notified of the proposal.
+
+When there's agreement that an implementation
+would be welcome, the roadmap should be updated to include the summary and a
+link to the discussion issue.
| This PR adds a roadmap document. This is useful when pursuing funding; we can point to a list of known items that we'd like to do if we had the person time (and funding) to tackle them.
Let's have two discussions
1. Do we want this? Roadmaps tend to go stale. How can we keep this up to date?
2. If so, what items should go on it? I've mostly picked from https://docs.google.com/document/d/151ct8jcZWwh7XStptjbLsda6h2b3C0IuiH_hfZnUA58/edit#heading=h.qm48l6dargmd plus a couple pet peeves of mine.
cc @pandas-dev/pandas-core | https://api.github.com/repos/pandas-dev/pandas/pulls/27478 | 2019-07-19T15:10:58Z | 2019-08-01T18:04:12Z | 2019-08-01T18:04:12Z | 2019-08-01T20:50:46Z |
DOC: Fix typos in docstrings for functions referring. | diff --git a/pandas/plotting/_misc.py b/pandas/plotting/_misc.py
index efe88d6b19b10..1cba0e7354182 100644
--- a/pandas/plotting/_misc.py
+++ b/pandas/plotting/_misc.py
@@ -46,7 +46,7 @@ def register(explicit=True):
See Also
--------
- deregister_matplotlib_converter
+ deregister_matplotlib_converters
"""
plot_backend = _get_plot_backend("matplotlib")
plot_backend.register(explicit=explicit)
@@ -65,7 +65,7 @@ def deregister():
See Also
--------
- deregister_matplotlib_converters
+ register_matplotlib_converters
"""
plot_backend = _get_plot_backend("matplotlib")
plot_backend.deregister()
| Fixed two typos in the ``See Also`` cross-references of these docstrings. | https://api.github.com/repos/pandas-dev/pandas/pulls/27474 | 2019-07-19T09:09:50Z | 2019-07-19T11:55:20Z | 2019-07-19T11:55:20Z | 2019-07-19T11:55:20Z |
Groupby transform cleanups | diff --git a/doc/source/whatsnew/v1.0.0.rst b/doc/source/whatsnew/v1.0.0.rst
index fa7b945492d5d..c352a36bf6de1 100644
--- a/doc/source/whatsnew/v1.0.0.rst
+++ b/doc/source/whatsnew/v1.0.0.rst
@@ -39,7 +39,7 @@ Backwards incompatible API changes
.. _whatsnew_1000.api.other:
--
+- :meth:`pandas.core.groupby.GroupBy.transform` now raises on invalid operation names (:issue:`27489`).
-
Other API changes
diff --git a/pandas/core/base.py b/pandas/core/base.py
index a2691f66592e9..89a3d9cfea5ab 100644
--- a/pandas/core/base.py
+++ b/pandas/core/base.py
@@ -4,6 +4,7 @@
import builtins
from collections import OrderedDict
import textwrap
+from typing import Optional
import warnings
import numpy as np
@@ -566,7 +567,7 @@ def is_any_frame():
else:
result = None
- f = self._is_cython_func(arg)
+ f = self._get_cython_func(arg)
if f and not args and not kwargs:
return getattr(self, f)(), None
@@ -653,7 +654,7 @@ def _shallow_copy(self, obj=None, obj_type=None, **kwargs):
kwargs[attr] = getattr(self, attr)
return obj_type(obj, **kwargs)
- def _is_cython_func(self, arg):
+ def _get_cython_func(self, arg: str) -> Optional[str]:
"""
if we define an internal function for this argument, return it
"""
diff --git a/pandas/core/groupby/base.py b/pandas/core/groupby/base.py
index 5c4f1fa3fbddf..fc3bb69afd0cb 100644
--- a/pandas/core/groupby/base.py
+++ b/pandas/core/groupby/base.py
@@ -98,6 +98,103 @@ def _gotitem(self, key, ndim, subset=None):
dataframe_apply_whitelist = common_apply_whitelist | frozenset(["dtypes", "corrwith"])
-cython_transforms = frozenset(["cumprod", "cumsum", "shift", "cummin", "cummax"])
+# cythonized transformations or canned "agg+broadcast", which do not
+# require postprocessing of the result by transform.
+cythonized_kernels = frozenset(["cumprod", "cumsum", "shift", "cummin", "cummax"])
cython_cast_blacklist = frozenset(["rank", "count", "size", "idxmin", "idxmax"])
+
+# List of aggregation/reduction functions.
+# These map each group to a single numeric value
+reduction_kernels = frozenset(
+ [
+ "all",
+ "any",
+ "count",
+ "first",
+ "idxmax",
+ "idxmin",
+ "last",
+ "mad",
+ "max",
+ "mean",
+ "median",
+ "min",
+ "ngroup",
+ "nth",
+ "nunique",
+ "prod",
+ # as long as `quantile`'s signature accepts only
+ # a single quantile value, it's a reduction.
+ # GH#27526 might change that.
+ "quantile",
+ "sem",
+ "size",
+ "skew",
+ "std",
+ "sum",
+ "var",
+ ]
+)
+
+# List of transformation functions.
+# a transformation is a function that, for each group,
+# produces a result that has the same shape as the group.
+transformation_kernels = frozenset(
+ [
+ "backfill",
+ "bfill",
+ "corrwith",
+ "cumcount",
+ "cummax",
+ "cummin",
+ "cumprod",
+ "cumsum",
+ "diff",
+ "ffill",
+ "fillna",
+ "pad",
+ "pct_change",
+ "rank",
+ "shift",
+ "tshift",
+ ]
+)
+
+# these are all the public methods on Grouper which don't belong
+# in either of the above lists
+groupby_other_methods = frozenset(
+ [
+ "agg",
+ "aggregate",
+ "apply",
+ "boxplot",
+ # corr and cov return ngroups*ncolumns rows, so they
+ # are neither a transformation nor a reduction
+ "corr",
+ "cov",
+ "describe",
+ "dtypes",
+ "expanding",
+ "filter",
+ "get_group",
+ "groups",
+ "head",
+ "hist",
+ "indices",
+ "ndim",
+ "ngroups",
+ "ohlc",
+ "pipe",
+ "plot",
+ "resample",
+ "rolling",
+ "tail",
+ "take",
+ "transform",
+ ]
+)
+# Valid values of `name` for `groupby.transform(name)`
+# NOTE: do NOT edit this directly. New additions should be inserted
+# into the appropriate list above.
+transform_kernel_whitelist = reduction_kernels | transformation_kernels
diff --git a/pandas/core/groupby/generic.py b/pandas/core/groupby/generic.py
index b886b7e305ed0..1fef65349976b 100644
--- a/pandas/core/groupby/generic.py
+++ b/pandas/core/groupby/generic.py
@@ -573,13 +573,19 @@ def _transform_general(self, func, *args, **kwargs):
def transform(self, func, *args, **kwargs):
# optimized transforms
- func = self._is_cython_func(func) or func
+ func = self._get_cython_func(func) or func
+
if isinstance(func, str):
- if func in base.cython_transforms:
- # cythonized transform
+ if not (func in base.transform_kernel_whitelist):
+ msg = "'{func}' is not a valid function name for transform(name)"
+ raise ValueError(msg.format(func=func))
+ if func in base.cythonized_kernels:
+ # cythonized transformation or canned "reduction+broadcast"
return getattr(self, func)(*args, **kwargs)
else:
- # cythonized aggregation and merge
+ # If func is a reduction, we need to broadcast the
+ # result to the whole group. Compute func result
+ # and deal with possible broadcasting below.
result = getattr(self, func)(*args, **kwargs)
else:
return self._transform_general(func, *args, **kwargs)
@@ -590,7 +596,7 @@ def transform(self, func, *args, **kwargs):
obj = self._obj_with_exclusions
- # nuiscance columns
+ # nuisance columns
if not result.columns.equals(obj.columns):
return self._transform_general(func, *args, **kwargs)
@@ -853,7 +859,7 @@ def aggregate(self, func_or_funcs=None, *args, **kwargs):
if relabeling:
ret.columns = columns
else:
- cyfunc = self._is_cython_func(func_or_funcs)
+ cyfunc = self._get_cython_func(func_or_funcs)
if cyfunc and not args and not kwargs:
return getattr(self, cyfunc)()
@@ -1005,15 +1011,19 @@ def _aggregate_named(self, func, *args, **kwargs):
@Substitution(klass="Series", selected="A.")
@Appender(_transform_template)
def transform(self, func, *args, **kwargs):
- func = self._is_cython_func(func) or func
+ func = self._get_cython_func(func) or func
- # if string function
if isinstance(func, str):
- if func in base.cython_transforms:
- # cythonized transform
+ if not (func in base.transform_kernel_whitelist):
+ msg = "'{func}' is not a valid function name for transform(name)"
+ raise ValueError(msg.format(func=func))
+ if func in base.cythonized_kernels:
+ # cythonized transform or canned "agg+broadcast"
return getattr(self, func)(*args, **kwargs)
else:
- # cythonized aggregation and merge
+ # If func is a reduction, we need to broadcast the
+ # result to the whole group. Compute func result
+ # and deal with possible broadcasting below.
return self._transform_fast(
lambda: getattr(self, func)(*args, **kwargs), func
)
diff --git a/pandas/core/groupby/groupby.py b/pandas/core/groupby/groupby.py
index 9aba9723e0546..3d4dbd3f8d887 100644
--- a/pandas/core/groupby/groupby.py
+++ b/pandas/core/groupby/groupby.py
@@ -261,7 +261,7 @@ class providing the base-class of operations.
* f must return a value that either has the same shape as the input
subframe or can be broadcast to the shape of the input subframe.
- For example, f returns a scalar it will be broadcast to have the
+ For example, if `f` returns a scalar it will be broadcast to have the
same shape as the input subframe.
* if this is a DataFrame, f must support application column-by-column
in the subframe. If f also supports application to the entire subframe,
diff --git a/pandas/core/resample.py b/pandas/core/resample.py
index fdf7cbd68d8cb..66878c3b1026c 100644
--- a/pandas/core/resample.py
+++ b/pandas/core/resample.py
@@ -1046,7 +1046,7 @@ def _downsample(self, how, **kwargs):
**kwargs : kw args passed to how function
"""
self._set_binner()
- how = self._is_cython_func(how) or how
+ how = self._get_cython_func(how) or how
ax = self.ax
obj = self._selected_obj
@@ -1194,7 +1194,7 @@ def _downsample(self, how, **kwargs):
if self.kind == "timestamp":
return super()._downsample(how, **kwargs)
- how = self._is_cython_func(how) or how
+ how = self._get_cython_func(how) or how
ax = self.ax
if is_subperiod(ax.freq, self.freq):
diff --git a/pandas/tests/groupby/conftest.py b/pandas/tests/groupby/conftest.py
index bdf93756b7559..72e60c5099304 100644
--- a/pandas/tests/groupby/conftest.py
+++ b/pandas/tests/groupby/conftest.py
@@ -2,6 +2,7 @@
import pytest
from pandas import DataFrame, MultiIndex
+from pandas.core.groupby.base import reduction_kernels
from pandas.util import testing as tm
@@ -102,3 +103,10 @@ def three_group():
"F": np.random.randn(11),
}
)
+
+
+@pytest.fixture(params=sorted(reduction_kernels))
+def reduction_func(request):
+ """yields the string names of all groupby reduction functions, one at a time.
+ """
+ return request.param
diff --git a/pandas/tests/groupby/test_transform.py b/pandas/tests/groupby/test_transform.py
index 9a8b7cf18f2c0..d3972e6ba9008 100644
--- a/pandas/tests/groupby/test_transform.py
+++ b/pandas/tests/groupby/test_transform.py
@@ -1003,6 +1003,55 @@ def test_ffill_not_in_axis(func, key, val):
assert_frame_equal(result, expected)
+def test_transform_invalid_name_raises():
+ # GH#27486
+ df = DataFrame(dict(a=[0, 1, 1, 2]))
+ g = df.groupby(["a", "b", "b", "c"])
+ with pytest.raises(ValueError, match="not a valid function name"):
+ g.transform("some_arbitrary_name")
+
+ # method exists on the object, but is not a valid transformation/agg
+ assert hasattr(g, "aggregate") # make sure the method exists
+ with pytest.raises(ValueError, match="not a valid function name"):
+ g.transform("aggregate")
+
+ # Test SeriesGroupBy
+ g = df["a"].groupby(["a", "b", "b", "c"])
+ with pytest.raises(ValueError, match="not a valid function name"):
+ g.transform("some_arbitrary_name")
+
+
+@pytest.mark.parametrize(
+ "obj",
+ [
+ DataFrame(
+ dict(a=[0, 0, 0, 1, 1, 1], b=range(6)), index=["A", "B", "C", "D", "E", "F"]
+ ),
+ Series([0, 0, 0, 1, 1, 1], index=["A", "B", "C", "D", "E", "F"]),
+ ],
+)
+def test_transform_agg_by_name(reduction_func, obj):
+ func = reduction_func
+ g = obj.groupby(np.repeat([0, 1], 3))
+
+ if func == "ngroup": # GH#27468
+ pytest.xfail("TODO: g.transform('ngroup') doesn't work")
+ if func == "size": # GH#27469
+ pytest.xfail("TODO: g.transform('size') doesn't work")
+
+ args = {"nth": [0], "quantile": [0.5]}.get(func, [])
+
+ result = g.transform(func, *args)
+
+ # this is the *definition* of a transformation
+ tm.assert_index_equal(result.index, obj.index)
+ if hasattr(obj, "columns"):
+ tm.assert_index_equal(result.columns, obj.columns)
+
+ # verify that values were broadcasted across each group
+ assert len(set(DataFrame(result).iloc[-3:, -1])) == 1
+
+
def test_transform_lambda_with_datetimetz():
# GH 27496
df = DataFrame(
diff --git a/pandas/tests/groupby/test_whitelist.py b/pandas/tests/groupby/test_whitelist.py
index ee380c6108c38..05d745ccc0e8e 100644
--- a/pandas/tests/groupby/test_whitelist.py
+++ b/pandas/tests/groupby/test_whitelist.py
@@ -9,6 +9,11 @@
import pytest
from pandas import DataFrame, Index, MultiIndex, Series, date_range
+from pandas.core.groupby.base import (
+ groupby_other_methods,
+ reduction_kernels,
+ transformation_kernels,
+)
from pandas.util import testing as tm
AGG_FUNCTIONS = [
@@ -376,3 +381,49 @@ def test_groupby_selection_with_methods(df):
tm.assert_frame_equal(
g.filter(lambda x: len(x) == 3), g_exp.filter(lambda x: len(x) == 3)
)
+
+
+def test_all_methods_categorized(mframe):
+ grp = mframe.groupby(mframe.iloc[:, 0])
+ names = {_ for _ in dir(grp) if not _.startswith("_")} - set(mframe.columns)
+ new_names = set(names)
+ new_names -= reduction_kernels
+ new_names -= transformation_kernels
+ new_names -= groupby_other_methods
+
+ assert not (reduction_kernels & transformation_kernels)
+ assert not (reduction_kernels & groupby_other_methods)
+ assert not (transformation_kernels & groupby_other_methods)
+
+ # new public method?
+ if new_names:
+ msg = """
+There are uncategorized methods defined on the Grouper class:
+{names}.
+
+Was a new method recently added?
+
+Every public method on Grouper must appear in exactly one of the
+following three lists defined in pandas.core.groupby.base:
+- `reduction_kernels`
+- `transformation_kernels`
+- `groupby_other_methods`
+see the comments in pandas/core/groupby/base.py for guidance on
+how to fix this test.
+ """
+ raise AssertionError(msg.format(names=names))
+
+ # removed a public method?
+ all_categorized = reduction_kernels | transformation_kernels | groupby_other_methods
+ print(names)
+ print(all_categorized)
+ if not (names == all_categorized):
+ msg = """
+Some methods which are supposed to be on the Grouper class
+are missing:
+{names}.
+
+They're still defined in one of the lists that live in pandas/core/groupby/base.py.
+If you removed a method, you should update them
+"""
+ raise AssertionError(msg.format(names=all_categorized - names))
| - [x] whatsnew
Related #27389 (several issues there)
closes #27486 | https://api.github.com/repos/pandas-dev/pandas/pulls/27467 | 2019-07-19T04:31:40Z | 2019-07-25T22:09:45Z | 2019-07-25T22:09:45Z | 2019-07-25T23:45:09Z |
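The "reduction + broadcast" path that `transform` takes for names in `reduction_kernels` can be sketched without pandas at all. This toy `transform_reduce` (a hypothetical helper, not pandas API) computes the reduction once per group and broadcasts it back to the input's shape, which is the defining property the new test checks with `assert_index_equal`:

```python
def transform_reduce(values, keys, func):
    # Group values by key, reduce each group once, then broadcast the
    # per-group scalar back to every row so the output length matches
    # the input length -- the defining property of a transform.
    groups = {}
    for key, value in zip(keys, values):
        groups.setdefault(key, []).append(value)
    reduced = {key: func(vals) for key, vals in groups.items()}
    return [reduced[key] for key in keys]


out = transform_reduce(
    [0, 1, 2, 3, 4, 5],
    ["x", "x", "x", "y", "y", "y"],
    lambda vals: sum(vals) / len(vals),  # a "mean" reduction
)
# each group's mean is repeated across all three of its rows
```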
CLN: remove unused row_bool_subset | diff --git a/pandas/_libs/lib.pyx b/pandas/_libs/lib.pyx
index 27ee685acfde7..e8a1e8173e463 100644
--- a/pandas/_libs/lib.pyx
+++ b/pandas/_libs/lib.pyx
@@ -688,50 +688,6 @@ def generate_bins_dt64(ndarray[int64_t] values, const int64_t[:] binner,
return bins
-@cython.boundscheck(False)
-@cython.wraparound(False)
-def row_bool_subset(const float64_t[:, :] values,
- ndarray[uint8_t, cast=True] mask):
- cdef:
- Py_ssize_t i, j, n, k, pos = 0
- ndarray[float64_t, ndim=2] out
-
- n, k = (<object>values).shape
- assert (n == len(mask))
-
- out = np.empty((mask.sum(), k), dtype=np.float64)
-
- for i in range(n):
- if mask[i]:
- for j in range(k):
- out[pos, j] = values[i, j]
- pos += 1
-
- return out
-
-
-@cython.boundscheck(False)
-@cython.wraparound(False)
-def row_bool_subset_object(ndarray[object, ndim=2] values,
- ndarray[uint8_t, cast=True] mask):
- cdef:
- Py_ssize_t i, j, n, k, pos = 0
- ndarray[object, ndim=2] out
-
- n, k = (<object>values).shape
- assert (n == len(mask))
-
- out = np.empty((mask.sum(), k), dtype=object)
-
- for i in range(n):
- if mask[i]:
- for j in range(k):
- out[pos, j] = values[i, j]
- pos += 1
-
- return out
-
-
@cython.boundscheck(False)
@cython.wraparound(False)
def get_level_sorter(const int64_t[:] label,
diff --git a/pandas/core/groupby/ops.py b/pandas/core/groupby/ops.py
index e341a66bb7459..cc8aec4cc243b 100644
--- a/pandas/core/groupby/ops.py
+++ b/pandas/core/groupby/ops.py
@@ -20,7 +20,6 @@
ensure_float64,
ensure_int64,
ensure_int_or_float,
- ensure_object,
ensure_platform_int,
is_bool_dtype,
is_categorical_dtype,
@@ -567,15 +566,8 @@ def _cython_operation(self, kind, values, how, axis, min_count=-1, **kwargs):
result[mask] = np.nan
if kind == "aggregate" and self._filter_empty_groups and not counts.all():
- if result.ndim == 2:
- try:
- result = lib.row_bool_subset(result, (counts > 0).view(np.uint8))
- except ValueError:
- result = lib.row_bool_subset_object(
- ensure_object(result), (counts > 0).view(np.uint8)
- )
- else:
- result = result[counts > 0]
+ assert result.ndim != 2
+ result = result[counts > 0]
if vdim == 1 and arity == 1:
result = result[:, 0]
| @jreback can you confirm my hunch that the `result.ndim == 2` block in groupby.ops was for Panel?
Tracing back the blame, row_bool_subset was introduced [here](https://github.com/pandas-dev/pandas/commit/b15ae85e1f647821755d35bedd6143ad1e9c3678) in 2012 to close #152 and it started being called only for result.ndim == 2 [here](https://github.com/pandas-dev/pandas/commit/9bfdc3c037e926143e708463d76fcdae3fb709f2) | https://api.github.com/repos/pandas-dev/pandas/pulls/27466 | 2019-07-19T04:29:29Z | 2019-07-20T19:29:22Z | 2019-07-20T19:29:22Z | 2020-04-05T17:36:18Z |
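What the deleted Cython helper did is plain boolean row selection; with the 2-D (Panel-era) path gone, `result[counts > 0]` masking on a 1-D array suffices. A pure-Python stand-in for the removed behavior, for reference:

```python
def row_bool_subset(values, mask):
    # Keep only rows whose mask entry is truthy; the Cython original
    # additionally preallocated the output and required float64/object
    # 2-D arrays, none of which is needed after the Panel code's removal.
    assert len(values) == len(mask)
    return [row for row, keep in zip(values, mask) if keep]


kept = row_bool_subset(
    [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]],
    [True, False, True],
)
```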
CLN: remove unused parts of skiplist (most of it) | diff --git a/pandas/_libs/skiplist.pxd b/pandas/_libs/skiplist.pxd
index a273d2c445d18..e827223bbe0a7 100644
--- a/pandas/_libs/skiplist.pxd
+++ b/pandas/_libs/skiplist.pxd
@@ -1,7 +1,5 @@
# -*- coding: utf-8 -*-
-
-from cython cimport Py_ssize_t
-
+# See GH#27465 for reference on related-but-unused cython code
cdef extern from "src/skiplist.h":
ctypedef struct node_t:
@@ -24,22 +22,3 @@ cdef extern from "src/skiplist.h":
double skiplist_get(skiplist_t*, int, int*) nogil
int skiplist_insert(skiplist_t*, double) nogil
int skiplist_remove(skiplist_t*, double) nogil
-
-
-# Note: Node is declared here so that IndexableSkiplist can be exposed;
-# Node itself not intended to be exposed.
-cdef class Node:
- cdef public:
- double value
- list next
- list width
-
-
-cdef class IndexableSkiplist:
- cdef:
- Py_ssize_t size, maxlevels
- Node head
-
- cpdef get(self, Py_ssize_t i)
- cpdef insert(self, double value)
- cpdef remove(self, double value)
diff --git a/pandas/_libs/skiplist.pyx b/pandas/_libs/skiplist.pyx
index 2fdee72f9d588..eb750a478415a 100644
--- a/pandas/_libs/skiplist.pyx
+++ b/pandas/_libs/skiplist.pyx
@@ -5,144 +5,3 @@
# Link: http://code.activestate.com/recipes/576930/
# Cython version: Wes McKinney
-from random import random
-
-from libc.math cimport log
-
-import numpy as np
-
-
-# MSVC does not have log2!
-
-cdef double Log2(double x):
- return log(x) / log(2.)
-
-
-# TODO: optimize this, make less messy
-
-cdef class Node:
- # cdef public:
- # double value
- # list next
- # list width
-
- def __init__(self, double value, list next, list width):
- self.value = value
- self.next = next
- self.width = width
-
-
-# Singleton terminator node
-NIL = Node(np.inf, [], [])
-
-
-cdef class IndexableSkiplist:
- """
- Sorted collection supporting O(lg n) insertion, removal, and
- lookup by rank.
- """
- # cdef:
- # Py_ssize_t size, maxlevels
- # Node head
-
- def __init__(self, expected_size=100):
- self.size = 0
- self.maxlevels = int(1 + Log2(expected_size))
- self.head = Node(np.NaN, [NIL] * self.maxlevels, [1] * self.maxlevels)
-
- def __len__(self):
- return self.size
-
- def __getitem__(self, i):
- return self.get(i)
-
- cpdef get(self, Py_ssize_t i):
- cdef:
- Py_ssize_t level
- Node node
-
- node = self.head
- i += 1
-
- for level in range(self.maxlevels - 1, -1, -1):
- while node.width[level] <= i:
- i -= node.width[level]
- node = node.next[level]
-
- return node.value
-
- cpdef insert(self, double value):
- cdef:
- Py_ssize_t level, steps, d
- Node node, prevnode, newnode, next_at_level, tmp
- list chain, steps_at_level
-
- # find first node on each level where node.next[levels].value > value
- chain = [None] * self.maxlevels
- steps_at_level = [0] * self.maxlevels
- node = self.head
-
- for level in range(self.maxlevels - 1, -1, -1):
- next_at_level = node.next[level]
-
- while next_at_level.value <= value:
- steps_at_level[level] = (steps_at_level[level] +
- node.width[level])
- node = next_at_level
- next_at_level = node.next[level]
-
- chain[level] = node
-
- # insert a link to the newnode at each level
- d = min(self.maxlevels, 1 - int(Log2(random())))
- newnode = Node(value, [None] * d, [None] * d)
- steps = 0
-
- for level in range(d):
- prevnode = chain[level]
- newnode.next[level] = prevnode.next[level]
- prevnode.next[level] = newnode
- newnode.width[level] = (prevnode.width[level] - steps)
- prevnode.width[level] = steps + 1
- steps += steps_at_level[level]
-
- for level in range(d, self.maxlevels):
- (<Node>chain[level]).width[level] += 1
-
- self.size += 1
-
- cpdef remove(self, double value):
- cdef:
- Py_ssize_t level, d
- Node node, prevnode, tmpnode, next_at_level
- list chain
-
- # find first node on each level where node.next[levels].value >= value
- chain = [None] * self.maxlevels
- node = self.head
-
- for level in range(self.maxlevels - 1, -1, -1):
- next_at_level = node.next[level]
- while next_at_level.value < value:
- node = next_at_level
- next_at_level = node.next[level]
-
- chain[level] = node
-
- if value != (<Node>(<Node>(<Node>chain[0]).next)[0]).value:
- raise KeyError('Not Found')
-
- # remove one link at each level
- d = len((<Node>(<Node>(<Node>chain[0]).next)[0]).next)
-
- for level in range(d):
- prevnode = chain[level]
- tmpnode = prevnode.next[level]
- prevnode.width[level] += tmpnode.width[level] - 1
- prevnode.next[level] = tmpnode.next[level]
-
- for level in range(d, self.maxlevels):
- tmpnode = chain[level]
- tmpnode.width[level] -= 1
-
- self.size -= 1
As a follow-up I'll take a look at either removing the now-empty `.pyx` file or porting the C file to Cython. | https://api.github.com/repos/pandas-dev/pandas/pulls/27465 | 2019-07-19T04:13:50Z | 2019-09-11T02:02:07Z | 2019-09-11T02:02:07Z | 2019-09-11T15:36:20Z
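For reference, the removed `IndexableSkiplist` was a sorted collection supporting lookup by rank (`__getitem__`), `insert`, and `remove`; the C version in `src/skiplist.h` keeps providing this for rolling-window computations. A minimal pure-Python stand-in with the same interface, built on `bisect` (so O(n) updates rather than the skiplist's O(log n)), might look like:

```python
import bisect

class SortedBag:
    """Interface-compatible stand-in for the removed IndexableSkiplist.

    Uses a plain sorted list + bisect, trading the skiplist's
    O(log n) insert/remove for O(n) list shifting.
    """

    def __init__(self):
        self._data = []

    def __len__(self):
        return len(self._data)

    def __getitem__(self, i):
        # i-th smallest value, i.e. lookup by rank
        return self._data[i]

    def insert(self, value):
        bisect.insort(self._data, value)

    def remove(self, value):
        i = bisect.bisect_left(self._data, value)
        if i == len(self._data) or self._data[i] != value:
            raise KeyError('Not Found')
        del self._data[i]

s = SortedBag()
for v in (5.0, 1.0, 3.0):
    s.insert(v)
s.remove(3.0)
print(s[0], s[1])  # 1.0 5.0
```

This mirrors the semantics of the deleted Cython class (including the `KeyError('Not Found')` on removing an absent value) without the skiplist machinery.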
Remove .. versionadded:: 0.18 | diff --git a/doc/source/getting_started/basics.rst b/doc/source/getting_started/basics.rst
index bc3b7b4c70fd1..2edd242d8cad9 100644
--- a/doc/source/getting_started/basics.rst
+++ b/doc/source/getting_started/basics.rst
@@ -1422,8 +1422,6 @@ The :meth:`~DataFrame.rename` method also provides an ``inplace`` named
parameter that is by default ``False`` and copies the underlying data. Pass
``inplace=True`` to rename the data in place.
-.. versionadded:: 0.18.0
-
Finally, :meth:`~Series.rename` also accepts a scalar or list-like
for altering the ``Series.name`` attribute.
diff --git a/doc/source/getting_started/dsintro.rst b/doc/source/getting_started/dsintro.rst
index 2fb0b163642c5..9e18951fe3f4c 100644
--- a/doc/source/getting_started/dsintro.rst
+++ b/doc/source/getting_started/dsintro.rst
@@ -251,8 +251,6 @@ Series can also have a ``name`` attribute:
The Series ``name`` will be assigned automatically in many cases, in particular
when taking 1D slices of DataFrame as you will see below.
-.. versionadded:: 0.18.0
-
You can rename a Series with the :meth:`pandas.Series.rename` method.
.. ipython:: python
diff --git a/doc/source/user_guide/enhancingperf.rst b/doc/source/user_guide/enhancingperf.rst
index c15991fabfd3b..b0d08f23dbc82 100644
--- a/doc/source/user_guide/enhancingperf.rst
+++ b/doc/source/user_guide/enhancingperf.rst
@@ -601,8 +601,6 @@ This allows for *formulaic evaluation*. The assignment target can be a
new column name or an existing column name, and it must be a valid Python
identifier.
-.. versionadded:: 0.18.0
-
The ``inplace`` keyword determines whether this assignment will performed
on the original ``DataFrame`` or return a copy with the new column.
@@ -652,9 +650,7 @@ The equivalent in standard Python would be
df['a'] = 1
df
-.. versionadded:: 0.18.0
-
-The ``query`` method gained the ``inplace`` keyword which determines
+The ``query`` method has a ``inplace`` keyword which determines
whether the query modifies the original frame.
.. ipython:: python
diff --git a/doc/source/user_guide/groupby.rst b/doc/source/user_guide/groupby.rst
index 147f07e36efb8..141d1708d882d 100644
--- a/doc/source/user_guide/groupby.rst
+++ b/doc/source/user_guide/groupby.rst
@@ -827,13 +827,10 @@ and that the transformed data contains no NAs.
.. _groupby.transform.window_resample:
-New syntax to window and resample operations
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-.. versionadded:: 0.18.1
+Window and resample operations
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-Working with the resample, expanding or rolling operations on the groupby
-level used to require the application of helper functions. However,
-now it is possible to use ``resample()``, ``expanding()`` and
+It is possible to use ``resample()``, ``expanding()`` and
``rolling()`` as methods on groupbys.
The example below will apply the ``rolling()`` method on the samples of
diff --git a/doc/source/user_guide/indexing.rst b/doc/source/user_guide/indexing.rst
index 888266c3cfa55..2c27ec12f6923 100644
--- a/doc/source/user_guide/indexing.rst
+++ b/doc/source/user_guide/indexing.rst
@@ -67,8 +67,6 @@ of multi-axis indexing.
* A ``callable`` function with one argument (the calling Series or DataFrame) and
that returns valid output for indexing (one of the above).
- .. versionadded:: 0.18.1
-
See more at :ref:`Selection by Label <indexing.label>`.
* ``.iloc`` is primarily integer position based (from ``0`` to
@@ -538,8 +536,6 @@ A list of indexers where any element is out of bounds will raise an
Selection by callable
---------------------
-.. versionadded:: 0.18.1
-
``.loc``, ``.iloc``, and also ``[]`` indexing can accept a ``callable`` as indexer.
The ``callable`` must be a function with one argument (the calling Series or DataFrame) that returns valid output for indexing.
diff --git a/doc/source/user_guide/io.rst b/doc/source/user_guide/io.rst
index eac86dda31507..8b21e4f29d994 100644
--- a/doc/source/user_guide/io.rst
+++ b/doc/source/user_guide/io.rst
@@ -87,8 +87,6 @@ delim_whitespace : boolean, default False
If this option is set to ``True``, nothing should be passed in for the
``delimiter`` parameter.
- .. versionadded:: 0.18.1 support for the Python parser.
-
Column and index locations and names
++++++++++++++++++++++++++++++++++++
diff --git a/doc/source/user_guide/reshaping.rst b/doc/source/user_guide/reshaping.rst
index 0470a6c0c2f42..06ad8632712c2 100644
--- a/doc/source/user_guide/reshaping.rst
+++ b/doc/source/user_guide/reshaping.rst
@@ -254,8 +254,6 @@ values will be set to ``NaN``.
df3
df3.unstack()
-.. versionadded:: 0.18.0
-
Alternatively, unstack takes an optional ``fill_value`` argument, for specifying
the value of missing data.
@@ -630,8 +628,6 @@ the prefix separator. You can specify ``prefix`` and ``prefix_sep`` in 3 ways:
from_dict = pd.get_dummies(df, prefix={'B': 'from_B', 'A': 'from_A'})
from_dict
-.. versionadded:: 0.18.0
-
Sometimes it will be useful to only keep k-1 levels of a categorical
variable to avoid collinearity when feeding the result to statistical models.
You can switch to this mode by turn on ``drop_first``.
diff --git a/doc/source/user_guide/text.rst b/doc/source/user_guide/text.rst
index 4f1fcdeb62f14..3255202d09a48 100644
--- a/doc/source/user_guide/text.rst
+++ b/doc/source/user_guide/text.rst
@@ -560,8 +560,6 @@ For example if they are separated by a ``'|'``:
String ``Index`` also supports ``get_dummies`` which returns a ``MultiIndex``.
-.. versionadded:: 0.18.1
-
.. ipython:: python
idx = pd.Index(['a', 'a|b', np.nan, 'a|c'])
diff --git a/doc/source/user_guide/timeseries.rst b/doc/source/user_guide/timeseries.rst
index ce02059cd421f..5bdff441d9e1f 100644
--- a/doc/source/user_guide/timeseries.rst
+++ b/doc/source/user_guide/timeseries.rst
@@ -255,8 +255,6 @@ option, see the Python `datetime documentation`_.
Assembling datetime from multiple DataFrame columns
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-.. versionadded:: 0.18.1
-
You can also pass a ``DataFrame`` of integer or string columns to assemble into a ``Series`` of ``Timestamps``.
.. ipython:: python
@@ -1165,8 +1163,6 @@ following subsection.
Custom business hour
~~~~~~~~~~~~~~~~~~~~
-.. versionadded:: 0.18.1
-
The ``CustomBusinessHour`` is a mixture of ``BusinessHour`` and ``CustomBusinessDay`` which
allows you to specify arbitrary holidays. ``CustomBusinessHour`` works as the same
as ``BusinessHour`` except that it skips specified custom holidays.
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index c15f4ad8e1900..8fdfc960603c6 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -3087,8 +3087,6 @@ def query(self, expr, inplace=False, **kwargs):
See the documentation for :func:`eval` for complete details
on the keyword arguments accepted by :meth:`DataFrame.query`.
- .. versionadded:: 0.18.0
-
Returns
-------
DataFrame
@@ -5303,11 +5301,6 @@ def swaplevel(self, i=-2, j=-1, axis=0):
Returns
-------
DataFrame
-
- .. versionchanged:: 0.18.1
-
- The indexes ``i`` and ``j`` are now optional, and default to
- the two innermost levels of the index.
"""
result = self.copy()
@@ -8213,8 +8206,6 @@ def quantile(self, q=0.5, axis=0, numeric_only=True, interpolation="linear"):
* nearest: `i` or `j` whichever is nearest.
* midpoint: (`i` + `j`) / 2.
- .. versionadded:: 0.18.0
-
Returns
-------
Series or DataFrame
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index f28f58b070368..0229f36cedff8 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -997,11 +997,6 @@ def swaplevel(self, i=-2, j=-1, axis=0):
Returns
-------
swapped : same type as caller (new object)
-
- .. versionchanged:: 0.18.1
-
- The indexes ``i`` and ``j`` are now optional, and default to
- the two innermost levels of the index.
"""
axis = self._get_axis_number(axis)
result = self.copy()
@@ -9140,10 +9135,6 @@ def _where(
If `cond` is callable, it is computed on the %(klass)s and
should return boolean %(klass)s or array. The callable must
not change input %(klass)s (though pandas doesn't check it).
-
- .. versionadded:: 0.18.1
- A callable can be used as cond.
-
other : scalar, %(klass)s, or callable
Entries where `cond` is %(cond_rev)s are replaced with
corresponding value from `other`.
diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index 33de8e41b2f65..55bde6ad299ee 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -4904,11 +4904,6 @@ def isin(self, values, level=None):
----------
values : set or list-like
Sought values.
-
- .. versionadded:: 0.18.1
-
- Support for values as a set.
-
level : str or int, optional
Name or position of the index level to use (if the index is a
`MultiIndex`).
diff --git a/pandas/core/indexes/multi.py b/pandas/core/indexes/multi.py
index b673c119c0498..1a7d98b6f9c61 100644
--- a/pandas/core/indexes/multi.py
+++ b/pandas/core/indexes/multi.py
@@ -2197,11 +2197,6 @@ def swaplevel(self, i=-2, j=-1):
MultiIndex
A new MultiIndex.
- .. versionchanged:: 0.18.1
-
- The indexes ``i`` and ``j`` are now optional, and default to
- the two innermost levels of the index.
-
See Also
--------
Series.swaplevel : Swap levels i and j in a MultiIndex.
diff --git a/pandas/core/resample.py b/pandas/core/resample.py
index b4a3e6ed71bf4..8850cddc45b0d 100644
--- a/pandas/core/resample.py
+++ b/pandas/core/resample.py
@@ -781,8 +781,6 @@ def interpolate(
):
"""
Interpolate values according to different methods.
-
- .. versionadded:: 0.18.1
"""
result = self._upsample(None)
return result.interpolate(
@@ -1259,8 +1257,6 @@ def _upsample(self, method, limit=None, fill_value=None):
class PeriodIndexResamplerGroupby(_GroupByMixin, PeriodIndexResampler):
"""
Provides a resample of a groupby implementation.
-
- .. versionadded:: 0.18.1
"""
@property
diff --git a/pandas/core/reshape/pivot.py b/pandas/core/reshape/pivot.py
index 23bf89b2bc1ac..2bdef766a3434 100644
--- a/pandas/core/reshape/pivot.py
+++ b/pandas/core/reshape/pivot.py
@@ -480,8 +480,6 @@ def crosstab(
- If passed 'columns' will normalize over each column.
- If margins is `True`, will also normalize margin values.
- .. versionadded:: 0.18.1
-
Returns
-------
DataFrame
diff --git a/pandas/core/reshape/reshape.py b/pandas/core/reshape/reshape.py
index 540a06caec220..65f6c374d0293 100644
--- a/pandas/core/reshape/reshape.py
+++ b/pandas/core/reshape/reshape.py
@@ -781,9 +781,6 @@ def get_dummies(
drop_first : bool, default False
Whether to get k-1 dummies out of k categorical levels by removing the
first level.
-
- .. versionadded:: 0.18.0
-
dtype : dtype, default np.uint8
Data type for new columns. Only a single dtype is allowed.
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 59ea8c6bd6c5d..1f63d8d92f1c0 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -2350,8 +2350,6 @@ def quantile(self, q=0.5, interpolation="linear"):
q : float or array-like, default 0.5 (50% quantile)
0 <= q <= 1, the quantile(s) to compute.
interpolation : {'linear', 'lower', 'higher', 'midpoint', 'nearest'}
- .. versionadded:: 0.18.0
-
This optional parameter specifies the interpolation method to use,
when the desired quantile lies between two data points `i` and `j`:
@@ -3707,8 +3705,6 @@ def unstack(self, level=-1, fill_value=None):
fill_value : scalar value, default None
Value to use when replacing NaN values.
- .. versionadded:: 0.18.0
-
Returns
-------
DataFrame
diff --git a/pandas/core/strings.py b/pandas/core/strings.py
index 7c293ca4e50b0..b188bd23cdf32 100644
--- a/pandas/core/strings.py
+++ b/pandas/core/strings.py
@@ -837,8 +837,6 @@ def str_extract(arr, pat, flags=0, expand=True):
If False, return a Series/Index if there is one capture group
or DataFrame if there are multiple capture groups.
- .. versionadded:: 0.18.0
-
Returns
-------
DataFrame or Series or Index
diff --git a/pandas/core/tools/datetimes.py b/pandas/core/tools/datetimes.py
index e9d2c3f07bfae..20c4b9422459c 100644
--- a/pandas/core/tools/datetimes.py
+++ b/pandas/core/tools/datetimes.py
@@ -577,9 +577,6 @@ def to_datetime(
Parameters
----------
arg : integer, float, string, datetime, list, tuple, 1-d array, Series
-
- .. versionadded:: 0.18.1
-
or DataFrame/dict-like
errors : {'ignore', 'raise', 'coerce'}, default 'raise'
diff --git a/pandas/core/window.py b/pandas/core/window.py
index 86574208a3fc0..20d5453cc43a2 100644
--- a/pandas/core/window.py
+++ b/pandas/core/window.py
@@ -477,8 +477,6 @@ class Window(_Window):
"""
Provide rolling window calculations.
- .. versionadded:: 0.18.0
-
Parameters
----------
window : int, or offset
@@ -1984,8 +1982,6 @@ class Expanding(_Rolling_and_Expanding):
"""
Provide expanding transformations.
- .. versionadded:: 0.18.0
-
Parameters
----------
min_periods : int, default 1
@@ -2271,8 +2267,6 @@ class EWM(_Rolling):
r"""
Provide exponential weighted functions.
- .. versionadded:: 0.18.0
-
Parameters
----------
com : float, optional
diff --git a/pandas/io/formats/format.py b/pandas/io/formats/format.py
index 0e8ed7b25d665..b9f0e8a787c19 100644
--- a/pandas/io/formats/format.py
+++ b/pandas/io/formats/format.py
@@ -102,8 +102,6 @@
Display DataFrame dimensions (number of rows by number of columns).
decimal : str, default '.'
Character recognized as decimal separator, e.g. ',' in Europe.
-
- .. versionadded:: 0.18.0
"""
_VALID_JUSTIFY_PARAMETERS = (
diff --git a/pandas/io/formats/style.py b/pandas/io/formats/style.py
index 98349fe1e4792..b736b978c87a5 100644
--- a/pandas/io/formats/style.py
+++ b/pandas/io/formats/style.py
@@ -414,8 +414,6 @@ def format(self, formatter, subset=None):
"""
Format the text display value of cells.
- .. versionadded:: 0.18.0
-
Parameters
----------
formatter : str, callable, or dict
diff --git a/pandas/io/parsers.py b/pandas/io/parsers.py
index 300f17bd25432..20aafb26fb7ea 100755
--- a/pandas/io/parsers.py
+++ b/pandas/io/parsers.py
@@ -277,9 +277,6 @@
following extensions: '.gz', '.bz2', '.zip', or '.xz' (otherwise no
decompression). If using 'zip', the ZIP file must contain only one data
file to be read in. Set to None for no decompression.
-
- .. versionadded:: 0.18.1 support for 'zip' and 'xz' compression.
-
thousands : str, optional
Thousands separator.
decimal : str, default '.'
diff --git a/pandas/tseries/offsets.py b/pandas/tseries/offsets.py
index ac3e92c772517..07b9598f15902 100644
--- a/pandas/tseries/offsets.py
+++ b/pandas/tseries/offsets.py
@@ -1075,8 +1075,6 @@ def onOffset(self, dt):
class CustomBusinessHour(_CustomMixin, BusinessHourMixin, SingleConstructorOffset):
"""
DateOffset subclass representing possibly n custom business days.
-
- .. versionadded:: 0.18.1
"""
_prefix = "CBH"
Removed ``.. versionadded::`` / ``.. versionchanged::`` directives for 0.18.x and the related text blocks. | https://api.github.com/repos/pandas-dev/pandas/pulls/27463 | 2019-07-19T01:50:26Z | 2019-07-20T19:20:06Z | 2019-07-20T19:20:06Z | 2019-07-20T20:26:12Z
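A cleanup like this is easy to audit mechanically. A hypothetical helper (not part of the PR) that flags any remaining ``versionadded``/``versionchanged`` directives mentioning the old release in docstring or `.rst` text:

```python
import re

# Matches ".. versionadded:: 0.18", ".. versionchanged:: 0.18.1", etc.
DIRECTIVE = re.compile(r"\.\.\s+version(?:added|changed)::\s*0\.18(?:\.\d+)?")

text = """
interpolation : {'linear', 'lower'}
    .. versionadded:: 0.18.0
fill_value : scalar, default None
"""

matches = [m.group(0) for m in DIRECTIVE.finditer(text)]
print(matches)  # ['.. versionadded:: 0.18.0']
```

Running such a scan over `doc/source` and the docstrings would confirm no stragglers were left behind.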
CLN: avoid runtime imports | diff --git a/pandas/_libs/lib.pyx b/pandas/_libs/lib.pyx
index e8a1e8173e463..6b4f45fabc665 100644
--- a/pandas/_libs/lib.pyx
+++ b/pandas/_libs/lib.pyx
@@ -157,13 +157,13 @@ def is_scalar(val: object) -> bool:
return (cnp.PyArray_IsAnyScalar(val)
# PyArray_IsAnyScalar is always False for bytearrays on Py3
- or isinstance(val, (Fraction, Number))
- # We differ from numpy, which claims that None is not scalar;
- # see np.isscalar
- or val is None
or PyDate_Check(val)
or PyDelta_Check(val)
or PyTime_Check(val)
+ # We differ from numpy, which claims that None is not scalar;
+ # see np.isscalar
+ or val is None
+ or isinstance(val, (Fraction, Number))
or util.is_period_object(val)
or is_decimal(val)
or is_interval(val)
@@ -1192,7 +1192,9 @@ def infer_dtype(value: object, skipna: object=None) -> str:
# e.g. categoricals
try:
values = getattr(value, '_values', getattr(value, 'values', value))
- except:
+ except TypeError:
+ # This gets hit if we have an EA, since cython expects `values`
+ # to be an ndarray
value = _try_infer_map(value)
if value is not None:
return value
@@ -1208,8 +1210,6 @@ def infer_dtype(value: object, skipna: object=None) -> str:
construct_1d_object_array_from_listlike)
values = construct_1d_object_array_from_listlike(value)
- values = getattr(values, 'values', values)
-
# make contiguous
values = values.ravel()
diff --git a/pandas/compat/pickle_compat.py b/pandas/compat/pickle_compat.py
index 0934d8529fdf7..bca33513b0069 100644
--- a/pandas/compat/pickle_compat.py
+++ b/pandas/compat/pickle_compat.py
@@ -6,7 +6,6 @@
import pickle as pkl
import sys
-import pandas # noqa
from pandas import Index
diff --git a/pandas/core/algorithms.py b/pandas/core/algorithms.py
index 2c38e071d3d44..0cfa6df99ff97 100644
--- a/pandas/core/algorithms.py
+++ b/pandas/core/algorithms.py
@@ -1977,12 +1977,6 @@ def diff(arr, n, axis=0):
out_arr[res_indexer] = arr[res_indexer] - arr[lag_indexer]
if is_timedelta:
- from pandas import TimedeltaIndex
-
- out_arr = (
- TimedeltaIndex(out_arr.ravel().astype("int64"))
- .asi8.reshape(out_arr.shape)
- .astype("timedelta64[ns]")
- )
+ out_arr = out_arr.astype("int64").view("timedelta64[ns]")
return out_arr
diff --git a/pandas/core/arrays/array_.py b/pandas/core/arrays/array_.py
index 93ee570c1f971..314144db57712 100644
--- a/pandas/core/arrays/array_.py
+++ b/pandas/core/arrays/array_.py
@@ -212,7 +212,6 @@ def array(
"""
from pandas.core.arrays import (
period_array,
- ExtensionArray,
IntervalArray,
PandasArray,
DatetimeArray,
@@ -226,7 +225,7 @@ def array(
data = extract_array(data, extract_numpy=True)
- if dtype is None and isinstance(data, ExtensionArray):
+ if dtype is None and isinstance(data, ABCExtensionArray):
dtype = data.dtype
# this returns None for not-found dtypes.
diff --git a/pandas/core/arrays/categorical.py b/pandas/core/arrays/categorical.py
index df5cd12a479f0..0cbcbb1ce4ba4 100644
--- a/pandas/core/arrays/categorical.py
+++ b/pandas/core/arrays/categorical.py
@@ -6,7 +6,7 @@
from pandas._config import get_option
-from pandas._libs import algos as libalgos, lib
+from pandas._libs import algos as libalgos, hashtable as htable, lib
from pandas.compat.numpy import function as nv
from pandas.util._decorators import (
Appender,
@@ -50,7 +50,14 @@
from pandas.core import ops
from pandas.core.accessor import PandasDelegate, delegate_names
import pandas.core.algorithms as algorithms
-from pandas.core.algorithms import factorize, take, take_1d, unique1d
+from pandas.core.algorithms import (
+ _get_data_algo,
+ _hashtables,
+ factorize,
+ take,
+ take_1d,
+ unique1d,
+)
from pandas.core.base import NoNewAttributesMixin, PandasObject, _shared_docs
import pandas.core.common as com
from pandas.core.missing import interpolate_2d
@@ -1527,9 +1534,7 @@ def value_counts(self, dropna=True):
See Also
--------
Series.value_counts
-
"""
- from numpy import bincount
from pandas import Series, CategoricalIndex
code, cat = self._codes, self.categories
@@ -1538,9 +1543,9 @@ def value_counts(self, dropna=True):
if dropna or clean:
obs = code if clean else code[mask]
- count = bincount(obs, minlength=ncat or 0)
+ count = np.bincount(obs, minlength=ncat or 0)
else:
- count = bincount(np.where(mask, code, ncat))
+ count = np.bincount(np.where(mask, code, ncat))
ix = np.append(ix, -1)
ix = self._constructor(ix, dtype=self.dtype, fastpath=True)
@@ -2329,9 +2334,6 @@ def mode(self, dropna=True):
-------
modes : `Categorical` (sorted)
"""
-
- import pandas._libs.hashtable as htable
-
codes = self._codes
if dropna:
good = self._codes != -1
@@ -2671,8 +2673,6 @@ def _get_codes_for_values(values, categories):
"""
utility routine to turn values into codes given the specified categories
"""
- from pandas.core.algorithms import _get_data_algo, _hashtables
-
dtype_equal = is_dtype_equal(values.dtype, categories.dtype)
if dtype_equal:
@@ -2722,8 +2722,6 @@ def _recode_for_categories(codes, old_categories, new_categories):
>>> _recode_for_categories(codes, old_cat, new_cat)
array([ 1, 0, 0, -1])
"""
- from pandas.core.algorithms import take_1d
-
if len(old_categories) == 0:
# All null anyway, so just retain the nulls
return codes.copy()
diff --git a/pandas/core/arrays/datetimelike.py b/pandas/core/arrays/datetimelike.py
index df17388856117..98a745582e11b 100644
--- a/pandas/core/arrays/datetimelike.py
+++ b/pandas/core/arrays/datetimelike.py
@@ -34,7 +34,12 @@
is_unsigned_integer_dtype,
pandas_dtype,
)
-from pandas.core.dtypes.generic import ABCDataFrame, ABCIndexClass, ABCSeries
+from pandas.core.dtypes.generic import (
+ ABCDataFrame,
+ ABCIndexClass,
+ ABCPeriodArray,
+ ABCSeries,
+)
from pandas.core.dtypes.inference import is_array_like
from pandas.core.dtypes.missing import is_valid_nat_for_dtype, isna
@@ -1664,11 +1669,10 @@ def _ensure_datetimelike_to_i8(other, to_utc=False):
i8 1d array
"""
from pandas import Index
- from pandas.core.arrays import PeriodArray
if lib.is_scalar(other) and isna(other):
return iNaT
- elif isinstance(other, (PeriodArray, ABCIndexClass, DatetimeLikeArrayMixin)):
+ elif isinstance(other, (ABCPeriodArray, ABCIndexClass, DatetimeLikeArrayMixin)):
# convert tz if needed
if getattr(other, "tz", None) is not None:
if to_utc:
diff --git a/pandas/core/arrays/integer.py b/pandas/core/arrays/integer.py
index 867122964fe59..62b1a8a184946 100644
--- a/pandas/core/arrays/integer.py
+++ b/pandas/core/arrays/integer.py
@@ -25,6 +25,7 @@
from pandas.core.dtypes.missing import isna, notna
from pandas.core import nanops, ops
+from pandas.core.algorithms import take
from pandas.core.arrays import ExtensionArray, ExtensionOpsMixin
from pandas.core.tools.numeric import to_numeric
@@ -420,8 +421,6 @@ def __iter__(self):
yield self._data[i]
def take(self, indexer, allow_fill=False, fill_value=None):
- from pandas.api.extensions import take
-
# we always fill with 1 internally
# to avoid upcasting
data_fill_value = 1 if isna(fill_value) else fill_value
diff --git a/pandas/core/arrays/interval.py b/pandas/core/arrays/interval.py
index a0319fe96896a..7f1aad3ba3261 100644
--- a/pandas/core/arrays/interval.py
+++ b/pandas/core/arrays/interval.py
@@ -25,6 +25,7 @@
from pandas.core.dtypes.dtypes import IntervalDtype
from pandas.core.dtypes.generic import (
ABCDatetimeIndex,
+ ABCIndexClass,
ABCInterval,
ABCIntervalIndex,
ABCPeriodIndex,
@@ -35,7 +36,7 @@
from pandas.core.arrays.base import ExtensionArray, _extension_array_shared_docs
from pandas.core.arrays.categorical import Categorical
import pandas.core.common as com
-from pandas.core.indexes.base import Index, ensure_index
+from pandas.core.indexes.base import ensure_index
_VALID_CLOSED = {"left", "right", "both", "neither"}
_interval_shared_docs = {}
@@ -510,7 +511,7 @@ def __getitem__(self, value):
right = self.right[value]
# scalar
- if not isinstance(left, Index):
+ if not isinstance(left, ABCIndexClass):
if isna(left):
return self._fill_value
return Interval(left, right, self.closed)
diff --git a/pandas/core/arrays/numpy_.py b/pandas/core/arrays/numpy_.py
index 9f428a4ac10b2..77c9a3bc98690 100644
--- a/pandas/core/arrays/numpy_.py
+++ b/pandas/core/arrays/numpy_.py
@@ -11,10 +11,11 @@
from pandas.core.dtypes.dtypes import ExtensionDtype
from pandas.core.dtypes.generic import ABCIndexClass, ABCSeries
from pandas.core.dtypes.inference import is_array_like, is_list_like
+from pandas.core.dtypes.missing import isna
from pandas import compat
from pandas.core import nanops
-from pandas.core.algorithms import searchsorted
+from pandas.core.algorithms import searchsorted, take, unique
from pandas.core.missing import backfill_1d, pad_1d
from .base import ExtensionArray, ExtensionOpsMixin
@@ -249,8 +250,6 @@ def nbytes(self):
return self._ndarray.nbytes
def isna(self):
- from pandas import isna
-
return isna(self._ndarray)
def fillna(self, value=None, method=None, limit=None):
@@ -281,8 +280,6 @@ def fillna(self, value=None, method=None, limit=None):
return new_values
def take(self, indices, allow_fill=False, fill_value=None):
- from pandas.core.algorithms import take
-
result = take(
self._ndarray, indices, allow_fill=allow_fill, fill_value=fill_value
)
@@ -298,8 +295,6 @@ def _values_for_factorize(self):
return self._ndarray, -1
def unique(self):
- from pandas import unique
-
return type(self)(unique(self._ndarray))
# ------------------------------------------------------------------------
diff --git a/pandas/core/arrays/sparse.py b/pandas/core/arrays/sparse.py
index 65976021f5053..9376b49112f6f 100644
--- a/pandas/core/arrays/sparse.py
+++ b/pandas/core/arrays/sparse.py
@@ -105,8 +105,6 @@ class SparseDtype(ExtensionDtype):
_metadata = ("_dtype", "_fill_value", "_is_na_fill_value")
def __init__(self, dtype: Dtype = np.float64, fill_value: Any = None) -> None:
- from pandas.core.dtypes.missing import na_value_for_dtype
- from pandas.core.dtypes.common import pandas_dtype, is_string_dtype, is_scalar
if isinstance(dtype, type(self)):
if fill_value is None:
@@ -178,20 +176,14 @@ def fill_value(self):
@property
def _is_na_fill_value(self):
- from pandas.core.dtypes.missing import isna
-
return isna(self.fill_value)
@property
def _is_numeric(self):
- from pandas.core.dtypes.common import is_object_dtype
-
return not is_object_dtype(self.subtype)
@property
def _is_boolean(self):
- from pandas.core.dtypes.common import is_bool_dtype
-
return is_bool_dtype(self.subtype)
@property
@@ -928,8 +920,6 @@ def values(self):
return self.to_dense()
def isna(self):
- from pandas import isna
-
# If null fill value, we want SparseDtype[bool, true]
# to preserve the same memory usage.
dtype = SparseDtype(bool, self._null_fill_value)
diff --git a/pandas/core/base.py b/pandas/core/base.py
index 9480e2e425f79..f7b3fe723c28c 100644
--- a/pandas/core/base.py
+++ b/pandas/core/base.py
@@ -32,6 +32,7 @@
from pandas.core import algorithms, common as com
from pandas.core.accessor import DirNamesMixin
+from pandas.core.algorithms import duplicated, unique1d, value_counts
from pandas.core.arrays import ExtensionArray
import pandas.core.nanops as nanops
@@ -1381,8 +1382,6 @@ def value_counts(
1.0 1
dtype: int64
"""
- from pandas.core.algorithms import value_counts
-
result = value_counts(
self,
sort=sort,
@@ -1400,8 +1399,6 @@ def unique(self):
result = values.unique()
else:
- from pandas.core.algorithms import unique1d
-
result = unique1d(values)
return result
@@ -1631,8 +1628,6 @@ def drop_duplicates(self, keep="first", inplace=False):
return result
def duplicated(self, keep="first"):
- from pandas.core.algorithms import duplicated
-
if isinstance(self, ABCIndexClass):
if self.is_unique:
return np.zeros(len(self), dtype=np.bool)
diff --git a/pandas/core/common.py b/pandas/core/common.py
index d2dd0d03d9425..f9a19291b8ad9 100644
--- a/pandas/core/common.py
+++ b/pandas/core/common.py
@@ -254,7 +254,6 @@ def asarray_tuplesafe(values, dtype=None):
if result.ndim == 2:
# Avoid building an array of arrays:
- # TODO: verify whether any path hits this except #18819 (invalid)
values = [tuple(x) for x in values]
result = construct_1d_object_array_from_listlike(values)
diff --git a/pandas/core/dtypes/cast.py b/pandas/core/dtypes/cast.py
index 44a3fefb1689a..220a02f2ca35b 100644
--- a/pandas/core/dtypes/cast.py
+++ b/pandas/core/dtypes/cast.py
@@ -1368,7 +1368,7 @@ def maybe_cast_to_integer_array(arr, dtype, copy=False):
arr = np.asarray(arr)
if is_unsigned_integer_dtype(dtype) and (arr < 0).any():
- raise OverflowError("Trying to coerce negative values " "to unsigned integers")
+ raise OverflowError("Trying to coerce negative values to unsigned integers")
if is_integer_dtype(dtype) and (is_float_dtype(arr) or is_object_dtype(arr)):
raise ValueError("Trying to coerce float values to integers")
diff --git a/pandas/core/dtypes/concat.py b/pandas/core/dtypes/concat.py
index ac74ad5726a99..ee5aa88cf2907 100644
--- a/pandas/core/dtypes/concat.py
+++ b/pandas/core/dtypes/concat.py
@@ -361,9 +361,7 @@ def _maybe_unwrap(x):
new_codes = np.concatenate(codes)
if sort_categories and not ignore_order and ordered:
- raise TypeError(
- "Cannot use sort_categories=True with " "ordered Categoricals"
- )
+ raise TypeError("Cannot use sort_categories=True with ordered Categoricals")
if sort_categories and not categories.is_monotonic_increasing:
categories = categories.sort_values()
@@ -386,7 +384,7 @@ def _maybe_unwrap(x):
else:
# ordered - to show a proper error message
if all(c.ordered for c in to_union):
- msg = "to union ordered Categoricals, " "all categories must be the same"
+ msg = "to union ordered Categoricals, all categories must be the same"
raise TypeError(msg)
else:
raise TypeError("Categorical.ordered must be the same")
diff --git a/pandas/core/dtypes/dtypes.py b/pandas/core/dtypes/dtypes.py
index 6728d048efb79..bba551bd30a2d 100644
--- a/pandas/core/dtypes/dtypes.py
+++ b/pandas/core/dtypes/dtypes.py
@@ -12,7 +12,7 @@
from pandas.core.dtypes.generic import ABCCategoricalIndex, ABCDateOffset, ABCIndexClass
from .base import ExtensionDtype
-from .inference import is_list_like
+from .inference import is_bool, is_list_like
str_type = str
@@ -149,7 +149,7 @@ def __repr__(self) -> str_type:
return str(self)
def __hash__(self) -> int:
- raise NotImplementedError("sub-classes should implement an __hash__ " "method")
+ raise NotImplementedError("sub-classes should implement an __hash__ method")
def __getstate__(self) -> Dict[str_type, Any]:
# pickle support; we don't want to pickle the cache
@@ -320,7 +320,7 @@ def _from_values_or_dtype(
raise ValueError(msg.format(dtype=dtype))
elif categories is not None or ordered is not None:
raise ValueError(
- "Cannot specify `categories` or `ordered` " "together with `dtype`."
+ "Cannot specify `categories` or `ordered` together with `dtype`."
)
elif is_categorical(values):
# If no "dtype" was passed, use the one from "values", but honor
@@ -490,8 +490,6 @@ def validate_ordered(ordered: OrderedType) -> None:
TypeError
If 'ordered' is not a boolean.
"""
- from pandas.core.dtypes.common import is_bool
-
if not is_bool(ordered):
raise TypeError("'ordered' must either be 'True' or 'False'")
diff --git a/pandas/core/dtypes/missing.py b/pandas/core/dtypes/missing.py
index bea73d72b91c9..6f599a6be6021 100644
--- a/pandas/core/dtypes/missing.py
+++ b/pandas/core/dtypes/missing.py
@@ -3,6 +3,8 @@
"""
import numpy as np
+from pandas._config import get_option
+
from pandas._libs import lib
import pandas._libs.missing as libmissing
from pandas._libs.tslibs import NaT, iNaT
@@ -203,8 +205,6 @@ def _use_inf_as_na(key):
* http://stackoverflow.com/questions/4859217/
programmatically-creating-variables-in-python/4859312#4859312
"""
- from pandas._config import get_option
-
flag = get_option(key)
if flag:
globals()["_isna"] = _isna_old
diff --git a/pandas/core/nanops.py b/pandas/core/nanops.py
index ce14cb22a88ce..d3dacee0468c6 100644
--- a/pandas/core/nanops.py
+++ b/pandas/core/nanops.py
@@ -33,8 +33,6 @@
from pandas.core.dtypes.dtypes import DatetimeTZDtype
from pandas.core.dtypes.missing import isna, na_value_for_dtype, notna
-import pandas.core.common as com
-
bn = import_optional_dependency("bottleneck", raise_on_missing=False, on_version="warn")
_BOTTLENECK_INSTALLED = bn is not None
_USE_BOTTLENECK = False
@@ -281,12 +279,12 @@ def _get_values(
mask = _maybe_get_mask(values, skipna, mask)
if is_datetime64tz_dtype(values):
- # com.values_from_object returns M8[ns] dtype instead of tz-aware,
+ # lib.values_from_object returns M8[ns] dtype instead of tz-aware,
# so this case must be handled separately from the rest
dtype = values.dtype
values = getattr(values, "_values", values)
else:
- values = com.values_from_object(values)
+ values = lib.values_from_object(values)
dtype = values.dtype
if is_datetime_or_timedelta_dtype(values) or is_datetime64tz_dtype(values):
@@ -742,7 +740,7 @@ def nanvar(values, axis=None, skipna=True, ddof=1, mask=None):
>>> nanops.nanvar(s)
1.0
"""
- values = com.values_from_object(values)
+ values = lib.values_from_object(values)
dtype = values.dtype
mask = _maybe_get_mask(values, skipna, mask)
if is_any_int_dtype(values):
@@ -943,7 +941,7 @@ def nanskew(values, axis=None, skipna=True, mask=None):
>>> nanops.nanskew(s)
1.7320508075688787
"""
- values = com.values_from_object(values)
+ values = lib.values_from_object(values)
mask = _maybe_get_mask(values, skipna, mask)
if not is_float_dtype(values.dtype):
values = values.astype("f8")
@@ -1022,7 +1020,7 @@ def nankurt(values, axis=None, skipna=True, mask=None):
>>> nanops.nankurt(s)
-1.2892561983471076
"""
- values = com.values_from_object(values)
+ values = lib.values_from_object(values)
mask = _maybe_get_mask(values, skipna, mask)
if not is_float_dtype(values.dtype):
values = values.astype("f8")
diff --git a/pandas/core/resample.py b/pandas/core/resample.py
index 8850cddc45b0d..edb05360a832c 100644
--- a/pandas/core/resample.py
+++ b/pandas/core/resample.py
@@ -7,7 +7,7 @@
import numpy as np
from pandas._libs import lib
-from pandas._libs.tslibs import NaT, Timestamp
+from pandas._libs.tslibs import NaT, Period, Timestamp
from pandas._libs.tslibs.frequencies import is_subperiod, is_superperiod
from pandas._libs.tslibs.period import IncompatibleFrequency
from pandas.compat.numpy import function as nv
@@ -16,7 +16,6 @@
from pandas.core.dtypes.generic import ABCDataFrame, ABCSeries
-import pandas as pd
import pandas.core.algorithms as algos
from pandas.core.generic import _shared_docs
from pandas.core.groupby.base import GroupByMixin
@@ -25,7 +24,7 @@
from pandas.core.groupby.grouper import Grouper
from pandas.core.groupby.ops import BinGrouper
from pandas.core.indexes.datetimes import DatetimeIndex, date_range
-from pandas.core.indexes.period import PeriodIndex
+from pandas.core.indexes.period import PeriodIndex, period_range
from pandas.core.indexes.timedeltas import TimedeltaIndex, timedelta_range
from pandas.tseries.frequencies import to_offset
@@ -138,7 +137,7 @@ def _typ(self):
"""
Masquerade for compat as a Series or a DataFrame.
"""
- if isinstance(self._selected_obj, pd.Series):
+ if isinstance(self._selected_obj, ABCSeries):
return "series"
return "dataframe"
@@ -858,7 +857,9 @@ def size(self):
# a copy of 0-len objects. GH14962
result = self._downsample("size")
if not len(self.ax) and isinstance(self._selected_obj, ABCDataFrame):
- result = pd.Series([], index=result.index, dtype="int64")
+ from pandas import Series
+
+ result = Series([], index=result.index, dtype="int64")
return result
def quantile(self, q=0.5, **kwargs):
@@ -1559,9 +1560,7 @@ def _get_time_period_bins(self, ax):
binner = labels = PeriodIndex(data=[], freq=freq, name=ax.name)
return binner, [], labels
- labels = binner = pd.period_range(
- start=ax[0], end=ax[-1], freq=freq, name=ax.name
- )
+ labels = binner = period_range(start=ax[0], end=ax[-1], freq=freq, name=ax.name)
end_stamps = (labels + freq).asfreq(freq, "s").to_timestamp()
if ax.tzinfo:
@@ -1604,11 +1603,11 @@ def _get_period_bins(self, ax):
)
# Get offset for bin edge (not label edge) adjustment
- start_offset = pd.Period(start, self.freq) - pd.Period(p_start, self.freq)
+ start_offset = Period(start, self.freq) - Period(p_start, self.freq)
bin_shift = start_offset.n % freq_mult
start = p_start
- labels = binner = pd.period_range(
+ labels = binner = period_range(
start=start, end=end, freq=self.freq, name=ax.name
)
@@ -1728,7 +1727,7 @@ def _get_period_range_edges(first, last, offset, closed="left", base=0):
-------
A tuple of length 2, containing the adjusted pd.Period objects.
"""
- if not all(isinstance(obj, pd.Period) for obj in [first, last]):
+ if not all(isinstance(obj, Period) for obj in [first, last]):
raise TypeError("'first' and 'last' must be instances of type Period")
# GH 23882
diff --git a/pandas/core/reshape/merge.py b/pandas/core/reshape/merge.py
index c1a07c129f7cd..3a2e6a8b7ff62 100644
--- a/pandas/core/reshape/merge.py
+++ b/pandas/core/reshape/merge.py
@@ -1292,8 +1292,6 @@ def _get_join_indexers(left_keys, right_keys, sort=False, how="inner", **kwargs)
indexers into the left_keys, right_keys
"""
- from functools import partial
-
assert len(left_keys) == len(
right_keys
), "left_key and right_keys must be the same length"
@@ -1767,7 +1765,6 @@ def flip(xs):
def _get_multiindex_indexer(join_keys, index, sort):
- from functools import partial
# bind `sort` argument
fkeys = partial(_factorize_keys, sort=sort)
diff --git a/pandas/core/reshape/reshape.py b/pandas/core/reshape/reshape.py
index 65f6c374d0293..8d5b521ef7799 100644
--- a/pandas/core/reshape/reshape.py
+++ b/pandas/core/reshape/reshape.py
@@ -852,7 +852,6 @@ def get_dummies(
2 0.0 0.0 1.0
"""
from pandas.core.reshape.concat import concat
- from itertools import cycle
dtypes_to_encode = ["object", "category"]
@@ -881,7 +880,7 @@ def check_len(item, name):
check_len(prefix_sep, "prefix_sep")
if isinstance(prefix, str):
- prefix = cycle([prefix])
+ prefix = itertools.cycle([prefix])
if isinstance(prefix, dict):
prefix = [prefix[col] for col in data_to_encode.columns]
@@ -890,7 +889,7 @@ def check_len(item, name):
# validate separators
if isinstance(prefix_sep, str):
- prefix_sep = cycle([prefix_sep])
+ prefix_sep = itertools.cycle([prefix_sep])
elif isinstance(prefix_sep, dict):
prefix_sep = [prefix_sep[col] for col in data_to_encode.columns]
diff --git a/pandas/core/strings.py b/pandas/core/strings.py
index b188bd23cdf32..22b7435b5d29d 100644
--- a/pandas/core/strings.py
+++ b/pandas/core/strings.py
@@ -763,7 +763,7 @@ def _str_extract_noexpand(arr, pat, flags=0):
Index.
"""
- from pandas import DataFrame, Index
+ from pandas import DataFrame
regex = re.compile(pat, flags=flags)
groups_or_na = _groups_or_na_fun(regex)
@@ -772,7 +772,7 @@ def _str_extract_noexpand(arr, pat, flags=0):
result = np.array([groups_or_na(val)[0] for val in arr], dtype=object)
name = _get_single_group_name(regex)
else:
- if isinstance(arr, Index):
+ if isinstance(arr, ABCIndexClass):
raise ValueError("only one regex group is supported with Index")
name = None
names = dict(zip(regex.groupindex.values(), regex.groupindex.keys()))
@@ -2001,7 +2001,7 @@ def _wrap_result(
# infer from ndim if expand is not specified
expand = result.ndim != 1
- elif expand is True and not isinstance(self._orig, Index):
+ elif expand is True and not isinstance(self._orig, ABCIndexClass):
# required when expand=True is explicitly specified
# not needed when inferred
@@ -2034,7 +2034,7 @@ def cons_row(x):
# Wait until we are sure result is a Series or Index before
# checking attributes (GH 12180)
- if isinstance(self._orig, Index):
+ if isinstance(self._orig, ABCIndexClass):
# if result is a boolean np.array, return the np.array
# instead of wrapping it into a boolean Index (GH 8875)
if is_bool_dtype(result):
@@ -2082,10 +2082,10 @@ def _get_series_list(self, others, ignore_index=False):
# Once str.cat defaults to alignment, this function can be simplified;
# will not need `ignore_index` and the second boolean output anymore
- from pandas import Index, Series, DataFrame
+ from pandas import Series, DataFrame
# self._orig is either Series or Index
- idx = self._orig if isinstance(self._orig, Index) else self._orig.index
+ idx = self._orig if isinstance(self._orig, ABCIndexClass) else self._orig.index
err_msg = (
"others must be Series, Index, DataFrame, np.ndarray or "
@@ -2097,14 +2097,14 @@ def _get_series_list(self, others, ignore_index=False):
# `idx` of the calling Series/Index - i.e. must have matching length.
# Objects with an index (i.e. Series/Index/DataFrame) keep their own
# index, *unless* ignore_index is set to True.
- if isinstance(others, Series):
+ if isinstance(others, ABCSeries):
warn = not others.index.equals(idx)
# only reconstruct Series when absolutely necessary
los = [
Series(others.values, index=idx) if ignore_index and warn else others
]
return (los, warn)
- elif isinstance(others, Index):
+ elif isinstance(others, ABCIndexClass):
warn = not others.equals(idx)
los = [Series(others.values, index=(idx if ignore_index else others))]
return (los, warn)
@@ -2137,12 +2137,14 @@ def _get_series_list(self, others, ignore_index=False):
# only allowing Series/Index/np.ndarray[1-dim] will greatly
# simply this function post-deprecation.
if not (
- isinstance(nxt, (Series, Index))
+ isinstance(nxt, (Series, ABCIndexClass))
or (isinstance(nxt, np.ndarray) and nxt.ndim == 1)
):
depr_warn = True
- if not isinstance(nxt, (DataFrame, Series, Index, np.ndarray)):
+ if not isinstance(
+ nxt, (DataFrame, Series, ABCIndexClass, np.ndarray)
+ ):
# safety for non-persistent list-likes (e.g. iterators)
# do not map indexed/typed objects; info needed below
nxt = list(nxt)
@@ -2150,7 +2152,7 @@ def _get_series_list(self, others, ignore_index=False):
# known types for which we can avoid deep inspection
no_deep = (
isinstance(nxt, np.ndarray) and nxt.ndim == 1
- ) or isinstance(nxt, (Series, Index))
+ ) or isinstance(nxt, (Series, ABCIndexClass))
# nested list-likes are forbidden:
# -> elements of nxt must not be list-like
is_legal = (no_deep and nxt.dtype == object) or all(
@@ -2323,7 +2325,7 @@ def cat(self, others=None, sep=None, na_rep=None, join=None):
if sep is None:
sep = ""
- if isinstance(self._orig, Index):
+ if isinstance(self._orig, ABCIndexClass):
data = Series(self._orig, index=self._orig)
else: # Series
data = self._orig
@@ -2409,7 +2411,7 @@ def cat(self, others=None, sep=None, na_rep=None, join=None):
# no NaNs - can just concatenate
result = cat_safe(all_cols, sep)
- if isinstance(self._orig, Index):
+ if isinstance(self._orig, ABCIndexClass):
# add dtype for case that result is all-NA
result = Index(result, dtype=object, name=self._orig.name)
else: # Series
diff --git a/pandas/tseries/offsets.py b/pandas/tseries/offsets.py
index 07b9598f15902..960fdd89c4147 100644
--- a/pandas/tseries/offsets.py
+++ b/pandas/tseries/offsets.py
@@ -9,6 +9,7 @@
from pandas._libs.tslibs import (
NaT,
OutOfBoundsDatetime,
+ Period,
Timedelta,
Timestamp,
ccalendar,
@@ -33,7 +34,6 @@
from pandas.errors import AbstractMethodError
from pandas.util._decorators import Appender, Substitution, cache_readonly
-from pandas.core.dtypes.generic import ABCPeriod
from pandas.core.dtypes.inference import is_list_like
from pandas.core.tools.datetimes import to_datetime
@@ -2537,7 +2537,7 @@ def __add__(self, other):
return type(self)(self.n + other.n)
else:
return _delta_to_tick(self.delta + other.delta)
- elif isinstance(other, ABCPeriod):
+ elif isinstance(other, Period):
return other + self
try:
return self.apply(other)
| Edits to lib.is_scalar are to put slower python-space checks later so they may get short-circuited.
Everything else should be self-explanatory. | https://api.github.com/repos/pandas-dev/pandas/pulls/27461 | 2019-07-19T00:57:27Z | 2019-07-22T11:52:32Z | 2019-07-22T11:52:32Z | 2019-07-22T14:08:08Z |
Pinned date and fixed contributors directive | diff --git a/doc/source/whatsnew/v0.25.0.rst b/doc/source/whatsnew/v0.25.0.rst
index 78644a3da4229..42e756635e739 100644
--- a/doc/source/whatsnew/v0.25.0.rst
+++ b/doc/source/whatsnew/v0.25.0.rst
@@ -1,7 +1,7 @@
.. _whatsnew_0250:
-What's new in 0.25.0 (April XX, 2019)
--------------------------------------
+What's new in 0.25.0 (July 18, 2019)
+------------------------------------
.. warning::
@@ -1267,4 +1267,4 @@ Other
Contributors
~~~~~~~~~~~~
-.. contributors:: v0.24.x..HEAD
+.. contributors:: 0.24.x..HEAD
| @TomAugspurger | https://api.github.com/repos/pandas-dev/pandas/pulls/27455 | 2019-07-18T15:07:13Z | 2019-07-18T15:56:36Z | 2019-07-18T15:56:36Z | 2019-07-18T15:56:41Z |
PERF: restore performance for unsorted CategoricalDtype comparison | diff --git a/pandas/core/dtypes/dtypes.py b/pandas/core/dtypes/dtypes.py
index 7721c90c9b4b4..6728d048efb79 100644
--- a/pandas/core/dtypes/dtypes.py
+++ b/pandas/core/dtypes/dtypes.py
@@ -406,6 +406,12 @@ def __eq__(self, other: Any) -> bool:
# but same order is not necessary. There is no distinction between
# ordered=False and ordered=None: CDT(., False) and CDT(., None)
# will be equal if they have the same categories.
+ if (
+ self.categories.dtype == other.categories.dtype
+ and self.categories.equals(other.categories)
+ ):
+ # Check and see if they happen to be identical categories
+ return True
return hash(self) == hash(other)
def __repr__(self):
| Fixes a performance regression introduced in #26403 very shortly before `0.25.0rc0` was cut, which can be seen [here](https://qwhelan.github.io/pandas/#indexing.CategoricalIndexIndexing.time_getitem_slice?machine=T470&num_cpu=4&os=Linux%205.0.0-20-generic&ram=20305904)

When comparing `CategoricalDtype`s with `ordered=False` (the default), a hash is currently done that is relatively slow even for a small number of categories. If we check if the categories happen to be the same and in the same order, we see a significant speedup in the equal case. The `.equals()` function pre-exits, so overhead in the non-equal case should be minimal.
`asv` results:
```
before after ratio
[a4c19e7a] [6810ff2f]
<unsorted_cats~1> <unsorted_cats>
- 179±0.4μs 25.6±0.4μs 0.14 indexing.CategoricalIndexIndexing.time_getitem_slice('monotonic_decr')
- 179±2μs 25.2±1μs 0.14 indexing.CategoricalIndexIndexing.time_getitem_slice('monotonic_incr')
- 180±0.4μs 25.1±0.3μs 0.14 indexing.CategoricalIndexIndexing.time_getitem_slice('non_monotonic')
```
- [ ] closes #xxxx
- [ ] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/27448 | 2019-07-18T06:39:11Z | 2019-07-18T10:53:29Z | 2019-07-18T10:53:29Z | 2019-07-18T10:53:56Z |
CLN: simplify maybe_convert_objects, soft_convert_objects | diff --git a/pandas/core/dtypes/cast.py b/pandas/core/dtypes/cast.py
index 44a3fefb1689a..fd8536e38eee7 100644
--- a/pandas/core/dtypes/cast.py
+++ b/pandas/core/dtypes/cast.py
@@ -6,6 +6,7 @@
from pandas._libs import lib, tslib, tslibs
from pandas._libs.tslibs import NaT, OutOfBoundsDatetime, Period, iNaT
+from pandas.util._validators import validate_bool_kwarg
from .common import (
_INT64_DTYPE,
@@ -696,9 +697,7 @@ def astype_nansafe(arr, dtype, copy=True, skipna=False):
elif np.issubdtype(arr.dtype, np.floating) and np.issubdtype(dtype, np.integer):
if not np.isfinite(arr).all():
- raise ValueError(
- "Cannot convert non-finite values (NA or inf) to " "integer"
- )
+ raise ValueError("Cannot convert non-finite values (NA or inf) to integer")
elif is_object_dtype(arr):
@@ -719,9 +718,7 @@ def astype_nansafe(arr, dtype, copy=True, skipna=False):
return astype_nansafe(to_timedelta(arr).values, dtype, copy=copy)
if dtype.name in ("datetime64", "timedelta64"):
- msg = (
- "The '{dtype}' dtype has no unit. " "Please pass in '{dtype}[ns]' instead."
- )
+ msg = "The '{dtype}' dtype has no unit. Please pass in '{dtype}[ns]' instead."
raise ValueError(msg.format(dtype=dtype.name))
if copy or is_object_dtype(arr) or is_object_dtype(dtype):
@@ -731,50 +728,33 @@ def astype_nansafe(arr, dtype, copy=True, skipna=False):
return arr.view(dtype)
-def maybe_convert_objects(
- values, convert_dates=True, convert_numeric=True, convert_timedeltas=True, copy=True
-):
- """ if we have an object dtype, try to coerce dates and/or numbers """
-
- # if we have passed in a list or scalar
- if isinstance(values, (list, tuple)):
- values = np.array(values, dtype=np.object_)
- if not hasattr(values, "dtype"):
- values = np.array([values], dtype=np.object_)
+def maybe_convert_objects(values: np.ndarray, convert_numeric: bool = True):
+ """
+ If we have an object dtype array, try to coerce dates and/or numbers.
- # convert dates
- if convert_dates and values.dtype == np.object_:
+ Parameters
+ ----------
+ values : ndarray
+ convert_numeric : bool, default True
- # we take an aggressive stance and convert to datetime64[ns]
- if convert_dates == "coerce":
- new_values = maybe_cast_to_datetime(values, "M8[ns]", errors="coerce")
+ Returns
+ -------
+ ndarray or DatetimeIndex
+ """
+ validate_bool_kwarg(convert_numeric, "convert_numeric")
- # if we are all nans then leave me alone
- if not isna(new_values).all():
- values = new_values
+ orig_values = values
- else:
- values = lib.maybe_convert_objects(values, convert_datetime=convert_dates)
+ # convert dates
+ if is_object_dtype(values.dtype):
+ values = lib.maybe_convert_objects(values, convert_datetime=True)
# convert timedeltas
- if convert_timedeltas and values.dtype == np.object_:
-
- if convert_timedeltas == "coerce":
- from pandas.core.tools.timedeltas import to_timedelta
-
- new_values = to_timedelta(values, errors="coerce")
-
- # if we are all nans then leave me alone
- if not isna(new_values).all():
- values = new_values
-
- else:
- values = lib.maybe_convert_objects(
- values, convert_timedelta=convert_timedeltas
- )
+ if is_object_dtype(values.dtype):
+ values = lib.maybe_convert_objects(values, convert_timedelta=True)
# convert to numeric
- if values.dtype == np.object_:
+ if is_object_dtype(values.dtype):
if convert_numeric:
try:
new_values = lib.maybe_convert_numeric(
@@ -791,33 +771,38 @@ def maybe_convert_objects(
# soft-conversion
values = lib.maybe_convert_objects(values)
- values = values.copy() if copy else values
+ if values is orig_values:
+ values = values.copy()
return values
def soft_convert_objects(
- values, datetime=True, numeric=True, timedelta=True, coerce=False, copy=True
+ values: np.ndarray,
+ datetime: bool = True,
+ numeric: bool = True,
+ timedelta: bool = True,
+ coerce: bool = False,
+ copy: bool = True,
):
""" if we have an object dtype, try to coerce dates and/or numbers """
+ validate_bool_kwarg(datetime, "datetime")
+ validate_bool_kwarg(numeric, "numeric")
+ validate_bool_kwarg(timedelta, "timedelta")
+ validate_bool_kwarg(coerce, "coerce")
+ validate_bool_kwarg(copy, "copy")
+
conversion_count = sum((datetime, numeric, timedelta))
if conversion_count == 0:
- raise ValueError(
- "At least one of datetime, numeric or timedelta must " "be True."
- )
+ raise ValueError("At least one of datetime, numeric or timedelta must be True.")
elif conversion_count > 1 and coerce:
raise ValueError(
"Only one of 'datetime', 'numeric' or "
"'timedelta' can be True when when coerce=True."
)
- if isinstance(values, (list, tuple)):
- # List or scalar
- values = np.array(values, dtype=np.object_)
- elif not hasattr(values, "dtype"):
- values = np.array([values], dtype=np.object_)
- elif not is_object_dtype(values.dtype):
+ if not is_object_dtype(values.dtype):
# If not object, do not attempt conversion
values = values.copy() if copy else values
return values
@@ -843,13 +828,13 @@ def soft_convert_objects(
# GH 20380, when datetime is beyond year 2262, hence outside
# bound of nanosecond-resolution 64-bit integers.
try:
- values = lib.maybe_convert_objects(values, convert_datetime=datetime)
+ values = lib.maybe_convert_objects(values, convert_datetime=True)
except OutOfBoundsDatetime:
pass
if timedelta and is_object_dtype(values.dtype):
# Object check to ensure only run if previous did not convert
- values = lib.maybe_convert_objects(values, convert_timedelta=timedelta)
+ values = lib.maybe_convert_objects(values, convert_timedelta=True)
if numeric and is_object_dtype(values.dtype):
try:
@@ -1368,7 +1353,7 @@ def maybe_cast_to_integer_array(arr, dtype, copy=False):
arr = np.asarray(arr)
if is_unsigned_integer_dtype(dtype) and (arr < 0).any():
- raise OverflowError("Trying to coerce negative values " "to unsigned integers")
+ raise OverflowError("Trying to coerce negative values to unsigned integers")
if is_integer_dtype(dtype) and (is_float_dtype(arr) or is_object_dtype(arr)):
raise ValueError("Trying to coerce float values to integers")
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 0229f36cedff8..ecc421df3fbb1 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -6033,6 +6033,11 @@ def _convert(
-------
converted : same as input object
"""
+ validate_bool_kwarg(datetime, "datetime")
+ validate_bool_kwarg(numeric, "numeric")
+ validate_bool_kwarg(timedelta, "timedelta")
+ validate_bool_kwarg(coerce, "coerce")
+ validate_bool_kwarg(copy, "copy")
return self._constructor(
self._data.convert(
datetime=datetime,
diff --git a/pandas/core/internals/blocks.py b/pandas/core/internals/blocks.py
index 26aca34f20594..ace57938f948c 100644
--- a/pandas/core/internals/blocks.py
+++ b/pandas/core/internals/blocks.py
@@ -18,7 +18,6 @@
find_common_type,
infer_dtype_from,
infer_dtype_from_scalar,
- maybe_convert_objects,
maybe_downcast_to_dtype,
maybe_infer_dtype_type,
maybe_promote,
@@ -669,7 +668,14 @@ def _astype(self, dtype, copy=False, errors="raise", values=None, **kwargs):
)
return newb
- def convert(self, copy=True, **kwargs):
+ def convert(
+ self,
+ copy: bool = True,
+ datetime: bool = True,
+ numeric: bool = True,
+ timedelta: bool = True,
+ coerce: bool = False,
+ ):
""" attempt to coerce any object types to better types return a copy
of the block (if copy = True) by definition we are not an ObjectBlock
here!
@@ -827,9 +833,7 @@ def replace(
convert=convert,
)
if convert:
- blocks = [
- b.convert(by_item=True, numeric=False, copy=not inplace) for b in blocks
- ]
+ blocks = [b.convert(numeric=False, copy=not inplace) for b in blocks]
return blocks
def _replace_single(self, *args, **kwargs):
@@ -2779,37 +2783,31 @@ def is_bool(self):
"""
return lib.is_bool_array(self.values.ravel())
- # TODO: Refactor when convert_objects is removed since there will be 1 path
- def convert(self, *args, **kwargs):
+ def convert(
+ self,
+ copy: bool = True,
+ datetime: bool = True,
+ numeric: bool = True,
+ timedelta: bool = True,
+ coerce: bool = False,
+ ):
""" attempt to coerce any object types to better types return a copy of
the block (if copy = True) by definition we ARE an ObjectBlock!!!!!
can return multiple blocks!
"""
- if args:
- raise NotImplementedError
- by_item = kwargs.get("by_item", True)
-
- new_inputs = ["coerce", "datetime", "numeric", "timedelta"]
- new_style = False
- for kw in new_inputs:
- new_style |= kw in kwargs
-
- if new_style:
- fn = soft_convert_objects
- fn_inputs = new_inputs
- else:
- fn = maybe_convert_objects
- fn_inputs = ["convert_dates", "convert_numeric", "convert_timedeltas"]
- fn_inputs += ["copy"]
-
- fn_kwargs = {key: kwargs[key] for key in fn_inputs if key in kwargs}
-
# operate column-by-column
def f(m, v, i):
shape = v.shape
- values = fn(v.ravel(), **fn_kwargs)
+ values = soft_convert_objects(
+ v.ravel(),
+ datetime=datetime,
+ numeric=numeric,
+ timedelta=timedelta,
+ coerce=coerce,
+ copy=copy,
+ )
if isinstance(values, np.ndarray):
# TODO: allow EA once reshape is supported
values = values.reshape(shape)
@@ -2817,7 +2815,7 @@ def f(m, v, i):
values = _block_shape(values, ndim=self.ndim)
return values
- if by_item and not self._is_single_block:
+ if self.ndim == 2:
blocks = self.split_and_operate(None, f, False)
else:
values = f(None, self.values.ravel(), None)
@@ -3041,7 +3039,7 @@ def re_replacer(s):
# convert
block = self.make_block(new_values)
if convert:
- block = block.convert(by_item=True, numeric=False)
+ block = block.convert(numeric=False)
return block
def _replace_coerce(
@@ -3080,9 +3078,7 @@ def _replace_coerce(
mask=mask,
)
if convert:
- block = [
- b.convert(by_item=True, numeric=False, copy=True) for b in block
- ]
+ block = [b.convert(numeric=False, copy=True) for b in block]
return block
return self
diff --git a/pandas/core/internals/managers.py b/pandas/core/internals/managers.py
index 2e7280eeae0e2..394c0773409f2 100644
--- a/pandas/core/internals/managers.py
+++ b/pandas/core/internals/managers.py
@@ -1551,7 +1551,6 @@ def index(self):
def convert(self, **kwargs):
""" convert the whole block as one """
- kwargs["by_item"] = False
return self.apply("convert", **kwargs)
@property
diff --git a/pandas/tests/dtypes/cast/test_convert_objects.py b/pandas/tests/dtypes/cast/test_convert_objects.py
index 45980dbd82736..a28d554acd312 100644
--- a/pandas/tests/dtypes/cast/test_convert_objects.py
+++ b/pandas/tests/dtypes/cast/test_convert_objects.py
@@ -5,9 +5,8 @@
@pytest.mark.parametrize("data", [[1, 2], ["apply", "banana"]])
-@pytest.mark.parametrize("copy", [True, False])
-def test_maybe_convert_objects_copy(data, copy):
+def test_maybe_convert_objects_copy(data):
arr = np.array(data)
- out = maybe_convert_objects(arr, copy=copy)
+ out = maybe_convert_objects(arr)
- assert (arr is out) is (not copy)
+ assert arr is not out
diff --git a/pandas/tests/internals/test_internals.py b/pandas/tests/internals/test_internals.py
index 655e484bc34d1..b56251aae884e 100644
--- a/pandas/tests/internals/test_internals.py
+++ b/pandas/tests/internals/test_internals.py
@@ -584,10 +584,6 @@ def _compare(old_mgr, new_mgr):
new_mgr = mgr.convert()
_compare(mgr, new_mgr)
- mgr = create_mgr("a, b: object; f: i8; g: f8")
- new_mgr = mgr.convert()
- _compare(mgr, new_mgr)
-
# convert
mgr = create_mgr("a,b,foo: object; f: i8; g: f8")
mgr.set("a", np.array(["1"] * N, dtype=np.object_))
| Block.replace has code for dispatching to either maybe_convert_objects or soft_convert_objects, but the former is only reached from the tests. So rip that out and use soft_convert_objects.
Both of the functions have branches that are unreachable, so rip those out, along with adding types and validation. | https://api.github.com/repos/pandas-dev/pandas/pulls/27444 | 2019-07-18T02:46:26Z | 2019-07-20T23:49:48Z | 2019-07-20T23:49:48Z | 2019-07-20T23:52:50Z |
TST: Add test for operations on DataFrame with Interval CategoricalIndex | diff --git a/pandas/tests/frame/test_arithmetic.py b/pandas/tests/frame/test_arithmetic.py
index 7c022106c9104..706bc122c6d9e 100644
--- a/pandas/tests/frame/test_arithmetic.py
+++ b/pandas/tests/frame/test_arithmetic.py
@@ -642,3 +642,14 @@ def test_arith_non_pandas_object(self):
val3 = np.random.rand(*df.shape)
added = pd.DataFrame(df.values + val3, index=df.index, columns=df.columns)
tm.assert_frame_equal(df.add(val3), added)
+
+ def test_operations_with_interval_categories_index(self, all_arithmetic_operators):
+ # GH#27415
+ op = all_arithmetic_operators
+ ind = pd.CategoricalIndex(pd.interval_range(start=0.0, end=2.0))
+ data = [1, 2]
+ df = pd.DataFrame([data], columns=ind)
+ num = 10
+ result = getattr(df, op)(num)
+ expected = pd.DataFrame([[getattr(n, op)(num) for n in data]], columns=ind)
+ tm.assert_frame_equal(result, expected)
| - [x] closes #27415
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/27443 | 2019-07-18T01:43:53Z | 2019-07-23T22:06:16Z | 2019-07-23T22:06:16Z | 2019-07-23T22:06:19Z |
CLN: Trim unused/unnecessary code | diff --git a/pandas/core/dtypes/cast.py b/pandas/core/dtypes/cast.py
index f483cf520754b..44a3fefb1689a 100644
--- a/pandas/core/dtypes/cast.py
+++ b/pandas/core/dtypes/cast.py
@@ -571,18 +571,6 @@ def maybe_upcast(values, fill_value=np.nan, dtype=None, copy=False):
return values, fill_value
-def maybe_cast_item(obj, item, dtype):
- chunk = obj[item]
-
- if chunk.values.dtype != dtype:
- if dtype in (np.object_, np.bool_):
- obj[item] = chunk.astype(np.object_)
- elif not issubclass(dtype, (np.integer, np.bool_)): # pragma: no cover
- raise ValueError(
- "Unexpected dtype encountered: {dtype}".format(dtype=dtype)
- )
-
-
def invalidate_string_dtypes(dtype_set):
"""Change string like dtypes to object for
``DataFrame.select_dtypes()``.
diff --git a/pandas/core/dtypes/common.py b/pandas/core/dtypes/common.py
index d0e4bd9b4482a..f2571573bd1bc 100644
--- a/pandas/core/dtypes/common.py
+++ b/pandas/core/dtypes/common.py
@@ -898,41 +898,6 @@ def is_dtype_equal(source, target):
return False
-def is_dtype_union_equal(source, target):
- """
- Check whether two arrays have compatible dtypes to do a union.
- numpy types are checked with ``is_dtype_equal``. Extension types are
- checked separately.
-
- Parameters
- ----------
- source : The first dtype to compare
- target : The second dtype to compare
-
- Returns
- -------
- boolean
- Whether or not the two dtypes are equal.
-
- >>> is_dtype_equal("int", int)
- True
-
- >>> is_dtype_equal(CategoricalDtype(['a', 'b'],
- ... CategoricalDtype(['b', 'c']))
- True
-
- >>> is_dtype_equal(CategoricalDtype(['a', 'b'],
- ... CategoricalDtype(['b', 'c'], ordered=True))
- False
- """
- source = _get_dtype(source)
- target = _get_dtype(target)
- if is_categorical_dtype(source) and is_categorical_dtype(target):
- # ordered False for both
- return source.ordered is target.ordered
- return is_dtype_equal(source, target)
-
-
def is_any_int_dtype(arr_or_dtype):
"""Check whether the provided array or dtype is of an integer dtype.
@@ -1498,60 +1463,6 @@ def is_numeric(x):
)
-def is_datetimelike_v_object(a, b):
- """
- Check if we are comparing a datetime-like object to an object instance.
-
- Parameters
- ----------
- a : array-like, scalar
- The first object to check.
- b : array-like, scalar
- The second object to check.
-
- Returns
- -------
- boolean
- Whether we return a comparing a datetime-like to an object instance.
-
- Examples
- --------
- >>> obj = object()
- >>> dt = np.datetime64(pd.datetime(2017, 1, 1))
- >>>
- >>> is_datetimelike_v_object(obj, obj)
- False
- >>> is_datetimelike_v_object(dt, dt)
- False
- >>> is_datetimelike_v_object(obj, dt)
- True
- >>> is_datetimelike_v_object(dt, obj) # symmetric check
- True
- >>> is_datetimelike_v_object(np.array([dt]), obj)
- True
- >>> is_datetimelike_v_object(np.array([obj]), dt)
- True
- >>> is_datetimelike_v_object(np.array([dt]), np.array([obj]))
- True
- >>> is_datetimelike_v_object(np.array([obj]), np.array([obj]))
- False
- >>> is_datetimelike_v_object(np.array([dt]), np.array([1]))
- False
- >>> is_datetimelike_v_object(np.array([dt]), np.array([dt]))
- False
- """
-
- if not hasattr(a, "dtype"):
- a = np.asarray(a)
- if not hasattr(b, "dtype"):
- b = np.asarray(b)
-
- is_datetimelike = needs_i8_conversion
- return (is_datetimelike(a) and is_object_dtype(b)) or (
- is_datetimelike(b) and is_object_dtype(a)
- )
-
-
def needs_i8_conversion(arr_or_dtype):
"""
Check whether the array or dtype should be converted to int64.
diff --git a/pandas/tests/dtypes/test_common.py b/pandas/tests/dtypes/test_common.py
index 27ae918b015fe..36548f3515a48 100644
--- a/pandas/tests/dtypes/test_common.py
+++ b/pandas/tests/dtypes/test_common.py
@@ -527,23 +527,6 @@ def test_is_datetimelike_v_numeric():
assert com.is_datetimelike_v_numeric(np.array([dt]), np.array([1]))
-def test_is_datetimelike_v_object():
- obj = object()
- dt = np.datetime64(pd.datetime(2017, 1, 1))
-
- assert not com.is_datetimelike_v_object(dt, dt)
- assert not com.is_datetimelike_v_object(obj, obj)
- assert not com.is_datetimelike_v_object(np.array([dt]), np.array([1]))
- assert not com.is_datetimelike_v_object(np.array([dt]), np.array([dt]))
- assert not com.is_datetimelike_v_object(np.array([obj]), np.array([obj]))
-
- assert com.is_datetimelike_v_object(dt, obj)
- assert com.is_datetimelike_v_object(obj, dt)
- assert com.is_datetimelike_v_object(np.array([dt]), obj)
- assert com.is_datetimelike_v_object(np.array([obj]), dt)
- assert com.is_datetimelike_v_object(np.array([dt]), np.array([obj]))
-
-
def test_needs_i8_conversion():
assert not com.needs_i8_conversion(str)
assert not com.needs_i8_conversion(np.int64)
diff --git a/pandas/util/testing.py b/pandas/util/testing.py
index 037c885e4733f..cf8452cdd0c59 100644
--- a/pandas/util/testing.py
+++ b/pandas/util/testing.py
@@ -33,8 +33,6 @@
is_categorical_dtype,
is_datetime64_dtype,
is_datetime64tz_dtype,
- is_datetimelike_v_numeric,
- is_datetimelike_v_object,
is_extension_array_dtype,
is_interval_dtype,
is_list_like,
@@ -1172,12 +1170,7 @@ def assert_series_equal(
# we want to check only if we have compat dtypes
# e.g. integer and M|m are NOT compat, but we can simply check
# the values in that case
- if (
- is_datetimelike_v_numeric(left, right)
- or is_datetimelike_v_object(left, right)
- or needs_i8_conversion(left)
- or needs_i8_conversion(right)
- ):
+ if needs_i8_conversion(left) or needs_i8_conversion(right):
# datetimelike may have different objects (e.g. datetime.datetime
# vs Timestamp) but will compare equal
| https://api.github.com/repos/pandas-dev/pandas/pulls/27440 | 2019-07-17T21:50:07Z | 2019-07-18T10:56:16Z | 2019-07-18T10:56:16Z | 2019-07-18T14:32:11Z | |
BUG: maybe_convert_objects mixed datetimes and timedeltas | diff --git a/pandas/_libs/lib.pyx b/pandas/_libs/lib.pyx
index 1936404b75602..03bd1b996955a 100644
--- a/pandas/_libs/lib.pyx
+++ b/pandas/_libs/lib.pyx
@@ -942,6 +942,7 @@ cdef class Seen:
cdef:
bint int_ # seen_int
+ bint nat_ # seen nat
bint bool_ # seen_bool
bint null_ # seen_null
bint uint_ # seen_uint (unsigned integer)
@@ -965,6 +966,7 @@ cdef class Seen:
initial methods to convert to numeric fail.
"""
self.int_ = 0
+ self.nat_ = 0
self.bool_ = 0
self.null_ = 0
self.uint_ = 0
@@ -1044,11 +1046,13 @@ cdef class Seen:
@property
def is_bool(self):
- return not (self.datetime_ or self.numeric_ or self.timedelta_)
+ return not (self.datetime_ or self.numeric_ or self.timedelta_
+ or self.nat_)
@property
def is_float_or_complex(self):
- return not (self.bool_ or self.datetime_ or self.timedelta_)
+ return not (self.bool_ or self.datetime_ or self.timedelta_
+ or self.nat_)
cdef _try_infer_map(v):
@@ -1947,12 +1951,11 @@ def maybe_convert_objects(ndarray[object] objects, bint try_float=0,
seen.null_ = 1
floats[i] = complexes[i] = fnan
elif val is NaT:
+ seen.nat_ = 1
if convert_datetime:
idatetimes[i] = NPY_NAT
- seen.datetime_ = 1
if convert_timedelta:
itimedeltas[i] = NPY_NAT
- seen.timedelta_ = 1
if not (convert_datetime or convert_timedelta):
seen.object_ = 1
break
@@ -2046,11 +2049,20 @@ def maybe_convert_objects(ndarray[object] objects, bint try_float=0,
else:
if not seen.bool_:
if seen.datetime_:
- if not seen.numeric_:
+ if not seen.numeric_ and not seen.timedelta_:
return datetimes
elif seen.timedelta_:
if not seen.numeric_:
return timedeltas
+ elif seen.nat_:
+ if not seen.numeric_:
+ if convert_datetime and convert_timedelta:
+ # TODO: array full of NaT ambiguity resolve here needed
+ pass
+ elif convert_datetime:
+ return datetimes
+ elif convert_timedelta:
+ return timedeltas
else:
if seen.complex_:
return complexes
@@ -2077,11 +2089,20 @@ def maybe_convert_objects(ndarray[object] objects, bint try_float=0,
else:
if not seen.bool_:
if seen.datetime_:
- if not seen.numeric_:
+ if not seen.numeric_ and not seen.timedelta_:
return datetimes
elif seen.timedelta_:
if not seen.numeric_:
return timedeltas
+ elif seen.nat_:
+ if not seen.numeric_:
+ if convert_datetime and convert_timedelta:
+ # TODO: array full of NaT ambiguity resolve here needed
+ pass
+ elif convert_datetime:
+ return datetimes
+ elif convert_timedelta:
+ return timedeltas
else:
if seen.complex_:
if not seen.int_:
diff --git a/pandas/tests/dtypes/test_inference.py b/pandas/tests/dtypes/test_inference.py
index 4d688976cd50b..ff48ae9b3c2e5 100644
--- a/pandas/tests/dtypes/test_inference.py
+++ b/pandas/tests/dtypes/test_inference.py
@@ -531,6 +531,25 @@ def test_maybe_convert_objects_uint64(self):
exp = np.array([2 ** 63, -1], dtype=object)
tm.assert_numpy_array_equal(lib.maybe_convert_objects(arr), exp)
+ def test_maybe_convert_objects_datetime(self):
+ # GH27438
+ arr = np.array(
+ [np.datetime64("2000-01-01"), np.timedelta64(1, "s")], dtype=object
+ )
+ exp = arr.copy()
+ out = lib.maybe_convert_objects(arr, convert_datetime=1, convert_timedelta=1)
+ tm.assert_numpy_array_equal(out, exp)
+
+ arr = np.array([pd.NaT, np.timedelta64(1, "s")], dtype=object)
+ exp = np.array([np.timedelta64("NaT"), np.timedelta64(1, "s")], dtype="m8[ns]")
+ out = lib.maybe_convert_objects(arr, convert_datetime=1, convert_timedelta=1)
+ tm.assert_numpy_array_equal(out, exp)
+
+ arr = np.array([np.timedelta64(1, "s"), np.nan], dtype=object)
+ exp = arr.copy()
+ out = lib.maybe_convert_objects(arr, convert_datetime=1, convert_timedelta=1)
+ tm.assert_numpy_array_equal(out, exp)
+
def test_mixed_dtypes_remain_object_array(self):
# GH14956
array = np.array([datetime(2015, 1, 1, tzinfo=pytz.utc), 1], dtype=object)
| - [X] corresponding [discussion](https://github.com/pandas-dev/pandas/issues/27417#issuecomment-512014358) 27417
- [X] tests added / passed
- [X] passes `black pandas`
- [X] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
To fit the existing `maybe_convert_objects` implementation, a `Seen().nat_` flag had to be introduced, since we need to distinguish whether `NaT`, `datetime`, or `timedelta` values were seen.
NOTE: in the current implementation, an array full of `NaT` will be converted to `np.datetime64`. | https://api.github.com/repos/pandas-dev/pandas/pulls/27438 | 2019-07-17T18:47:51Z | 2019-07-24T12:05:48Z | 2019-07-24T12:05:48Z | 2019-07-25T19:56:41Z |
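The ambiguity behind the PR's `TODO: array full of NaT` comments can be seen directly in NumPy (a hedged illustration, not pandas internals): `NaT` is a valid missing-value sentinel for both `datetime64` and `timedelta64`, so an object array containing only `NaT` gives no hint which dtype to convert to.

```python
import numpy as np

# NaT exists for both datetime64 and timedelta64, which is why an object
# array holding only NaT is ambiguous to convert: both sentinels render
# identically but carry different dtypes.
dt_nat = np.datetime64("NaT", "ns")
td_nat = np.timedelta64("NaT", "ns")

dt_arr = np.array([dt_nat])  # dtype: datetime64[ns]
td_arr = np.array([td_nat])  # dtype: timedelta64[ns]
```

Both values print as `NaT`, yet the resulting array dtypes differ, so when both `convert_datetime` and `convert_timedelta` are enabled the converter has no principled way to choose (the PR notes it currently falls back to `datetime64`).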
BUG: fix+test quantile with empty DataFrame, closes #23925 | diff --git a/doc/source/whatsnew/v1.0.0.rst b/doc/source/whatsnew/v1.0.0.rst
index eed48f9e46897..bccb6f991d646 100644
--- a/doc/source/whatsnew/v1.0.0.rst
+++ b/doc/source/whatsnew/v1.0.0.rst
@@ -108,7 +108,7 @@ Timezones
Numeric
^^^^^^^
-
+- Bug in :meth:`DataFrame.quantile` with zero-column :class:`DataFrame` incorrectly raising (:issue:`23925`)
-
-
@@ -191,10 +191,6 @@ ExtensionArray
-
-Other
-^^^^^
-
-
.. _whatsnew_1000.contributors:
Contributors
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 4beaf301988b4..46dc9204b86f5 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -8245,6 +8245,13 @@ def quantile(self, q=0.5, axis=0, numeric_only=True, interpolation="linear"):
if is_transposed:
data = data.T
+ if len(data.columns) == 0:
+ # GH#23925 _get_numeric_data may have dropped all columns
+ cols = Index([], name=self.columns.name)
+ if is_list_like(q):
+ return self._constructor([], index=q, columns=cols)
+ return self._constructor_sliced([], index=cols, name=q)
+
result = data._data.quantile(
qs=q, axis=1, interpolation=interpolation, transposed=is_transposed
)
diff --git a/pandas/tests/frame/test_quantile.py b/pandas/tests/frame/test_quantile.py
index 236cadf67735d..e5e881dece34a 100644
--- a/pandas/tests/frame/test_quantile.py
+++ b/pandas/tests/frame/test_quantile.py
@@ -439,7 +439,7 @@ def test_quantile_nat(self):
)
tm.assert_frame_equal(res, exp)
- def test_quantile_empty(self):
+ def test_quantile_empty_no_rows(self):
# floats
df = DataFrame(columns=["a", "b"], dtype="float64")
@@ -467,3 +467,17 @@ def test_quantile_empty(self):
# FIXME (gives NaNs instead of NaT in 0.18.1 or 0.19.0)
# res = df.quantile(0.5, numeric_only=False)
+
+ def test_quantile_empty_no_columns(self):
+ # GH#23925 _get_numeric_data may drop all columns
+ df = pd.DataFrame(pd.date_range("1/1/18", periods=5))
+ df.columns.name = "captain tightpants"
+ result = df.quantile(0.5)
+ expected = pd.Series([], index=[], name=0.5)
+ expected.index.name = "captain tightpants"
+ tm.assert_series_equal(result, expected)
+
+ result = df.quantile([0.5])
+ expected = pd.DataFrame([], index=[0.5], columns=[])
+ expected.columns.name = "captain tightpants"
+ tm.assert_frame_equal(result, expected)
| - [x] closes #23925
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/27436 | 2019-07-17T18:17:01Z | 2019-07-24T12:07:14Z | 2019-07-24T12:07:14Z | 2019-07-24T13:20:10Z |
PLT: Delegating to plotting backend only plots of Series and DataFrame methods | diff --git a/pandas/plotting/_core.py b/pandas/plotting/_core.py
index 5e67d9a587914..0610780edb28d 100644
--- a/pandas/plotting/_core.py
+++ b/pandas/plotting/_core.py
@@ -375,7 +375,7 @@ def boxplot(
>>> type(boxplot)
<class 'numpy.ndarray'>
"""
- plot_backend = _get_plot_backend()
+ plot_backend = _get_plot_backend("matplotlib")
return plot_backend.boxplot(
data,
column=column,
@@ -1533,7 +1533,7 @@ def hexbin(self, x, y, C=None, reduce_C_function=None, gridsize=None, **kwargs):
return self(kind="hexbin", x=x, y=y, C=C, **kwargs)
-def _get_plot_backend():
+def _get_plot_backend(backend=None):
"""
Return the plotting backend to use (e.g. `pandas.plotting._matplotlib`).
@@ -1546,7 +1546,7 @@ def _get_plot_backend():
The backend is imported lazily, as matplotlib is a soft dependency, and
pandas can be used without it being installed.
"""
- backend_str = pandas.get_option("plotting.backend")
+ backend_str = backend or pandas.get_option("plotting.backend")
if backend_str == "matplotlib":
backend_str = "pandas.plotting._matplotlib"
return importlib.import_module(backend_str)
diff --git a/pandas/plotting/_misc.py b/pandas/plotting/_misc.py
index 435562f7d1262..efe88d6b19b10 100644
--- a/pandas/plotting/_misc.py
+++ b/pandas/plotting/_misc.py
@@ -24,7 +24,7 @@ def table(ax, data, rowLabels=None, colLabels=None, **kwargs):
-------
matplotlib table object
"""
- plot_backend = _get_plot_backend()
+ plot_backend = _get_plot_backend("matplotlib")
return plot_backend.table(
ax=ax, data=data, rowLabels=None, colLabels=None, **kwargs
)
@@ -48,7 +48,7 @@ def register(explicit=True):
--------
deregister_matplotlib_converter
"""
- plot_backend = _get_plot_backend()
+ plot_backend = _get_plot_backend("matplotlib")
plot_backend.register(explicit=explicit)
@@ -67,7 +67,7 @@ def deregister():
--------
deregister_matplotlib_converters
"""
- plot_backend = _get_plot_backend()
+ plot_backend = _get_plot_backend("matplotlib")
plot_backend.deregister()
@@ -124,7 +124,7 @@ def scatter_matrix(
>>> df = pd.DataFrame(np.random.randn(1000, 4), columns=['A','B','C','D'])
>>> scatter_matrix(df, alpha=0.2)
"""
- plot_backend = _get_plot_backend()
+ plot_backend = _get_plot_backend("matplotlib")
return plot_backend.scatter_matrix(
frame=frame,
alpha=alpha,
@@ -202,7 +202,7 @@ def radviz(frame, class_column, ax=None, color=None, colormap=None, **kwds):
... })
>>> rad_viz = pd.plotting.radviz(df, 'Category') # doctest: +SKIP
"""
- plot_backend = _get_plot_backend()
+ plot_backend = _get_plot_backend("matplotlib")
return plot_backend.radviz(
frame=frame,
class_column=class_column,
@@ -249,7 +249,7 @@ def andrews_curves(
-------
class:`matplotlip.axis.Axes`
"""
- plot_backend = _get_plot_backend()
+ plot_backend = _get_plot_backend("matplotlib")
return plot_backend.andrews_curves(
frame=frame,
class_column=class_column,
@@ -307,7 +307,7 @@ def bootstrap_plot(series, fig=None, size=50, samples=500, **kwds):
>>> s = pd.Series(np.random.uniform(size=100))
>>> fig = pd.plotting.bootstrap_plot(s) # doctest: +SKIP
"""
- plot_backend = _get_plot_backend()
+ plot_backend = _get_plot_backend("matplotlib")
return plot_backend.bootstrap_plot(
series=series, fig=fig, size=size, samples=samples, **kwds
)
@@ -374,7 +374,7 @@ def parallel_coordinates(
color=('#556270', '#4ECDC4', '#C7F464'))
>>> plt.show()
"""
- plot_backend = _get_plot_backend()
+ plot_backend = _get_plot_backend("matplotlib")
return plot_backend.parallel_coordinates(
frame=frame,
class_column=class_column,
@@ -405,7 +405,7 @@ def lag_plot(series, lag=1, ax=None, **kwds):
-------
class:`matplotlib.axis.Axes`
"""
- plot_backend = _get_plot_backend()
+ plot_backend = _get_plot_backend("matplotlib")
return plot_backend.lag_plot(series=series, lag=lag, ax=ax, **kwds)
@@ -424,7 +424,7 @@ def autocorrelation_plot(series, ax=None, **kwds):
-------
class:`matplotlib.axis.Axes`
"""
- plot_backend = _get_plot_backend()
+ plot_backend = _get_plot_backend("matplotlib")
return plot_backend.autocorrelation_plot(series=series, ax=ax, **kwds)
@@ -451,7 +451,7 @@ def tsplot(series, plotf, ax=None, **kwargs):
FutureWarning,
stacklevel=2,
)
- plot_backend = _get_plot_backend()
+ plot_backend = _get_plot_backend("matplotlib")
return plot_backend.tsplot(series=series, plotf=plotf, ax=ax, **kwargs)
| - [X] xref #26747
- [ ] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/27432 | 2019-07-17T15:03:44Z | 2019-07-18T13:17:37Z | 2019-07-18T13:17:37Z | 2019-07-18T13:17:37Z |
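The core change in the diff above is the `backend_str = backend or pandas.get_option(...)` line: an explicit backend argument takes precedence over the configured default. A minimal pure-Python sketch of that fallback pattern (names here are illustrative, not pandas API):

```python
# Hypothetical stand-in for pandas' option registry.
_config = {"plotting.backend": "matplotlib"}

def get_plot_backend(backend=None):
    # An explicit argument wins; otherwise fall back to the configured
    # option. This mirrors `backend or pandas.get_option(...)` in the diff.
    return backend or _config["plotting.backend"]

default = get_plot_backend()          # -> "matplotlib"
forced = get_plot_backend("hvplot")   # -> "hvplot"
```

One caveat of the `x or default` idiom: any falsy value (e.g. an empty string) also triggers the fallback, which is acceptable here because a backend name is always a non-empty string.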
CLN/REF: stop allowing iNaT in DatetimeBlock | diff --git a/pandas/core/internals/blocks.py b/pandas/core/internals/blocks.py
index f0e7893435f2b..549920e230e8a 100644
--- a/pandas/core/internals/blocks.py
+++ b/pandas/core/internals/blocks.py
@@ -2184,7 +2184,7 @@ def _holder(self):
@property
def fill_value(self):
- return tslibs.iNaT
+ return np.datetime64("NaT", "ns")
def get_values(self, dtype=None):
"""
@@ -2266,14 +2266,9 @@ def _can_hold_element(self, element):
if self.is_datetimetz:
return tz_compare(element.tzinfo, self.dtype.tz)
return element.tzinfo is None
- elif is_integer(element):
- return element == tslibs.iNaT
return is_valid_nat_for_dtype(element, self.dtype)
- def _coerce_values(self, values):
- return values.view("i8")
-
def _try_coerce_args(self, other):
"""
Coerce other to dtype 'i8'. NaN and NaT convert to
@@ -2290,16 +2285,15 @@ def _try_coerce_args(self, other):
base-type other
"""
if is_valid_nat_for_dtype(other, self.dtype):
- other = tslibs.iNaT
- elif is_integer(other) and other == tslibs.iNaT:
- pass
+ other = np.datetime64("NaT", "ns")
elif isinstance(other, (datetime, np.datetime64, date)):
other = self._box_func(other)
if getattr(other, "tz") is not None:
raise TypeError("cannot coerce a Timestamp with a tz on a naive Block")
- other = other.asm8.view("i8")
+ other = other.asm8
elif hasattr(other, "dtype") and is_datetime64_dtype(other):
- other = other.astype("i8", copy=False).view("i8")
+ # TODO: can we get here with non-nano?
+ pass
else:
# coercion issues
# let higher levels handle
@@ -2458,8 +2452,7 @@ def _slice(self, slicer):
return self.values[slicer]
def _coerce_values(self, values):
- # asi8 is a view, needs copy
- return _block_shape(values.view("i8"), ndim=self.ndim)
+ return _block_shape(values, ndim=self.ndim)
def _try_coerce_args(self, other):
"""
@@ -2484,21 +2477,17 @@ def _try_coerce_args(self, other):
other = self._holder(other, dtype=self.dtype)
elif is_valid_nat_for_dtype(other, self.dtype):
- other = tslibs.iNaT
- elif is_integer(other) and other == tslibs.iNaT:
- pass
+ other = np.datetime64("NaT", "ns")
elif isinstance(other, self._holder):
- if other.tz != self.values.tz:
+ if not tz_compare(other.tz, self.values.tz):
raise ValueError("incompatible or non tz-aware value")
- other = _block_shape(other.asi8, ndim=self.ndim)
+
elif isinstance(other, (np.datetime64, datetime, date)):
other = tslibs.Timestamp(other)
- tz = getattr(other, "tz", None)
# test we can have an equal time zone
- if tz is None or str(tz) != str(self.values.tz):
+ if not tz_compare(other.tz, self.values.tz):
raise ValueError("incompatible or non tz-aware value")
- other = other.value
else:
raise TypeError(other)
@@ -2654,8 +2643,8 @@ def fillna(self, value, **kwargs):
def _try_coerce_args(self, other):
"""
- Coerce values and other to int64, with null values converted to
- iNaT. values is always ndarray-like, other may not be
+ Coerce values and other to datetime64[ns], with null values
+ converted to datetime64("NaT", "ns").
Parameters
----------
diff --git a/pandas/core/nanops.py b/pandas/core/nanops.py
index 260da862a4f2b..4db0f75586ead 100644
--- a/pandas/core/nanops.py
+++ b/pandas/core/nanops.py
@@ -1360,14 +1360,6 @@ def _nanpercentile_1d(values, mask, q, na_value, interpolation):
quantiles : scalar or array
"""
# mask is Union[ExtensionArray, ndarray]
- if values.dtype.kind == "m":
- # need to cast to integer to avoid rounding errors in numpy
- result = _nanpercentile_1d(values.view("i8"), mask, q, na_value, interpolation)
-
- # Note: we have to do do `astype` and not view because in general we
- # have float result at this point, not i8
- return result.astype(values.dtype)
-
values = values[~mask]
if len(values) == 0:
@@ -1399,6 +1391,16 @@ def nanpercentile(values, q, axis, na_value, mask, ndim, interpolation):
-------
quantiles : scalar or array
"""
+ if values.dtype.kind in ["m", "M"]:
+ # need to cast to integer to avoid rounding errors in numpy
+ result = nanpercentile(
+ values.view("i8"), q, axis, na_value.view("i8"), mask, ndim, interpolation
+ )
+
+ # Note: we have to do do `astype` and not view because in general we
+ # have float result at this point, not i8
+ return result.astype(values.dtype)
+
if not lib.is_scalar(mask) and mask.any():
if ndim == 1:
return _nanpercentile_1d(
diff --git a/pandas/tests/frame/test_indexing.py b/pandas/tests/frame/test_indexing.py
index 0cb7db0e47123..756a6159fc7c5 100644
--- a/pandas/tests/frame/test_indexing.py
+++ b/pandas/tests/frame/test_indexing.py
@@ -1150,6 +1150,7 @@ def test_fancy_index_int_labels_exceptions(self, float_frame):
with pytest.raises(KeyError, match=msg):
float_frame.ix[:, ["E"]] = 1
+ # FIXME: don't leave commented-out
# partial setting now allows this GH2578
# pytest.raises(KeyError, float_frame.ix.__setitem__,
# (slice(None, None), 'E'), 1)
@@ -1676,9 +1677,11 @@ def test_setitem_single_column_mixed_datetime(self):
)
assert_series_equal(result, expected)
- # set an allowable datetime64 type
+ # GH#16674 iNaT is treated as an integer when given by the user
df.loc["b", "timestamp"] = iNaT
- assert isna(df.loc["b", "timestamp"])
+ assert not isna(df.loc["b", "timestamp"])
+ assert df["timestamp"].dtype == np.object_
+ assert df.loc["b", "timestamp"] == iNaT
# allow this syntax
df.loc["c", "timestamp"] = np.nan
diff --git a/pandas/tests/internals/test_internals.py b/pandas/tests/internals/test_internals.py
index b56251aae884e..f87d6dba72e68 100644
--- a/pandas/tests/internals/test_internals.py
+++ b/pandas/tests/internals/test_internals.py
@@ -338,7 +338,7 @@ def test_try_coerce_arg(self):
vals = (np.datetime64("2010-10-10"), datetime(2010, 10, 10), date(2010, 10, 10))
for val in vals:
coerced = block._try_coerce_args(val)
- assert np.int64 == type(coerced)
+ assert np.datetime64 == type(coerced)
assert pd.Timestamp("2010-10-10") == pd.Timestamp(coerced)
| Sits on top of #27411
xref #16674. The broader goal is de-special-casing code in internals.
| https://api.github.com/repos/pandas-dev/pandas/pulls/27428 | 2019-07-17T00:21:26Z | 2019-07-23T23:55:40Z | 2019-07-23T23:55:40Z | 2019-07-24T00:23:23Z |
Merged mypy.ini into setup.cfg | diff --git a/mypy.ini b/mypy.ini
deleted file mode 100644
index cba20d2775fbe..0000000000000
--- a/mypy.ini
+++ /dev/null
@@ -1,6 +0,0 @@
-[mypy]
-ignore_missing_imports=True
-no_implicit_optional=True
-
-[mypy-pandas.conftest,pandas.tests.*]
-ignore_errors=True
\ No newline at end of file
diff --git a/setup.cfg b/setup.cfg
index e559ece2a759a..7f0062428c442 100644
--- a/setup.cfg
+++ b/setup.cfg
@@ -166,3 +166,10 @@ skip=
asv_bench/benchmarks/dtypes.py
asv_bench/benchmarks/strings.py
asv_bench/benchmarks/period.py
+
+[mypy]
+ignore_missing_imports=True
+no_implicit_optional=True
+
+[mypy-pandas.conftest,pandas.tests.*]
+ignore_errors=True
\ No newline at end of file
| Figure the mypy.ini is small enough now don't need a dedicated config file for it | https://api.github.com/repos/pandas-dev/pandas/pulls/27427 | 2019-07-16T23:14:53Z | 2019-07-17T11:45:08Z | 2019-07-17T11:45:08Z | 2019-10-11T03:40:07Z |
Reallow usecols to reference OOB indices - reverts 25623 | diff --git a/doc/source/whatsnew/v0.25.0.rst b/doc/source/whatsnew/v0.25.0.rst
index 7397ae8fda80c..3445beaa78317 100644
--- a/doc/source/whatsnew/v0.25.0.rst
+++ b/doc/source/whatsnew/v0.25.0.rst
@@ -1087,7 +1087,6 @@ I/O
- Bug in :meth:`DataFrame.to_html` where header numbers would ignore display options when rounding (:issue:`17280`)
- Bug in :func:`read_hdf` where reading a table from an HDF5 file written directly with PyTables fails with a ``ValueError`` when using a sub-selection via the ``start`` or ``stop`` arguments (:issue:`11188`)
- Bug in :func:`read_hdf` not properly closing store after a ``KeyError`` is raised (:issue:`25766`)
-- Bug in ``read_csv`` which would not raise ``ValueError`` if a column index in ``usecols`` was out of bounds (:issue:`25623`)
- Improved the explanation for the failure when value labels are repeated in Stata dta files and suggested work-arounds (:issue:`25772`)
- Improved :meth:`pandas.read_stata` and :class:`pandas.io.stata.StataReader` to read incorrectly formatted 118 format files saved by Stata (:issue:`25960`)
- Improved the ``col_space`` parameter in :meth:`DataFrame.to_html` to accept a string so CSS length values can be set correctly (:issue:`25941`)
diff --git a/pandas/io/parsers.py b/pandas/io/parsers.py
index 6cc47b984914a..300f17bd25432 100755
--- a/pandas/io/parsers.py
+++ b/pandas/io/parsers.py
@@ -1947,12 +1947,6 @@ def __init__(self, src, **kwds):
):
_validate_usecols_names(usecols, self.orig_names)
- # GH 25623
- # validate that column indices in usecols are not out of bounds
- elif self.usecols_dtype == "integer":
- indices = range(self._reader.table_width)
- _validate_usecols_names(usecols, indices)
-
if len(self.names) > len(usecols):
self.names = [
n
@@ -2258,7 +2252,7 @@ def __init__(self, f, **kwds):
self.skipinitialspace = kwds["skipinitialspace"]
self.lineterminator = kwds["lineterminator"]
self.quoting = kwds["quoting"]
- self.usecols, self.usecols_dtype = _validate_usecols_arg(kwds["usecols"])
+ self.usecols, _ = _validate_usecols_arg(kwds["usecols"])
self.skip_blank_lines = kwds["skip_blank_lines"]
self.warn_bad_lines = kwds["warn_bad_lines"]
@@ -2665,13 +2659,6 @@ def _infer_columns(self):
if clear_buffer:
self._clear_buffer()
- # GH 25623
- # validate that column indices in usecols are not out of bounds
- if self.usecols_dtype == "integer":
- for col in columns:
- indices = range(len(col))
- _validate_usecols_names(self.usecols, indices)
-
if names is not None:
if (self.usecols is not None and len(names) != len(self.usecols)) or (
self.usecols is None and len(names) != len(columns[0])
@@ -2706,11 +2693,6 @@ def _infer_columns(self):
ncols = len(line)
num_original_columns = ncols
- # GH 25623
- # validate that column indices in usecols are not out of bounds
- if self.usecols_dtype == "integer":
- _validate_usecols_names(self.usecols, range(ncols))
-
if not names:
if self.prefix:
columns = [
diff --git a/pandas/tests/io/parser/test_usecols.py b/pandas/tests/io/parser/test_usecols.py
index 47c4f93fbf59c..afe19608ea5c6 100644
--- a/pandas/tests/io/parser/test_usecols.py
+++ b/pandas/tests/io/parser/test_usecols.py
@@ -22,25 +22,6 @@
)
-@pytest.mark.parametrize(
- "names,usecols,missing",
- [
- (None, [0, 3], r"\[3\]"),
- (["a", "b", "c"], [0, -1, 2], r"\[-1\]"),
- (None, [3], r"\[3\]"),
- (["a"], [3], r"\[3\]"),
- ],
-)
-def test_usecols_out_of_bounds(all_parsers, names, usecols, missing):
- # See gh-25623
- data = "a,b,c\n1,2,3\n4,5,6"
- parser = all_parsers
-
- mssg = _msg_validate_usecols_names.format(missing)
- with pytest.raises(ValueError, match=mssg):
- parser.read_csv(StringIO(data), usecols=usecols, names=names)
-
-
def test_raise_on_mixed_dtype_usecols(all_parsers):
# See gh-12678
data = """a,b,c
| - [x] closes #27252
reverts #25623
@heckeop
@gfyoung I know you asked for a FutureWarning to be raised, but I didn't want to mess around with the validation function in place here, so I just did a simple revert for the sake of time / effort.
| https://api.github.com/repos/pandas-dev/pandas/pulls/27426 | 2019-07-16T21:58:23Z | 2019-07-17T11:52:36Z | 2019-07-17T11:52:36Z | 2020-01-16T00:35:03Z |
BUG: fix+test fillna with non-nano datetime64; closes #27419 | diff --git a/pandas/core/internals/blocks.py b/pandas/core/internals/blocks.py
index 897a82f9a1968..26aca34f20594 100644
--- a/pandas/core/internals/blocks.py
+++ b/pandas/core/internals/blocks.py
@@ -2250,7 +2250,12 @@ def _astype(self, dtype, **kwargs):
def _can_hold_element(self, element):
tipo = maybe_infer_dtype_type(element)
if tipo is not None:
- return is_dtype_equal(tipo, self.dtype)
+ if self.is_datetimetz:
+ # require exact match, since non-nano does not exist
+ return is_dtype_equal(tipo, self.dtype)
+
+ # GH#27419 if we get a non-nano datetime64 object
+ return is_datetime64_dtype(tipo)
elif element is NaT:
return True
elif isinstance(element, datetime):
diff --git a/pandas/tests/series/test_missing.py b/pandas/tests/series/test_missing.py
index f8a44b7f5639e..c5fc52b9b0c41 100644
--- a/pandas/tests/series/test_missing.py
+++ b/pandas/tests/series/test_missing.py
@@ -437,6 +437,17 @@ def test_datetime64_tz_fillna(self):
)
assert_series_equal(df.fillna(method="bfill"), exp)
+ def test_datetime64_non_nano_fillna(self):
+ # GH#27419
+ ser = Series([Timestamp("2010-01-01"), pd.NaT, Timestamp("2000-01-01")])
+ val = np.datetime64("1975-04-05", "ms")
+
+ result = ser.fillna(val)
+ expected = Series(
+ [Timestamp("2010-01-01"), Timestamp("1975-04-05"), Timestamp("2000-01-01")]
+ )
+ tm.assert_series_equal(result, expected)
+
def test_fillna_consistency(self):
# GH 16402
# fillna with a tz aware to a tz-naive, should result in object
| - [x] closes #27419
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/27425 | 2019-07-16T20:45:28Z | 2019-07-17T11:54:25Z | 2019-07-17T11:54:25Z | 2019-07-17T14:05:18Z |
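The fix above hinges on `datetime64` values at non-nanosecond resolution, which NumPy supports natively. A small illustration (not pandas code) of what "non-nano" means and why the value is still safe to hold in a nanosecond block:

```python
import numpy as np

# The fill value from the PR's test is millisecond-resolution, not
# pandas' native nanosecond resolution.
val = np.datetime64("1975-04-05", "ms")

# Casting to nanosecond resolution preserves the same moment in time.
val_ns = val.astype("datetime64[ns]")
```

Before the fix, `_can_hold_element` required an exact dtype match, so a `datetime64[ms]` scalar failed the check even though it casts losslessly to `datetime64[ns]`; the patch relaxes the check to any `datetime64` dtype for tz-naive blocks.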
Removed ABCs from pandas._typing | diff --git a/pandas/_typing.py b/pandas/_typing.py
index a1224a609579e..45c43fa958caa 100644
--- a/pandas/_typing.py
+++ b/pandas/_typing.py
@@ -1,34 +1,29 @@
from pathlib import Path
-from typing import IO, AnyStr, TypeVar, Union
+from typing import IO, TYPE_CHECKING, AnyStr, TypeVar, Union
import numpy as np
-from pandas._libs import Timestamp
-from pandas._libs.tslibs.period import Period
-from pandas._libs.tslibs.timedeltas import Timedelta
+# To prevent import cycles place any internal imports in the branch below
+# and use a string literal forward reference to it in subsequent types
+# https://mypy.readthedocs.io/en/latest/common_issues.html#import-cycles
+if TYPE_CHECKING:
+ from pandas._libs import Period, Timedelta, Timestamp # noqa: F401
+ from pandas.core.arrays.base import ExtensionArray # noqa: F401
+ from pandas.core.dtypes.dtypes import ExtensionDtype # noqa: F401
+ from pandas.core.indexes.base import Index # noqa: F401
+ from pandas.core.frame import DataFrame # noqa: F401
+ from pandas.core.series import Series # noqa: F401
+ from pandas.core.sparse.series import SparseSeries # noqa: F401
-from pandas.core.dtypes.dtypes import ExtensionDtype
-from pandas.core.dtypes.generic import (
- ABCDataFrame,
- ABCExtensionArray,
- ABCIndexClass,
- ABCSeries,
- ABCSparseSeries,
-)
AnyArrayLike = TypeVar(
- "AnyArrayLike",
- ABCExtensionArray,
- ABCIndexClass,
- ABCSeries,
- ABCSparseSeries,
- np.ndarray,
+ "AnyArrayLike", "ExtensionArray", "Index", "Series", "SparseSeries", np.ndarray
)
-ArrayLike = TypeVar("ArrayLike", ABCExtensionArray, np.ndarray)
-DatetimeLikeScalar = TypeVar("DatetimeLikeScalar", Period, Timestamp, Timedelta)
-Dtype = Union[str, np.dtype, ExtensionDtype]
+ArrayLike = TypeVar("ArrayLike", "ExtensionArray", np.ndarray)
+DatetimeLikeScalar = TypeVar("DatetimeLikeScalar", "Period", "Timestamp", "Timedelta")
+Dtype = Union[str, np.dtype, "ExtensionDtype"]
FilePathOrBuffer = Union[str, Path, IO[AnyStr]]
-FrameOrSeries = TypeVar("FrameOrSeries", ABCSeries, ABCDataFrame)
+FrameOrSeries = TypeVar("FrameOrSeries", "Series", "DataFrame")
Scalar = Union[str, int, float]
Axis = Union[str, int]
diff --git a/pandas/core/dtypes/common.py b/pandas/core/dtypes/common.py
index f2571573bd1bc..054c97056a117 100644
--- a/pandas/core/dtypes/common.py
+++ b/pandas/core/dtypes/common.py
@@ -167,12 +167,13 @@ def ensure_int_or_float(arr: ArrayLike, copy=False) -> np.array:
If the array is explicitly of type uint64 the type
will remain unchanged.
"""
+ # TODO: GH27506 potential bug with ExtensionArrays
try:
- return arr.astype("int64", copy=copy, casting="safe")
+ return arr.astype("int64", copy=copy, casting="safe") # type: ignore
except TypeError:
pass
try:
- return arr.astype("uint64", copy=copy, casting="safe")
+ return arr.astype("uint64", copy=copy, casting="safe") # type: ignore
except TypeError:
return arr.astype("float64", copy=copy)
diff --git a/pandas/core/indexes/interval.py b/pandas/core/indexes/interval.py
index 7372dada3b48a..66290ae54e626 100644
--- a/pandas/core/indexes/interval.py
+++ b/pandas/core/indexes/interval.py
@@ -906,35 +906,35 @@ def get_indexer(
)
raise InvalidIndexError(msg)
- target = ensure_index(target)
+ target_as_index = ensure_index(target)
- if isinstance(target, IntervalIndex):
+ if isinstance(target_as_index, IntervalIndex):
# equal indexes -> 1:1 positional match
- if self.equals(target):
+ if self.equals(target_as_index):
return np.arange(len(self), dtype="intp")
# different closed or incompatible subtype -> no matches
common_subtype = find_common_type(
- [self.dtype.subtype, target.dtype.subtype]
+ [self.dtype.subtype, target_as_index.dtype.subtype]
)
- if self.closed != target.closed or is_object_dtype(common_subtype):
- return np.repeat(np.intp(-1), len(target))
+ if self.closed != target_as_index.closed or is_object_dtype(common_subtype):
+ return np.repeat(np.intp(-1), len(target_as_index))
- # non-overlapping -> at most one match per interval in target
+ # non-overlapping -> at most one match per interval in target_as_index
# want exact matches -> need both left/right to match, so defer to
# left/right get_indexer, compare elementwise, equality -> match
- left_indexer = self.left.get_indexer(target.left)
- right_indexer = self.right.get_indexer(target.right)
+ left_indexer = self.left.get_indexer(target_as_index.left)
+ right_indexer = self.right.get_indexer(target_as_index.right)
indexer = np.where(left_indexer == right_indexer, left_indexer, -1)
- elif not is_object_dtype(target):
+ elif not is_object_dtype(target_as_index):
# homogeneous scalar index: use IntervalTree
- target = self._maybe_convert_i8(target)
- indexer = self._engine.get_indexer(target.values)
+ target_as_index = self._maybe_convert_i8(target_as_index)
+ indexer = self._engine.get_indexer(target_as_index.values)
else:
# heterogeneous scalar index: defer elementwise to get_loc
# (non-overlapping so get_loc guarantees scalar of KeyError)
indexer = []
- for key in target:
+ for key in target_as_index:
try:
loc = self.get_loc(key)
except KeyError:
@@ -947,21 +947,26 @@ def get_indexer(
def get_indexer_non_unique(
self, target: AnyArrayLike
) -> Tuple[np.ndarray, np.ndarray]:
- target = ensure_index(target)
+ target_as_index = ensure_index(target)
- # check that target IntervalIndex is compatible
- if isinstance(target, IntervalIndex):
+ # check that target_as_index IntervalIndex is compatible
+ if isinstance(target_as_index, IntervalIndex):
common_subtype = find_common_type(
- [self.dtype.subtype, target.dtype.subtype]
+ [self.dtype.subtype, target_as_index.dtype.subtype]
)
- if self.closed != target.closed or is_object_dtype(common_subtype):
+ if self.closed != target_as_index.closed or is_object_dtype(common_subtype):
# different closed or incompatible subtype -> no matches
- return np.repeat(-1, len(target)), np.arange(len(target))
+ return (
+ np.repeat(-1, len(target_as_index)),
+ np.arange(len(target_as_index)),
+ )
- if is_object_dtype(target) or isinstance(target, IntervalIndex):
- # target might contain intervals: defer elementwise to get_loc
+ if is_object_dtype(target_as_index) or isinstance(
+ target_as_index, IntervalIndex
+ ):
+ # target_as_index might contain intervals: defer elementwise to get_loc
indexer, missing = [], []
- for i, key in enumerate(target):
+ for i, key in enumerate(target_as_index):
try:
locs = self.get_loc(key)
if isinstance(locs, slice):
@@ -973,8 +978,10 @@ def get_indexer_non_unique(
indexer.append(locs)
indexer = np.concatenate(indexer)
else:
- target = self._maybe_convert_i8(target)
- indexer, missing = self._engine.get_indexer_non_unique(target.values)
+ target_as_index = self._maybe_convert_i8(target_as_index)
+ indexer, missing = self._engine.get_indexer_non_unique(
+ target_as_index.values
+ )
return ensure_platform_int(indexer), ensure_platform_int(missing)
diff --git a/pandas/core/window.py b/pandas/core/window.py
index 5098ab3c7220f..79a0f5f24af12 100644
--- a/pandas/core/window.py
+++ b/pandas/core/window.py
@@ -240,7 +240,7 @@ def _prep_values(self, values: Optional[np.ndarray] = None) -> np.ndarray:
return values
- def _wrap_result(self, result, block=None, obj=None) -> FrameOrSeries:
+ def _wrap_result(self, result, block=None, obj=None):
"""
Wrap a single result.
"""
diff --git a/setup.cfg b/setup.cfg
index 7f0062428c442..716ff5d9d8853 100644
--- a/setup.cfg
+++ b/setup.cfg
@@ -77,7 +77,9 @@ filterwarnings =
[coverage:run]
branch = False
-omit = */tests/*
+omit =
+ */tests/*
+ pandas/_typing.py
plugins = Cython.Coverage
[coverage:report]
| - [X] closes #27146
- [ ] tests added / passed
- [ ] passes `black pandas`
- [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
This reveals a few new typing issues, so it won't pass CI yet; submitting as a draft.
Here's one way that I think works to avoid import cycles. Basically, any internal imports in pandas._typing need to go in a TYPE_CHECKING block to prevent their runtime evaluation from causing import cycles.
To illustrate the effect, put something like this at the end of pandas._typing:
```python
def func(obj: FrameOrSeries) -> None:
reveal_type(obj)
```
It would previously give you
```sh
pandas/_typing.py:43: note: Revealed type is 'Any'
```
Whereas now gives:
```sh
pandas/_typing.py:44: note: Revealed type is 'pandas.core.series.Series*'
pandas/_typing.py:44: note: Revealed type is 'pandas.core.frame.DataFrame*'
```
@topper-123 is there any way to check in VSCode whether this works with code completion?
In the long run this may not be needed with PEP 563:
https://www.python.org/dev/peps/pep-0563/
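For reference, a minimal sketch of what PEP 563 buys, using a stdlib `Decimal` stand-in rather than pandas internals (names here are illustrative):

```python
from __future__ import annotations  # PEP 563: annotations are no longer evaluated at runtime

from typing import TYPE_CHECKING

if TYPE_CHECKING:
    # Seen only by type checkers, so importing here cannot create a
    # runtime import cycle.
    from decimal import Decimal


def double(x: Decimal) -> Decimal:
    return x * 2


# At runtime the annotation is kept as an unevaluated string.
print(double.__annotations__["x"])
```

With lazy annotations the string-literal quoting (`"Series"`, `"DataFrame"`) would no longer be needed.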
But I think this hack gets us there in the meantime | https://api.github.com/repos/pandas-dev/pandas/pulls/27424 | 2019-07-16T20:33:16Z | 2019-07-24T16:42:51Z | 2019-07-24T16:42:50Z | 2019-07-24T16:43:05Z |
CLN: fix build warning in c_timestamp.pyx | diff --git a/pandas/_libs/tslibs/c_timestamp.pyx b/pandas/_libs/tslibs/c_timestamp.pyx
index 2d3ea3e14775e..906dabba09486 100644
--- a/pandas/_libs/tslibs/c_timestamp.pyx
+++ b/pandas/_libs/tslibs/c_timestamp.pyx
@@ -19,7 +19,7 @@ from cpython cimport (PyObject_RichCompareBool, PyObject_RichCompare,
import numpy as np
cimport numpy as cnp
-from numpy cimport int64_t, int8_t
+from numpy cimport int64_t, int8_t, uint8_t, ndarray
cnp.import_array()
from cpython.datetime cimport (datetime,
@@ -320,7 +320,7 @@ cdef class _Timestamp(datetime):
cdef:
int64_t val
dict kwds
- int8_t out[1]
+ ndarray[uint8_t, cast=True] out
int month_kw
freq = self.freq
| ```
pandas/_libs/tslibs/c_timestamp.c: In function ‘__pyx_f_6pandas_5_libs_6tslibs_11c_timestamp_10_Timestamp__get_start_end_field’:
pandas/_libs/tslibs/c_timestamp.c:7257:25: warning: ‘__pyx_t_10’ may be used uninitialized in this function [-Wmaybe-uninitialized]
7257 | __pyx_t_5numpy_int8_t __pyx_t_10[1];
|
```
Retrying the fix dropped from https://github.com/pandas-dev/pandas/pull/27371.
A review [comment](https://github.com/pandas-dev/pandas/pull/27371#discussion_r303179721) there disliked reintroducing `ndarray` here because the file had been converted to using memoryviews. `get_start_end_field` returns a view of type `npy_bool`, so `out` should therefore be either a memoryview or an ndarray, and bool memoryviews will only be supported in Cython 0.30 (see cython/cython/pull/2676).
| https://api.github.com/repos/pandas-dev/pandas/pulls/27423 | 2019-07-16T20:30:10Z | 2019-07-20T20:36:14Z | 2019-07-20T20:36:14Z | 2019-07-20T21:39:05Z |
add some type annotations io/formats/format.py | diff --git a/pandas/io/formats/format.py b/pandas/io/formats/format.py
index 0e8ed7b25d665..ff31a3b4e6a1f 100644
--- a/pandas/io/formats/format.py
+++ b/pandas/io/formats/format.py
@@ -6,6 +6,7 @@
from functools import partial
from io import StringIO
from shutil import get_terminal_size
+from typing import TYPE_CHECKING, List, Optional, TextIO, Tuple, Union, cast
from unicodedata import east_asian_width
import numpy as np
@@ -47,6 +48,9 @@
from pandas.io.common import _expand_user, _stringify_path
from pandas.io.formats.printing import adjoin, justify, pprint_thing
+if TYPE_CHECKING:
+ from pandas import Series, DataFrame, Categorical
+
common_docstring = """
Parameters
----------
@@ -129,14 +133,21 @@
class CategoricalFormatter:
- def __init__(self, categorical, buf=None, length=True, na_rep="NaN", footer=True):
+ def __init__(
+ self,
+ categorical: "Categorical",
+ buf: Optional[TextIO] = None,
+ length: bool = True,
+ na_rep: str = "NaN",
+ footer: bool = True,
+ ):
self.categorical = categorical
self.buf = buf if buf is not None else StringIO("")
self.na_rep = na_rep
self.length = length
self.footer = footer
- def _get_footer(self):
+ def _get_footer(self) -> str:
footer = ""
if self.length:
@@ -153,7 +164,7 @@ def _get_footer(self):
return str(footer)
- def _get_formatted_values(self):
+ def _get_formatted_values(self) -> List[str]:
return format_array(
self.categorical._internal_get_values(),
None,
@@ -161,7 +172,7 @@ def _get_formatted_values(self):
na_rep=self.na_rep,
)
- def to_string(self):
+ def to_string(self) -> str:
categorical = self.categorical
if len(categorical) == 0:
@@ -172,10 +183,10 @@ def to_string(self):
fmt_values = self._get_formatted_values()
- result = ["{i}".format(i=i) for i in fmt_values]
- result = [i.strip() for i in result]
- result = ", ".join(result)
- result = ["[" + result + "]"]
+ fmt_values = ["{i}".format(i=i) for i in fmt_values]
+ fmt_values = [i.strip() for i in fmt_values]
+ values = ", ".join(fmt_values)
+ result = ["[" + values + "]"]
if self.footer:
footer = self._get_footer()
if footer:
@@ -187,17 +198,17 @@ def to_string(self):
class SeriesFormatter:
def __init__(
self,
- series,
- buf=None,
- length=True,
- header=True,
- index=True,
- na_rep="NaN",
- name=False,
- float_format=None,
- dtype=True,
- max_rows=None,
- min_rows=None,
+ series: "Series",
+ buf: Optional[TextIO] = None,
+ length: bool = True,
+ header: bool = True,
+ index: bool = True,
+ na_rep: str = "NaN",
+ name: bool = False,
+ float_format: Optional[str] = None,
+ dtype: bool = True,
+ max_rows: Optional[int] = None,
+ min_rows: Optional[int] = None,
):
self.series = series
self.buf = buf if buf is not None else StringIO()
@@ -217,7 +228,7 @@ def __init__(
self._chk_truncate()
- def _chk_truncate(self):
+ def _chk_truncate(self) -> None:
from pandas.core.reshape.concat import concat
min_rows = self.min_rows
@@ -227,6 +238,7 @@ def _chk_truncate(self):
truncate_v = max_rows and (len(self.series) > max_rows)
series = self.series
if truncate_v:
+ max_rows = cast(int, max_rows)
if min_rows:
# if min_rows is set (not None or 0), set max_rows to minimum
# of both
@@ -237,13 +249,13 @@ def _chk_truncate(self):
else:
row_num = max_rows // 2
series = concat((series.iloc[:row_num], series.iloc[-row_num:]))
- self.tr_row_num = row_num
+ self.tr_row_num = row_num # type: Optional[int]
else:
self.tr_row_num = None
self.tr_series = series
self.truncate_v = truncate_v
- def _get_footer(self):
+ def _get_footer(self) -> str:
name = self.series.name
footer = ""
@@ -281,7 +293,7 @@ def _get_footer(self):
return str(footer)
- def _get_formatted_index(self):
+ def _get_formatted_index(self) -> Tuple[List[str], bool]:
index = self.tr_series.index
is_multi = isinstance(index, ABCMultiIndex)
@@ -293,13 +305,13 @@ def _get_formatted_index(self):
fmt_index = index.format(name=True)
return fmt_index, have_header
- def _get_formatted_values(self):
+ def _get_formatted_values(self) -> List[str]:
values_to_format = self.tr_series._formatting_values()
return format_array(
values_to_format, None, float_format=self.float_format, na_rep=self.na_rep
)
- def to_string(self):
+ def to_string(self) -> str:
series = self.tr_series
footer = self._get_footer()
@@ -314,6 +326,7 @@ def to_string(self):
if self.truncate_v:
n_header_rows = 0
row_num = self.tr_row_num
+ row_num = cast(int, row_num)
width = self.adj.len(fmt_values[row_num - 1])
if width > 3:
dot_str = "..."
@@ -501,7 +514,7 @@ def __init__(
self._chk_truncate()
self.adj = _get_adjustment()
- def _chk_truncate(self):
+ def _chk_truncate(self) -> None:
"""
Checks whether the frame should be truncated. If so, slices
the frame up.
@@ -577,7 +590,7 @@ def _chk_truncate(self):
self.truncate_v = truncate_v
self.is_truncated = self.truncate_h or self.truncate_v
- def _to_str_columns(self):
+ def _to_str_columns(self) -> List[List[str]]:
"""
Render a DataFrame to a list of columns (as lists of strings).
"""
@@ -667,7 +680,7 @@ def _to_str_columns(self):
strcols[ix].insert(row_num + n_header_rows, dot_str)
return strcols
- def to_string(self):
+ def to_string(self) -> None:
"""
Render a DataFrame to a console-friendly tabular output.
"""
@@ -803,7 +816,7 @@ def to_latex(
else:
raise TypeError("buf is not a file name and it has no write " "method")
- def _format_col(self, i):
+ def _format_col(self, i: int) -> List[str]:
frame = self.tr_frame
formatter = self._get_formatter(i)
values_to_format = frame.iloc[:, i]._formatting_values()
@@ -816,7 +829,12 @@ def _format_col(self, i):
decimal=self.decimal,
)
- def to_html(self, classes=None, notebook=False, border=None):
+ def to_html(
+ self,
+ classes: Optional[Union[str, List, Tuple]] = None,
+ notebook: bool = False,
+ border: Optional[int] = None,
+ ) -> None:
"""
Render a DataFrame to a html table.
@@ -845,7 +863,7 @@ def to_html(self, classes=None, notebook=False, border=None):
else:
raise TypeError("buf is not a file name and it has no write " " method")
- def _get_formatted_column_labels(self, frame):
+ def _get_formatted_column_labels(self, frame: "DataFrame") -> List[List[str]]:
from pandas.core.index import _sparsify
columns = frame.columns
@@ -887,22 +905,22 @@ def space_format(x, y):
return str_columns
@property
- def has_index_names(self):
+ def has_index_names(self) -> bool:
return _has_names(self.frame.index)
@property
- def has_column_names(self):
+ def has_column_names(self) -> bool:
return _has_names(self.frame.columns)
@property
- def show_row_idx_names(self):
+ def show_row_idx_names(self) -> bool:
return all((self.has_index_names, self.index, self.show_index_names))
@property
- def show_col_idx_names(self):
+ def show_col_idx_names(self) -> bool:
return all((self.has_column_names, self.show_index_names, self.header))
- def _get_formatted_index(self, frame):
+ def _get_formatted_index(self, frame: "DataFrame") -> List[str]:
# Note: this is only used by to_string() and to_latex(), not by
# to_html().
index = frame.index
@@ -941,8 +959,8 @@ def _get_formatted_index(self, frame):
else:
return adjoined
- def _get_column_name_list(self):
- names = []
+ def _get_column_name_list(self) -> List[str]:
+ names = [] # type: List[str]
columns = self.frame.columns
if isinstance(columns, ABCMultiIndex):
names.extend("" if name is None else name for name in columns.names)
diff --git a/pandas/io/formats/html.py b/pandas/io/formats/html.py
index c2f4ee2c4a68b..91e90a78d87a7 100644
--- a/pandas/io/formats/html.py
+++ b/pandas/io/formats/html.py
@@ -4,11 +4,11 @@
from collections import OrderedDict
from textwrap import dedent
-from typing import Dict, List, Optional, Tuple, Union
+from typing import Any, Dict, Iterable, List, Optional, Tuple, Union
from pandas._config import get_option
-from pandas.core.dtypes.generic import ABCIndex, ABCMultiIndex
+from pandas.core.dtypes.generic import ABCMultiIndex
from pandas import option_context
@@ -37,7 +37,7 @@ def __init__(
self,
formatter: DataFrameFormatter,
classes: Optional[Union[str, List, Tuple]] = None,
- border: Optional[bool] = None,
+ border: Optional[int] = None,
) -> None:
self.fmt = formatter
self.classes = classes
@@ -79,7 +79,7 @@ def row_levels(self) -> int:
# not showing (row) index
return 0
- def _get_columns_formatted_values(self) -> ABCIndex:
+ def _get_columns_formatted_values(self) -> Iterable:
return self.columns
@property
@@ -90,12 +90,12 @@ def is_truncated(self) -> bool:
def ncols(self) -> int:
return len(self.fmt.tr_frame.columns)
- def write(self, s: str, indent: int = 0) -> None:
+ def write(self, s: Any, indent: int = 0) -> None:
rs = pprint_thing(s)
self.elements.append(" " * indent + rs)
def write_th(
- self, s: str, header: bool = False, indent: int = 0, tags: Optional[str] = None
+ self, s: Any, header: bool = False, indent: int = 0, tags: Optional[str] = None
) -> None:
"""
Method for writting a formatted <th> cell.
@@ -125,11 +125,11 @@ def write_th(
self._write_cell(s, kind="th", indent=indent, tags=tags)
- def write_td(self, s: str, indent: int = 0, tags: Optional[str] = None) -> None:
+ def write_td(self, s: Any, indent: int = 0, tags: Optional[str] = None) -> None:
self._write_cell(s, kind="td", indent=indent, tags=tags)
def _write_cell(
- self, s: str, kind: str = "td", indent: int = 0, tags: Optional[str] = None
+ self, s: Any, kind: str = "td", indent: int = 0, tags: Optional[str] = None
) -> None:
if tags is not None:
start_tag = "<{kind} {tags}>".format(kind=kind, tags=tags)
@@ -162,7 +162,7 @@ def _write_cell(
def write_tr(
self,
- line: List[str],
+ line: Iterable,
indent: int = 0,
indent_delta: int = 0,
header: bool = False,
| Typing has not been added to all function signatures in io/formats/format.py in this pass, to keep the diff small.
~~this PR also contains a refactor to simplify the addition of type annotations; `truncate_h` and `tr_col_num` are dependent variables. `truncate_h` can be `True` or `False`. If `True`, `tr_col_num` is an `int`; if `truncate_h` is `False`, `tr_col_num` is `None`~~
~~same applies for `truncate_v` and `tr_row_num`~~
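The `cast(int, ...)` calls in the diff follow a common pattern for such dependent `Optional` variables; a minimal, hypothetical sketch (the function and names are illustrative, not pandas API):

```python
from typing import Optional, cast


def center_row(max_rows: Optional[int]) -> int:
    truncate = max_rows is not None and max_rows > 0
    if truncate:
        # The type checker cannot deduce from the `truncate` flag alone that
        # max_rows is non-None here, so cast() asserts the invariant for it.
        max_rows = cast(int, max_rows)
        return max_rows // 2
    return 0


print(center_row(11))
```

`cast` is purely a type-checker hint and performs no runtime check.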
| https://api.github.com/repos/pandas-dev/pandas/pulls/27418 | 2019-07-16T15:43:54Z | 2019-07-21T01:06:45Z | 2019-07-21T01:06:45Z | 2019-07-22T11:48:22Z |
CI: limit pytest version on 3.6 | diff --git a/ci/deps/azure-37-numpydev.yaml b/ci/deps/azure-37-numpydev.yaml
index c56dc819a90b1..5cf897c98da10 100644
--- a/ci/deps/azure-37-numpydev.yaml
+++ b/ci/deps/azure-37-numpydev.yaml
@@ -17,4 +17,5 @@ dependencies:
- "--pre"
- "numpy"
- "scipy"
- - pytest-azurepipelines
+ # https://github.com/pandas-dev/pandas/issues/27421
+ - pytest-azurepipelines<1.0.0
diff --git a/ci/deps/azure-macos-35.yaml b/ci/deps/azure-macos-35.yaml
index 0b96dd9762ef5..98859b596ab2a 100644
--- a/ci/deps/azure-macos-35.yaml
+++ b/ci/deps/azure-macos-35.yaml
@@ -29,4 +29,6 @@ dependencies:
- pytest-xdist
- pytest-mock
- hypothesis>=3.58.0
- - pytest-azurepipelines
+ # https://github.com/pandas-dev/pandas/issues/27421
+ - pytest-azurepipelines<1.0.0
+
| closes #27406 | https://api.github.com/repos/pandas-dev/pandas/pulls/27416 | 2019-07-16T12:25:23Z | 2019-07-16T19:19:30Z | 2019-07-16T19:19:30Z | 2019-07-16T19:19:30Z |
CLN/REF: Unify Arithmetic Methods | diff --git a/pandas/core/arrays/datetimelike.py b/pandas/core/arrays/datetimelike.py
index df17388856117..87cda22e3b676 100644
--- a/pandas/core/arrays/datetimelike.py
+++ b/pandas/core/arrays/datetimelike.py
@@ -39,7 +39,7 @@
from pandas.core.dtypes.missing import is_valid_nat_for_dtype, isna
from pandas._typing import DatetimeLikeScalar
-from pandas.core import missing, nanops
+from pandas.core import missing, nanops, ops
from pandas.core.algorithms import checked_add_with_arr, take, unique1d, value_counts
import pandas.core.common as com
@@ -926,6 +926,21 @@ def _is_unique(self):
# ------------------------------------------------------------------
# Arithmetic Methods
+ # pow is invalid for all three subclasses; TimedeltaArray will override
+ # the multiplication and division ops
+ __pow__ = ops.make_invalid_op("__pow__")
+ __rpow__ = ops.make_invalid_op("__rpow__")
+ __mul__ = ops.make_invalid_op("__mul__")
+ __rmul__ = ops.make_invalid_op("__rmul__")
+ __truediv__ = ops.make_invalid_op("__truediv__")
+ __rtruediv__ = ops.make_invalid_op("__rtruediv__")
+ __floordiv__ = ops.make_invalid_op("__floordiv__")
+ __rfloordiv__ = ops.make_invalid_op("__rfloordiv__")
+ __mod__ = ops.make_invalid_op("__mod__")
+ __rmod__ = ops.make_invalid_op("__rmod__")
+ __divmod__ = ops.make_invalid_op("__divmod__")
+ __rdivmod__ = ops.make_invalid_op("__rdivmod__")
+
def _add_datetimelike_scalar(self, other):
# Overriden by TimedeltaArray
raise TypeError(
diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index e084f99ec5a2c..f8ce3258110e5 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -9,7 +9,7 @@
from pandas._libs import algos as libalgos, index as libindex, lib
import pandas._libs.join as libjoin
from pandas._libs.lib import is_datetime_array
-from pandas._libs.tslibs import OutOfBoundsDatetime, Timedelta, Timestamp
+from pandas._libs.tslibs import OutOfBoundsDatetime, Timestamp
from pandas._libs.tslibs.timezones import tz_compare
from pandas.compat import set_function_name
from pandas.compat.numpy import function as nv
@@ -55,7 +55,6 @@
ABCPandasArray,
ABCPeriodIndex,
ABCSeries,
- ABCTimedeltaArray,
ABCTimedeltaIndex,
)
from pandas.core.dtypes.missing import array_equivalent, isna
@@ -126,28 +125,8 @@ def cmp_method(self, other):
def _make_arithmetic_op(op, cls):
def index_arithmetic_method(self, other):
- if isinstance(other, (ABCSeries, ABCDataFrame)):
- return NotImplemented
- elif isinstance(other, ABCTimedeltaIndex):
- # Defer to subclass implementation
+ if isinstance(other, (ABCSeries, ABCDataFrame, ABCTimedeltaIndex)):
return NotImplemented
- elif isinstance(
- other, (np.ndarray, ABCTimedeltaArray)
- ) and is_timedelta64_dtype(other):
- # GH#22390; wrap in Series for op, this will in turn wrap in
- # TimedeltaIndex, but will correctly raise TypeError instead of
- # NullFrequencyError for add/sub ops
- from pandas import Series
-
- other = Series(other)
- out = op(self, other)
- return Index(out, name=self.name)
-
- # handle time-based others
- if isinstance(other, (ABCDateOffset, np.timedelta64, timedelta)):
- return self._evaluate_with_timedelta_like(other, op)
-
- other = self._validate_for_numeric_binop(other, op)
from pandas import Series
@@ -5336,32 +5315,6 @@ def drop(self, labels, errors="raise"):
# --------------------------------------------------------------------
# Generated Arithmetic, Comparison, and Unary Methods
- def _evaluate_with_timedelta_like(self, other, op):
- # Timedelta knows how to operate with np.array, so dispatch to that
- # operation and then wrap the results
- if self._is_numeric_dtype and op.__name__ in ["add", "sub", "radd", "rsub"]:
- raise TypeError(
- "Operation {opname} between {cls} and {other} "
- "is invalid".format(
- opname=op.__name__, cls=self.dtype, other=type(other).__name__
- )
- )
-
- other = Timedelta(other)
- values = self.values
-
- with np.errstate(all="ignore"):
- result = op(values, other)
-
- attrs = self._get_attributes_dict()
- attrs = self._maybe_update_attributes(attrs)
- if op == divmod:
- return Index(result[0], **attrs), Index(result[1], **attrs)
- return Index(result, **attrs)
-
- def _evaluate_with_datetime_like(self, other, op):
- raise TypeError("can only perform ops with datetime like values")
-
@classmethod
def _add_comparison_methods(cls):
"""
diff --git a/pandas/core/indexes/datetimelike.py b/pandas/core/indexes/datetimelike.py
index 731ab9c416345..0fb8f6823ac18 100644
--- a/pandas/core/indexes/datetimelike.py
+++ b/pandas/core/indexes/datetimelike.py
@@ -62,6 +62,16 @@ def method(self, *args, **kwargs):
return method
+def _make_wrapped_arith_op(opname):
+ def method(self, other):
+ meth = getattr(self._data, opname)
+ result = meth(maybe_unwrap_index(other))
+ return wrap_arithmetic_op(self, other, result)
+
+ method.__name__ = opname
+ return method
+
+
class DatetimeIndexOpsMixin(ExtensionOpsMixin):
"""
common ops mixin to support a unified interface datetimelike Index
@@ -531,6 +541,19 @@ def __rsub__(self, other):
cls.__rsub__ = __rsub__
+ __pow__ = _make_wrapped_arith_op("__pow__")
+ __rpow__ = _make_wrapped_arith_op("__rpow__")
+ __mul__ = _make_wrapped_arith_op("__mul__")
+ __rmul__ = _make_wrapped_arith_op("__rmul__")
+ __floordiv__ = _make_wrapped_arith_op("__floordiv__")
+ __rfloordiv__ = _make_wrapped_arith_op("__rfloordiv__")
+ __mod__ = _make_wrapped_arith_op("__mod__")
+ __rmod__ = _make_wrapped_arith_op("__rmod__")
+ __divmod__ = _make_wrapped_arith_op("__divmod__")
+ __rdivmod__ = _make_wrapped_arith_op("__rdivmod__")
+ __truediv__ = _make_wrapped_arith_op("__truediv__")
+ __rtruediv__ = _make_wrapped_arith_op("__rtruediv__")
+
def isin(self, values, level=None):
"""
Compute boolean array of whether each index value is found in the
diff --git a/pandas/core/indexes/timedeltas.py b/pandas/core/indexes/timedeltas.py
index 5a2dece98150f..19d0d2341dac1 100644
--- a/pandas/core/indexes/timedeltas.py
+++ b/pandas/core/indexes/timedeltas.py
@@ -30,8 +30,6 @@
from pandas.core.indexes.datetimelike import (
DatetimeIndexOpsMixin,
DatetimelikeDelegateMixin,
- maybe_unwrap_index,
- wrap_arithmetic_op,
)
from pandas.core.indexes.numeric import Int64Index
from pandas.core.ops import get_op_result_name
@@ -39,18 +37,6 @@
from pandas.tseries.frequencies import to_offset
-def _make_wrapped_arith_op(opname):
-
- meth = getattr(TimedeltaArray, opname)
-
- def method(self, other):
- result = meth(self._data, maybe_unwrap_index(other))
- return wrap_arithmetic_op(self, other, result)
-
- method.__name__ = opname
- return method
-
-
class TimedeltaDelegateMixin(DatetimelikeDelegateMixin):
# Most attrs are dispatched via datetimelike_{ops,methods}
# Some are "raw" methods, the result is not not re-boxed in an Index
@@ -320,17 +306,6 @@ def _format_native_types(self, na_rep="NaT", date_format=None, **kwargs):
# -------------------------------------------------------------------
# Wrapping TimedeltaArray
- __mul__ = _make_wrapped_arith_op("__mul__")
- __rmul__ = _make_wrapped_arith_op("__rmul__")
- __floordiv__ = _make_wrapped_arith_op("__floordiv__")
- __rfloordiv__ = _make_wrapped_arith_op("__rfloordiv__")
- __mod__ = _make_wrapped_arith_op("__mod__")
- __rmod__ = _make_wrapped_arith_op("__rmod__")
- __divmod__ = _make_wrapped_arith_op("__divmod__")
- __rdivmod__ = _make_wrapped_arith_op("__rdivmod__")
- __truediv__ = _make_wrapped_arith_op("__truediv__")
- __rtruediv__ = _make_wrapped_arith_op("__rtruediv__")
-
# Compat for frequency inference, see GH#23789
_is_monotonic_increasing = Index.is_monotonic_increasing
_is_monotonic_decreasing = Index.is_monotonic_decreasing
diff --git a/pandas/core/ops/__init__.py b/pandas/core/ops/__init__.py
index 230abd6b301a6..50da5e4057210 100644
--- a/pandas/core/ops/__init__.py
+++ b/pandas/core/ops/__init__.py
@@ -425,8 +425,8 @@ def masked_arith_op(x, y, op):
# For Series `x` is 1D so ravel() is a no-op; calling it anyway makes
# the logic valid for both Series and DataFrame ops.
xrav = x.ravel()
- assert isinstance(x, (np.ndarray, ABCSeries)), type(x)
- if isinstance(y, (np.ndarray, ABCSeries, ABCIndexClass)):
+ assert isinstance(x, np.ndarray), type(x)
+ if isinstance(y, np.ndarray):
dtype = find_common_type([x.dtype, y.dtype])
result = np.empty(x.size, dtype=dtype)
@@ -444,7 +444,7 @@ def masked_arith_op(x, y, op):
if mask.any():
with np.errstate(all="ignore"):
- result[mask] = op(xrav[mask], com.values_from_object(yrav[mask]))
+ result[mask] = op(xrav[mask], yrav[mask])
else:
assert is_scalar(y), type(y)
| 2 goals here.
First is to end up with a single implementation for our arithmetic ops. By implementing the missing methods directly on DTA/PA, we make it so we don't need to special-case inside the `Index.__foo__` methods, so can dispatch to Series more directly.
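The invalid-op stubs in the diff come from a small method factory; a rough sketch of the idea (simplified, not the exact pandas implementation):

```python
def make_invalid_op(name: str):
    # Build a method that unconditionally raises for an unsupported operation.
    def invalid_op(self, other=None):
        raise TypeError(
            f"cannot perform {name} with this index type: {type(self).__name__}"
        )

    invalid_op.__name__ = name
    return invalid_op


class DatetimeLike:
    # Stub out ops that make no sense for datetime-like values.
    __pow__ = make_invalid_op("__pow__")
    __mul__ = make_invalid_op("__mul__")


try:
    DatetimeLike() ** 2
except TypeError as exc:
    print("raised:", exc)
```

Subclasses (e.g. a timedelta-like array) can then simply override the ops that do make sense for them.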
Second is to get the wrapper defined in `_arith_method_SERIES` to the point where we can use it block-wise instead of column-wise, to address the recent perf issues. This PR makes progress towards that goal by tightening up the allowed types in `masked_arith_op`.
Oh, and we also get rid of a usage of values_from_object, which is adjacent to #27165 and #27167 | https://api.github.com/repos/pandas-dev/pandas/pulls/27413 | 2019-07-16T04:10:14Z | 2019-07-20T20:06:05Z | 2019-07-20T20:06:05Z | 2019-07-20T20:08:01Z |
CLN: fix compiler warnings in tzconversion.pyx | diff --git a/pandas/_libs/tslibs/tzconversion.pyx b/pandas/_libs/tslibs/tzconversion.pyx
index 26a64c13f6de1..dd0c6fc75b06f 100644
--- a/pandas/_libs/tslibs/tzconversion.pyx
+++ b/pandas/_libs/tslibs/tzconversion.pyx
@@ -96,6 +96,8 @@ timedelta-like}
result[i] = _tz_convert_tzlocal_utc(v, tz, to_utc=True)
return result
+ # silence false-positive compiler warning
+ ambiguous_array = np.empty(0, dtype=bool)
if isinstance(ambiguous, str):
if ambiguous == 'infer':
infer_dst = True
@@ -159,6 +161,8 @@ timedelta-like}
if v_right + deltas[pos_right] == val:
result_b[i] = v_right
+ # silence false-positive compiler warning
+ dst_hours = np.empty(0, dtype=np.int64)
if infer_dst:
dst_hours = np.empty(n, dtype=np.int64)
dst_hours[:] = NPY_NAT
| Fixes
```
pandas/_libs/tslibs/tzconversion.c:3157:21: warning: ‘__pyx_pybuffernd_dst_hours.diminfo[0].strides’ may be used uninitialized in this function [-Wmaybe-uninitialized]
3157 | __Pyx_LocalBuf_ND __pyx_pybuffernd_dst_hours;
| ^~~~~~~~~~~~~~~~~~~~~~~~~~
pandas/_libs/tslibs/tzconversion.c:3149:21: warning: ‘__pyx_pybuffernd_ambiguous_array.diminfo[0].strides’ may be used uninitialized in this function [-Wmaybe-uninitialized]
3149 | __Pyx_LocalBuf_ND __pyx_pybuffernd_ambiguous_array;
``` | https://api.github.com/repos/pandas-dev/pandas/pulls/27412 | 2019-07-16T04:00:59Z | 2019-07-16T19:42:50Z | 2019-07-16T19:42:50Z | 2019-07-16T20:17:40Z |
REF: stop allowing iNaT in TimedeltaBlock methods | diff --git a/pandas/core/internals/blocks.py b/pandas/core/internals/blocks.py
index 897a82f9a1968..c0d3368c652ec 100644
--- a/pandas/core/internals/blocks.py
+++ b/pandas/core/internals/blocks.py
@@ -2597,6 +2597,7 @@ class TimeDeltaBlock(DatetimeLikeBlockMixin, IntBlock):
is_timedelta = True
_can_hold_na = True
is_numeric = False
+ fill_value = np.timedelta64("NaT", "ns")
def __init__(self, values, placement, ndim=None):
if values.dtype != _TD_DTYPE:
@@ -2617,15 +2618,11 @@ def _box_func(self):
def _can_hold_element(self, element):
tipo = maybe_infer_dtype_type(element)
if tipo is not None:
- # TODO: remove the np.int64 support once coerce_values and
- # _try_coerce_args both coerce to m8[ns] and not i8.
- return issubclass(tipo.type, (np.timedelta64, np.int64))
+ return issubclass(tipo.type, np.timedelta64)
elif element is NaT:
return True
elif isinstance(element, (timedelta, np.timedelta64)):
return True
- elif is_integer(element):
- return element == tslibs.iNaT
return is_valid_nat_for_dtype(element, self.dtype)
def fillna(self, value, **kwargs):
@@ -2645,9 +2642,6 @@ def fillna(self, value, **kwargs):
value = Timedelta(value, unit="s")
return super().fillna(value, **kwargs)
- def _coerce_values(self, values):
- return values.view("i8")
-
def _try_coerce_args(self, other):
"""
Coerce values and other to int64, with null values converted to
@@ -2663,13 +2657,12 @@ def _try_coerce_args(self, other):
"""
if is_valid_nat_for_dtype(other, self.dtype):
- other = tslibs.iNaT
- elif is_integer(other) and other == tslibs.iNaT:
- pass
+ other = np.timedelta64("NaT", "ns")
elif isinstance(other, (timedelta, np.timedelta64)):
- other = Timedelta(other).value
+ other = Timedelta(other).to_timedelta64()
elif hasattr(other, "dtype") and is_timedelta64_dtype(other):
- other = other.astype("i8", copy=False).view("i8")
+ # TODO: can we get here with non-nano dtype?
+ pass
else:
# coercion issues
# let higher levels handle
@@ -2683,7 +2676,7 @@ def _try_coerce_result(self, result):
mask = isna(result)
if result.dtype.kind in ["i", "f"]:
result = result.astype("m8[ns]")
- result[mask] = tslibs.iNaT
+ result[mask] = np.timedelta64("NaT", "ns")
elif isinstance(result, (np.integer, np.float)):
result = self._box_func(result)
diff --git a/pandas/core/nanops.py b/pandas/core/nanops.py
index ce14cb22a88ce..aa255d03f9db7 100644
--- a/pandas/core/nanops.py
+++ b/pandas/core/nanops.py
@@ -1362,6 +1362,14 @@ def _nanpercentile_1d(values, mask, q, na_value, interpolation):
quantiles : scalar or array
"""
# mask is Union[ExtensionArray, ndarray]
+ if values.dtype.kind == "m":
+ # need to cast to integer to avoid rounding errors in numpy
+ result = _nanpercentile_1d(values.view("i8"), mask, q, na_value, interpolation)
+
+        # Note: we have to do `astype` and not view because in general we
+ # have float result at this point, not i8
+ return result.astype(values.dtype)
+
values = values[~mask]
if len(values) == 0:
diff --git a/pandas/tests/series/test_missing.py b/pandas/tests/series/test_missing.py
index f8a44b7f5639e..adb23fc6b94ea 100644
--- a/pandas/tests/series/test_missing.py
+++ b/pandas/tests/series/test_missing.py
@@ -780,9 +780,11 @@ def test_timedelta64_nan(self):
td1[0] = td[0]
assert not isna(td1[0])
+ # GH#16674 iNaT is treated as an integer when given by the user
td1[1] = iNaT
- assert isna(td1[1])
- assert td1[1].value == iNaT
+ assert not isna(td1[1])
+ assert td1.dtype == np.object_
+ assert td1[1] == iNaT
td1[1] = td[1]
assert not isna(td1[1])
@@ -792,6 +794,7 @@ def test_timedelta64_nan(self):
td1[2] = td[2]
assert not isna(td1[2])
+ # FIXME: don't leave commented-out
# boolean setting
# this doesn't work, not sure numpy even supports it
# result = td[(td>np.timedelta64(timedelta(days=3))) &
| Working towards minimizing the amount of special-casing we do within internals | https://api.github.com/repos/pandas-dev/pandas/pulls/27411 | 2019-07-16T01:37:56Z | 2019-07-22T21:13:57Z | 2019-07-22T21:13:57Z | 2019-07-22T21:31:32Z |
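The core of this change is representing a missing timedelta as `np.timedelta64("NaT", "ns")` rather than the raw `i8` sentinel `iNaT`. A minimal NumPy sketch of that idea (an illustration only, not pandas internals):

```python
import numpy as np

# Missing timedeltas as a typed NaT value instead of a bare int64 sentinel.
nat = np.timedelta64("NaT", "ns")

arr = np.array([1_000_000_000, 2_000_000_000], dtype="m8[ns]")
arr = np.append(arr, nat)

# NaT is detected with np.isnat, not by comparing against an integer.
mask = np.isnat(arr)

# Viewing as i8 exposes the underlying sentinel (int64 min), which is
# what the removed code paths compared against directly via iNaT.
sentinel = int(arr.view("i8")[-1])
```

Keeping the values in `m8[ns]` means null checks go through dtype-aware helpers like `is_valid_nat_for_dtype`, rather than special-casing integers.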
CLN: docstring | diff --git a/pandas/core/groupby/grouper.py b/pandas/core/groupby/grouper.py
index a127d092b7b1a..f8417c3f01eac 100644
--- a/pandas/core/groupby/grouper.py
+++ b/pandas/core/groupby/grouper.py
@@ -41,9 +41,8 @@ class Grouper:
level and/or axis parameters are given, a level of the index of the target
object.
- These are local specifications and will override 'global' settings,
- that is the parameters axis and level which are passed to the groupby
- itself.
+ If `axis` and/or `level` are passed as keywords to both `Grouper` and
+ `groupby`, the values passed to `Grouper` take precedence.
Parameters
----------
| https://api.github.com/repos/pandas-dev/pandas/pulls/27410 | 2019-07-15T23:00:54Z | 2019-07-16T15:25:40Z | 2019-07-16T15:25:40Z | 2019-07-16T20:33:19Z | |
TST: Fix integer ops comparison test | diff --git a/pandas/tests/arrays/test_integer.py b/pandas/tests/arrays/test_integer.py
index 0fe07caed5b85..8065c33a9e116 100644
--- a/pandas/tests/arrays/test_integer.py
+++ b/pandas/tests/arrays/test_integer.py
@@ -314,11 +314,11 @@ def test_rpow_one_to_na(self):
class TestComparisonOps(BaseOpsUtil):
- def _compare_other(self, s, data, op_name, other):
+ def _compare_other(self, data, op_name, other):
op = self.get_op_from_name(op_name)
# array
- result = op(s, other)
+ result = pd.Series(op(data, other))
expected = pd.Series(op(data._data, other))
# fill the nan locations
@@ -340,14 +340,12 @@ def _compare_other(self, s, data, op_name, other):
def test_compare_scalar(self, data, all_compare_operators):
op_name = all_compare_operators
- s = pd.Series(data)
- self._compare_other(s, data, op_name, 0)
+ self._compare_other(data, op_name, 0)
def test_compare_array(self, data, all_compare_operators):
op_name = all_compare_operators
- s = pd.Series(data)
other = pd.Series([0] * len(data))
- self._compare_other(s, data, op_name, other)
+ self._compare_other(data, op_name, other)
class TestCasting(object):
diff --git a/setup.cfg b/setup.cfg
index 4726a0ddb2fb2..2e07182196d5b 100644
--- a/setup.cfg
+++ b/setup.cfg
@@ -90,7 +90,7 @@ known_post_core=pandas.tseries,pandas.io,pandas.plotting
sections=FUTURE,STDLIB,THIRDPARTY,PRE_CORE,DTYPES,FIRSTPARTY,POST_CORE,LOCALFOLDER
known_first_party=pandas
-known_third_party=Cython,numpy,python-dateutil,pytz,pyarrow
+known_third_party=Cython,numpy,python-dateutil,pytz,pyarrow,pytest
multi_line_output=4
force_grid_wrap=0
combine_as_imports=True
| The `op(Series[integer], other)` path was being tested twice.
The `op(IntegerArray, other)` path was not being tested.
Closes https://github.com/pandas-dev/pandas/issues/22096 | https://api.github.com/repos/pandas-dev/pandas/pulls/23619 | 2018-11-10T21:23:39Z | 2018-11-11T14:24:43Z | 2018-11-11T14:24:43Z | 2018-11-14T21:03:52Z |
BUG: Index.str.partition not nan-safe (#23558) | diff --git a/doc/source/whatsnew/v0.24.0.rst b/doc/source/whatsnew/v0.24.0.rst
index 98941b6d353bb..fc58c8eac61c9 100644
--- a/doc/source/whatsnew/v0.24.0.rst
+++ b/doc/source/whatsnew/v0.24.0.rst
@@ -1268,8 +1268,8 @@ Numeric
Strings
^^^^^^^
--
--
+- Bug in :meth:`Index.str.partition` was not nan-safe (:issue:`23558`).
+- Bug in :meth:`Index.str.split` was not nan-safe (:issue:`23677`).
-
Interval
diff --git a/pandas/_libs/lib.pyx b/pandas/_libs/lib.pyx
index 0088a698f49e0..e89c8fa579687 100644
--- a/pandas/_libs/lib.pyx
+++ b/pandas/_libs/lib.pyx
@@ -2273,7 +2273,7 @@ def to_object_array_tuples(rows: list):
k = 0
for i in range(n):
- tmp = len(rows[i])
+ tmp = 1 if checknull(rows[i]) else len(rows[i])
if tmp > k:
k = tmp
@@ -2287,7 +2287,7 @@ def to_object_array_tuples(rows: list):
except Exception:
# upcast any subclasses to tuple
for i in range(n):
- row = tuple(rows[i])
+ row = (rows[i],) if checknull(rows[i]) else tuple(rows[i])
for j in range(len(row)):
result[i, j] = row[j]
diff --git a/pandas/tests/test_strings.py b/pandas/tests/test_strings.py
index 7cd9182b4dff4..42f0cebea83a0 100644
--- a/pandas/tests/test_strings.py
+++ b/pandas/tests/test_strings.py
@@ -2330,24 +2330,35 @@ def test_split_to_dataframe(self):
s.str.split('_', expand="not_a_boolean")
def test_split_to_multiindex_expand(self):
- idx = Index(['nosplit', 'alsonosplit'])
+ # https://github.com/pandas-dev/pandas/issues/23677
+
+ idx = Index(['nosplit', 'alsonosplit', np.nan])
result = idx.str.split('_', expand=True)
exp = idx
tm.assert_index_equal(result, exp)
assert result.nlevels == 1
- idx = Index(['some_equal_splits', 'with_no_nans'])
+ idx = Index(['some_equal_splits', 'with_no_nans', np.nan, None])
result = idx.str.split('_', expand=True)
- exp = MultiIndex.from_tuples([('some', 'equal', 'splits'), (
- 'with', 'no', 'nans')])
+ exp = MultiIndex.from_tuples([('some', 'equal', 'splits'),
+ ('with', 'no', 'nans'),
+ [np.nan, np.nan, np.nan],
+ [None, None, None]])
tm.assert_index_equal(result, exp)
assert result.nlevels == 3
- idx = Index(['some_unequal_splits', 'one_of_these_things_is_not'])
+ idx = Index(['some_unequal_splits',
+ 'one_of_these_things_is_not',
+ np.nan, None])
result = idx.str.split('_', expand=True)
- exp = MultiIndex.from_tuples([('some', 'unequal', 'splits', NA, NA, NA
- ), ('one', 'of', 'these', 'things',
- 'is', 'not')])
+ exp = MultiIndex.from_tuples([('some', 'unequal', 'splits',
+ NA, NA, NA),
+ ('one', 'of', 'these',
+ 'things', 'is', 'not'),
+ (np.nan, np.nan, np.nan,
+ np.nan, np.nan, np.nan),
+ (None, None, None,
+ None, None, None)])
tm.assert_index_equal(result, exp)
assert result.nlevels == 6
@@ -2441,50 +2452,54 @@ def test_split_with_name(self):
tm.assert_index_equal(res, exp)
def test_partition_series(self):
- values = Series(['a_b_c', 'c_d_e', NA, 'f_g_h'])
+ # https://github.com/pandas-dev/pandas/issues/23558
+
+ values = Series(['a_b_c', 'c_d_e', NA, 'f_g_h', None])
result = values.str.partition('_', expand=False)
exp = Series([('a', '_', 'b_c'), ('c', '_', 'd_e'), NA,
- ('f', '_', 'g_h')])
+ ('f', '_', 'g_h'), None])
tm.assert_series_equal(result, exp)
result = values.str.rpartition('_', expand=False)
exp = Series([('a_b', '_', 'c'), ('c_d', '_', 'e'), NA,
- ('f_g', '_', 'h')])
+ ('f_g', '_', 'h'), None])
tm.assert_series_equal(result, exp)
# more than one char
- values = Series(['a__b__c', 'c__d__e', NA, 'f__g__h'])
+ values = Series(['a__b__c', 'c__d__e', NA, 'f__g__h', None])
result = values.str.partition('__', expand=False)
exp = Series([('a', '__', 'b__c'), ('c', '__', 'd__e'), NA,
- ('f', '__', 'g__h')])
+ ('f', '__', 'g__h'), None])
tm.assert_series_equal(result, exp)
result = values.str.rpartition('__', expand=False)
exp = Series([('a__b', '__', 'c'), ('c__d', '__', 'e'), NA,
- ('f__g', '__', 'h')])
+ ('f__g', '__', 'h'), None])
tm.assert_series_equal(result, exp)
# None
- values = Series(['a b c', 'c d e', NA, 'f g h'])
+ values = Series(['a b c', 'c d e', NA, 'f g h', None])
result = values.str.partition(expand=False)
exp = Series([('a', ' ', 'b c'), ('c', ' ', 'd e'), NA,
- ('f', ' ', 'g h')])
+ ('f', ' ', 'g h'), None])
tm.assert_series_equal(result, exp)
result = values.str.rpartition(expand=False)
exp = Series([('a b', ' ', 'c'), ('c d', ' ', 'e'), NA,
- ('f g', ' ', 'h')])
+ ('f g', ' ', 'h'), None])
tm.assert_series_equal(result, exp)
- # Not splited
- values = Series(['abc', 'cde', NA, 'fgh'])
+ # Not split
+ values = Series(['abc', 'cde', NA, 'fgh', None])
result = values.str.partition('_', expand=False)
- exp = Series([('abc', '', ''), ('cde', '', ''), NA, ('fgh', '', '')])
+ exp = Series([('abc', '', ''), ('cde', '', ''), NA,
+ ('fgh', '', ''), None])
tm.assert_series_equal(result, exp)
result = values.str.rpartition('_', expand=False)
- exp = Series([('', '', 'abc'), ('', '', 'cde'), NA, ('', '', 'fgh')])
+ exp = Series([('', '', 'abc'), ('', '', 'cde'), NA,
+ ('', '', 'fgh'), None])
tm.assert_series_equal(result, exp)
# unicode
@@ -2508,57 +2523,65 @@ def test_partition_series(self):
assert result == [v.rpartition('_') for v in values]
def test_partition_index(self):
- values = Index(['a_b_c', 'c_d_e', 'f_g_h'])
+ # https://github.com/pandas-dev/pandas/issues/23558
+
+ values = Index(['a_b_c', 'c_d_e', 'f_g_h', np.nan, None])
result = values.str.partition('_', expand=False)
- exp = Index(np.array([('a', '_', 'b_c'), ('c', '_', 'd_e'), ('f', '_',
- 'g_h')]))
+ exp = Index(np.array([('a', '_', 'b_c'), ('c', '_', 'd_e'),
+ ('f', '_', 'g_h'), np.nan, None]))
tm.assert_index_equal(result, exp)
assert result.nlevels == 1
result = values.str.rpartition('_', expand=False)
- exp = Index(np.array([('a_b', '_', 'c'), ('c_d', '_', 'e'), (
- 'f_g', '_', 'h')]))
+ exp = Index(np.array([('a_b', '_', 'c'), ('c_d', '_', 'e'),
+ ('f_g', '_', 'h'), np.nan, None]))
tm.assert_index_equal(result, exp)
assert result.nlevels == 1
result = values.str.partition('_')
- exp = Index([('a', '_', 'b_c'), ('c', '_', 'd_e'), ('f', '_', 'g_h')])
+ exp = Index([('a', '_', 'b_c'), ('c', '_', 'd_e'),
+ ('f', '_', 'g_h'), (np.nan, np.nan, np.nan),
+ (None, None, None)])
tm.assert_index_equal(result, exp)
assert isinstance(result, MultiIndex)
assert result.nlevels == 3
result = values.str.rpartition('_')
- exp = Index([('a_b', '_', 'c'), ('c_d', '_', 'e'), ('f_g', '_', 'h')])
+ exp = Index([('a_b', '_', 'c'), ('c_d', '_', 'e'),
+ ('f_g', '_', 'h'), (np.nan, np.nan, np.nan),
+ (None, None, None)])
tm.assert_index_equal(result, exp)
assert isinstance(result, MultiIndex)
assert result.nlevels == 3
def test_partition_to_dataframe(self):
- values = Series(['a_b_c', 'c_d_e', NA, 'f_g_h'])
+ # https://github.com/pandas-dev/pandas/issues/23558
+
+ values = Series(['a_b_c', 'c_d_e', NA, 'f_g_h', None])
result = values.str.partition('_')
- exp = DataFrame({0: ['a', 'c', np.nan, 'f'],
- 1: ['_', '_', np.nan, '_'],
- 2: ['b_c', 'd_e', np.nan, 'g_h']})
+ exp = DataFrame({0: ['a', 'c', np.nan, 'f', None],
+ 1: ['_', '_', np.nan, '_', None],
+ 2: ['b_c', 'd_e', np.nan, 'g_h', None]})
tm.assert_frame_equal(result, exp)
result = values.str.rpartition('_')
- exp = DataFrame({0: ['a_b', 'c_d', np.nan, 'f_g'],
- 1: ['_', '_', np.nan, '_'],
- 2: ['c', 'e', np.nan, 'h']})
+ exp = DataFrame({0: ['a_b', 'c_d', np.nan, 'f_g', None],
+ 1: ['_', '_', np.nan, '_', None],
+ 2: ['c', 'e', np.nan, 'h', None]})
tm.assert_frame_equal(result, exp)
- values = Series(['a_b_c', 'c_d_e', NA, 'f_g_h'])
+ values = Series(['a_b_c', 'c_d_e', NA, 'f_g_h', None])
result = values.str.partition('_', expand=True)
- exp = DataFrame({0: ['a', 'c', np.nan, 'f'],
- 1: ['_', '_', np.nan, '_'],
- 2: ['b_c', 'd_e', np.nan, 'g_h']})
+ exp = DataFrame({0: ['a', 'c', np.nan, 'f', None],
+ 1: ['_', '_', np.nan, '_', None],
+ 2: ['b_c', 'd_e', np.nan, 'g_h', None]})
tm.assert_frame_equal(result, exp)
result = values.str.rpartition('_', expand=True)
- exp = DataFrame({0: ['a_b', 'c_d', np.nan, 'f_g'],
- 1: ['_', '_', np.nan, '_'],
- 2: ['c', 'e', np.nan, 'h']})
+ exp = DataFrame({0: ['a_b', 'c_d', np.nan, 'f_g', None],
+ 1: ['_', '_', np.nan, '_', None],
+ 2: ['c', 'e', np.nan, 'h', None]})
tm.assert_frame_equal(result, exp)
def test_partition_with_name(self):
| - [x] closes #23558 and
closes #23677
- [x] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
| https://api.github.com/repos/pandas-dev/pandas/pulls/23618 | 2018-11-10T21:17:00Z | 2018-11-18T18:32:51Z | 2018-11-18T18:32:51Z | 2018-11-18T18:32:56Z |
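The `to_object_array_tuples` fix treats a null scalar as its own row instead of calling `len()` on it. A pure-Python sketch of that padding logic (a hypothetical helper, not the Cython code):

```python
import math

def to_object_rows(rows, pad=None):
    """Pad tuples of unequal length into a rectangular list of lists.

    A null scalar (None or NaN) has no len(); it is broadcast across
    the row instead, mirroring the checknull branches in the fix.
    """
    def is_null(x):
        return x is None or (isinstance(x, float) and math.isnan(x))

    width = max(1 if is_null(r) else len(r) for r in rows)
    out = []
    for r in rows:
        row = (r,) * width if is_null(r) else tuple(r)
        out.append(list(row) + [pad] * (width - len(row)))
    return out
```

Without the null check, `len(None)` raises `TypeError`, which is the failure mode behind the `Index.str.partition` bug.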
Integer NA docs | diff --git a/doc/source/gotchas.rst b/doc/source/gotchas.rst
index 2b42eebf762e1..3d89fe171a343 100644
--- a/doc/source/gotchas.rst
+++ b/doc/source/gotchas.rst
@@ -215,8 +215,28 @@ arrays. For example:
s2.dtype
This trade-off is made largely for memory and performance reasons, and also so
-that the resulting ``Series`` continues to be "numeric". One possibility is to
-use ``dtype=object`` arrays instead.
+that the resulting ``Series`` continues to be "numeric".
+
+If you need to represent integers with possibly missing values, use one of
+the nullable-integer extension dtypes provided by pandas
+
+* :class:`Int8Dtype`
+* :class:`Int16Dtype`
+* :class:`Int32Dtype`
+* :class:`Int64Dtype`
+
+.. ipython:: python
+
+ s_int = pd.Series([1, 2, 3, 4, 5], index=list('abcde'),
+ dtype=pd.Int64Dtype())
+ s_int
+ s_int.dtype
+
+ s2_int = s_int.reindex(['a', 'b', 'c', 'f', 'u'])
+ s2_int
+ s2_int.dtype
+
+See :ref:`integer_na` for more.
``NA`` type promotions
~~~~~~~~~~~~~~~~~~~~~~
diff --git a/doc/source/index.rst.template b/doc/source/index.rst.template
index 7fe26d68c6fd0..145acbc55b85f 100644
--- a/doc/source/index.rst.template
+++ b/doc/source/index.rst.template
@@ -143,6 +143,7 @@ See the package overview for more detail about what's in the library.
timeseries
timedeltas
categorical
+ integer_na
visualization
style
io
diff --git a/doc/source/integer_na.rst b/doc/source/integer_na.rst
new file mode 100644
index 0000000000000..befcf7016f155
--- /dev/null
+++ b/doc/source/integer_na.rst
@@ -0,0 +1,101 @@
+.. currentmodule:: pandas
+
+{{ header }}
+
+ .. _integer_na:
+
+**************************
+Nullable Integer Data Type
+**************************
+
+.. versionadded:: 0.24.0
+
+In :ref:`missing_data`, we saw that pandas primarily uses ``NaN`` to represent
+missing data. Because ``NaN`` is a float, this forces an array of integers with
+any missing values to become floating point. In some cases, this may not matter
+much. But if your integer column is, say, an identifier, casting to float can
+be problematic. Some integers cannot even be represented as floating point
+numbers.
+
+Pandas can represent integer data with possibly missing values using
+:class:`arrays.IntegerArray`. This is an :ref:`extension type <extending.extension-types>`
+implemented within pandas. It is not the default dtype for integers, and will not be inferred;
+you must explicitly pass the dtype into :meth:`array` or :class:`Series`:
+
+.. ipython:: python
+
+ arr = pd.array([1, 2, np.nan], dtype=pd.Int64Dtype())
+ arr
+
+Or the string alias ``"Int64"`` (note the capital ``"I"``, to differentiate from
+NumPy's ``'int64'`` dtype):
+
+.. ipython:: python
+
+ pd.array([1, 2, np.nan], dtype="Int64")
+
+This array can be stored in a :class:`DataFrame` or :class:`Series` like any
+NumPy array.
+
+.. ipython:: python
+
+ pd.Series(arr)
+
+You can also pass the list-like object to the :class:`Series` constructor
+with the dtype.
+
+.. ipython:: python
+
+ s = pd.Series([1, 2, np.nan], dtype="Int64")
+ s
+
+By default (if you don't specify ``dtype``), NumPy is used, and you'll end
+up with a ``float64`` dtype Series:
+
+.. ipython:: python
+
+ pd.Series([1, 2, np.nan])
+
+Operations involving an integer array will behave similarly to NumPy arrays.
+Missing values will be propagated, and the data will be coerced to another
+dtype if needed.
+
+.. ipython:: python
+
+ # arithmetic
+ s + 1
+
+ # comparison
+ s == 1
+
+ # indexing
+ s.iloc[1:3]
+
+ # operate with other dtypes
+ s + s.iloc[1:3].astype('Int8')
+
+ # coerce when needed
+ s + 0.01
+
+These dtypes can operate as part of a ``DataFrame``.
+
+.. ipython:: python
+
+ df = pd.DataFrame({'A': s, 'B': [1, 1, 3], 'C': list('aab')})
+ df
+ df.dtypes
+
+
+These dtypes can be merged & reshaped & cast.
+
+.. ipython:: python
+
+ pd.concat([df[['A']], df[['B', 'C']]], axis=1).dtypes
+ df['A'].astype(float)
+
+Reduction and groupby operations such as 'sum' work as well.
+
+.. ipython:: python
+
+ df.sum()
+ df.groupby('B').A.sum()
diff --git a/doc/source/missing_data.rst b/doc/source/missing_data.rst
index 6a089decde3f5..a9234b83c78ab 100644
--- a/doc/source/missing_data.rst
+++ b/doc/source/missing_data.rst
@@ -19,32 +19,6 @@ pandas.
See the :ref:`cookbook<cookbook.missing_data>` for some advanced strategies.
-Missing data basics
--------------------
-
-When / why does data become missing?
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-Some might quibble over our usage of *missing*. By "missing" we simply mean
-**NA** ("not available") or "not present for whatever reason". Many data sets simply arrive with
-missing data, either because it exists and was not collected or it never
-existed. For example, in a collection of financial time series, some of the time
-series might start on different dates. Thus, values prior to the start date
-would generally be marked as missing.
-
-In pandas, one of the most common ways that missing data is **introduced** into
-a data set is by reindexing. For example:
-
-.. ipython:: python
-
- df = pd.DataFrame(np.random.randn(5, 3), index=['a', 'c', 'e', 'f', 'h'],
- columns=['one', 'two', 'three'])
- df['four'] = 'bar'
- df['five'] = df['one'] > 0
- df
- df2 = df.reindex(['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h'])
- df2
-
Values considered "missing"
~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -62,6 +36,16 @@ arise and we wish to also consider that "missing" or "not available" or "NA".
.. _missing.isna:
+.. ipython:: python
+
+ df = pd.DataFrame(np.random.randn(5, 3), index=['a', 'c', 'e', 'f', 'h'],
+ columns=['one', 'two', 'three'])
+ df['four'] = 'bar'
+ df['five'] = df['one'] > 0
+ df
+ df2 = df.reindex(['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h'])
+ df2
+
To make detecting missing values easier (and across different array dtypes),
pandas provides the :func:`isna` and
:func:`notna` functions, which are also methods on
@@ -90,6 +74,23 @@ Series and DataFrame objects:
df2['one'] == np.nan
+Integer Dtypes and Missing Data
+-------------------------------
+
+Because ``NaN`` is a float, a column of integers with even one missing value
+is cast to floating-point dtype (see :ref:`gotchas.intna` for more). Pandas
+provides a nullable integer array, which can be used by explicitly requesting
+the dtype:
+
+.. ipython:: python
+
+ pd.Series([1, 2, np.nan, 4], dtype=pd.Int64Dtype())
+
+Alternatively, the string alias ``dtype='Int64'`` (note the capital ``"I"``) can be
+used.
+
+See :ref:`integer_na` for more.
+
Datetimes
---------
@@ -751,3 +752,19 @@ However, these can be filled in using :meth:`~DataFrame.fillna` and it will work
reindexed[crit.fillna(False)]
reindexed[crit.fillna(True)]
+
+Pandas provides a nullable integer dtype, but you must explicitly request it
+when creating the series or column. Notice that we use a capital "I" in
+the ``dtype="Int64"``.
+
+.. ipython:: python
+
+    s = pd.Series(np.random.randint(-5, 5, 5), index=[0, 2, 4, 6, 7],
+                  dtype="Int64")
+ s > 0
+ (s > 0).dtype
+ crit = (s > 0).reindex(list(range(8)))
+ crit
+ crit.dtype
+
+See :ref:`integer_na` for more.
diff --git a/doc/source/whatsnew/v0.24.0.rst b/doc/source/whatsnew/v0.24.0.rst
index 1734666efe669..c0860ba7993bf 100644
--- a/doc/source/whatsnew/v0.24.0.rst
+++ b/doc/source/whatsnew/v0.24.0.rst
@@ -159,7 +159,9 @@ Reduction and groupby operations such as 'sum' work.
.. warning::
- The Integer NA support currently uses the captilized dtype version, e.g. ``Int8`` as compared to the traditional ``int8``. This may be changed at a future date.
+ The Integer NA support currently uses the capitalized dtype version, e.g. ``Int8`` as compared to the traditional ``int8``. This may be changed at a future date.
+
+See :ref:`integer_na` for more.
.. _whatsnew_0240.enhancements.array:
| Closes https://github.com/pandas-dev/pandas/issues/22003
In the issue @chris-b1 said
> More clear separation from numpy dtype - worry that 'int64' vs 'Int64' will be especially confusing for new people? Consider a different name altogether? (NullableInt64?)
Do people have thoughts on that?
Earlier @jreback suggested putting all this in `missing_data.rst`, rather than a new page. But that page is already quite long. I think it's OK to spread out a little.
I tried to track down references to missing data and integers in the docs to add links to integer-NA. Let me know if you see any I missed.
Note: this uses `pd.array` and some types added to the API in https://github.com/pandas-dev/pandas/pull/23581, so not all the examples will run. | https://api.github.com/repos/pandas-dev/pandas/pulls/23617 | 2018-11-10T21:14:15Z | 2019-01-01T14:22:33Z | 2019-01-01T14:22:33Z | 2019-01-01T14:22:37Z |
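The core motivation — `NaN` is a float, so integer storage cannot hold it — can be shown with NumPy alone (a sketch of the behavior the docs describe, not pandas code):

```python
import numpy as np

arr = np.array([1, 2, 3], dtype="int64")
try:
    arr[1] = np.nan  # an int64 slot cannot store NaN
except ValueError:
    # ...so, without a nullable dtype, the whole column is upcast
    upcast = arr.astype("float64")
    upcast[1] = np.nan
```

This upcast is exactly what `Int64Dtype` avoids, by tracking missing values in a separate mask instead of in the values themselves.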
Fix order parameters and add decimal to to_string | diff --git a/doc/source/whatsnew/v0.24.0.rst b/doc/source/whatsnew/v0.24.0.rst
index 1fb54ae55b90e..007f5b7feb060 100644
--- a/doc/source/whatsnew/v0.24.0.rst
+++ b/doc/source/whatsnew/v0.24.0.rst
@@ -24,7 +24,8 @@ New features
the user to override the engine's default behavior to include or omit the
dataframe's indexes from the resulting Parquet file. (:issue:`20768`)
- :meth:`DataFrame.corr` and :meth:`Series.corr` now accept a callable for generic calculation methods of correlation, e.g. histogram intersection (:issue:`22684`)
-
+- :func:`DataFrame.to_string` now accepts ``decimal`` as an argument, allowing
+  the user to specify which decimal separator should be used in the output. (:issue:`23614`)
.. _whatsnew_0240.enhancements.extension_array_operators:
@@ -1002,6 +1003,7 @@ Other API Changes
- :class:`DateOffset` attribute `_cacheable` and method `_should_cache` have been removed (:issue:`23118`)
- Comparing :class:`Timedelta` to be less or greater than unknown types now raises a ``TypeError`` instead of returning ``False`` (:issue:`20829`)
- :meth:`Index.hasnans` and :meth:`Series.hasnans` now always return a python boolean. Previously, a python or a numpy boolean could be returned, depending on circumstances (:issue:`23294`).
+- The order of the arguments of :func:`DataFrame.to_html` and :func:`DataFrame.to_string` is rearranged to be consistent with each other. (:issue:`23614`)
.. _whatsnew_0240.deprecations:
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 30ca7d82936c0..e313e0f37a445 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -2035,24 +2035,21 @@ def to_parquet(self, fname, engine='auto', compression='snappy',
def to_string(self, buf=None, columns=None, col_space=None, header=True,
index=True, na_rep='NaN', formatters=None, float_format=None,
sparsify=None, index_names=True, justify=None,
- line_width=None, max_rows=None, max_cols=None,
- show_dimensions=False):
+ max_rows=None, max_cols=None, show_dimensions=False,
+ decimal='.', line_width=None):
"""
Render a DataFrame to a console-friendly tabular output.
-
%(shared_params)s
line_width : int, optional
Width to wrap a line in characters.
-
%(returns)s
-
See Also
--------
to_html : Convert DataFrame to HTML.
Examples
--------
- >>> d = {'col1' : [1, 2, 3], 'col2' : [4, 5, 6]}
+ >>> d = {'col1': [1, 2, 3], 'col2': [4, 5, 6]}
>>> df = pd.DataFrame(d)
>>> print(df.to_string())
col1 col2
@@ -2068,42 +2065,37 @@ def to_string(self, buf=None, columns=None, col_space=None, header=True,
sparsify=sparsify, justify=justify,
index_names=index_names,
header=header, index=index,
- line_width=line_width,
max_rows=max_rows,
max_cols=max_cols,
- show_dimensions=show_dimensions)
+ show_dimensions=show_dimensions,
+ decimal=decimal,
+ line_width=line_width)
formatter.to_string()
if buf is None:
result = formatter.buf.getvalue()
return result
- @Substitution(header='whether to print column labels, default True')
+ @Substitution(header='Whether to print column labels, default True')
@Substitution(shared_params=fmt.common_docstring,
returns=fmt.return_docstring)
def to_html(self, buf=None, columns=None, col_space=None, header=True,
index=True, na_rep='NaN', formatters=None, float_format=None,
- sparsify=None, index_names=True, justify=None, bold_rows=True,
- classes=None, escape=True, max_rows=None, max_cols=None,
- show_dimensions=False, notebook=False, decimal='.',
- border=None, table_id=None):
+ sparsify=None, index_names=True, justify=None, max_rows=None,
+ max_cols=None, show_dimensions=False, decimal='.',
+ bold_rows=True, classes=None, escape=True,
+ notebook=False, border=None, table_id=None):
"""
Render a DataFrame as an HTML table.
-
%(shared_params)s
- bold_rows : boolean, default True
- Make the row labels bold in the output
+ bold_rows : bool, default True
+ Make the row labels bold in the output.
classes : str or list or tuple, default None
- CSS class(es) to apply to the resulting html table
- escape : boolean, default True
+ CSS class(es) to apply to the resulting html table.
+ escape : bool, default True
Convert the characters <, >, and & to HTML-safe sequences.
notebook : {True, False}, default False
Whether the generated HTML is for IPython Notebook.
- decimal : string, default '.'
- Character recognized as decimal separator, e.g. ',' in Europe
-
- .. versionadded:: 0.18.0
-
border : int
A ``border=border`` attribute is included in the opening
`<table>` tag. Default ``pd.options.html.border``.
@@ -2114,9 +2106,7 @@ def to_html(self, buf=None, columns=None, col_space=None, header=True,
A css id is included in the opening `<table>` tag if specified.
.. versionadded:: 0.23.0
-
%(returns)s
-
See Also
--------
to_string : Convert DataFrame to a string.
diff --git a/pandas/io/formats/format.py b/pandas/io/formats/format.py
index 9857129f56b0c..b63e44c6c3437 100644
--- a/pandas/io/formats/format.py
+++ b/pandas/io/formats/format.py
@@ -88,6 +88,10 @@
Maximum number of columns to display in the console.
show_dimensions : bool, default False
Display DataFrame dimensions (number of rows by number of columns).
+ decimal : str, default '.'
+ Character recognized as decimal separator, e.g. ',' in Europe.
+
+ .. versionadded:: 0.18.0
"""
_VALID_JUSTIFY_PARAMETERS = ("left", "right", "center", "justify",
@@ -101,8 +105,6 @@
String representation of the dataframe.
"""
-docstring_to_string = common_docstring + return_docstring
-
class CategoricalFormatter(object):
diff --git a/pandas/tests/io/formats/test_format.py b/pandas/tests/io/formats/test_format.py
index b8ca8cb73c7e9..0814df8240e13 100644
--- a/pandas/tests/io/formats/test_format.py
+++ b/pandas/tests/io/formats/test_format.py
@@ -1458,6 +1458,12 @@ def test_to_string_format_na(self):
'4 4.0 bar')
assert result == expected
+ def test_to_string_decimal(self):
+ # Issue #23614
+ df = DataFrame({'A': [6.0, 3.1, 2.2]})
+ expected = ' A\n0 6,0\n1 3,1\n2 2,2'
+ assert df.to_string(decimal=',') == expected
+
def test_to_string_line_width(self):
df = DataFrame(123, lrange(10, 15), lrange(30))
s = df.to_string(line_width=80)
| - [x] closes #23612
- [x] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
Add an extra argument:`decimal` to `pandas.DataFrame.to_string`. | https://api.github.com/repos/pandas-dev/pandas/pulls/23614 | 2018-11-10T16:44:20Z | 2018-11-15T16:08:33Z | 2018-11-15T16:08:33Z | 2019-01-02T20:26:14Z |
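Conceptually, `decimal` only swaps the separator character in each rendered cell. A pure-Python sketch of the idea (a hypothetical helper, not the pandas formatter, which additionally pads every cell in a column to a common precision):

```python
def format_cell(value, decimal="."):
    # Render the number first, then substitute the decimal separator,
    # e.g. ',' for European-style output.
    text = "{:g}".format(value)
    return text.replace(".", decimal)

rendered = [format_cell(v, decimal=",") for v in [6.0, 3.1, 2.2]]
```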
TST: Unskip some Categorical Tests | diff --git a/pandas/tests/extension/test_categorical.py b/pandas/tests/extension/test_categorical.py
index b1d08a5620bf3..8810c535c655f 100644
--- a/pandas/tests/extension/test_categorical.py
+++ b/pandas/tests/extension/test_categorical.py
@@ -73,10 +73,10 @@ class TestDtype(base.BaseDtypeTests):
class TestInterface(base.BaseInterfaceTests):
- @pytest.mark.skip(reason="Memory usage doesn't match")
- def test_memory_usage(self):
+ @pytest.mark.skip(reason="Memory usage doesn't match", strict=True)
+ def test_memory_usage(self, data):
# Is this deliberate?
- pass
+ super(TestInterface, self).test_memory_usage(data)
class TestConstructors(base.BaseConstructorsTests):
@@ -84,69 +84,56 @@ class TestConstructors(base.BaseConstructorsTests):
class TestReshaping(base.BaseReshapingTests):
- @pytest.mark.skip(reason="Unobserved categories preseved in concat.")
- def test_concat_columns(self, data, na_value):
- pass
-
- @pytest.mark.skip(reason="Unobserved categories preseved in concat.")
- def test_align(self, data, na_value):
- pass
-
- @pytest.mark.skip(reason="Unobserved categories preseved in concat.")
- def test_align_frame(self, data, na_value):
- pass
-
- @pytest.mark.skip(reason="Unobserved categories preseved in concat.")
- def test_merge(self, data, na_value):
- pass
+ pass
class TestGetitem(base.BaseGetitemTests):
- skip_take = pytest.mark.skip(reason="GH-20664.")
+ skip_take = pytest.mark.skip(reason="GH-20664.", strict=True)
- @pytest.mark.skip(reason="Backwards compatibility")
- def test_getitem_scalar(self):
+ @pytest.mark.skip(reason="Backwards compatibility", strict=True)
+ def test_getitem_scalar(self, data):
# CategoricalDtype.type isn't "correct" since it should
# be a parent of the elements (object). But don't want
# to break things by changing.
- pass
+ super(TestGetitem, self).test_getitem_scalar(data)
@skip_take
- def test_take(self):
+ def test_take(self, data, na_value, na_cmp):
# TODO remove this once Categorical.take is fixed
- pass
+ super(TestGetitem, self).test_take(data, na_value, na_cmp)
@skip_take
- def test_take_negative(self):
- pass
+ def test_take_negative(self, data):
+ super().test_take_negative(data)
@skip_take
- def test_take_pandas_style_negative_raises(self):
- pass
+ def test_take_pandas_style_negative_raises(self, data, na_value):
+ super().test_take_pandas_style_negative_raises(data, na_value)
@skip_take
- def test_take_non_na_fill_value(self):
- pass
+ def test_take_non_na_fill_value(self, data_missing):
+ super().test_take_non_na_fill_value(data_missing)
@skip_take
- def test_take_out_of_bounds_raises(self):
- pass
+ def test_take_out_of_bounds_raises(self, data, allow_fill):
+ return super().test_take_out_of_bounds_raises(data, allow_fill)
- @pytest.mark.skip(reason="GH-20747. Unobserved categories.")
- def test_take_series(self):
- pass
+ @pytest.mark.skip(reason="GH-20747. Unobserved categories.", strict=True)
+ def test_take_series(self, data):
+ super().test_take_series(data)
@skip_take
- def test_reindex_non_na_fill_value(self):
- pass
+ def test_reindex_non_na_fill_value(self, data_missing):
+ super().test_reindex_non_na_fill_value(data_missing)
- @pytest.mark.skip(reason="Categorical.take buggy")
- def test_take_empty(self):
- pass
+ @pytest.mark.skip(reason="Categorical.take buggy", strict=True)
+ def test_take_empty(self, data, na_value, na_cmp):
+ super().test_take_empty(data, na_value, na_cmp)
- @pytest.mark.skip(reason="test not written correctly for categorical")
- def test_reindex(self):
- pass
+ @pytest.mark.skip(reason="test not written correctly for categorical",
+ strict=True)
+ def test_reindex(self, data, na_value):
+ super().test_reindex(data, na_value)
class TestSetitem(base.BaseSetitemTests):
@@ -155,13 +142,13 @@ class TestSetitem(base.BaseSetitemTests):
class TestMissing(base.BaseMissingTests):
- @pytest.mark.skip(reason="Not implemented")
- def test_fillna_limit_pad(self):
- pass
+ @pytest.mark.skip(reason="Not implemented", strict=True)
+ def test_fillna_limit_pad(self, data_missing):
+ super().test_fillna_limit_pad(data_missing)
- @pytest.mark.skip(reason="Not implemented")
- def test_fillna_limit_backfill(self):
- pass
+ @pytest.mark.skip(reason="Not implemented", strict=True)
+ def test_fillna_limit_backfill(self, data_missing):
+ super().test_fillna_limit_backfill(data_missing)
class TestReduce(base.BaseNoReduceTests):
@@ -169,11 +156,9 @@ class TestReduce(base.BaseNoReduceTests):
class TestMethods(base.BaseMethodsTests):
- pass
-
- @pytest.mark.skip(reason="Unobserved categories included")
+ @pytest.mark.skip(reason="Unobserved categories included", strict=True)
def test_value_counts(self, all_data, dropna):
- pass
+ return super().test_value_counts(all_data, dropna)
def test_combine_add(self, data_repeated):
# GH 20825
@@ -191,9 +176,9 @@ def test_combine_add(self, data_repeated):
expected = pd.Series([a + val for a in list(orig_data1)])
self.assert_series_equal(result, expected)
- @pytest.mark.skip(reason="Not Applicable")
+ @pytest.mark.skip(reason="Not Applicable", strict=True)
def test_fillna_length_mismatch(self, data_missing):
- pass
+ super().test_fillna_length_mismatch(data_missing)
class TestCasting(base.BaseCastingTests):
diff --git a/setup.cfg b/setup.cfg
index 4726a0ddb2fb2..2e07182196d5b 100644
--- a/setup.cfg
+++ b/setup.cfg
@@ -90,7 +90,7 @@ known_post_core=pandas.tseries,pandas.io,pandas.plotting
sections=FUTURE,STDLIB,THIRDPARTY,PRE_CORE,DTYPES,FIRSTPARTY,POST_CORE,LOCALFOLDER
known_first_party=pandas
-known_third_party=Cython,numpy,python-dateutil,pytz,pyarrow
+known_third_party=Cython,numpy,python-dateutil,pytz,pyarrow,pytest
multi_line_output=4
force_grid_wrap=0
combine_as_imports=True
| closes https://github.com/pandas-dev/pandas/issues/20747 | https://api.github.com/repos/pandas-dev/pandas/pulls/23613 | 2018-11-10T13:20:07Z | 2018-11-11T14:48:38Z | 2018-11-11T14:48:37Z | 2018-11-11T14:48:41Z |
DOC: Fix Order of parameters in docstrings | diff --git a/pandas/core/reshape/pivot.py b/pandas/core/reshape/pivot.py
index ec4cdffc56435..d12dbb81765d8 100644
--- a/pandas/core/reshape/pivot.py
+++ b/pandas/core/reshape/pivot.py
@@ -407,12 +407,12 @@ def crosstab(index, columns, values=None, rownames=None, colnames=None,
values : array-like, optional
Array of values to aggregate according to the factors.
Requires `aggfunc` be specified.
- aggfunc : function, optional
- If specified, requires `values` be specified as well
rownames : sequence, default None
If passed, must match number of row arrays passed
colnames : sequence, default None
If passed, must match number of column arrays passed
+ aggfunc : function, optional
+ If specified, requires `values` be specified as well
margins : boolean, default False
Add row/column margins (subtotals)
margins_name : string, default 'All'
diff --git a/pandas/core/window.py b/pandas/core/window.py
index 5256532a31870..be28a3bcccec6 100644
--- a/pandas/core/window.py
+++ b/pandas/core/window.py
@@ -136,7 +136,7 @@ def _gotitem(self, key, ndim, subset=None):
Parameters
----------
- key : string / list of selections
+ key : str / list of selections
ndim : 1,2
requested ndim of result
subset : object, default None
@@ -464,15 +464,16 @@ class Window(_Window):
(otherwise result is NA). For a window that is specified by an offset,
`min_periods` will default to 1. Otherwise, `min_periods` will default
to the size of the window.
- center : boolean, default False
+ center : bool, default False
Set the labels at the center of the window.
- win_type : string, default None
+ win_type : str, default None
Provide a window type. If ``None``, all points are evenly weighted.
See the notes below for further information.
- on : string, optional
+ on : str, optional
For a DataFrame, column on which to calculate
the rolling window, rather than the index
- closed : string, default None
+ axis : int or str, default 0
+ closed : str, default None
Make the interval closed on the 'right', 'left', 'both' or
'neither' endpoints.
For offset-based windows, it defaults to 'right'.
@@ -481,8 +482,6 @@ class Window(_Window):
.. versionadded:: 0.20.0
- axis : int or string, default 0
-
Returns
-------
a Window or Rolling sub-classed for the particular operation
@@ -661,7 +660,7 @@ def _apply_window(self, mean=True, **kwargs):
Parameters
----------
- mean : boolean, default True
+ mean : bool, default True
If True computes weighted mean, else weighted sum
Returns
@@ -819,11 +818,11 @@ def _apply(self, func, name=None, window=None, center=None,
Parameters
----------
- func : string/callable to apply
- name : string, optional
+ func : str/callable to apply
+ name : str, optional
name of this function
window : int/array, default to _get_window()
- center : boolean, default to self.center
+ center : bool, default to self.center
check_minp : function, default to _use_window
Returns
@@ -1816,9 +1815,9 @@ class Expanding(_Rolling_and_Expanding):
min_periods : int, default 1
Minimum number of observations in window required to have a value
(otherwise result is NA).
- center : boolean, default False
+ center : bool, default False
Set the labels at the center of the window.
- axis : int or string, default 0
+ axis : int or str, default 0
Returns
-------
@@ -2062,7 +2061,7 @@ def _constructor(self):
Parameters
----------
-bias : boolean, default False
+bias : bool, default False
Use a standard estimation bias correction
"""
@@ -2079,7 +2078,7 @@ def _constructor(self):
will be a MultiIndex DataFrame in the case of DataFrame inputs.
In the case of missing elements, only complete pairwise observations will
be used.
-bias : boolean, default False
+bias : bool, default False
Use a standard estimation bias correction
"""
@@ -2110,10 +2109,10 @@ class EWM(_Rolling):
min_periods : int, default 0
Minimum number of observations in window required to have a value
(otherwise result is NA).
- adjust : boolean, default True
+ adjust : bool, default True
Divide by decaying adjustment factor in beginning periods to account
for imbalance in relative weightings (viewing EWMA as a moving average)
- ignore_na : boolean, default False
+ ignore_na : bool, default False
Ignore missing values when calculating weights;
specify True to reproduce pre-0.15.0 behavior
@@ -2242,7 +2241,7 @@ def _apply(self, func, **kwargs):
Parameters
----------
- func : string/callable to apply
+ func : str/callable to apply
Returns
-------
diff --git a/pandas/io/json/normalize.py b/pandas/io/json/normalize.py
index ce07a795017e5..af046d9f309e7 100644
--- a/pandas/io/json/normalize.py
+++ b/pandas/io/json/normalize.py
@@ -110,10 +110,10 @@ def json_normalize(data, record_path=None, meta=None,
assumed to be an array of records
meta : list of paths (string or list of strings), default None
Fields to use as metadata for each record in resulting table
+ meta_prefix : string, default None
record_prefix : string, default None
If True, prefix records with dotted (?) path, e.g. foo.bar.field if
path to records is ['foo', 'bar']
- meta_prefix : string, default None
errors : {'raise', 'ignore'}, default 'raise'
* 'ignore' : will ignore KeyError if keys listed in meta are not
diff --git a/pandas/tseries/offsets.py b/pandas/tseries/offsets.py
index 6fb562e301ac2..53719b71d1180 100644
--- a/pandas/tseries/offsets.py
+++ b/pandas/tseries/offsets.py
@@ -807,7 +807,6 @@ class CustomBusinessDay(_CustomMixin, BusinessDay):
Parameters
----------
n : int, default 1
- offset : timedelta, default timedelta(0)
normalize : bool, default False
Normalize start/end dates to midnight before generating date range
weekmask : str, Default 'Mon Tue Wed Thu Fri'
@@ -816,6 +815,7 @@ class CustomBusinessDay(_CustomMixin, BusinessDay):
list/array of dates to exclude from the set of valid business days,
passed to ``numpy.busdaycalendar``
calendar : pd.HolidayCalendar or np.busdaycalendar
+ offset : timedelta, default timedelta(0)
"""
_prefix = 'C'
_attributes = frozenset(['n', 'normalize',
@@ -958,7 +958,6 @@ class _CustomBusinessMonth(_CustomMixin, BusinessMixin, MonthOffset):
Parameters
----------
n : int, default 1
- offset : timedelta, default timedelta(0)
normalize : bool, default False
Normalize start/end dates to midnight before generating date range
weekmask : str, Default 'Mon Tue Wed Thu Fri'
@@ -967,6 +966,7 @@ class _CustomBusinessMonth(_CustomMixin, BusinessMixin, MonthOffset):
list/array of dates to exclude from the set of valid business days,
passed to ``numpy.busdaycalendar``
calendar : pd.HolidayCalendar or np.busdaycalendar
+ offset : timedelta, default timedelta(0)
"""
_attributes = frozenset(['n', 'normalize',
'weekmask', 'holidays', 'calendar', 'offset'])
diff --git a/pandas/util/testing.py b/pandas/util/testing.py
index c6457545038e0..d0486b9d8fd3a 100644
--- a/pandas/util/testing.py
+++ b/pandas/util/testing.py
@@ -1242,18 +1242,18 @@ def assert_series_equal(left, right, check_dtype=True,
check_less_precise : bool or int, default False
Specify comparison precision. Only used when check_exact is False.
5 digits (False) or 3 digits (True) after decimal points are compared.
- If int, then specify the digits to compare
- check_exact : bool, default False
- Whether to compare number exactly.
+ If int, then specify the digits to compare.
check_names : bool, default True
Whether to check the Series and Index names attribute.
+ check_exact : bool, default False
+ Whether to compare number exactly.
check_datetimelike_compat : bool, default False
Compare datetime-like which is comparable ignoring dtype.
check_categorical : bool, default True
Whether to compare internal Categorical exactly.
obj : str, default 'Series'
Specify object name being compared, internally used to show appropriate
- assertion message
+ assertion message.
"""
__tracebackhide__ = True
| - [ ] closes #23596
- [x] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
Fix the following:
```diff
pandas.io.json.json_normalize: Wrong parameters order. Actual: ('data', 'record_path', 'meta', 'meta_prefix', 'record_prefix', 'errors', 'sep'). Documented: ('data', 'record_path', 'meta', 'record_prefix', 'meta_prefix', 'errors', 'sep')
pandas.crosstab: Wrong parameters order. Actual: ('index', 'columns', 'values', 'rownames', 'colnames', 'aggfunc', 'margins', 'margins_name', 'dropna', 'normalize'). Documented: ('index', 'columns', 'values', 'aggfunc', 'rownames', 'colnames', 'margins', 'margins_name', 'dropna', 'normalize')
pandas.Series.rolling: Wrong parameters order. Actual: ('window', 'min_periods', 'center', 'win_type', 'on', 'axis', 'closed'). Documented: ('window', 'min_periods', 'center', 'win_type', 'on', 'closed', 'axis')
pandas.DataFrame.rolling: Wrong parameters order. Actual: ('window', 'min_periods', 'center', 'win_type', 'on', 'axis', 'closed'). Documented: ('window', 'min_periods', 'center', 'win_type', 'on', 'closed', 'axis')
-pandas.DataFrame.to_html: Wrong parameters order. Actual: ('buf', 'columns', 'col_space', 'header', 'index', 'na_rep', 'formatters', 'float_format', 'sparsify', 'index_names', 'justify', 'bold_rows', 'classes', 'escape', 'max_rows', 'max_cols', 'show_dimensions', 'notebook', 'decimal', 'border', 'table_id'). Documented: ('buf', 'columns', 'col_space', 'header', 'index', 'na_rep', 'formatters', 'float_format', 'sparsify', 'index_names', 'justify', 'max_rows', 'max_cols', 'show_dimensions', 'bold_rows', 'classes', 'escape', 'notebook', 'decimal', 'border', 'table_id')
-pandas.DataFrame.to_string: Wrong parameters order. Actual: ('buf', 'columns', 'col_space', 'header', 'index', 'na_rep', 'formatters', 'float_format', 'sparsify', 'index_names', 'justify', 'line_width', 'max_rows', 'max_cols', 'show_dimensions'). Documented: ('buf', 'columns', 'col_space', 'header', 'index', 'na_rep', 'formatters', 'float_format', 'sparsify', 'index_names', 'justify', 'max_rows', 'max_cols', 'show_dimensions', 'line_width')
pandas.tseries.offsets.CustomBusinessDay: Wrong parameters order. Actual: ('n', 'normalize', 'weekmask', 'holidays', 'calendar', 'offset'). Documented: ('n', 'offset', 'normalize', 'weekmask', 'holidays', 'calendar')
pandas.tseries.offsets.CustomBusinessMonthEnd: Wrong parameters order. Actual: ('n', 'normalize', 'weekmask', 'holidays', 'calendar', 'offset'). Documented: ('n', 'offset', 'normalize', 'weekmask', 'holidays', 'calendar')
pandas.tseries.offsets.CustomBusinessMonthBegin: Wrong parameters order. Actual: ('n', 'normalize', 'weekmask', 'holidays', 'calendar', 'offset'). Documented: ('n', 'offset', 'normalize', 'weekmask', 'holidays', 'calendar')
pandas.tseries.offsets.CBMonthEnd: Wrong parameters order. Actual: ('n', 'normalize', 'weekmask', 'holidays', 'calendar', 'offset'). Documented: ('n', 'offset', 'normalize', 'weekmask', 'holidays', 'calendar')
pandas.tseries.offsets.CBMonthBegin: Wrong parameters order. Actual: ('n', 'normalize', 'weekmask', 'holidays', 'calendar', 'offset'). Documented: ('n', 'offset', 'normalize', 'weekmask', 'holidays', 'calendar')
pandas.tseries.offsets.CDay: Wrong parameters order. Actual: ('n', 'normalize', 'weekmask', 'holidays', 'calendar', 'offset'). Documented: ('n', 'offset', 'normalize', 'weekmask', 'holidays', 'calendar')
pandas.testing.assert_series_equal: Wrong parameters order. Actual: ('left', 'right', 'check_dtype', 'check_index_type', 'check_series_type', 'check_less_precise', 'check_names', 'check_exact', 'check_datetimelike_compat', 'check_categorical', 'obj'). Documented: ('left', 'right', 'check_dtype', 'check_index_type', 'check_series_type', 'check_less_precise', 'check_exact', 'check_names', 'check_datetimelike_compat', 'check_categorical', 'obj')``` | https://api.github.com/repos/pandas-dev/pandas/pulls/23611 | 2018-11-10T04:46:46Z | 2018-11-11T14:50:27Z | 2018-11-11T14:50:27Z | 2019-01-02T20:26:28Z |
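The "Wrong parameters order" reports above come from comparing each function's real signature against the order documented in its docstring. A rough stdlib sketch of that comparison — the `crosstab` stub and the helper name are illustrative, not the validation script's actual code:

```python
import inspect

def check_param_order(func, documented):
    # Compare the documented parameter order against the real
    # signature; return None when they agree, otherwise the
    # (actual, documented) pair, mimicking the report format.
    actual = tuple(inspect.signature(func).parameters)
    documented = tuple(documented)
    return None if actual == documented else (actual, documented)

# Stub mirroring pandas.crosstab's parameter order (illustrative).
def crosstab(index, columns, values=None, rownames=None, colnames=None,
             aggfunc=None):
    pass

# Before the fix, the docstring listed ``aggfunc`` too early:
assert check_param_order(
    crosstab,
    ['index', 'columns', 'values', 'aggfunc', 'rownames', 'colnames'],
) is not None

# After reordering, documentation and signature agree:
assert check_param_order(
    crosstab,
    ['index', 'columns', 'values', 'rownames', 'colnames', 'aggfunc'],
) is None
```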
TST: Use intp as expected dtype in IntervalIndex indexing tests | diff --git a/pandas/tests/indexes/interval/test_interval.py b/pandas/tests/indexes/interval/test_interval.py
index 258f2dc41fb79..49d093d312cf1 100644
--- a/pandas/tests/indexes/interval/test_interval.py
+++ b/pandas/tests/indexes/interval/test_interval.py
@@ -412,9 +412,9 @@ def test_get_loc_value(self):
assert idx.get_loc(0.5) == 0
assert idx.get_loc(1) == 0
tm.assert_numpy_array_equal(idx.get_loc(1.5),
- np.array([0, 1], dtype='int64'))
+ np.array([0, 1], dtype='intp'))
tm.assert_numpy_array_equal(np.sort(idx.get_loc(2)),
- np.array([0, 1], dtype='int64'))
+ np.array([0, 1], dtype='intp'))
assert idx.get_loc(3) == 1
pytest.raises(KeyError, idx.get_loc, 3.5)
@@ -537,12 +537,12 @@ def test_get_loc_datetimelike_overlapping(self, arrays):
value = index[0].mid + Timedelta('12 hours')
result = np.sort(index.get_loc(value))
- expected = np.array([0, 1], dtype='int64')
+ expected = np.array([0, 1], dtype='intp')
assert tm.assert_numpy_array_equal(result, expected)
interval = Interval(index[0].left, index[1].right)
result = np.sort(index.get_loc(interval))
- expected = np.array([0, 1, 2], dtype='int64')
+ expected = np.array([0, 1, 2], dtype='intp')
assert tm.assert_numpy_array_equal(result, expected)
# To be removed, replaced by test_interval_new.py (see #16316, #16386)
@@ -617,7 +617,7 @@ def test_get_reindexer_datetimelike(self, arrays):
target = IntervalIndex.from_tuples(tuples)
result = index._get_reindexer(target)
- expected = np.array([0, 3], dtype='int64')
+ expected = np.array([0, 3], dtype='intp')
tm.assert_numpy_array_equal(result, expected)
@pytest.mark.parametrize('breaks', [
| xref https://github.com/pandas-dev/pandas/pull/23468#issuecomment-437335512: fixes some failing 32bit tests | https://api.github.com/repos/pandas-dev/pandas/pulls/23609 | 2018-11-09T23:51:06Z | 2018-11-10T04:24:09Z | 2018-11-10T04:24:09Z | 2018-11-11T13:15:42Z |
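The fix matters because numpy's `intp` is the pointer-sized index integer: it only coincides with `int64` on 64-bit builds, which is why the hard-coded `int64` expectations passed everywhere except 32-bit CI. A stdlib check of the platform pointer width (no numpy required):

```python
import struct

# numpy's ``intp`` matches the C pointer size: 4 bytes (int32) on a
# 32-bit build, 8 bytes (int64) on a 64-bit build.  Indexing results
# such as ``IntervalIndex.get_loc`` come back as ``intp`` arrays, so
# tests that expect ``int64`` break on 32-bit platforms.
pointer_bytes = struct.calcsize('P')
intp_name = 'int%d' % (pointer_bytes * 8)
assert intp_name in ('int32', 'int64')
```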
DOC: Adding validation of the section order in docstrings | diff --git a/scripts/tests/test_validate_docstrings.py b/scripts/tests/test_validate_docstrings.py
index ccd5f56141a6a..c1bdab73c2671 100644
--- a/scripts/tests/test_validate_docstrings.py
+++ b/scripts/tests/test_validate_docstrings.py
@@ -350,6 +350,35 @@ def private_classes(self):
This mentions NDFrame, which is not correct.
"""
+ def unknown_section(self):
+ """
+ This section has an unknown section title.
+
+ Unknown Section
+ ---------------
+ This should raise an error in the validation.
+ """
+
+ def sections_in_wrong_order(self):
+ """
+ This docstring has the sections in the wrong order.
+
+ Parameters
+ ----------
+ name : str
+ This section is in the right position.
+
+ Examples
+ --------
+ >>> print('So far Examples is good, as it goes before Parameters')
+ So far Examples is good, as it goes before Parameters
+
+ See Also
+ --------
+ function : This should generate an error, as See Also needs to go
+ before Examples.
+ """
+
class BadSummaries(object):
@@ -706,6 +735,11 @@ def test_bad_generic_functions(self, func):
('BadGenericDocStrings', 'private_classes',
("Private classes (NDFrame) should not be mentioned in public "
'docstrings',)),
+ ('BadGenericDocStrings', 'unknown_section',
+ ('Found unknown section "Unknown Section".',)),
+ ('BadGenericDocStrings', 'sections_in_wrong_order',
+ ('Wrong order of sections. "See Also" should be located before '
+ '"Notes"',)),
('BadSeeAlso', 'desc_no_period',
('Missing period at end of description for See Also "Series.iloc"',)),
('BadSeeAlso', 'desc_first_letter_lowercase',
diff --git a/scripts/validate_docstrings.py b/scripts/validate_docstrings.py
index ed84e58049cae..7da77a1f60ad5 100755
--- a/scripts/validate_docstrings.py
+++ b/scripts/validate_docstrings.py
@@ -56,6 +56,9 @@
PRIVATE_CLASSES = ['NDFrame', 'IndexOpsMixin']
DIRECTIVES = ['versionadded', 'versionchanged', 'deprecated']
+ALLOWED_SECTIONS = ['Parameters', 'Attributes', 'Methods', 'Returns', 'Yields',
+ 'Other Parameters', 'Raises', 'Warns', 'See Also', 'Notes',
+ 'References', 'Examples']
ERROR_MSGS = {
'GL01': 'Docstring text (summary) should start in the line immediately '
'after the opening quotes (not in the same line, or leaving a '
@@ -69,6 +72,10 @@
'mentioned in public docstrings',
'GL05': 'Tabs found at the start of line "{line_with_tabs}", please use '
'whitespace only',
+ 'GL06': 'Found unknown section "{section}". Allowed sections are: '
+ '{allowed_sections}',
+ 'GL07': 'Wrong order of sections. "{wrong_section}" should be located '
+ 'before "{goes_before}", the right order is: {sorted_sections}',
'SS01': 'No summary found (a short summary in a single line should be '
'present at the beginning of the docstring)',
'SS02': 'Summary does not start with a capital letter',
@@ -353,6 +360,18 @@ def double_blank_lines(self):
prev = row.strip()
return False
+ @property
+ def section_titles(self):
+ sections = []
+ self.doc._doc.reset()
+ while not self.doc._doc.eof():
+ content = self.doc._read_to_next_section()
+ if (len(content) > 1
+ and len(content[0]) == len(content[1])
+ and set(content[1]) == {'-'}):
+ sections.append(content[0])
+ return sections
+
@property
def summary(self):
return ' '.join(self.doc['Summary'])
@@ -580,6 +599,25 @@ def validate_one(func_name):
if re.match("^ *\t", line):
errs.append(error('GL05', line_with_tabs=line.lstrip()))
+ unseen_sections = list(ALLOWED_SECTIONS)
+ for section in doc.section_titles:
+ if section not in ALLOWED_SECTIONS:
+ errs.append(error('GL06',
+ section=section,
+ allowed_sections=', '.join(ALLOWED_SECTIONS)))
+ else:
+ if section in unseen_sections:
+ section_idx = unseen_sections.index(section)
+ unseen_sections = unseen_sections[section_idx + 1:]
+ else:
+ section_idx = ALLOWED_SECTIONS.index(section)
+ goes_before = ALLOWED_SECTIONS[section_idx + 1]
+ errs.append(error('GL07',
+ sorted_sections=' > '.join(ALLOWED_SECTIONS),
+ wrong_section=section,
+ goes_before=goes_before))
+ break
+
if not doc.summary:
errs.append(error('SS01'))
else:
| - [X] closes #23133
- [X] tests added / passed
- [X] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/23607 | 2018-11-09T22:31:13Z | 2018-11-12T15:33:48Z | 2018-11-12T15:33:48Z | 2018-11-12T15:33:48Z |
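The ordering check added to `validate_one` can be sketched on its own: scan the section titles, flag unknown ones (GL06), and stop at the first known title that appears after its allowed position (GL07). A stdlib sketch with simplified error strings:

```python
ALLOWED_SECTIONS = ['Parameters', 'Attributes', 'Methods', 'Returns',
                    'Yields', 'Other Parameters', 'Raises', 'Warns',
                    'See Also', 'Notes', 'References', 'Examples']

def section_errors(section_titles):
    # Unknown titles are reported individually; the first out-of-order
    # known title stops the scan, as in the validation script.
    errors = []
    unseen = list(ALLOWED_SECTIONS)
    for section in section_titles:
        if section not in ALLOWED_SECTIONS:
            errors.append('GL06: unknown section "%s"' % section)
        elif section in unseen:
            # Seen in order: only later sections remain allowed.
            unseen = unseen[unseen.index(section) + 1:]
        else:
            idx = ALLOWED_SECTIONS.index(section)
            errors.append('GL07: "%s" should come before "%s"'
                          % (section, ALLOWED_SECTIONS[idx + 1]))
            break
    return errors

assert section_errors(['Parameters', 'See Also', 'Examples']) == []
assert section_errors(['Unknown Section']) == [
    'GL06: unknown section "Unknown Section"']
# 'See Also' after 'Examples' is out of order, as in the test case.
assert section_errors(['Parameters', 'Examples', 'See Also']) == [
    'GL07: "See Also" should come before "Notes"']
```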
TST: add tests for keeping dtype in Series.update | diff --git a/pandas/tests/series/test_combine_concat.py b/pandas/tests/series/test_combine_concat.py
index 3f137bf686715..e13cb9edffe2b 100644
--- a/pandas/tests/series/test_combine_concat.py
+++ b/pandas/tests/series/test_combine_concat.py
@@ -10,10 +10,10 @@
import pandas as pd
from pandas import DataFrame, DatetimeIndex, Series, compat, date_range
import pandas.util.testing as tm
-from pandas.util.testing import assert_series_equal
+from pandas.util.testing import assert_frame_equal, assert_series_equal
-class TestSeriesCombine():
+class TestSeriesCombine(object):
def test_append(self, datetime_series, string_series, object_series):
appendedSeries = string_series.append(object_series)
@@ -116,8 +116,40 @@ def test_update(self):
df = DataFrame([{"a": 1}, {"a": 3, "b": 2}])
df['c'] = np.nan
- # this will fail as long as series is a sub-class of ndarray
- # df['c'].update(Series(['foo'],index=[0])) #####
+ df['c'].update(Series(['foo'], index=[0]))
+ expected = DataFrame([[1, np.nan, 'foo'], [3, 2., np.nan]],
+ columns=['a', 'b', 'c'])
+ assert_frame_equal(df, expected)
+
+ @pytest.mark.parametrize('other, dtype, expected', [
+ # other is int
+ ([61, 63], 'int32', pd.Series([10, 61, 12], dtype='int32')),
+ ([61, 63], 'int64', pd.Series([10, 61, 12])),
+ ([61, 63], float, pd.Series([10., 61., 12.])),
+ ([61, 63], object, pd.Series([10, 61, 12], dtype=object)),
+ # other is float, but can be cast to int
+ ([61., 63.], 'int32', pd.Series([10, 61, 12], dtype='int32')),
+ ([61., 63.], 'int64', pd.Series([10, 61, 12])),
+ ([61., 63.], float, pd.Series([10., 61., 12.])),
+ ([61., 63.], object, pd.Series([10, 61., 12], dtype=object)),
+ # others is float, cannot be cast to int
+ ([61.1, 63.1], 'int32', pd.Series([10., 61.1, 12.])),
+ ([61.1, 63.1], 'int64', pd.Series([10., 61.1, 12.])),
+ ([61.1, 63.1], float, pd.Series([10., 61.1, 12.])),
+ ([61.1, 63.1], object, pd.Series([10, 61.1, 12], dtype=object)),
+ # other is object, cannot be cast
+ ([(61,), (63,)], 'int32', pd.Series([10, (61,), 12])),
+ ([(61,), (63,)], 'int64', pd.Series([10, (61,), 12])),
+ ([(61,), (63,)], float, pd.Series([10., (61,), 12.])),
+ ([(61,), (63,)], object, pd.Series([10, (61,), 12]))
+ ])
+ def test_update_dtypes(self, other, dtype, expected):
+
+ s = Series([10, 11, 12], dtype=dtype)
+ other = Series(other, index=[1, 3])
+ s.update(other)
+
+ assert_series_equal(s, expected)
def test_concat_empty_series_dtypes_roundtrips(self):
| - [x] precursor to #23192
- [x] tests added / passed
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
| https://api.github.com/repos/pandas-dev/pandas/pulls/23604 | 2018-11-09T18:27:32Z | 2018-11-20T02:17:28Z | 2018-11-20T02:17:28Z | 2018-11-20T06:53:52Z |
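The int-dtype rows of the parametrization above encode a small casting rule: an integral float (`61.0`) can stay in an int column, a fractional float (`61.1`) upcasts the column to float, and an uncastable value (a tuple) falls back to object. A toy stdlib model of just those rows — not pandas' actual casting code:

```python
def updated_int_column(other):
    # Toy model of the int-dtype rows in ``test_update_dtypes``:
    # returns (stored_value, resulting_dtype) after updating one
    # cell of an integer column.
    if isinstance(other, float):
        if other.is_integer():
            return int(other), int   # 61.0 can stay in an int column
        return other, float          # 61.1 forces an upcast
    if isinstance(other, int):
        return other, int
    return other, object             # e.g. a tuple: fall back to object

assert updated_int_column(61) == (61, int)
assert updated_int_column(61.0) == (61, int)
assert updated_int_column(61.1) == (61.1, float)
assert updated_int_column((61,)) == ((61,), object)
```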
CLN: remove values attribute from datetimelike EAs | diff --git a/pandas/core/arrays/datetimelike.py b/pandas/core/arrays/datetimelike.py
index ed4309395ac1f..3fa4f503d2dd5 100644
--- a/pandas/core/arrays/datetimelike.py
+++ b/pandas/core/arrays/datetimelike.py
@@ -66,7 +66,7 @@ def cmp_method(self, other):
with warnings.catch_warnings(record=True):
warnings.filterwarnings("ignore", "elementwise", FutureWarning)
with np.errstate(all='ignore'):
- result = op(self.values, np.asarray(other))
+ result = op(self._data, np.asarray(other))
return result
@@ -119,15 +119,10 @@ def _box_values(self, values):
def __iter__(self):
return (self._box_func(v) for v in self.asi8)
- @property
- def values(self):
- """ return the underlying data as an ndarray """
- return self._data.view(np.ndarray)
-
@property
def asi8(self):
# do not cache or you'll create a memory leak
- return self.values.view('i8')
+ return self._data.view('i8')
# ------------------------------------------------------------------
# Array-like Methods
diff --git a/pandas/core/arrays/datetimes.py b/pandas/core/arrays/datetimes.py
index 39a2c7e75027e..405056c628ceb 100644
--- a/pandas/core/arrays/datetimes.py
+++ b/pandas/core/arrays/datetimes.py
@@ -886,7 +886,7 @@ def to_period(self, freq=None):
freq = get_period_alias(freq)
- return PeriodArray._from_datetime64(self.values, freq, tz=self.tz)
+ return PeriodArray._from_datetime64(self._data, freq, tz=self.tz)
def to_perioddelta(self, freq):
"""
diff --git a/pandas/core/arrays/timedeltas.py b/pandas/core/arrays/timedeltas.py
index 0fd69abd96cfa..cf3ba263d1f81 100644
--- a/pandas/core/arrays/timedeltas.py
+++ b/pandas/core/arrays/timedeltas.py
@@ -81,7 +81,7 @@ def wrapper(self, other):
raise TypeError(msg.format(cls=type(self).__name__,
typ=type(other).__name__))
else:
- other = type(self)(other).values
+ other = type(self)(other)._data
result = meth(self, other)
result = com.values_from_object(result)
diff --git a/pandas/core/indexes/datetimes.py b/pandas/core/indexes/datetimes.py
index 3a2f9986760d3..56ab9b6c020c0 100644
--- a/pandas/core/indexes/datetimes.py
+++ b/pandas/core/indexes/datetimes.py
@@ -292,7 +292,7 @@ def __new__(cls, data=None,
'set specified tz: {1}')
raise TypeError(msg.format(data.tz, tz))
- subarr = data.values
+ subarr = data._data
if freq is None:
freq = data.freq
diff --git a/pandas/tests/extension/test_period.py b/pandas/tests/extension/test_period.py
index 83f30aed88e65..3de3f1dfd9dbc 100644
--- a/pandas/tests/extension/test_period.py
+++ b/pandas/tests/extension/test_period.py
@@ -75,9 +75,7 @@ def test_combine_add(self, data_repeated):
class TestInterface(BasePeriodTests, base.BaseInterfaceTests):
- def test_no_values_attribute(self, data):
- # We have a values attribute.
- pass
+ pass
class TestArithmeticOps(BasePeriodTests, base.BaseArithmeticOpsTests):
| We don't need a `.values` attribute on the EAs (in the interface, we actually explicitly disallow it (in words and tests)).
I thought there was a reason during the PeriodArray PR that this couldn't yet be removed, but relevant tests seem to be passing locally.
cc @jbrockmendel | https://api.github.com/repos/pandas-dev/pandas/pulls/23603 | 2018-11-09T16:00:18Z | 2018-11-09T21:05:55Z | 2018-11-09T21:05:55Z | 2018-11-12T11:04:01Z |
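The cleanup above reflects an interface contract: extension arrays keep their backing buffer private as `_data` and expose no public `values` attribute, deriving views such as `asi8` from `_data` on demand. A toy sketch of that shape, with a plain list standing in for the i8 ndarray:

```python
class DatetimeLikeArray:
    # Toy sketch: the backing buffer is private (``_data``) and there
    # is deliberately no public ``values`` attribute, matching the
    # extension-array interface this PR cleans up after.
    def __init__(self, data):
        self._data = list(data)

    @property
    def asi8(self):
        # do not cache -- recompute the integer view on each access,
        # echoing the real ``asi8`` comment about memory leaks
        return [int(v) for v in self._data]

arr = DatetimeLikeArray([1.0, 2.0, 3.0])
assert arr.asi8 == [1, 2, 3]
assert not hasattr(arr, 'values')
```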
Add default repr for EAs | diff --git a/doc/source/whatsnew/v0.24.0.rst b/doc/source/whatsnew/v0.24.0.rst
index 7617ad5b428a2..02ffd07e81ff9 100644
--- a/doc/source/whatsnew/v0.24.0.rst
+++ b/doc/source/whatsnew/v0.24.0.rst
@@ -1001,6 +1001,7 @@ update the ``ExtensionDtype._metadata`` tuple to match the signature of your
- :meth:`DataFrame.stack` no longer converts to object dtype for DataFrames where each column has the same extension dtype. The output Series will have the same dtype as the columns (:issue:`23077`).
- :meth:`Series.unstack` and :meth:`DataFrame.unstack` no longer convert extension arrays to object-dtype ndarrays. Each column in the output ``DataFrame`` will now have the same dtype as the input (:issue:`23077`).
- Bug when grouping :meth:`Dataframe.groupby()` and aggregating on ``ExtensionArray`` it was not returning the actual ``ExtensionArray`` dtype (:issue:`23227`).
+- A default repr for :class:`ExtensionArray` is now provided (:issue:`23601`).
.. _whatsnew_0240.api.incompatibilities:
@@ -1116,6 +1117,7 @@ Deprecations
- The methods :meth:`Series.str.partition` and :meth:`Series.str.rpartition` have deprecated the ``pat`` keyword in favor of ``sep`` (:issue:`22676`)
- Deprecated the `nthreads` keyword of :func:`pandas.read_feather` in favor of
`use_threads` to reflect the changes in pyarrow 0.11.0. (:issue:`23053`)
+- :meth:`ExtensionArray._formatting_values` is deprecated. Use :attr:`ExtensionArray._formatter` instead. (:issue:`23601`)
- :func:`pandas.read_excel` has deprecated accepting ``usecols`` as an integer. Please pass in a list of ints from 0 to ``usecols`` inclusive instead (:issue:`23527`)
- Constructing a :class:`TimedeltaIndex` from data with ``datetime64``-dtyped data is deprecated, will raise ``TypeError`` in a future version (:issue:`23539`)
- Constructing a :class:`DatetimeIndex` from data with ``timedelta64``-dtyped data is deprecated, will raise ``TypeError`` in a future version (:issue:`23675`)
@@ -1282,6 +1284,7 @@ Datetimelike
- Bug in rounding methods of :class:`DatetimeIndex` (:meth:`~DatetimeIndex.round`, :meth:`~DatetimeIndex.ceil`, :meth:`~DatetimeIndex.floor`) and :class:`Timestamp` (:meth:`~Timestamp.round`, :meth:`~Timestamp.ceil`, :meth:`~Timestamp.floor`) could give rise to loss of precision (:issue:`22591`)
- Bug in :func:`to_datetime` with an :class:`Index` argument that would drop the ``name`` from the result (:issue:`21697`)
- Bug in :class:`PeriodIndex` where adding or subtracting a :class:`timedelta` or :class:`Tick` object produced incorrect results (:issue:`22988`)
+- Bug in the :class:`Series` repr with period-dtype data missing a space before the data (:issue:`23601`)
- Bug in :func:`date_range` when decrementing a start date to a past end date by a negative frequency (:issue:`23270`)
- Bug in :meth:`Series.min` which would return ``NaN`` instead of ``NaT`` when called on a series of ``NaT`` (:issue:`23282`)
- Bug in :func:`DataFrame.combine` with datetimelike values raising a TypeError (:issue:`23079`)
diff --git a/pandas/core/arrays/base.py b/pandas/core/arrays/base.py
index 8877436dcf51c..9c6aa4a12923f 100644
--- a/pandas/core/arrays/base.py
+++ b/pandas/core/arrays/base.py
@@ -47,10 +47,12 @@ class ExtensionArray(object):
* copy
* _concat_same_type
- An additional method is available to satisfy pandas' internal,
- private block API.
+ A default repr displaying the type, (truncated) data, length,
+ and dtype is provided. It can be customized or replaced
+ by overriding:
- * _formatting_values
+ * __repr__ : A default repr for the ExtensionArray.
+ * _formatter : Print scalars inside a Series or DataFrame.
Some methods require casting the ExtensionArray to an ndarray of Python
objects with ``self.astype(object)``, which may be expensive. When
@@ -676,17 +678,70 @@ def copy(self, deep=False):
raise AbstractMethodError(self)
# ------------------------------------------------------------------------
- # Block-related methods
+ # Printing
# ------------------------------------------------------------------------
+ def __repr__(self):
+ from pandas.io.formats.printing import format_object_summary
+
+ template = (
+ u'{class_name}'
+ u'{data}\n'
+ u'Length: {length}, dtype: {dtype}'
+ )
+ # the short repr has no trailing newline, while the truncated
+ # repr does. So we include a newline in our template, and strip
+ # any trailing newlines from format_object_summary
+ data = format_object_summary(self, self._formatter(),
+ indent_for_name=False).rstrip(', \n')
+ class_name = u'<{}>\n'.format(self.__class__.__name__)
+ return template.format(class_name=class_name, data=data,
+ length=len(self),
+ dtype=self.dtype)
+
+ def _formatter(self, boxed=False):
+ # type: (bool) -> Callable[[Any], Optional[str]]
+ """Formatting function for scalar values.
+
+ This is used in the default '__repr__'. The returned formatting
+ function receives instances of your scalar type.
+
+ Parameters
+ ----------
+ boxed: bool, default False
+ An indicated for whether or not your array is being printed
+ within a Series, DataFrame, or Index (True), or just by
+ itself (False). This may be useful if you want scalar values
+ to appear differently within a Series versus on its own (e.g.
+ quoted or not).
+
+ Returns
+ -------
+ Callable[[Any], str]
+ A callable that gets instances of the scalar type and
+ returns a string. By default, :func:`repr` is used
+ when ``boxed=False`` and :func:`str` is used when
+ ``boxed=True``.
+ """
+ if boxed:
+ return str
+ return repr
def _formatting_values(self):
# type: () -> np.ndarray
# At the moment, this has to be an array since we use result.dtype
"""
An array of values to be printed in, e.g. the Series repr
+
+ .. deprecated:: 0.24.0
+
+ Use :meth:`ExtensionArray._formatter` instead.
"""
return np.array(self)
+ # ------------------------------------------------------------------------
+ # Reshaping
+ # ------------------------------------------------------------------------
+
@classmethod
def _concat_same_type(cls, to_concat):
# type: (Sequence[ExtensionArray]) -> ExtensionArray
diff --git a/pandas/core/arrays/categorical.py b/pandas/core/arrays/categorical.py
index 3ed2a3b3955e4..ac1c34edba914 100644
--- a/pandas/core/arrays/categorical.py
+++ b/pandas/core/arrays/categorical.py
@@ -500,6 +500,10 @@ def _constructor(self):
def _from_sequence(cls, scalars, dtype=None, copy=False):
return Categorical(scalars, dtype=dtype)
+ def _formatter(self, boxed=False):
+ # Defer to CategoricalFormatter's formatter.
+ return None
+
def copy(self):
"""
Copy constructor.
@@ -2036,6 +2040,10 @@ def __unicode__(self):
return result
+ def __repr__(self):
+ # We want PandasObject.__repr__, which dispatches to __unicode__
+ return super(ExtensionArray, self).__repr__()
+
def _maybe_coerce_indexer(self, indexer):
"""
return an indexer coerced to the codes dtype
@@ -2392,9 +2400,6 @@ def _concat_same_type(self, to_concat):
return _concat_categorical(to_concat)
- def _formatting_values(self):
- return self
-
def isin(self, values):
"""
Check whether `values` are contained in Categorical.
diff --git a/pandas/core/arrays/integer.py b/pandas/core/arrays/integer.py
index f500422f0cbc5..38dc68e8f77a3 100644
--- a/pandas/core/arrays/integer.py
+++ b/pandas/core/arrays/integer.py
@@ -5,7 +5,7 @@
import numpy as np
from pandas._libs import lib
-from pandas.compat import range, set_function_name, string_types, u
+from pandas.compat import range, set_function_name, string_types
from pandas.util._decorators import cache_readonly
from pandas.core.dtypes.base import ExtensionDtype
@@ -20,9 +20,6 @@
from pandas.core import nanops
from pandas.core.arrays import ExtensionArray, ExtensionOpsMixin
-from pandas.io.formats.printing import (
- default_pprint, format_object_attrs, format_object_summary)
-
class _IntegerDtype(ExtensionDtype):
"""
@@ -268,6 +265,13 @@ def _from_sequence(cls, scalars, dtype=None, copy=False):
def _from_factorized(cls, values, original):
return integer_array(values, dtype=original.dtype)
+ def _formatter(self, boxed=False):
+ def fmt(x):
+ if isna(x):
+ return 'NaN'
+ return str(x)
+ return fmt
+
def __getitem__(self, item):
if is_integer(item):
if self._mask[item]:
@@ -301,10 +305,6 @@ def __iter__(self):
else:
yield self._data[i]
- def _formatting_values(self):
- # type: () -> np.ndarray
- return self._coerce_to_ndarray()
-
def take(self, indexer, allow_fill=False, fill_value=None):
from pandas.api.extensions import take
@@ -354,25 +354,6 @@ def __setitem__(self, key, value):
def __len__(self):
return len(self._data)
- def __repr__(self):
- """
- Return a string representation for this object.
-
- Invoked by unicode(df) in py2 only. Yields a Unicode String in both
- py2/py3.
- """
- klass = self.__class__.__name__
- data = format_object_summary(self, default_pprint, False)
- attrs = format_object_attrs(self)
- space = " "
-
- prepr = (u(",%s") %
- space).join(u("%s=%s") % (k, v) for k, v in attrs)
-
- res = u("%s(%s%s)") % (klass, data, prepr)
-
- return res
-
@property
def nbytes(self):
return self._data.nbytes + self._mask.nbytes
diff --git a/pandas/core/arrays/interval.py b/pandas/core/arrays/interval.py
index b055bc3f2eb52..785fb02c4d95d 100644
--- a/pandas/core/arrays/interval.py
+++ b/pandas/core/arrays/interval.py
@@ -690,9 +690,6 @@ def copy(self, deep=False):
# TODO: Could skip verify_integrity here.
return type(self).from_arrays(left, right, closed=closed)
- def _formatting_values(self):
- return np.asarray(self)
-
def isna(self):
return isna(self.left)
diff --git a/pandas/core/arrays/period.py b/pandas/core/arrays/period.py
index e258e474f4154..e648ea2d336bc 100644
--- a/pandas/core/arrays/period.py
+++ b/pandas/core/arrays/period.py
@@ -341,13 +341,10 @@ def to_timestamp(self, freq=None, how='start'):
# --------------------------------------------------------------------
# Array-like / EA-Interface Methods
- def __repr__(self):
- return '<{}>\n{}\nLength: {}, dtype: {}'.format(
- self.__class__.__name__,
- [str(s) for s in self],
- len(self),
- self.dtype
- )
+ def _formatter(self, boxed=False):
+ if boxed:
+ return str
+ return "'{}'".format
def __setitem__(
self,
diff --git a/pandas/core/arrays/sparse.py b/pandas/core/arrays/sparse.py
index ae5a4eb7075de..96724b6c4b362 100644
--- a/pandas/core/arrays/sparse.py
+++ b/pandas/core/arrays/sparse.py
@@ -1746,6 +1746,11 @@ def __unicode__(self):
fill=printing.pprint_thing(self.fill_value),
index=printing.pprint_thing(self.sp_index))
+ def _formatter(self, boxed=False):
+ # Defer to the formatter from the GenericArrayFormatter calling us.
+ # This will infer the correct formatter from the dtype of the values.
+ return None
+
SparseArray._add_arithmetic_ops()
SparseArray._add_comparison_ops()
diff --git a/pandas/core/indexes/period.py b/pandas/core/indexes/period.py
index c1a78b985fec9..26e51e4f63101 100644
--- a/pandas/core/indexes/period.py
+++ b/pandas/core/indexes/period.py
@@ -503,7 +503,7 @@ def __array_wrap__(self, result, context=None):
@property
def _formatter_func(self):
- return lambda x: "'%s'" % x
+ return self.array._formatter(boxed=False)
def asof_locs(self, where, mask):
"""
diff --git a/pandas/core/internals/blocks.py b/pandas/core/internals/blocks.py
index 1b67c20530eb0..dd6742e1ab10b 100644
--- a/pandas/core/internals/blocks.py
+++ b/pandas/core/internals/blocks.py
@@ -33,7 +33,7 @@
_isna_compat, array_equivalent, is_null_datelike_scalar, isna, notna)
import pandas.core.algorithms as algos
-from pandas.core.arrays import Categorical
+from pandas.core.arrays import Categorical, ExtensionArray
from pandas.core.base import PandasObject
import pandas.core.common as com
from pandas.core.indexes.datetimes import DatetimeIndex
@@ -1915,7 +1915,19 @@ def _slice(self, slicer):
return self.values[slicer]
def formatting_values(self):
- return self.values._formatting_values()
+ # Deprecating the ability to override _formatting_values.
+ # Do the warning here, it's only used in pandas, since we
+ # have to check if the subclass overrode it.
+ fv = getattr(type(self.values), '_formatting_values', None)
+ if fv and fv != ExtensionArray._formatting_values:
+ msg = (
+ "'ExtensionArray._formatting_values' is deprecated. "
+ "Specify 'ExtensionArray._formatter' instead."
+ )
+ warnings.warn(msg, DeprecationWarning, stacklevel=10)
+ return self.values._formatting_values()
+
+ return self.values
def concat_same_type(self, to_concat, placement=None):
"""
diff --git a/pandas/io/formats/format.py b/pandas/io/formats/format.py
index b35f5d1e548b7..8452eb562a8e6 100644
--- a/pandas/io/formats/format.py
+++ b/pandas/io/formats/format.py
@@ -16,11 +16,12 @@
from pandas.compat import StringIO, lzip, map, u, zip
from pandas.core.dtypes.common import (
- is_categorical_dtype, is_datetime64_dtype, is_datetime64tz_dtype, is_float,
- is_float_dtype, is_integer, is_integer_dtype, is_interval_dtype,
- is_list_like, is_numeric_dtype, is_period_arraylike, is_scalar,
+ is_categorical_dtype, is_datetime64_dtype, is_datetime64tz_dtype,
+ is_extension_array_dtype, is_float, is_float_dtype, is_integer,
+ is_integer_dtype, is_list_like, is_numeric_dtype, is_scalar,
is_timedelta64_dtype)
-from pandas.core.dtypes.generic import ABCMultiIndex, ABCSparseArray
+from pandas.core.dtypes.generic import (
+ ABCIndexClass, ABCMultiIndex, ABCSeries, ABCSparseArray)
from pandas.core.dtypes.missing import isna, notna
from pandas import compat
@@ -29,7 +30,6 @@
from pandas.core.config import get_option, set_option
from pandas.core.index import Index, ensure_index
from pandas.core.indexes.datetimes import DatetimeIndex
-from pandas.core.indexes.period import PeriodIndex
from pandas.io.common import _expand_user, _stringify_path
from pandas.io.formats.printing import adjoin, justify, pprint_thing
@@ -842,22 +842,18 @@ def _get_column_name_list(self):
def format_array(values, formatter, float_format=None, na_rep='NaN',
digits=None, space=None, justify='right', decimal='.'):
- if is_categorical_dtype(values):
- fmt_klass = CategoricalArrayFormatter
- elif is_interval_dtype(values):
- fmt_klass = IntervalArrayFormatter
+ if is_datetime64_dtype(values.dtype):
+ fmt_klass = Datetime64Formatter
+ elif is_timedelta64_dtype(values.dtype):
+ fmt_klass = Timedelta64Formatter
+ elif is_extension_array_dtype(values.dtype):
+ fmt_klass = ExtensionArrayFormatter
elif is_float_dtype(values.dtype):
fmt_klass = FloatArrayFormatter
- elif is_period_arraylike(values):
- fmt_klass = PeriodArrayFormatter
elif is_integer_dtype(values.dtype):
fmt_klass = IntArrayFormatter
elif is_datetime64tz_dtype(values):
fmt_klass = Datetime64TZFormatter
- elif is_datetime64_dtype(values.dtype):
- fmt_klass = Datetime64Formatter
- elif is_timedelta64_dtype(values.dtype):
- fmt_klass = Timedelta64Formatter
else:
fmt_klass = GenericArrayFormatter
@@ -1121,39 +1117,22 @@ def _format_strings(self):
return fmt_values.tolist()
-class IntervalArrayFormatter(GenericArrayFormatter):
-
- def __init__(self, values, *args, **kwargs):
- GenericArrayFormatter.__init__(self, values, *args, **kwargs)
-
- def _format_strings(self):
- formatter = self.formatter or str
- fmt_values = np.array([formatter(x) for x in self.values])
- return fmt_values
-
-
-class PeriodArrayFormatter(IntArrayFormatter):
-
+class ExtensionArrayFormatter(GenericArrayFormatter):
def _format_strings(self):
- from pandas.core.indexes.period import IncompatibleFrequency
- try:
- values = PeriodIndex(self.values).to_native_types()
- except IncompatibleFrequency:
- # periods may contains different freq
- values = Index(self.values, dtype='object').to_native_types()
-
- formatter = self.formatter or (lambda x: '{x}'.format(x=x))
- fmt_values = [formatter(x) for x in values]
- return fmt_values
-
+ values = self.values
+ if isinstance(values, (ABCIndexClass, ABCSeries)):
+ values = values._values
-class CategoricalArrayFormatter(GenericArrayFormatter):
+ formatter = values._formatter(boxed=True)
- def __init__(self, values, *args, **kwargs):
- GenericArrayFormatter.__init__(self, values, *args, **kwargs)
+ if is_categorical_dtype(values.dtype):
+ # Categorical is special for now, so that we can preserve tzinfo
+ array = values.get_values()
+ else:
+ array = np.asarray(values)
- def _format_strings(self):
- fmt_values = format_array(self.values.get_values(), self.formatter,
+ fmt_values = format_array(array,
+ formatter,
float_format=self.float_format,
na_rep=self.na_rep, digits=self.digits,
space=self.space, justify=self.justify)
diff --git a/pandas/io/formats/printing.py b/pandas/io/formats/printing.py
index e671571560b19..6d45d1e5dfcee 100644
--- a/pandas/io/formats/printing.py
+++ b/pandas/io/formats/printing.py
@@ -271,7 +271,8 @@ class TableSchemaFormatter(BaseFormatter):
max_seq_items=max_seq_items)
-def format_object_summary(obj, formatter, is_justify=True, name=None):
+def format_object_summary(obj, formatter, is_justify=True, name=None,
+ indent_for_name=True):
"""
Return the formatted obj as a unicode string
@@ -283,8 +284,11 @@ def format_object_summary(obj, formatter, is_justify=True, name=None):
string formatter for an element
is_justify : boolean
should justify the display
- name : name, optiona
+ name : name, optional
defaults to the class name of the obj
+ indent_for_name : bool, default True
+ Whether subsequent lines should be indented to
+ align with the name.
Returns
-------
@@ -300,8 +304,13 @@ def format_object_summary(obj, formatter, is_justify=True, name=None):
if name is None:
name = obj.__class__.__name__
- space1 = "\n%s" % (' ' * (len(name) + 1))
- space2 = "\n%s" % (' ' * (len(name) + 2))
+ if indent_for_name:
+ name_len = len(name)
+ space1 = "\n%s" % (' ' * (name_len + 1))
+ space2 = "\n%s" % (' ' * (name_len + 2))
+ else:
+ space1 = "\n"
+ space2 = "\n " # space for the opening '['
n = len(obj)
sep = ','
@@ -328,15 +337,17 @@ def best_len(values):
else:
return 0
+ close = u', '
+
if n == 0:
- summary = '[], '
+ summary = u'[]{}'.format(close)
elif n == 1:
first = formatter(obj[0])
- summary = '[%s], ' % first
+ summary = u'[{}]{}'.format(first, close)
elif n == 2:
first = formatter(obj[0])
last = formatter(obj[-1])
- summary = '[%s, %s], ' % (first, last)
+ summary = u'[{}, {}]{}'.format(first, last, close)
else:
if n > max_seq_items:
@@ -381,7 +392,11 @@ def best_len(values):
summary, line = _extend_line(summary, line, tail[-1],
display_width - 2, space2)
summary += line
- summary += '],'
+
+ # right now close is either '' or ', '
+ # Now we want to include the ']', but not the maybe space.
+ close = ']' + close.rstrip(' ')
+ summary += close
if len(summary) > (display_width):
summary += space1
diff --git a/pandas/tests/arrays/test_integer.py b/pandas/tests/arrays/test_integer.py
index 51cd139a6ccad..173f9707e76c2 100644
--- a/pandas/tests/arrays/test_integer.py
+++ b/pandas/tests/arrays/test_integer.py
@@ -57,24 +57,27 @@ def test_dtypes(dtype):
assert dtype.name is not None
-class TestInterface(object):
-
- def test_repr_array(self, data):
- result = repr(data)
-
- # not long
- assert '...' not in result
-
- assert 'dtype=' in result
- assert 'IntegerArray' in result
+def test_repr_array():
+ result = repr(integer_array([1, None, 3]))
+ expected = (
+ '<IntegerArray>\n'
+ '[1, NaN, 3]\n'
+ 'Length: 3, dtype: Int64'
+ )
+ assert result == expected
- def test_repr_array_long(self, data):
- # some arrays may be able to assert a ... in the repr
- with pd.option_context('display.max_seq_items', 1):
- result = repr(data)
- assert '...' in result
- assert 'length' in result
+def test_repr_array_long():
+ data = integer_array([1, 2, None] * 1000)
+ expected = (
+ "<IntegerArray>\n"
+ "[ 1, 2, NaN, 1, 2, NaN, 1, 2, NaN, 1,\n"
+ " ...\n"
+ " NaN, 1, 2, NaN, 1, 2, NaN, 1, 2, NaN]\n"
+ "Length: 3000, dtype: Int64"
+ )
+ result = repr(data)
+ assert result == expected
class TestConstructors(object):
diff --git a/pandas/tests/arrays/test_period.py b/pandas/tests/arrays/test_period.py
index 63b34db13705e..bf139bb0ce616 100644
--- a/pandas/tests/arrays/test_period.py
+++ b/pandas/tests/arrays/test_period.py
@@ -195,3 +195,36 @@ def test_sub_period():
other = pd.Period("2000", freq="M")
with pytest.raises(IncompatibleFrequency, match="freq"):
arr - other
+
+
+# ----------------------------------------------------------------------------
+# Printing
+
+def test_repr_small():
+ arr = period_array(['2000', '2001'], freq='D')
+ result = str(arr)
+ expected = (
+ "<PeriodArray>\n"
+ "['2000-01-01', '2001-01-01']\n"
+ "Length: 2, dtype: period[D]"
+ )
+ assert result == expected
+
+
+def test_repr_large():
+ arr = period_array(['2000', '2001'] * 500, freq='D')
+ result = str(arr)
+ expected = (
+ "<PeriodArray>\n"
+ "['2000-01-01', '2001-01-01', '2000-01-01', '2001-01-01', "
+ "'2000-01-01',\n"
+ " '2001-01-01', '2000-01-01', '2001-01-01', '2000-01-01', "
+ "'2001-01-01',\n"
+ " ...\n"
+ " '2000-01-01', '2001-01-01', '2000-01-01', '2001-01-01', "
+ "'2000-01-01',\n"
+ " '2001-01-01', '2000-01-01', '2001-01-01', '2000-01-01', "
+ "'2001-01-01']\n"
+ "Length: 1000, dtype: period[D]"
+ )
+ assert result == expected
diff --git a/pandas/tests/extension/base/__init__.py b/pandas/tests/extension/base/__init__.py
index d11bb8b6beb77..57704b77bb233 100644
--- a/pandas/tests/extension/base/__init__.py
+++ b/pandas/tests/extension/base/__init__.py
@@ -48,6 +48,7 @@ class TestMyDtype(BaseDtypeTests):
from .interface import BaseInterfaceTests # noqa
from .methods import BaseMethodsTests # noqa
from .ops import BaseArithmeticOpsTests, BaseComparisonOpsTests, BaseOpsUtil # noqa
+from .printing import BasePrintingTests # noqa
from .reduce import BaseNoReduceTests, BaseNumericReduceTests, BaseBooleanReduceTests # noqa
from .missing import BaseMissingTests # noqa
from .reshaping import BaseReshapingTests # noqa
diff --git a/pandas/tests/extension/base/interface.py b/pandas/tests/extension/base/interface.py
index 00a480d311b58..f8464dbac8053 100644
--- a/pandas/tests/extension/base/interface.py
+++ b/pandas/tests/extension/base/interface.py
@@ -1,7 +1,5 @@
import numpy as np
-from pandas.compat import StringIO
-
from pandas.core.dtypes.common import is_extension_array_dtype
from pandas.core.dtypes.dtypes import ExtensionDtype
@@ -35,29 +33,6 @@ def test_array_interface(self, data):
result = np.array(data)
assert result[0] == data[0]
- def test_repr(self, data):
- ser = pd.Series(data)
- assert data.dtype.name in repr(ser)
-
- df = pd.DataFrame({"A": data})
- repr(df)
-
- def test_repr_array(self, data):
- # some arrays may be able to assert
- # attributes in the repr
- repr(data)
-
- def test_repr_array_long(self, data):
- # some arrays may be able to assert a ... in the repr
- with pd.option_context('display.max_seq_items', 1):
- repr(data)
-
- def test_dtype_name_in_info(self, data):
- buf = StringIO()
- pd.DataFrame({"A": data}).info(buf=buf)
- result = buf.getvalue()
- assert data.dtype.name in result
-
def test_is_extension_array_dtype(self, data):
assert is_extension_array_dtype(data)
assert is_extension_array_dtype(data.dtype)
diff --git a/pandas/tests/extension/base/printing.py b/pandas/tests/extension/base/printing.py
new file mode 100644
index 0000000000000..b2ba1d95cf33e
--- /dev/null
+++ b/pandas/tests/extension/base/printing.py
@@ -0,0 +1,44 @@
+import io
+
+import pytest
+
+import pandas as pd
+from pandas import compat
+
+from .base import BaseExtensionTests
+
+
+class BasePrintingTests(BaseExtensionTests):
+ """Tests checking the formatting of your EA when printed."""
+
+ @pytest.mark.parametrize("size", ["big", "small"])
+ def test_array_repr(self, data, size):
+ if size == "small":
+ data = data[:5]
+ else:
+ data = type(data)._concat_same_type([data] * 5)
+
+ result = repr(data)
+ assert data.__class__.__name__ in result
+ assert 'Length: {}'.format(len(data)) in result
+ assert str(data.dtype) in result
+ if size == 'big':
+ assert '...' in result
+
+ def test_array_repr_unicode(self, data):
+ result = compat.text_type(data)
+ assert isinstance(result, compat.text_type)
+
+ def test_series_repr(self, data):
+ ser = pd.Series(data)
+ assert data.dtype.name in repr(ser)
+
+ def test_dataframe_repr(self, data):
+ df = pd.DataFrame({"A": data})
+ repr(df)
+
+ def test_dtype_name_in_info(self, data):
+ buf = io.StringIO()
+ pd.DataFrame({"A": data}).info(buf=buf)
+ result = buf.getvalue()
+ assert data.dtype.name in result
diff --git a/pandas/tests/extension/decimal/array.py b/pandas/tests/extension/decimal/array.py
index 3c8905c578c4f..79e81f1034c6d 100644
--- a/pandas/tests/extension/decimal/array.py
+++ b/pandas/tests/extension/decimal/array.py
@@ -114,9 +114,6 @@ def __setitem__(self, key, value):
def __len__(self):
return len(self._data)
- def __repr__(self):
- return 'DecimalArray({!r})'.format(self._data)
-
@property
def nbytes(self):
n = len(self)
diff --git a/pandas/tests/extension/decimal/test_decimal.py b/pandas/tests/extension/decimal/test_decimal.py
index 01efd7ec7e590..6281c5360cd03 100644
--- a/pandas/tests/extension/decimal/test_decimal.py
+++ b/pandas/tests/extension/decimal/test_decimal.py
@@ -188,7 +188,8 @@ def test_value_counts(self, all_data, dropna):
class TestCasting(BaseDecimal, base.BaseCastingTests):
- pass
+ pytestmark = pytest.mark.skipif(compat.PY2,
+ reason="Unhashable dtype in Py2.")
class TestGroupby(BaseDecimal, base.BaseGroupbyTests):
@@ -200,6 +201,11 @@ class TestSetitem(BaseDecimal, base.BaseSetitemTests):
pass
+class TestPrinting(BaseDecimal, base.BasePrintingTests):
+ pytestmark = pytest.mark.skipif(compat.PY2,
+ reason="Unhashable dtype in Py2.")
+
+
# TODO(extension)
@pytest.mark.xfail(reason=(
"raising AssertionError as this is not implemented, "
@@ -379,3 +385,17 @@ def test_divmod_array(reverse, expected_div, expected_mod):
tm.assert_extension_array_equal(div, expected_div)
tm.assert_extension_array_equal(mod, expected_mod)
+
+
+def test_formatting_values_deprecated():
+ class DecimalArray2(DecimalArray):
+ def _formatting_values(self):
+ return np.array(self)
+
+ ser = pd.Series(DecimalArray2([decimal.Decimal('1.0')]))
+ # different levels for 2 vs. 3
+ check_stacklevel = compat.PY3
+
+ with tm.assert_produces_warning(DeprecationWarning,
+ check_stacklevel=check_stacklevel):
+ repr(ser)
diff --git a/pandas/tests/extension/json/array.py b/pandas/tests/extension/json/array.py
index 2c6e74fda8a0e..d58b7ddf29123 100644
--- a/pandas/tests/extension/json/array.py
+++ b/pandas/tests/extension/json/array.py
@@ -115,9 +115,6 @@ def __setitem__(self, key, value):
def __len__(self):
return len(self.data)
- def __repr__(self):
- return 'JSONArary({!r})'.format(self.data)
-
@property
def nbytes(self):
return sys.getsizeof(self.data)
diff --git a/pandas/tests/extension/json/test_json.py b/pandas/tests/extension/json/test_json.py
index 66e5f6b6dc732..a941b562fe1ec 100644
--- a/pandas/tests/extension/json/test_json.py
+++ b/pandas/tests/extension/json/test_json.py
@@ -283,3 +283,7 @@ def _check_divmod_op(self, s, op, other, exc=NotImplementedError):
class TestComparisonOps(BaseJSON, base.BaseComparisonOpsTests):
pass
+
+
+class TestPrinting(BaseJSON, base.BasePrintingTests):
+ pass
diff --git a/pandas/tests/extension/test_integer.py b/pandas/tests/extension/test_integer.py
index 218b2e9bd0e11..e21ca81bcf5c3 100644
--- a/pandas/tests/extension/test_integer.py
+++ b/pandas/tests/extension/test_integer.py
@@ -214,3 +214,7 @@ class TestNumericReduce(base.BaseNumericReduceTests):
class TestBooleanReduce(base.BaseBooleanReduceTests):
pass
+
+
+class TestPrinting(base.BasePrintingTests):
+ pass
diff --git a/pandas/tests/extension/test_interval.py b/pandas/tests/extension/test_interval.py
index d67c0d0a9c05a..644f3ef94f40b 100644
--- a/pandas/tests/extension/test_interval.py
+++ b/pandas/tests/extension/test_interval.py
@@ -146,3 +146,9 @@ class TestReshaping(BaseInterval, base.BaseReshapingTests):
class TestSetitem(BaseInterval, base.BaseSetitemTests):
pass
+
+
+class TestPrinting(BaseInterval, base.BasePrintingTests):
+ @pytest.mark.skip(reason="custom repr")
+ def test_array_repr(self, data, size):
+ pass
diff --git a/pandas/tests/extension/test_period.py b/pandas/tests/extension/test_period.py
index 2e629ccb2981e..08e21fc30ad10 100644
--- a/pandas/tests/extension/test_period.py
+++ b/pandas/tests/extension/test_period.py
@@ -152,3 +152,7 @@ class TestSetitem(BasePeriodTests, base.BaseSetitemTests):
class TestGroupby(BasePeriodTests, base.BaseGroupbyTests):
pass
+
+
+class TestPrinting(BasePeriodTests, base.BasePrintingTests):
+ pass
diff --git a/pandas/tests/extension/test_sparse.py b/pandas/tests/extension/test_sparse.py
index 4f67a13215cfd..891e5f4dd9a95 100644
--- a/pandas/tests/extension/test_sparse.py
+++ b/pandas/tests/extension/test_sparse.py
@@ -316,3 +316,9 @@ def _compare_other(self, s, data, op_name, other):
s = pd.Series(data)
result = op(s, other)
tm.assert_series_equal(result, expected)
+
+
+class TestPrinting(BaseSparseTests, base.BasePrintingTests):
+ @pytest.mark.xfail(reason='Different repr', strict=True)
+ def test_array_repr(self, data, size):
+ super(TestPrinting, self).test_array_repr(data, size)
diff --git a/pandas/tests/frame/test_repr_info.py b/pandas/tests/frame/test_repr_info.py
index 01dee47fffe49..07cbb8cdcde0a 100644
--- a/pandas/tests/frame/test_repr_info.py
+++ b/pandas/tests/frame/test_repr_info.py
@@ -513,12 +513,12 @@ def test_repr_categorical_dates_periods(self):
tz='US/Eastern')
p = period_range('2011-01', freq='M', periods=5)
df = DataFrame({'dt': dt, 'p': p})
- exp = """ dt p
-0 2011-01-01 09:00:00-05:00 2011-01
-1 2011-01-01 10:00:00-05:00 2011-02
-2 2011-01-01 11:00:00-05:00 2011-03
-3 2011-01-01 12:00:00-05:00 2011-04
-4 2011-01-01 13:00:00-05:00 2011-05"""
+ exp = """ dt p
+0 2011-01-01 09:00:00-05:00 2011-01
+1 2011-01-01 10:00:00-05:00 2011-02
+2 2011-01-01 11:00:00-05:00 2011-03
+3 2011-01-01 12:00:00-05:00 2011-04
+4 2011-01-01 13:00:00-05:00 2011-05"""
df = DataFrame({'dt': Categorical(dt), 'p': Categorical(p)})
assert repr(df) == exp
diff --git a/pandas/tests/series/test_repr.py b/pandas/tests/series/test_repr.py
index ef96274746655..c4a0496f7fb27 100644
--- a/pandas/tests/series/test_repr.py
+++ b/pandas/tests/series/test_repr.py
@@ -364,11 +364,11 @@ def test_categorical_series_repr_datetime_ordered(self):
def test_categorical_series_repr_period(self):
idx = period_range('2011-01-01 09:00', freq='H', periods=5)
s = Series(Categorical(idx))
- exp = """0 2011-01-01 09:00
-1 2011-01-01 10:00
-2 2011-01-01 11:00
-3 2011-01-01 12:00
-4 2011-01-01 13:00
+ exp = """0 2011-01-01 09:00
+1 2011-01-01 10:00
+2 2011-01-01 11:00
+3 2011-01-01 12:00
+4 2011-01-01 13:00
dtype: category
Categories (5, period[H]): [2011-01-01 09:00, 2011-01-01 10:00, 2011-01-01 11:00, 2011-01-01 12:00,
2011-01-01 13:00]""" # noqa
@@ -377,11 +377,11 @@ def test_categorical_series_repr_period(self):
idx = period_range('2011-01', freq='M', periods=5)
s = Series(Categorical(idx))
- exp = """0 2011-01
-1 2011-02
-2 2011-03
-3 2011-04
-4 2011-05
+ exp = """0 2011-01
+1 2011-02
+2 2011-03
+3 2011-04
+4 2011-05
dtype: category
Categories (5, period[M]): [2011-01, 2011-02, 2011-03, 2011-04, 2011-05]"""
@@ -390,11 +390,11 @@ def test_categorical_series_repr_period(self):
def test_categorical_series_repr_period_ordered(self):
idx = period_range('2011-01-01 09:00', freq='H', periods=5)
s = Series(Categorical(idx, ordered=True))
- exp = """0 2011-01-01 09:00
-1 2011-01-01 10:00
-2 2011-01-01 11:00
-3 2011-01-01 12:00
-4 2011-01-01 13:00
+ exp = """0 2011-01-01 09:00
+1 2011-01-01 10:00
+2 2011-01-01 11:00
+3 2011-01-01 12:00
+4 2011-01-01 13:00
dtype: category
Categories (5, period[H]): [2011-01-01 09:00 < 2011-01-01 10:00 < 2011-01-01 11:00 < 2011-01-01 12:00 <
2011-01-01 13:00]""" # noqa
@@ -403,11 +403,11 @@ def test_categorical_series_repr_period_ordered(self):
idx = period_range('2011-01', freq='M', periods=5)
s = Series(Categorical(idx, ordered=True))
- exp = """0 2011-01
-1 2011-02
-2 2011-03
-3 2011-04
-4 2011-05
+ exp = """0 2011-01
+1 2011-02
+2 2011-03
+3 2011-04
+4 2011-05
dtype: category
Categories (5, period[M]): [2011-01 < 2011-02 < 2011-03 < 2011-04 < 2011-05]"""
| Closes https://github.com/pandas-dev/pandas/issues/22846
Closes https://github.com/pandas-dev/pandas/issues/23590
```python
In [4]: pd.core.arrays.period_array(['2000', '2001', None], freq='D')
Out[4]:
<PeriodArray>
['2000-01-01', '2001-01-01', 'NaT']
Length: 3, dtype: period[D]
In [5]: pd.core.arrays.period_array(['2000', '2001', None] * 100, freq='D')
Out[5]:
<PeriodArray>
['2000-01-01', '2001-01-01', 'NaT', '2000-01-01', '2001-01-01',
'NaT', '2000-01-01', '2001-01-01', 'NaT', '2000-01-01',
...
'NaT', '2000-01-01', '2001-01-01', 'NaT', '2000-01-01',
'2001-01-01', 'NaT', '2000-01-01', '2001-01-01', 'NaT']
Length: 300, dtype: period[D]
In [6]: pd.core.arrays.integer_array([1, 2, None])
Out[6]:
<IntegerArray>
[1, 2, NaN]
Length: 3, dtype: Int64
In [7]: pd.core.arrays.integer_array([1, 2, None] * 1000)
Out[7]:
<IntegerArray>
[ 1, 2, NaN, 1, 2, NaN, 1, 2, NaN, 1,
...
NaN, 1, 2, NaN, 1, 2, NaN, 1, 2, NaN]
Length: 3000, dtype: Int64
``` | https://api.github.com/repos/pandas-dev/pandas/pulls/23601 | 2018-11-09T15:31:00Z | 2018-12-04T13:09:17Z | 2018-12-04T13:09:16Z | 2019-04-05T23:43:03Z |
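The reprs shown above all follow one shared template introduced by this PR: `<ClassName>`, a bracketed data line, then a `Length: …, dtype: …` footer. As a rough, standalone illustration of that template logic (the function name here is an assumption for demonstration — the real implementation lives in `ExtensionArray.__repr__` plus `format_object_summary`, which additionally handle truncation and justification):

```python
def array_repr(class_name, values, dtype, formatter=repr):
    # Mimics the "<Name>\n[data]\nLength: n, dtype: d" template used by
    # ExtensionArray.__repr__ in this PR; no truncation or line wrapping.
    data = ', '.join(formatter(v) for v in values)
    return '<{}>\n[{}]\nLength: {}, dtype: {}'.format(
        class_name, data, len(values), dtype)
```

For example, `array_repr('PeriodArray', ['2000-01-01'], 'period[D]', formatter="'{}'".format)` quotes each element, matching how `PeriodArray._formatter(boxed=False)` returns `"'{}'".format` in the diff.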
DOC: Remove incorrect periods at the end of parameter types (#23597) | diff --git a/pandas/core/dtypes/inference.py b/pandas/core/dtypes/inference.py
index d56bd83f01236..5f35a040d7d47 100644
--- a/pandas/core/dtypes/inference.py
+++ b/pandas/core/dtypes/inference.py
@@ -73,7 +73,7 @@ def is_string_like(obj):
Parameters
----------
- obj : The object to check.
+ obj : The object to check
Examples
--------
@@ -127,7 +127,7 @@ def is_iterator(obj):
Parameters
----------
- obj : The object to check.
+ obj : The object to check
Returns
-------
@@ -172,7 +172,7 @@ def is_file_like(obj):
Parameters
----------
- obj : The object to check.
+ obj : The object to check
Returns
-------
@@ -203,7 +203,7 @@ def is_re(obj):
Parameters
----------
- obj : The object to check.
+ obj : The object to check
Returns
-------
@@ -227,7 +227,7 @@ def is_re_compilable(obj):
Parameters
----------
- obj : The object to check.
+ obj : The object to check
Returns
-------
@@ -261,7 +261,7 @@ def is_list_like(obj, allow_sets=True):
Parameters
----------
- obj : The object to check.
+ obj : The object to check
allow_sets : boolean, default True
If this parameter is False, sets will not be considered list-like
@@ -310,7 +310,7 @@ def is_array_like(obj):
Parameters
----------
- obj : The object to check.
+ obj : The object to check
Returns
-------
@@ -343,7 +343,7 @@ def is_nested_list_like(obj):
Parameters
----------
- obj : The object to check.
+ obj : The object to check
Returns
-------
@@ -384,7 +384,7 @@ def is_dict_like(obj):
Parameters
----------
- obj : The object to check.
+ obj : The object to check
Returns
-------
@@ -408,7 +408,7 @@ def is_named_tuple(obj):
Parameters
----------
- obj : The object to check.
+ obj : The object to check
Returns
-------
@@ -468,7 +468,7 @@ def is_sequence(obj):
Parameters
----------
- obj : The object to check.
+ obj : The object to check
Returns
-------
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 53cdc46fdd16b..c8237447abd36 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -5248,11 +5248,11 @@ def astype(self, dtype, copy=True, errors='raise', **kwargs):
the same type. Alternatively, use {col: dtype, ...}, where col is a
column label and dtype is a numpy.dtype or Python type to cast one
or more of the DataFrame's columns to column-specific types.
- copy : bool, default True.
+ copy : bool, default True
Return a copy when ``copy=True`` (be very careful setting
``copy=False`` as changes to values then may propagate to other
pandas objects).
- errors : {'raise', 'ignore'}, default 'raise'.
+ errors : {'raise', 'ignore'}, default 'raise'
Control raising of exceptions on invalid data for provided dtype.
- ``raise`` : allow exceptions to be raised
diff --git a/pandas/core/indexes/datetimes.py b/pandas/core/indexes/datetimes.py
index 3a2f9986760d3..aeaaf4124d314 100644
--- a/pandas/core/indexes/datetimes.py
+++ b/pandas/core/indexes/datetimes.py
@@ -501,7 +501,7 @@ def to_series(self, keep_tz=False, index=None, name=None):
Parameters
----------
- keep_tz : optional, defaults False.
+ keep_tz : optional, defaults False
return the data keeping the timezone.
If keep_tz is True:
diff --git a/pandas/core/indexes/multi.py b/pandas/core/indexes/multi.py
index 9c981c24190a4..01304cce507f0 100644
--- a/pandas/core/indexes/multi.py
+++ b/pandas/core/indexes/multi.py
@@ -1885,7 +1885,7 @@ def sortlevel(self, level=0, ascending=True, sort_remaining=True):
ascending : boolean, default True
False to sort in descending order
Can also be a list to specify a directed ordering
- sort_remaining : sort by the remaining levels after level.
+ sort_remaining : sort by the remaining levels after level
Returns
-------
diff --git a/pandas/core/indexes/range.py b/pandas/core/indexes/range.py
index d1b5645928921..e4c177a08462e 100644
--- a/pandas/core/indexes/range.py
+++ b/pandas/core/indexes/range.py
@@ -38,7 +38,7 @@ class RangeIndex(Int64Index):
Parameters
----------
- start : int (default: 0), or other RangeIndex instance.
+ start : int (default: 0), or other RangeIndex instance
If int and "stop" is not given, interpreted as "stop" instead.
stop : int (default: 0)
step : int (default: 1)
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 6971b0b0c78e0..b9f4b848b2ed7 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -3025,7 +3025,7 @@ def reorder_levels(self, order):
Parameters
----------
- order : list of int representing new level order.
+ order : list of int representing new level order
(reference level by number or key)
Returns
diff --git a/pandas/core/strings.py b/pandas/core/strings.py
index bf0c93437f4dc..a12605aaed554 100644
--- a/pandas/core/strings.py
+++ b/pandas/core/strings.py
@@ -659,7 +659,7 @@ def str_match(arr, pat, case=True, flags=0, na=np.nan):
If True, case sensitive
flags : int, default 0 (no flags)
re module flags, e.g. re.IGNORECASE
- na : default NaN, fill value for missing values.
+ na : default NaN, fill value for missing values
Returns
-------
@@ -2665,7 +2665,7 @@ def encode(self, encoding, errors="strict"):
Parameters
----------
- to_strip : str or None, default None.
+ to_strip : str or None, default None
Specifying the set of characters to be removed.
All combinations of this set of characters will be stripped.
If None then whitespaces are removed.
diff --git a/pandas/io/clipboards.py b/pandas/io/clipboards.py
index c6108f30a560a..23a2b04214e4e 100644
--- a/pandas/io/clipboards.py
+++ b/pandas/io/clipboards.py
@@ -16,7 +16,7 @@ def read_clipboard(sep=r'\s+', **kwargs): # pragma: no cover
Parameters
----------
- sep : str, default '\s+'.
+ sep : str, default '\s+'
A string or regex delimiter. The default of '\s+' denotes
one or more whitespace characters.
| - [x] closes #23597
- [x] passes `git diff master -u -- "*.py" | flake8 --diff`
| https://api.github.com/repos/pandas-dev/pandas/pulls/23600 | 2018-11-09T14:41:41Z | 2018-11-11T01:05:55Z | 2018-11-11T01:05:55Z | 2018-11-11T02:20:49Z |
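The convention this PR enforces — no trailing period on numpydoc parameter-type lines such as `obj : The object to check` — is mechanical enough to lint for. A hypothetical check (not part of pandas' actual docstring validation tooling) might look like:

```python
import re

def type_line_has_trailing_period(line):
    # Match a numpydoc parameter line like "copy : bool, default True."
    # and report whether the type description ends with a period.
    m = re.match(r'\s*\*{0,2}\w+\s*:\s*(.+)$', line)
    return bool(m) and m.group(1).rstrip().endswith('.')
```

Running it over the lines this diff touches would flag `copy : bool, default True.` but pass the corrected `copy : bool, default True`.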