| title | diff | body | url | created_at | closed_at | merged_at | updated_at |
|---|---|---|---|---|---|---|---|
DOC: fix punctuation in Timestamp/Timedelta docstrings | diff --git a/pandas/_libs/tslibs/nattype.pyx b/pandas/_libs/tslibs/nattype.pyx
index 020d1acf0b4ce..9306338029b73 100644
--- a/pandas/_libs/tslibs/nattype.pyx
+++ b/pandas/_libs/tslibs/nattype.pyx
@@ -545,7 +545,7 @@ class NaTType(_NaT):
""")
round = _make_nat_func('round', # noqa:E128
"""
- Round the Timestamp to the specified resolution
+ Round the Timestamp to the specified resolution.
Parameters
----------
@@ -583,7 +583,7 @@ default 'raise'
""")
floor = _make_nat_func('floor', # noqa:E128
"""
- return a new Timestamp floored to this resolution
+ return a new Timestamp floored to this resolution.
Parameters
----------
@@ -617,7 +617,7 @@ default 'raise'
""")
ceil = _make_nat_func('ceil', # noqa:E128
"""
- return a new Timestamp ceiled to this resolution
+ return a new Timestamp ceiled to this resolution.
Parameters
----------
@@ -729,7 +729,7 @@ default 'raise'
""")
replace = _make_nat_func('replace', # noqa:E128
"""
- implements datetime.replace, handles nanoseconds
+ implements datetime.replace, handles nanoseconds.
Parameters
----------
diff --git a/pandas/_libs/tslibs/timedeltas.pyx b/pandas/_libs/tslibs/timedeltas.pyx
index d24aafae0967d..52911f9fbcccd 100644
--- a/pandas/_libs/tslibs/timedeltas.pyx
+++ b/pandas/_libs/tslibs/timedeltas.pyx
@@ -1150,7 +1150,7 @@ cdef class _Timedelta(timedelta):
"""
Format Timedelta as ISO 8601 Duration like
``P[n]Y[n]M[n]DT[n]H[n]M[n]S``, where the ``[n]`` s are replaced by the
- values. See https://en.wikipedia.org/wiki/ISO_8601#Durations
+ values. See https://en.wikipedia.org/wiki/ISO_8601#Durations.
.. versionadded:: 0.20.0
@@ -1314,7 +1314,7 @@ class Timedelta(_Timedelta):
def round(self, freq):
"""
- Round the Timedelta to the specified resolution
+ Round the Timedelta to the specified resolution.
Parameters
----------
@@ -1332,7 +1332,7 @@ class Timedelta(_Timedelta):
def floor(self, freq):
"""
- return a new Timedelta floored to this resolution
+ return a new Timedelta floored to this resolution.
Parameters
----------
@@ -1342,7 +1342,7 @@ class Timedelta(_Timedelta):
def ceil(self, freq):
"""
- return a new Timedelta ceiled to this resolution
+ return a new Timedelta ceiled to this resolution.
Parameters
----------
diff --git a/pandas/_libs/tslibs/timestamps.pyx b/pandas/_libs/tslibs/timestamps.pyx
index c8c6efda30fae..6ca39d83afd25 100644
--- a/pandas/_libs/tslibs/timestamps.pyx
+++ b/pandas/_libs/tslibs/timestamps.pyx
@@ -441,7 +441,7 @@ class Timestamp(_Timestamp):
def round(self, freq, ambiguous='raise', nonexistent='raise'):
"""
- Round the Timestamp to the specified resolution
+ Round the Timestamp to the specified resolution.
Parameters
----------
@@ -483,7 +483,7 @@ default 'raise'
def floor(self, freq, ambiguous='raise', nonexistent='raise'):
"""
- return a new Timestamp floored to this resolution
+ return a new Timestamp floored to this resolution.
Parameters
----------
@@ -519,7 +519,7 @@ default 'raise'
def ceil(self, freq, ambiguous='raise', nonexistent='raise'):
"""
- return a new Timestamp ceiled to this resolution
+ return a new Timestamp ceiled to this resolution.
Parameters
----------
@@ -556,7 +556,7 @@ default 'raise'
@property
def tz(self):
"""
- Alias for tzinfo
+ Alias for tzinfo.
"""
return self.tzinfo
@@ -754,7 +754,7 @@ default 'raise'
def resolution(self):
"""
Return resolution describing the smallest difference between two
- times that can be represented by Timestamp object_state
+ times that can be represented by Timestamp object_state.
"""
# GH#21336, GH#21365
return Timedelta(nanoseconds=1)
@@ -893,7 +893,7 @@ default 'raise'
hour=None, minute=None, second=None, microsecond=None,
nanosecond=None, tzinfo=object, fold=0):
"""
- implements datetime.replace, handles nanoseconds
+ implements datetime.replace, handles nanoseconds.
Parameters
----------
| - [ ] xref #27977
- [ ] tests added / passed
- [ ] passes `black pandas`
- [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
Add periods to Timestamp and Timedelta docstrings for the following attributes and methods:
```
pandas.Timestamp.resolution
pandas.Timestamp.tz
pandas.Timestamp.ceil
pandas.Timestamp.combine
pandas.Timestamp.floor
pandas.Timestamp.fromordinal
pandas.Timestamp.replace
pandas.Timestamp.round
pandas.Timestamp.weekday
pandas.Timedelta.ceil
pandas.Timedelta.floor
pandas.Timedelta.isoformat
pandas.Timedelta.round
```
I did not find a `pandas.Timestamp.isoweekday` method in the Timestamp source code. Was it deprecated? @datapythonista | https://api.github.com/repos/pandas-dev/pandas/pulls/28053 | 2019-08-21T05:16:45Z | 2019-09-28T07:39:08Z | 2019-09-28T07:39:08Z | 2019-09-28T07:39:09Z |
Make DataFrame.to_string output full content by default | diff --git a/doc/source/user_guide/options.rst b/doc/source/user_guide/options.rst
index 1f1dff417e68f..a6491c6645613 100644
--- a/doc/source/user_guide/options.rst
+++ b/doc/source/user_guide/options.rst
@@ -353,7 +353,7 @@ display.max_colwidth 50 The maximum width in charac
a column in the repr of a pandas
data structure. When the column overflows,
a "..." placeholder is embedded in
- the output.
+ the output. 'None' value means unlimited.
display.max_info_columns 100 max_info_columns is used in DataFrame.info
method to decide if per column information
will be printed.
diff --git a/doc/source/whatsnew/v1.0.0.rst b/doc/source/whatsnew/v1.0.0.rst
index 5b9e3a7dbad06..c78e27f098f13 100644
--- a/doc/source/whatsnew/v1.0.0.rst
+++ b/doc/source/whatsnew/v1.0.0.rst
@@ -21,6 +21,8 @@ including other versions of pandas.
Enhancements
~~~~~~~~~~~~
+- :meth:`DataFrame.to_string` added the ``max_colwidth`` parameter to control when wide columns are truncated (:issue:`9784`)
+-
.. _whatsnew_1000.enhancements.other:
@@ -191,6 +193,7 @@ I/O
- Bug in :meth:`DataFrame.to_json` where using a Tuple as a column or index value and using ``orient="columns"`` or ``orient="index"`` would produce invalid JSON (:issue:`20500`)
- Improve infinity parsing. :meth:`read_csv` now interprets ``Infinity``, ``+Infinity``, ``-Infinity`` as floating point values (:issue:`10065`)
- Bug in :meth:`DataFrame.to_csv` where values were truncated when the length of ``na_rep`` was shorter than the text input data. (:issue:`25099`)
+- Bug in :func:`DataFrame.to_string` where values were truncated using display options instead of outputting the full content (:issue:`9784`)
Plotting
^^^^^^^^
diff --git a/pandas/core/config_init.py b/pandas/core/config_init.py
index dfc80140433f8..bc2eb3511629d 100644
--- a/pandas/core/config_init.py
+++ b/pandas/core/config_init.py
@@ -148,10 +148,10 @@ def use_numexpr_cb(key):
"""
max_colwidth_doc = """
-: int
+: int or None
The maximum width in characters of a column in the repr of
a pandas data structure. When the column overflows, a "..."
- placeholder is embedded in the output.
+ placeholder is embedded in the output. A 'None' value means unlimited.
"""
colheader_justify_doc = """
@@ -340,7 +340,9 @@ def is_terminal():
validator=is_instance_factory([type(None), int]),
)
cf.register_option("max_categories", 8, pc_max_categories_doc, validator=is_int)
- cf.register_option("max_colwidth", 50, max_colwidth_doc, validator=is_int)
+ cf.register_option(
+ "max_colwidth", 50, max_colwidth_doc, validator=is_nonnegative_int
+ )
if is_terminal():
max_cols = 0 # automatically determine optimal number of columns
else:
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index f1ed3a125f60c..44d3d840016fe 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -641,6 +641,7 @@ def __repr__(self):
max_rows = get_option("display.max_rows")
min_rows = get_option("display.min_rows")
max_cols = get_option("display.max_columns")
+ max_colwidth = get_option("display.max_colwidth")
show_dimensions = get_option("display.show_dimensions")
if get_option("display.expand_frame_repr"):
width, _ = console.get_console_size()
@@ -652,6 +653,7 @@ def __repr__(self):
min_rows=min_rows,
max_cols=max_cols,
line_width=width,
+ max_colwidth=max_colwidth,
show_dimensions=show_dimensions,
)
@@ -730,12 +732,17 @@ def to_string(
show_dimensions=False,
decimal=".",
line_width=None,
+ max_colwidth=None,
):
"""
Render a DataFrame to a console-friendly tabular output.
%(shared_params)s
line_width : int, optional
Width to wrap a line in characters.
+ max_colwidth : int, optional
+ Max width to truncate each column in characters. By default, no limit.
+
+ .. versionadded:: 1.0.0
%(returns)s
See Also
--------
@@ -752,26 +759,29 @@ def to_string(
2 3 6
"""
- formatter = fmt.DataFrameFormatter(
- self,
- columns=columns,
- col_space=col_space,
- na_rep=na_rep,
- formatters=formatters,
- float_format=float_format,
- sparsify=sparsify,
- justify=justify,
- index_names=index_names,
- header=header,
- index=index,
- min_rows=min_rows,
- max_rows=max_rows,
- max_cols=max_cols,
- show_dimensions=show_dimensions,
- decimal=decimal,
- line_width=line_width,
- )
- return formatter.to_string(buf=buf)
+ from pandas import option_context
+
+ with option_context("display.max_colwidth", max_colwidth):
+ formatter = fmt.DataFrameFormatter(
+ self,
+ columns=columns,
+ col_space=col_space,
+ na_rep=na_rep,
+ formatters=formatters,
+ float_format=float_format,
+ sparsify=sparsify,
+ justify=justify,
+ index_names=index_names,
+ header=header,
+ index=index,
+ min_rows=min_rows,
+ max_rows=max_rows,
+ max_cols=max_cols,
+ show_dimensions=show_dimensions,
+ decimal=decimal,
+ line_width=line_width,
+ )
+ return formatter.to_string(buf=buf)
# ----------------------------------------------------------------------
diff --git a/pandas/io/clipboards.py b/pandas/io/clipboards.py
index 76c01535a26e7..518b940ec5da3 100644
--- a/pandas/io/clipboards.py
+++ b/pandas/io/clipboards.py
@@ -131,7 +131,7 @@ def to_clipboard(obj, excel=True, sep=None, **kwargs): # pragma: no cover
if isinstance(obj, ABCDataFrame):
# str(df) has various unhelpful defaults, like truncation
- with option_context("display.max_colwidth", 999999):
+ with option_context("display.max_colwidth", None):
objstr = obj.to_string(**kwargs)
else:
objstr = str(obj)
diff --git a/pandas/io/formats/html.py b/pandas/io/formats/html.py
index 8c4a7f4a1213d..50fa4796f8d72 100644
--- a/pandas/io/formats/html.py
+++ b/pandas/io/formats/html.py
@@ -377,7 +377,7 @@ def _write_header(self, indent: int) -> None:
self.write("</thead>", indent)
def _get_formatted_values(self) -> Dict[int, List[str]]:
- with option_context("display.max_colwidth", 999999):
+ with option_context("display.max_colwidth", None):
fmt_values = {i: self.fmt._format_col(i) for i in range(self.ncols)}
return fmt_values
diff --git a/pandas/tests/config/test_config.py b/pandas/tests/config/test_config.py
index efaeb7b1471ec..51640641c78e6 100644
--- a/pandas/tests/config/test_config.py
+++ b/pandas/tests/config/test_config.py
@@ -218,6 +218,7 @@ def test_validation(self):
self.cf.set_option("a", 2) # int is_int
self.cf.set_option("b.c", "wurld") # str is_str
self.cf.set_option("d", 2)
+ self.cf.set_option("d", None) # non-negative int can be None
# None not is_int
with pytest.raises(ValueError, match=msg):
diff --git a/pandas/tests/io/formats/test_format.py b/pandas/tests/io/formats/test_format.py
index c0451a0672c89..454e2afb8abe0 100644
--- a/pandas/tests/io/formats/test_format.py
+++ b/pandas/tests/io/formats/test_format.py
@@ -527,6 +527,45 @@ def test_str_max_colwidth(self):
"1 foo bar stuff 1"
)
+ def test_to_string_truncate(self):
+ # GH 9784 - dont truncate when calling DataFrame.to_string
+ df = pd.DataFrame(
+ [
+ {
+ "a": "foo",
+ "b": "bar",
+ "c": "let's make this a very VERY long line that is longer "
+ "than the default 50 character limit",
+ "d": 1,
+ },
+ {"a": "foo", "b": "bar", "c": "stuff", "d": 1},
+ ]
+ )
+ df.set_index(["a", "b", "c"])
+ assert df.to_string() == (
+ " a b "
+ " c d\n"
+ "0 foo bar let's make this a very VERY long line t"
+ "hat is longer than the default 50 character limit 1\n"
+ "1 foo bar "
+ " stuff 1"
+ )
+ with option_context("max_colwidth", 20):
+ # the display option has no effect on the to_string method
+ assert df.to_string() == (
+ " a b "
+ " c d\n"
+ "0 foo bar let's make this a very VERY long line t"
+ "hat is longer than the default 50 character limit 1\n"
+ "1 foo bar "
+ " stuff 1"
+ )
+ assert df.to_string(max_colwidth=20) == (
+ " a b c d\n"
+ "0 foo bar let's make this ... 1\n"
+ "1 foo bar stuff 1"
+ )
+
def test_auto_detect(self):
term_width, term_height = get_terminal_size()
fac = 1.05 # Arbitrary large factor to exceed term width
| I modeled this off of https://github.com/pandas-dev/pandas/pull/24841. Some alternatives I considered:
* Instead of setting the option_context here, we could thread the parameter down through the formatter internals. I tried this and started finding a number of edge cases and bugs. The issue only occurs in a narrow case (when the user explicitly calls `to_string`), because most of the time, when representing a DataFrame, the user *will* want long strings truncated for readability. So the safest approach is to handle it at the top level without interfering with lower-level formatters.
* Series.to_string() could arguably benefit from the same treatment, although that wasn't mentioned in the original issue (and I have never found the need to use it personally) so I didn't bring that in.
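The top-level design choice can be sketched with the public options API; this is just an illustration of how `option_context` scopes an override and restores the previous value on exit (no new behavior is assumed):

```python
import pandas as pd

# option_context temporarily overrides a display option; the previous
# value is restored automatically when the block exits.
prev = pd.get_option("display.max_colwidth")
with pd.option_context("display.max_colwidth", 20):
    # inside the block the override is active
    assert pd.get_option("display.max_colwidth") == 20
# outside the block the original value is back
assert pd.get_option("display.max_colwidth") == prev
```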
Here's an example on a real dataset showing long columns preserved in a text file produced by to_string():

Additional manual testing:
* Main use case: by default there is no limit and the display options are ignored, but the user can still override:
```
>>> print(df.to_string())
A B
0 NaN NaN
1 -1.0000 foo
2 -2.1234 foooo
3 3.0000 fooooo
4 4.0000 bar
>>> with option_context('display.max_colwidth', 5):
... print(df.to_string())
...
A B
0 NaN NaN
1 -1.0000 foo
2 -2.1234 foooo
3 3.0000 fooooo
4 4.0000 bar
>>> print(df.to_string(max_colwidth=5))
A B
0 NaN NaN
1 -1... foo
2 -2... f...
3 3... f...
4 4... bar
```
* The string representation of DataFrame **does** still use the display options (so it's only the explicit `to_string` that doesn't):
```
>>> with option_context('display.max_colwidth', 5):
... print(str(df))
...
A B
0 NaN NaN
1 -1... foo
2 -2... f...
3 3... f...
4 4... bar
```
* The new parameter accepts None and non-negative ints, but rejects anything else:
```
>>> print(df.to_string(max_colwidth=-5))
...
raise ValueError(msg)
ValueError: Value must be a nonnegative integer or None
```
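Since the option itself now uses the same non-negative validator, setting a negative width through the options API is rejected as well. A small sketch against the public API (the exact error message may vary by version):

```python
import pandas as pd

# The display option shares the non-negative validator, so a negative
# width is rejected at the option level too.
try:
    pd.set_option("display.max_colwidth", -5)
except ValueError as err:
    print(f"rejected: {err}")
```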
- [ ] closes #9784
- [ ] tests added / passed
- [ ] passes `black pandas`
- [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/28052 | 2019-08-21T03:10:21Z | 2019-09-16T02:33:40Z | 2019-09-16T02:33:40Z | 2019-09-16T04:40:14Z |
TST: fix compression tests when run without virtualenv/condaenv | diff --git a/pandas/tests/io/test_compression.py b/pandas/tests/io/test_compression.py
index 16ca1109f266c..d68b6a1effaa0 100644
--- a/pandas/tests/io/test_compression.py
+++ b/pandas/tests/io/test_compression.py
@@ -1,6 +1,7 @@
import contextlib
import os
import subprocess
+import sys
import textwrap
import warnings
@@ -139,7 +140,7 @@ def test_with_missing_lzma():
import pandas
"""
)
- subprocess.check_output(["python", "-c", code])
+ subprocess.check_output([sys.executable, "-c", code])
def test_with_missing_lzma_runtime():
@@ -156,4 +157,4 @@ def test_with_missing_lzma_runtime():
df.to_csv('foo.csv', compression='xz')
"""
)
- subprocess.check_output(["python", "-c", code])
+ subprocess.check_output([sys.executable, "-c", code])
| sys.executable is the pattern we use elsewhere in subprocess tests. | https://api.github.com/repos/pandas-dev/pandas/pulls/28051 | 2019-08-21T02:46:32Z | 2019-08-23T18:11:50Z | 2019-08-23T18:11:50Z | 2019-08-23T18:22:18Z |
BUG: timedelta64(NaT) incorrectly treated as datetime in some dataframe ops | diff --git a/doc/source/whatsnew/v1.0.0.rst b/doc/source/whatsnew/v1.0.0.rst
index b02769322e013..173cc6b6b483c 100644
--- a/doc/source/whatsnew/v1.0.0.rst
+++ b/doc/source/whatsnew/v1.0.0.rst
@@ -180,7 +180,8 @@ Datetimelike
- Addition and subtraction of integer or integer-dtype arrays with :class:`Timestamp` will now raise ``NullFrequencyError`` instead of ``ValueError`` (:issue:`28268`)
- Bug in :class:`Series` and :class:`DataFrame` with integer dtype failing to raise ``TypeError`` when adding or subtracting a ``np.datetime64`` object (:issue:`28080`)
- Bug in :class:`Week` with ``weekday`` incorrectly raising ``AttributeError`` instead of ``TypeError`` when adding or subtracting an invalid type (:issue:`28530`)
-
+- Bug in :class:`DataFrame` arithmetic operations when operating with a :class:`Series` with dtype `'timedelta64[ns]'` (:issue:`28049`)
+-
Timedelta
^^^^^^^^^
diff --git a/pandas/core/ops/__init__.py b/pandas/core/ops/__init__.py
index ca4f35514f2a5..f53b5045abff3 100644
--- a/pandas/core/ops/__init__.py
+++ b/pandas/core/ops/__init__.py
@@ -498,8 +498,19 @@ def column_op(a, b):
# in which case we specifically want to operate row-by-row
assert right.index.equals(left.columns)
- def column_op(a, b):
- return {i: func(a.iloc[:, i], b.iloc[i]) for i in range(len(a.columns))}
+ if right.dtype == "timedelta64[ns]":
+ # ensure we treat NaT values as the correct dtype
+ # Note: we do not do this unconditionally as it may be lossy or
+ # expensive for EA dtypes.
+ right = np.asarray(right)
+
+ def column_op(a, b):
+ return {i: func(a.iloc[:, i], b[i]) for i in range(len(a.columns))}
+
+ else:
+
+ def column_op(a, b):
+ return {i: func(a.iloc[:, i], b.iloc[i]) for i in range(len(a.columns))}
elif isinstance(right, ABCSeries):
assert right.index.equals(left.index) # Handle other cases later
diff --git a/pandas/tests/frame/test_arithmetic.py b/pandas/tests/frame/test_arithmetic.py
index 706bc122c6d9e..fc3640503e385 100644
--- a/pandas/tests/frame/test_arithmetic.py
+++ b/pandas/tests/frame/test_arithmetic.py
@@ -457,6 +457,16 @@ def test_arith_flex_zero_len_raises(self):
class TestFrameArithmetic:
+ def test_td64_op_nat_casting(self):
+ # Make sure we don't accidentally treat timedelta64(NaT) as datetime64
+ # when calling dispatch_to_series in DataFrame arithmetic
+ ser = pd.Series(["NaT", "NaT"], dtype="timedelta64[ns]")
+ df = pd.DataFrame([[1, 2], [3, 4]])
+
+ result = df * ser
+ expected = pd.DataFrame({0: ser, 1: ser})
+ tm.assert_frame_equal(result, expected)
+
def test_df_add_2d_array_rowlike_broadcasts(self):
# GH#23000
arr = np.arange(6).reshape(3, 2)
| - [ ] closes #xxxx
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
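The reported behavior can be reproduced directly; this mirrors the new test above, and on a pandas version that includes the fix both result columns come back as `timedelta64[ns]` rather than being mis-cast:

```python
import pandas as pd

# Element-wise access on a timedelta64 Series yields pd.NaT, which is
# ambiguous between datetime64 and timedelta64; operating on the
# underlying ndarray preserves the timedelta64 dtype.
ser = pd.Series(["NaT", "NaT"], dtype="timedelta64[ns]")
df = pd.DataFrame([[1, 2], [3, 4]])

# Row-wise multiplication should produce timedelta64 columns of NaT.
result = df * ser
print(result.dtypes)
```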
| https://api.github.com/repos/pandas-dev/pandas/pulls/28049 | 2019-08-21T01:00:31Z | 2019-09-20T23:36:14Z | 2019-09-20T23:36:14Z | 2019-09-20T23:45:26Z |
Issue 20927 fix resolves read_sas error for dates/datetimes greater than 2262-04-11 | diff --git a/doc/source/whatsnew/v1.1.0.rst b/doc/source/whatsnew/v1.1.0.rst
index 20e2cce1a3dfa..3e7666c8636b9 100644
--- a/doc/source/whatsnew/v1.1.0.rst
+++ b/doc/source/whatsnew/v1.1.0.rst
@@ -781,6 +781,7 @@ I/O
timestamps with ``version="2.0"`` (:issue:`31652`).
- Bug in :meth:`read_csv` was raising `TypeError` when `sep=None` was used in combination with `comment` keyword (:issue:`31396`)
- Bug in :class:`HDFStore` that caused it to set to ``int64`` the dtype of a ``datetime64`` column when reading a DataFrame in Python 3 from fixed format written in Python 2 (:issue:`31750`)
+- :func:`read_sas()` now handles dates and datetimes larger than :attr:`Timestamp.max` returning them as :class:`datetime.datetime` objects (:issue:`20927`)
- Bug in :meth:`DataFrame.to_json` where ``Timedelta`` objects would not be serialized correctly with ``date_format="iso"`` (:issue:`28256`)
- :func:`read_csv` will raise a ``ValueError`` when the column names passed in `parse_dates` are missing in the Dataframe (:issue:`31251`)
- Bug in :meth:`read_excel` where a UTF-8 string with a high surrogate would cause a segmentation violation (:issue:`23809`)
diff --git a/pandas/io/sas/sas7bdat.py b/pandas/io/sas/sas7bdat.py
index 2bfcd500ee239..c8f1336bcec60 100644
--- a/pandas/io/sas/sas7bdat.py
+++ b/pandas/io/sas/sas7bdat.py
@@ -14,12 +14,12 @@
http://collaboration.cmc.ec.gc.ca/science/rpn/biblio/ddj/Website/articles/CUJ/1992/9210/ross/ross.htm
"""
from collections import abc
-from datetime import datetime
+from datetime import datetime, timedelta
import struct
import numpy as np
-from pandas.errors import EmptyDataError
+from pandas.errors import EmptyDataError, OutOfBoundsDatetime
import pandas as pd
@@ -29,6 +29,39 @@
from pandas.io.sas.sasreader import ReaderBase
+def _convert_datetimes(sas_datetimes: pd.Series, unit: str) -> pd.Series:
+ """
+ Convert to Timestamp if possible, otherwise to datetime.datetime.
+ SAS float64 lacks precision for more than ms resolution so the fit
+ to datetime.datetime is ok.
+
+ Parameters
+ ----------
+ sas_datetimes : {Series, Sequence[float]}
+ Dates or datetimes in SAS
+ unit : {str}
+ "d" if the floats represent dates, "s" for datetimes
+
+ Returns
+ -------
+ Series
+ Series of datetime64 dtype or datetime.datetime.
+ """
+ try:
+ return pd.to_datetime(sas_datetimes, unit=unit, origin="1960-01-01")
+ except OutOfBoundsDatetime:
+ if unit == "s":
+ return sas_datetimes.apply(
+ lambda sas_float: datetime(1960, 1, 1) + timedelta(seconds=sas_float)
+ )
+ elif unit == "d":
+ return sas_datetimes.apply(
+ lambda sas_float: datetime(1960, 1, 1) + timedelta(days=sas_float)
+ )
+ else:
+ raise ValueError("unit must be 'd' or 's'")
+
+
class _subheader_pointer:
pass
@@ -706,15 +739,10 @@ def _chunk_to_dataframe(self):
rslt[name] = self._byte_chunk[jb, :].view(dtype=self.byte_order + "d")
rslt[name] = np.asarray(rslt[name], dtype=np.float64)
if self.convert_dates:
- unit = None
if self.column_formats[j] in const.sas_date_formats:
- unit = "d"
+ rslt[name] = _convert_datetimes(rslt[name], "d")
elif self.column_formats[j] in const.sas_datetime_formats:
- unit = "s"
- if unit:
- rslt[name] = pd.to_datetime(
- rslt[name], unit=unit, origin="1960-01-01"
- )
+ rslt[name] = _convert_datetimes(rslt[name], "s")
jb += 1
elif self._column_types[j] == b"s":
rslt[name] = self._string_chunk[js, :]
diff --git a/pandas/tests/io/sas/data/max_sas_date.sas7bdat b/pandas/tests/io/sas/data/max_sas_date.sas7bdat
new file mode 100644
index 0000000000000..b7838ebdcfeea
Binary files /dev/null and b/pandas/tests/io/sas/data/max_sas_date.sas7bdat differ
diff --git a/pandas/tests/io/sas/test_sas7bdat.py b/pandas/tests/io/sas/test_sas7bdat.py
index 62e9ac6929c8e..8c14f9de9f61c 100644
--- a/pandas/tests/io/sas/test_sas7bdat.py
+++ b/pandas/tests/io/sas/test_sas7bdat.py
@@ -3,6 +3,7 @@
import os
from pathlib import Path
+import dateutil.parser
import numpy as np
import pytest
@@ -214,3 +215,94 @@ def test_zero_variables(datapath):
fname = datapath("io", "sas", "data", "zero_variables.sas7bdat")
with pytest.raises(EmptyDataError):
pd.read_sas(fname)
+
+
+def round_datetime_to_ms(ts):
+ if isinstance(ts, datetime):
+ return ts.replace(microsecond=int(round(ts.microsecond, -3) / 1000) * 1000)
+ elif isinstance(ts, str):
+ _ts = dateutil.parser.parse(timestr=ts)
+ return _ts.replace(microsecond=int(round(_ts.microsecond, -3) / 1000) * 1000)
+ else:
+ return ts
+
+
+def test_max_sas_date(datapath):
+ # GH 20927
+ # NB. max datetime in SAS dataset is 31DEC9999:23:59:59.999
+ # but this is read as 29DEC9999:23:59:59.998993 by a buggy
+ # sas7bdat module
+ fname = datapath("io", "sas", "data", "max_sas_date.sas7bdat")
+ df = pd.read_sas(fname, encoding="iso-8859-1")
+
+ # SAS likes to left pad strings with spaces - lstrip before comparing
+ df = df.applymap(lambda x: x.lstrip() if isinstance(x, str) else x)
+ # GH 19732: Timestamps imported from sas will incur floating point errors
+ try:
+ df["dt_as_dt"] = df["dt_as_dt"].dt.round("us")
+ except pd._libs.tslibs.np_datetime.OutOfBoundsDatetime:
+ df = df.applymap(round_datetime_to_ms)
+ except AttributeError:
+ df["dt_as_dt"] = df["dt_as_dt"].apply(round_datetime_to_ms)
+ # if there are any date/times > pandas.Timestamp.max then ALL in that chunk
+ # are returned as datetime.datetime
+ expected = pd.DataFrame(
+ {
+ "text": ["max", "normal"],
+ "dt_as_float": [253717747199.999, 1880323199.999],
+ "dt_as_dt": [
+ datetime(9999, 12, 29, 23, 59, 59, 999000),
+ datetime(2019, 8, 1, 23, 59, 59, 999000),
+ ],
+ "date_as_float": [2936547.0, 21762.0],
+ "date_as_date": [datetime(9999, 12, 29), datetime(2019, 8, 1)],
+ },
+ columns=["text", "dt_as_float", "dt_as_dt", "date_as_float", "date_as_date"],
+ )
+ tm.assert_frame_equal(df, expected)
+
+
+def test_max_sas_date_iterator(datapath):
+ # GH 20927
+ # when called as an iterator, only those chunks with a date > pd.Timestamp.max
+ # are returned as datetime.datetime, if this happens that whole chunk is returned
+ # as datetime.datetime
+ col_order = ["text", "dt_as_float", "dt_as_dt", "date_as_float", "date_as_date"]
+ fname = datapath("io", "sas", "data", "max_sas_date.sas7bdat")
+ results = []
+ for df in pd.read_sas(fname, encoding="iso-8859-1", chunksize=1):
+ # SAS likes to left pad strings with spaces - lstrip before comparing
+ df = df.applymap(lambda x: x.lstrip() if isinstance(x, str) else x)
+ # GH 19732: Timestamps imported from sas will incur floating point errors
+ try:
+ df["dt_as_dt"] = df["dt_as_dt"].dt.round("us")
+ except pd._libs.tslibs.np_datetime.OutOfBoundsDatetime:
+ df = df.applymap(round_datetime_to_ms)
+ except AttributeError:
+ df["dt_as_dt"] = df["dt_as_dt"].apply(round_datetime_to_ms)
+ df.reset_index(inplace=True, drop=True)
+ results.append(df)
+ expected = [
+ pd.DataFrame(
+ {
+ "text": ["max"],
+ "dt_as_float": [253717747199.999],
+ "dt_as_dt": [datetime(9999, 12, 29, 23, 59, 59, 999000)],
+ "date_as_float": [2936547.0],
+ "date_as_date": [datetime(9999, 12, 29)],
+ },
+ columns=col_order,
+ ),
+ pd.DataFrame(
+ {
+ "text": ["normal"],
+ "dt_as_float": [1880323199.999],
+ "dt_as_dt": [np.datetime64("2019-08-01 23:59:59.999")],
+ "date_as_float": [21762.0],
+ "date_as_date": [np.datetime64("2019-08-01")],
+ },
+ columns=col_order,
+ ),
+ ]
+ for result, expected in zip(results, expected):
+ tm.assert_frame_equal(result, expected)
| - [x] closes #20927
- [x] tests added / passed
- [x] passes `black pandas`
- [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
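The limitation driving this change can be illustrated with a short sketch (the day offset is taken from the test fixture above):

```python
from datetime import datetime, timedelta

import pandas as pd

# Timestamps are nanosecond-resolution int64, which caps the range at
# 2262-04-11; anything later overflows.
print(pd.Timestamp.max)

# Past that bound, stdlib datetime still works: 2936547 days after the
# SAS epoch (1960-01-01) is 9999-12-29, matching the fixture above.
big = datetime(1960, 1, 1) + timedelta(days=2936547)
print(big.date())
```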
| https://api.github.com/repos/pandas-dev/pandas/pulls/28047 | 2019-08-20T23:07:48Z | 2020-05-25T21:47:53Z | 2020-05-25T21:47:53Z | 2023-10-27T16:40:11Z |
Backport PR #27991 on branch 0.25.x (DataFrame html repr: also follow min_rows setting) | diff --git a/doc/source/whatsnew/v0.25.1.rst b/doc/source/whatsnew/v0.25.1.rst
index 44cc38662d976..9b3e7d24d9901 100644
--- a/doc/source/whatsnew/v0.25.1.rst
+++ b/doc/source/whatsnew/v0.25.1.rst
@@ -106,6 +106,7 @@ I/O
^^^
- Avoid calling ``S3File.s3`` when reading parquet, as this was removed in s3fs version 0.3.0 (:issue:`27756`)
- Better error message when a negative header is passed in :func:`pandas.read_csv` (:issue:`27779`)
+- Follow the ``min_rows`` display option (introduced in v0.25.0) correctly in the html repr in the notebook (:issue:`27991`).
-
Plotting
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 1f85b31be69b2..a69abaeacd18b 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -673,15 +673,19 @@ def _repr_html_(self):
if get_option("display.notebook_repr_html"):
max_rows = get_option("display.max_rows")
+ min_rows = get_option("display.min_rows")
max_cols = get_option("display.max_columns")
show_dimensions = get_option("display.show_dimensions")
- return self.to_html(
+ formatter = fmt.DataFrameFormatter(
+ self,
max_rows=max_rows,
+ min_rows=min_rows,
max_cols=max_cols,
show_dimensions=show_dimensions,
- notebook=True,
)
+ formatter.to_html(notebook=True)
+ return formatter.buf.getvalue()
else:
return None
diff --git a/pandas/tests/io/formats/test_format.py b/pandas/tests/io/formats/test_format.py
index ad47f714c9550..83ef2bebd1b69 100644
--- a/pandas/tests/io/formats/test_format.py
+++ b/pandas/tests/io/formats/test_format.py
@@ -422,28 +422,35 @@ def test_repr_min_rows(self):
# default setting no truncation even if above min_rows
assert ".." not in repr(df)
+ assert ".." not in df._repr_html_()
df = pd.DataFrame({"a": range(61)})
# default of max_rows 60 triggers truncation if above
assert ".." in repr(df)
+ assert ".." in df._repr_html_()
with option_context("display.max_rows", 10, "display.min_rows", 4):
# truncated after first two rows
assert ".." in repr(df)
assert "2 " not in repr(df)
+ assert "..." in df._repr_html_()
+ assert "<td>2</td>" not in df._repr_html_()
with option_context("display.max_rows", 12, "display.min_rows", None):
# when set to None, follow value of max_rows
assert "5 5" in repr(df)
+ assert "<td>5</td>" in df._repr_html_()
with option_context("display.max_rows", 10, "display.min_rows", 12):
# when set value higher as max_rows, use the minimum
assert "5 5" not in repr(df)
+ assert "<td>5</td>" not in df._repr_html_()
with option_context("display.max_rows", None, "display.min_rows", 12):
# max_rows of None -> never truncate
assert ".." not in repr(df)
+ assert ".." not in df._repr_html_()
def test_str_max_colwidth(self):
# GH 7856
| Backport PR #27991: DataFrame html repr: also follow min_rows setting | https://api.github.com/repos/pandas-dev/pandas/pulls/28044 | 2019-08-20T19:24:57Z | 2019-08-21T17:48:47Z | 2019-08-21T17:48:46Z | 2019-08-21T17:48:47Z |
DOC: Add punctuation to IntervalArray docstrings | diff --git a/pandas/core/arrays/interval.py b/pandas/core/arrays/interval.py
index 9cb2721b33634..7a14d6f1b619a 100644
--- a/pandas/core/arrays/interval.py
+++ b/pandas/core/arrays/interval.py
@@ -361,7 +361,7 @@ def from_arrays(cls, left, right, closed="right", copy=False, dtype=None):
_interval_shared_docs[
"from_tuples"
] = """
- Construct an %(klass)s from an array-like of tuples
+ Construct an %(klass)s from an array-like of tuples.
Parameters
----------
@@ -854,7 +854,7 @@ def _format_space(self):
def left(self):
"""
Return the left endpoints of each Interval in the IntervalArray as
- an Index
+ an Index.
"""
return self._left
@@ -862,7 +862,7 @@ def left(self):
def right(self):
"""
Return the right endpoints of each Interval in the IntervalArray as
- an Index
+ an Index.
"""
return self._right
@@ -870,7 +870,7 @@ def right(self):
def closed(self):
"""
Whether the intervals are closed on the left-side, right-side, both or
- neither
+ neither.
"""
return self._closed
@@ -878,7 +878,7 @@ def closed(self):
"set_closed"
] = """
Return an %(klass)s identical to the current one, but closed on the
- specified side
+ specified side.
.. versionadded:: 0.24.0
@@ -917,7 +917,7 @@ def set_closed(self, closed):
def length(self):
"""
Return an Index with entries denoting the length of each Interval in
- the IntervalArray
+ the IntervalArray.
"""
try:
return self.right - self.left
@@ -945,7 +945,7 @@ def mid(self):
] = """
Return True if the %(klass)s is non-overlapping (no Intervals share
points) and is either monotonic increasing or monotonic decreasing,
- else False
+ else False.
"""
# https://github.com/python/mypy/issues/1362
# Mypy does not support decorated properties
@@ -995,7 +995,7 @@ def __array__(self, dtype=None):
_interval_shared_docs[
"to_tuples"
] = """
- Return an %(return_type)s of tuples of the form (left, right)
+ Return an %(return_type)s of tuples of the form (left, right).
Parameters
----------
| - [ ] xref #27979
- [ ] tests added / passed
- [ ] passes `black pandas`
- [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
Added missing periods to the IntervalArray docstrings, as suggested by @datapythonista, for the following attributes and methods:
```
pandas.arrays.IntervalArray.left
pandas.arrays.IntervalArray.right
pandas.arrays.IntervalArray.closed
pandas.arrays.IntervalArray.mid
pandas.arrays.IntervalArray.length
pandas.arrays.IntervalArray.is_non_overlapping_monotonic
pandas.arrays.IntervalArray.from_tuples
pandas.arrays.IntervalArray.set_closed
pandas.arrays.IntervalArray.to_tuples
``` | https://api.github.com/repos/pandas-dev/pandas/pulls/28043 | 2019-08-20T19:14:21Z | 2019-08-21T06:28:33Z | 2019-08-21T06:28:33Z | 2019-08-21T06:28:42Z |
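The `_interval_shared_docs` entries edited above are `%`-style templates, so a single docstring can be rendered for both `IntervalArray` and `IntervalIndex` via `%(klass)s` substitution. A minimal sketch of that pattern (the names below are illustrative, not pandas internals):

```python
# Minimal sketch of the %-substitution docstring-sharing pattern seen in
# _interval_shared_docs above; names here are illustrative, not pandas code.
_shared_docs = {
    "from_tuples": "Construct an %(klass)s from an array-like of tuples."
}

def from_tuples(data):
    return list(data)

# Render the shared template for a concrete class name.
from_tuples.__doc__ = _shared_docs["from_tuples"] % {"klass": "IntervalArrayLike"}
print(from_tuples.__doc__)
```

Because the period is part of the shared template, fixing it once fixes every class that renders the template.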
Backport PR #26419 on branch 0.25.x (Fix GroupBy nth Handling with Observed=False) | diff --git a/doc/source/whatsnew/v0.25.1.rst b/doc/source/whatsnew/v0.25.1.rst
index ac3d645684fda..426822c19fd6f 100644
--- a/doc/source/whatsnew/v0.25.1.rst
+++ b/doc/source/whatsnew/v0.25.1.rst
@@ -122,6 +122,7 @@ Groupby/resample/rolling
^^^^^^^^^^^^^^^^^^^^^^^^
- Bug in :meth:`pandas.core.groupby.DataFrameGroupBy.transform` where applying a timezone conversion lambda function would drop timezone information (:issue:`27496`)
+- Bug in :meth:`pandas.core.groupby.GroupBy.nth` where ``observed=False`` was being ignored for Categorical groupers (:issue:`26385`)
- Bug in windowing over read-only arrays (:issue:`27766`)
-
-
diff --git a/pandas/core/groupby/groupby.py b/pandas/core/groupby/groupby.py
index 9aba9723e0546..b852513e454a2 100644
--- a/pandas/core/groupby/groupby.py
+++ b/pandas/core/groupby/groupby.py
@@ -1771,7 +1771,11 @@ def nth(self, n: Union[int, List[int]], dropna: Optional[str] = None) -> DataFra
if not self.as_index:
return out
- out.index = self.grouper.result_index[ids[mask]]
+ result_index = self.grouper.result_index
+ out.index = result_index[ids[mask]]
+
+ if not self.observed and isinstance(result_index, CategoricalIndex):
+ out = out.reindex(result_index)
return out.sort_index() if self.sort else out
diff --git a/pandas/tests/groupby/test_categorical.py b/pandas/tests/groupby/test_categorical.py
index 99cc4cf0ffbd1..9750a36d9350b 100644
--- a/pandas/tests/groupby/test_categorical.py
+++ b/pandas/tests/groupby/test_categorical.py
@@ -434,6 +434,21 @@ def test_observed_groups_with_nan(observed):
tm.assert_dict_equal(result, expected)
+def test_observed_nth():
+ # GH 26385
+ cat = pd.Categorical(["a", np.nan, np.nan], categories=["a", "b", "c"])
+ ser = pd.Series([1, 2, 3])
+ df = pd.DataFrame({"cat": cat, "ser": ser})
+
+ result = df.groupby("cat", observed=False)["ser"].nth(0)
+
+ index = pd.Categorical(["a", "b", "c"], categories=["a", "b", "c"])
+ expected = pd.Series([1, np.nan, np.nan], index=index, name="ser")
+ expected.index.name = "cat"
+
+ tm.assert_series_equal(result, expected)
+
+
def test_dataframe_categorical_with_nan(observed):
# GH 21151
s1 = Categorical([np.nan, "a", np.nan, "a"], categories=["a", "b", "c"])
| Backport PR #26419: Fix GroupBy nth Handling with Observed=False | https://api.github.com/repos/pandas-dev/pandas/pulls/28042 | 2019-08-20T19:02:23Z | 2019-08-20T21:13:21Z | 2019-08-20T21:13:21Z | 2019-08-20T21:13:21Z |
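The backported test above pins the fixed behavior of `nth`. For background, `observed=False` asks a categorical grouper to keep every category in the result index, even ones absent from the data; the bug was that `nth` ignored this. A minimal sketch of those semantics using the stable `size` aggregation (not the patched `nth` code path itself):

```python
import numpy as np
import pandas as pd

# Categorical grouper with unobserved categories "b" and "c" (GH 26385 setup).
cat = pd.Categorical(["a", np.nan, np.nan], categories=["a", "b", "c"])
df = pd.DataFrame({"cat": cat, "ser": [1, 2, 3]})

# observed=False: the result index should cover all three categories,
# even though only "a" actually appears among the group keys.
sizes = df.groupby("cat", observed=False).size()
print(sizes)
```

The fix makes `nth` reindex its result against the full `CategoricalIndex` in the same way, instead of silently dropping the unobserved categories.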
Backport PR #27664: BUG: added a check for if obj is instance of type … | diff --git a/doc/source/whatsnew/v0.25.1.rst b/doc/source/whatsnew/v0.25.1.rst
index ac3d645684fda..124a2ced534c4 100644
--- a/doc/source/whatsnew/v0.25.1.rst
+++ b/doc/source/whatsnew/v0.25.1.rst
@@ -91,7 +91,7 @@ Indexing
Missing
^^^^^^^
--
+- Bug in :func:`pandas.isnull` or :func:`pandas.isna` when the input is a type e.g. `type(pandas.Series())` (:issue:`27482`)
-
-
diff --git a/pandas/core/dtypes/missing.py b/pandas/core/dtypes/missing.py
index bea73d72b91c9..37f790e41ad04 100644
--- a/pandas/core/dtypes/missing.py
+++ b/pandas/core/dtypes/missing.py
@@ -131,6 +131,8 @@ def _isna_new(obj):
# hack (for now) because MI registers as ndarray
elif isinstance(obj, ABCMultiIndex):
raise NotImplementedError("isna is not defined for MultiIndex")
+ elif isinstance(obj, type):
+ return False
elif isinstance(
obj,
(
@@ -169,6 +171,8 @@ def _isna_old(obj):
# hack (for now) because MI registers as ndarray
elif isinstance(obj, ABCMultiIndex):
raise NotImplementedError("isna is not defined for MultiIndex")
+ elif isinstance(obj, type):
+ return False
elif isinstance(obj, (ABCSeries, np.ndarray, ABCIndexClass)):
return _isna_ndarraylike_old(obj)
elif isinstance(obj, ABCGeneric):
diff --git a/pandas/tests/dtypes/test_missing.py b/pandas/tests/dtypes/test_missing.py
index a688dec50bc95..bbc485ecf94f2 100644
--- a/pandas/tests/dtypes/test_missing.py
+++ b/pandas/tests/dtypes/test_missing.py
@@ -86,6 +86,10 @@ def test_isna_isnull(self, isna_f):
assert not isna_f(np.inf)
assert not isna_f(-np.inf)
+ # type
+ assert not isna_f(type(pd.Series()))
+ assert not isna_f(type(pd.DataFrame()))
+
# series
for s in [
tm.makeFloatSeries(),
| Manual backport of #27664 | https://api.github.com/repos/pandas-dev/pandas/pulls/28041 | 2019-08-20T18:12:43Z | 2019-08-20T19:16:53Z | 2019-08-20T19:16:53Z | 2019-08-20T19:16:57Z |
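The patch above special-cases `isinstance(obj, type)` so that passing a class object, rather than an instance, no longer falls through to the array-like branches of `_isna`. A quick sketch of the fixed behavior (assumes a pandas build that includes this check):

```python
import pandas as pd

# After the fix, passing a class (not an instance) to isna returns False
# instead of being mis-handled by the array-like code paths.
print(pd.isna(pd.Series))     # class objects are never "missing"
print(pd.isna(pd.DataFrame))
print(pd.isna(None))          # instances behave as before
```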
Backport PR #28024 on branch 0.25.x (BUG: rfloordiv with fill_value, closes #27464) | diff --git a/doc/source/whatsnew/v0.25.1.rst b/doc/source/whatsnew/v0.25.1.rst
index ac3d645684fda..a82df806a9de4 100644
--- a/doc/source/whatsnew/v0.25.1.rst
+++ b/doc/source/whatsnew/v0.25.1.rst
@@ -55,7 +55,7 @@ Numeric
- Bug in :meth:`Series.interpolate` when using a timezone aware :class:`DatetimeIndex` (:issue:`27548`)
- Bug when printing negative floating point complex numbers would raise an ``IndexError`` (:issue:`27484`)
- Bug where :class:`DataFrame` arithmetic operators such as :meth:`DataFrame.mul` with a :class:`Series` with axis=1 would raise an ``AttributeError`` on :class:`DataFrame` larger than the minimum threshold to invoke numexpr (:issue:`27636`)
--
+- Bug in :class:`DataFrame` arithmetic where missing values in results were incorrectly masked with ``NaN`` instead of ``Inf`` (:issue:`27464`)
Conversion
^^^^^^^^^^
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 0570b9af2d256..1f85b31be69b2 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -111,6 +111,7 @@
sanitize_index,
to_arrays,
)
+from pandas.core.ops.missing import dispatch_fill_zeros
from pandas.core.series import Series
from pandas.io.formats import console, format as fmt
@@ -5365,7 +5366,9 @@ def _arith_op(left, right):
# iterate over columns
return ops.dispatch_to_series(this, other, _arith_op)
else:
- result = _arith_op(this.values, other.values)
+ with np.errstate(all="ignore"):
+ result = _arith_op(this.values, other.values)
+ result = dispatch_fill_zeros(func, this.values, other.values, result)
return self._constructor(
result, index=new_index, columns=new_columns, copy=False
)
diff --git a/pandas/tests/arithmetic/test_numeric.py b/pandas/tests/arithmetic/test_numeric.py
index 2b23790e4ccd3..d686d9f90a5a4 100644
--- a/pandas/tests/arithmetic/test_numeric.py
+++ b/pandas/tests/arithmetic/test_numeric.py
@@ -1227,3 +1227,36 @@ def test_addsub_arithmetic(self, dtype, delta):
tm.assert_index_equal(index + index, 2 * index)
tm.assert_index_equal(index - index, 0 * index)
assert not (index - index).empty
+
+
+def test_fill_value_inf_masking():
+ # GH #27464 make sure we mask 0/1 with Inf and not NaN
+ df = pd.DataFrame({"A": [0, 1, 2], "B": [1.1, None, 1.1]})
+
+ other = pd.DataFrame({"A": [1.1, 1.2, 1.3]}, index=[0, 2, 3])
+
+ result = df.rfloordiv(other, fill_value=1)
+
+ expected = pd.DataFrame(
+ {"A": [np.inf, 1.0, 0.0, 1.0], "B": [0.0, np.nan, 0.0, np.nan]}
+ )
+ tm.assert_frame_equal(result, expected)
+
+
+def test_dataframe_div_silenced():
+ # GH#26793
+ pdf1 = pd.DataFrame(
+ {
+ "A": np.arange(10),
+ "B": [np.nan, 1, 2, 3, 4] * 2,
+ "C": [np.nan] * 10,
+ "D": np.arange(10),
+ },
+ index=list("abcdefghij"),
+ columns=list("ABCD"),
+ )
+ pdf2 = pd.DataFrame(
+ np.random.randn(10, 4), index=list("abcdefghjk"), columns=list("ABCX")
+ )
+ with tm.assert_produces_warning(None):
+ pdf1.div(pdf2, fill_value=0)
| Backport PR #28024: BUG: rfloordiv with fill_value, closes #27464 | https://api.github.com/repos/pandas-dev/pandas/pulls/28040 | 2019-08-20T17:45:16Z | 2019-08-20T19:17:16Z | 2019-08-20T19:17:16Z | 2019-08-20T19:17:16Z |
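The whatsnew entry above describes masking missing results with ``Inf`` instead of ``NaN``. A reduced sketch of the scenario the new test covers: with `fill_value=1`, the slot where `other` has no value is filled with 1 before `other // df` is computed, so dividing by a zero in `df` should produce ``inf`` rather than a masked ``NaN`` (assumes a pandas build containing this fix):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"A": [0, 1, 2]})
other = pd.DataFrame({"A": [1.1, 1.2, 1.3]}, index=[0, 2, 3])

# rfloordiv computes other // df; fill_value=1 first fills slots where
# exactly one side is missing after alignment.
result = df.rfloordiv(other, fill_value=1)
# At index 0, 1.1 // 0 should be inf (division by zero), not NaN.
print(result)
```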
REF: boilerplate for ops internal consistency | diff --git a/pandas/core/arrays/categorical.py b/pandas/core/arrays/categorical.py
index 73d1db9bda8ed..817972b3356a2 100644
--- a/pandas/core/arrays/categorical.py
+++ b/pandas/core/arrays/categorical.py
@@ -8,7 +8,7 @@
from pandas._config import get_option
-from pandas._libs import algos as libalgos, hashtable as htable, lib
+from pandas._libs import algos as libalgos, hashtable as htable
from pandas.compat.numpy import function as nv
from pandas.util._decorators import (
Appender,
@@ -39,7 +39,7 @@
needs_i8_conversion,
)
from pandas.core.dtypes.dtypes import CategoricalDtype
-from pandas.core.dtypes.generic import ABCDataFrame, ABCIndexClass, ABCSeries
+from pandas.core.dtypes.generic import ABCIndexClass, ABCSeries
from pandas.core.dtypes.inference import is_hashable
from pandas.core.dtypes.missing import isna, notna
@@ -52,6 +52,7 @@
import pandas.core.common as com
from pandas.core.construction import array, extract_array, sanitize_array
from pandas.core.missing import interpolate_2d
+from pandas.core.ops.common import unpack_zerodim_and_defer
from pandas.core.sorting import nargsort
from pandas.io.formats import console
@@ -74,16 +75,14 @@
def _cat_compare_op(op):
opname = "__{op}__".format(op=op.__name__)
+ @unpack_zerodim_and_defer(opname)
def f(self, other):
# On python2, you can usually compare any type to any type, and
# Categoricals can be seen as a custom type, but having different
# results depending whether categories are the same or not is kind of
# insane, so be a bit stricter here and use the python3 idea of
# comparing only things of equal type.
- if isinstance(other, (ABCDataFrame, ABCSeries, ABCIndexClass)):
- return NotImplemented
- other = lib.item_from_zerodim(other)
if is_list_like(other) and len(other) != len(self):
# TODO: Could this fail if the categories are listlike objects?
raise ValueError("Lengths must match.")
diff --git a/pandas/core/arrays/datetimelike.py b/pandas/core/arrays/datetimelike.py
index 287ff9d618501..e52bc17fcc319 100644
--- a/pandas/core/arrays/datetimelike.py
+++ b/pandas/core/arrays/datetimelike.py
@@ -33,12 +33,7 @@
is_unsigned_integer_dtype,
pandas_dtype,
)
-from pandas.core.dtypes.generic import (
- ABCDataFrame,
- ABCIndexClass,
- ABCPeriodArray,
- ABCSeries,
-)
+from pandas.core.dtypes.generic import ABCIndexClass, ABCPeriodArray, ABCSeries
from pandas.core.dtypes.inference import is_array_like
from pandas.core.dtypes.missing import is_valid_nat_for_dtype, isna
@@ -46,6 +41,7 @@
from pandas.core import missing, nanops
from pandas.core.algorithms import checked_add_with_arr, take, unique1d, value_counts
import pandas.core.common as com
+from pandas.core.ops.common import unpack_zerodim_and_defer
from pandas.core.ops.invalid import make_invalid_op
from pandas.tseries import frequencies
@@ -1194,13 +1190,11 @@ def _time_shift(self, periods, freq=None):
# to be passed explicitly.
return self._generate_range(start=start, end=end, periods=None, freq=self.freq)
+ @unpack_zerodim_and_defer("__add__")
def __add__(self, other):
- other = lib.item_from_zerodim(other)
- if isinstance(other, (ABCSeries, ABCDataFrame, ABCIndexClass)):
- return NotImplemented
# scalar others
- elif other is NaT:
+ if other is NaT:
result = self._add_nat()
elif isinstance(other, (Tick, timedelta, np.timedelta64)):
result = self._add_delta(other)
@@ -1248,13 +1242,11 @@ def __radd__(self, other):
# alias for __add__
return self.__add__(other)
+ @unpack_zerodim_and_defer("__sub__")
def __sub__(self, other):
- other = lib.item_from_zerodim(other)
- if isinstance(other, (ABCSeries, ABCDataFrame, ABCIndexClass)):
- return NotImplemented
# scalar others
- elif other is NaT:
+ if other is NaT:
result = self._sub_nat()
elif isinstance(other, (Tick, timedelta, np.timedelta64)):
result = self._add_delta(-other)
@@ -1343,11 +1335,11 @@ def __rsub__(self, other):
return -(self - other)
# FIXME: DTA/TDA/PA inplace methods should actually be inplace, GH#24115
- def __iadd__(self, other):
+ def __iadd__(self, other): # type: ignore
# alias for __add__
return self.__add__(other)
- def __isub__(self, other):
+ def __isub__(self, other): # type: ignore
# alias for __sub__
return self.__sub__(other)
diff --git a/pandas/core/arrays/datetimes.py b/pandas/core/arrays/datetimes.py
index 7cd103d12fa8a..8e3c727a14c99 100644
--- a/pandas/core/arrays/datetimes.py
+++ b/pandas/core/arrays/datetimes.py
@@ -40,12 +40,7 @@
pandas_dtype,
)
from pandas.core.dtypes.dtypes import DatetimeTZDtype
-from pandas.core.dtypes.generic import (
- ABCDataFrame,
- ABCIndexClass,
- ABCPandasArray,
- ABCSeries,
-)
+from pandas.core.dtypes.generic import ABCIndexClass, ABCPandasArray, ABCSeries
from pandas.core.dtypes.missing import isna
from pandas.core import ops
@@ -53,6 +48,7 @@
from pandas.core.arrays import datetimelike as dtl
from pandas.core.arrays._ranges import generate_regular_range
import pandas.core.common as com
+from pandas.core.ops.common import unpack_zerodim_and_defer
from pandas.core.ops.invalid import invalid_comparison
from pandas.tseries.frequencies import get_period_alias, to_offset
@@ -157,11 +153,8 @@ def _dt_array_cmp(cls, op):
opname = "__{name}__".format(name=op.__name__)
nat_result = opname == "__ne__"
+ @unpack_zerodim_and_defer(opname)
def wrapper(self, other):
- if isinstance(other, (ABCDataFrame, ABCSeries, ABCIndexClass)):
- return NotImplemented
-
- other = lib.item_from_zerodim(other)
if isinstance(other, (datetime, np.datetime64, str)):
if isinstance(other, (datetime, np.datetime64)):
diff --git a/pandas/core/arrays/integer.py b/pandas/core/arrays/integer.py
index 41d8bffd8c131..e167e556b244a 100644
--- a/pandas/core/arrays/integer.py
+++ b/pandas/core/arrays/integer.py
@@ -21,12 +21,12 @@
is_scalar,
)
from pandas.core.dtypes.dtypes import register_extension_dtype
-from pandas.core.dtypes.generic import ABCDataFrame, ABCIndexClass, ABCSeries
from pandas.core.dtypes.missing import isna, notna
from pandas.core import nanops, ops
from pandas.core.algorithms import take
from pandas.core.arrays import ExtensionArray, ExtensionOpsMixin
+from pandas.core.ops.common import unpack_zerodim_and_defer
from pandas.core.tools.numeric import to_numeric
@@ -602,13 +602,8 @@ def _values_for_argsort(self) -> np.ndarray:
def _create_comparison_method(cls, op):
op_name = op.__name__
+ @unpack_zerodim_and_defer(op.__name__)
def cmp_method(self, other):
-
- if isinstance(other, (ABCDataFrame, ABCSeries, ABCIndexClass)):
- # Rely on pandas to unbox and dispatch to us.
- return NotImplemented
-
- other = lib.item_from_zerodim(other)
mask = None
if isinstance(other, IntegerArray):
@@ -697,15 +692,14 @@ def _maybe_mask_result(self, result, mask, other, op_name):
def _create_arithmetic_method(cls, op):
op_name = op.__name__
+ @unpack_zerodim_and_defer(op.__name__)
def integer_arithmetic_method(self, other):
- if isinstance(other, (ABCDataFrame, ABCSeries, ABCIndexClass)):
- # Rely on pandas to unbox and dispatch to us.
- return NotImplemented
-
- other = lib.item_from_zerodim(other)
mask = None
+ if getattr(other, "ndim", 0) > 1:
+ raise NotImplementedError("can only perform ops with 1-d structures")
+
if isinstance(other, IntegerArray):
other, mask = other._data, other._mask
diff --git a/pandas/core/arrays/period.py b/pandas/core/arrays/period.py
index 78cc54db4b1b8..fdf4059fad569 100644
--- a/pandas/core/arrays/period.py
+++ b/pandas/core/arrays/period.py
@@ -4,7 +4,6 @@
import numpy as np
-from pandas._libs import lib
from pandas._libs.tslibs import (
NaT,
NaTType,
@@ -35,7 +34,6 @@
)
from pandas.core.dtypes.dtypes import PeriodDtype
from pandas.core.dtypes.generic import (
- ABCDataFrame,
ABCIndexClass,
ABCPeriodArray,
ABCPeriodIndex,
@@ -46,6 +44,7 @@
import pandas.core.algorithms as algos
from pandas.core.arrays import datetimelike as dtl
import pandas.core.common as com
+from pandas.core.ops.common import unpack_zerodim_and_defer
from pandas.tseries import frequencies
from pandas.tseries.offsets import DateOffset, Tick, _delta_to_tick
@@ -69,13 +68,10 @@ def _period_array_cmp(cls, op):
opname = "__{name}__".format(name=op.__name__)
nat_result = opname == "__ne__"
+ @unpack_zerodim_and_defer(opname)
def wrapper(self, other):
ordinal_op = getattr(self.asi8, opname)
- other = lib.item_from_zerodim(other)
- if isinstance(other, (ABCDataFrame, ABCSeries, ABCIndexClass)):
- return NotImplemented
-
if is_list_like(other) and len(other) != len(self):
raise ValueError("Lengths must match")
diff --git a/pandas/core/arrays/sparse/array.py b/pandas/core/arrays/sparse/array.py
index 14024401ea110..943dea4252499 100644
--- a/pandas/core/arrays/sparse/array.py
+++ b/pandas/core/arrays/sparse/array.py
@@ -34,12 +34,7 @@
is_string_dtype,
pandas_dtype,
)
-from pandas.core.dtypes.generic import (
- ABCDataFrame,
- ABCIndexClass,
- ABCSeries,
- ABCSparseArray,
-)
+from pandas.core.dtypes.generic import ABCIndexClass, ABCSeries, ABCSparseArray
from pandas.core.dtypes.missing import isna, na_value_for_dtype, notna
import pandas.core.algorithms as algos
@@ -49,6 +44,7 @@
from pandas.core.construction import sanitize_array
from pandas.core.missing import interpolate_2d
import pandas.core.ops as ops
+from pandas.core.ops.common import unpack_zerodim_and_defer
import pandas.io.formats.printing as printing
@@ -1410,12 +1406,8 @@ def sparse_unary_method(self):
def _create_arithmetic_method(cls, op):
op_name = op.__name__
+ @unpack_zerodim_and_defer(op_name)
def sparse_arithmetic_method(self, other):
- if isinstance(other, (ABCDataFrame, ABCSeries, ABCIndexClass)):
- # Rely on pandas to dispatch to us.
- return NotImplemented
-
- other = lib.item_from_zerodim(other)
if isinstance(other, SparseArray):
return _sparse_array_op(self, other, op, op_name)
@@ -1463,12 +1455,9 @@ def _create_comparison_method(cls, op):
if op_name in {"and_", "or_"}:
op_name = op_name[:-1]
+ @unpack_zerodim_and_defer(op_name)
def cmp_method(self, other):
- if isinstance(other, (ABCSeries, ABCIndexClass)):
- # Rely on pandas to unbox and dispatch to us.
- return NotImplemented
-
if not is_scalar(other) and not isinstance(other, type(self)):
# convert list-like to ndarray
other = np.asarray(other)
diff --git a/pandas/core/arrays/timedeltas.py b/pandas/core/arrays/timedeltas.py
index 21e07b5101a64..816beb758dd33 100644
--- a/pandas/core/arrays/timedeltas.py
+++ b/pandas/core/arrays/timedeltas.py
@@ -45,6 +45,7 @@
from pandas.core import nanops
from pandas.core.algorithms import checked_add_with_arr
import pandas.core.common as com
+from pandas.core.ops.common import unpack_zerodim_and_defer
from pandas.core.ops.invalid import invalid_comparison
from pandas.tseries.frequencies import to_offset
@@ -82,10 +83,8 @@ def _td_array_cmp(cls, op):
opname = "__{name}__".format(name=op.__name__)
nat_result = opname == "__ne__"
+ @unpack_zerodim_and_defer(opname)
def wrapper(self, other):
- other = lib.item_from_zerodim(other)
- if isinstance(other, (ABCDataFrame, ABCSeries, ABCIndexClass)):
- return NotImplemented
if _is_convertible_to_td(other) or other is NaT:
try:
diff --git a/pandas/core/indexes/range.py b/pandas/core/indexes/range.py
index 962ba8cc00557..f5cb435b8c1c2 100644
--- a/pandas/core/indexes/range.py
+++ b/pandas/core/indexes/range.py
@@ -21,7 +21,7 @@
is_scalar,
is_timedelta64_dtype,
)
-from pandas.core.dtypes.generic import ABCDataFrame, ABCSeries, ABCTimedeltaIndex
+from pandas.core.dtypes.generic import ABCTimedeltaIndex
from pandas.core import ops
import pandas.core.common as com
@@ -29,6 +29,7 @@
import pandas.core.indexes.base as ibase
from pandas.core.indexes.base import Index, _index_shared_docs
from pandas.core.indexes.numeric import Int64Index
+from pandas.core.ops.common import unpack_zerodim_and_defer
from pandas.io.formats.printing import pprint_thing
@@ -734,9 +735,8 @@ def __getitem__(self, key):
# fall back to Int64Index
return super().__getitem__(key)
+ @unpack_zerodim_and_defer("__floordiv__")
def __floordiv__(self, other):
- if isinstance(other, (ABCSeries, ABCDataFrame)):
- return NotImplemented
if is_integer(other) and other != 0:
if len(self) == 0 or self.start % other == 0 and self.step % other == 0:
@@ -772,10 +772,9 @@ def _make_evaluate_binop(op, step=False):
if False, use the existing step
"""
+ @unpack_zerodim_and_defer(op.__name__)
def _evaluate_numeric_binop(self, other):
- if isinstance(other, (ABCSeries, ABCDataFrame)):
- return NotImplemented
- elif isinstance(other, ABCTimedeltaIndex):
+ if isinstance(other, ABCTimedeltaIndex):
# Defer to TimedeltaIndex implementation
return NotImplemented
elif isinstance(other, (timedelta, np.timedelta64)):
diff --git a/pandas/core/ops/__init__.py b/pandas/core/ops/__init__.py
index 398fa9b0c1fc0..f7a1258894b89 100644
--- a/pandas/core/ops/__init__.py
+++ b/pandas/core/ops/__init__.py
@@ -29,6 +29,7 @@
logical_op,
)
from pandas.core.ops.array_ops import comp_method_OBJECT_ARRAY # noqa:F401
+from pandas.core.ops.common import unpack_zerodim_and_defer
from pandas.core.ops.dispatch import maybe_dispatch_ufunc_to_dunder_op # noqa:F401
from pandas.core.ops.dispatch import should_series_dispatch
from pandas.core.ops.docstrings import (
@@ -489,9 +490,8 @@ def _arith_method_SERIES(cls, op, special):
op_name = _get_op_name(op, special)
eval_kwargs = _gen_eval_kwargs(op_name)
+ @unpack_zerodim_and_defer(op_name)
def wrapper(left, right):
- if isinstance(right, ABCDataFrame):
- return NotImplemented
left, right = _align_method_SERIES(left, right)
res_name = get_op_result_name(left, right)
@@ -512,14 +512,11 @@ def _comp_method_SERIES(cls, op, special):
"""
op_name = _get_op_name(op, special)
+ @unpack_zerodim_and_defer(op_name)
def wrapper(self, other):
res_name = get_op_result_name(self, other)
- if isinstance(other, ABCDataFrame): # pragma: no cover
- # Defer to DataFrame implementation; fail early
- return NotImplemented
-
if isinstance(other, ABCSeries) and not self._indexed_same(other):
raise ValueError("Can only compare identically-labeled Series objects")
@@ -541,14 +538,11 @@ def _bool_method_SERIES(cls, op, special):
"""
op_name = _get_op_name(op, special)
+ @unpack_zerodim_and_defer(op_name)
def wrapper(self, other):
self, other = _align_method_SERIES(self, other, align_asobject=True)
res_name = get_op_result_name(self, other)
- if isinstance(other, ABCDataFrame):
- # Defer to DataFrame implementation; fail early
- return NotImplemented
-
lvalues = extract_array(self, extract_numpy=True)
rvalues = extract_array(other, extract_numpy=True)
diff --git a/pandas/core/ops/common.py b/pandas/core/ops/common.py
new file mode 100644
index 0000000000000..f4b16cf4a0cf2
--- /dev/null
+++ b/pandas/core/ops/common.py
@@ -0,0 +1,66 @@
+"""
+Boilerplate functions used in defining binary operations.
+"""
+from functools import wraps
+
+from pandas._libs.lib import item_from_zerodim
+
+from pandas.core.dtypes.generic import ABCDataFrame, ABCIndexClass, ABCSeries
+
+
+def unpack_zerodim_and_defer(name: str):
+ """
+ Boilerplate for pandas conventions in arithmetic and comparison methods.
+
+ Parameters
+ ----------
+ name : str
+
+ Returns
+ -------
+ decorator
+ """
+
+ def wrapper(method):
+ return _unpack_zerodim_and_defer(method, name)
+
+ return wrapper
+
+
+def _unpack_zerodim_and_defer(method, name: str):
+ """
+ Boilerplate for pandas conventions in arithmetic and comparison methods.
+
+ Ensure method returns NotImplemented when operating against "senior"
+ classes. Ensure zero-dimensional ndarrays are always unpacked.
+
+ Parameters
+ ----------
+ method : binary method
+ name : str
+
+ Returns
+ -------
+ method
+ """
+
+ is_cmp = name.strip("__") in {"eq", "ne", "lt", "le", "gt", "ge"}
+
+ @wraps(method)
+ def new_method(self, other):
+
+ if is_cmp and isinstance(self, ABCIndexClass) and isinstance(other, ABCSeries):
+ # For comparison ops, Index does *not* defer to Series
+ pass
+ else:
+ for cls in [ABCDataFrame, ABCSeries, ABCIndexClass]:
+ if isinstance(self, cls):
+ break
+ if isinstance(other, cls):
+ return NotImplemented
+
+ other = item_from_zerodim(other)
+
+ return method(self, other)
+
+ return new_method
diff --git a/pandas/tests/arithmetic/test_datetime64.py b/pandas/tests/arithmetic/test_datetime64.py
index 4d3d6e2df35db..1ba0930c06334 100644
--- a/pandas/tests/arithmetic/test_datetime64.py
+++ b/pandas/tests/arithmetic/test_datetime64.py
@@ -1029,6 +1029,7 @@ def test_dt64arr_add_sub_invalid(self, dti_freq, other, box_with_array):
[
"unsupported operand type",
"cannot (add|subtract)",
+ "cannot use operands with types",
"ufunc '?(add|subtract)'? cannot use operands with types",
]
)
| Progress towards #23853; further progress is pending resolution of #27911.
| https://api.github.com/repos/pandas-dev/pandas/pulls/28037 | 2019-08-20T15:12:34Z | 2019-11-14T16:31:55Z | 2019-11-14T16:31:55Z | 2019-11-14T16:36:00Z |
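The new `unpack_zerodim_and_defer` helper factors out two conventions that every binary method in the diff above used to repeat inline: return `NotImplemented` when the other operand is a "senior" class (so that class's own method gets a chance to handle the op), and unwrap zero-dimensional ndarrays to scalars. A toy sketch of the idea — `SeniorBox` and `Box` are illustrative stand-ins, not pandas classes:

```python
from functools import wraps

import numpy as np


class SeniorBox:
    """Stand-in for a 'senior' class (like DataFrame) that ops defer to."""


def unpack_zerodim_and_defer(method):
    # Toy version of the pandas helper: defer to senior classes and
    # unwrap 0-dimensional ndarrays before calling the real method.
    @wraps(method)
    def new_method(self, other):
        if isinstance(other, SeniorBox):
            return NotImplemented
        if isinstance(other, np.ndarray) and other.ndim == 0:
            other = other.item()
        return method(self, other)

    return new_method


class Box:
    def __init__(self, value):
        self.value = value

    @unpack_zerodim_and_defer
    def __add__(self, other):
        return Box(self.value + other)


b = Box(1) + np.array(2)  # the 0-d array is unwrapped to the scalar 2
print(b.value)            # 3
```

Centralizing this boilerplate in one decorator is what lets the diff delete the repeated `isinstance(..., (ABCDataFrame, ABCSeries, ABCIndexClass))` and `item_from_zerodim` checks from each array class.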
CLN: small ops optimizations | diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 97567192aa17a..04ddc9a53ad04 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -5298,12 +5298,19 @@ def _combine_frame(self, other, func, fill_value=None, level=None):
this, other = self.align(other, join="outer", level=level, copy=False)
new_index, new_columns = this.index, this.columns
- def _arith_op(left, right):
- # for the mixed_type case where we iterate over columns,
- # _arith_op(left, right) is equivalent to
- # left._binop(right, func, fill_value=fill_value)
- left, right = ops.fill_binop(left, right, fill_value)
- return func(left, right)
+ if fill_value is None:
+ # since _arith_op may be called in a loop, avoid function call
+ # overhead if possible by doing this check once
+ _arith_op = func
+
+ else:
+
+ def _arith_op(left, right):
+ # for the mixed_type case where we iterate over columns,
+ # _arith_op(left, right) is equivalent to
+ # left._binop(right, func, fill_value=fill_value)
+ left, right = ops.fill_binop(left, right, fill_value)
+ return func(left, right)
if ops.should_series_dispatch(this, other, func):
# iterate over columns
@@ -5318,7 +5325,7 @@ def _arith_op(left, right):
def _combine_match_index(self, other, func, level=None):
left, right = self.align(other, join="outer", axis=0, level=level, copy=False)
- assert left.index.equals(right.index)
+ # at this point we have `left.index.equals(right.index)`
if left._is_mixed_type or right._is_mixed_type:
# operate column-wise; avoid costly object-casting in `.values`
@@ -5331,14 +5338,13 @@ def _combine_match_index(self, other, func, level=None):
new_data, index=left.index, columns=self.columns, copy=False
)
- def _combine_match_columns(self, other, func, level=None):
- assert isinstance(other, Series)
+ def _combine_match_columns(self, other: Series, func, level=None):
left, right = self.align(other, join="outer", axis=1, level=level, copy=False)
- assert left.columns.equals(right.index)
+ # at this point we have `left.columns.equals(right.index)`
return ops.dispatch_to_series(left, right, func, axis="columns")
def _combine_const(self, other, func):
- assert lib.is_scalar(other) or np.ndim(other) == 0
+ # scalar other or np.ndim(other) == 0
return ops.dispatch_to_series(self, other, func)
def combine(self, other, func, fill_value=None, overwrite=True):
diff --git a/pandas/core/ops/__init__.py b/pandas/core/ops/__init__.py
index 7e03b9544ee72..86cd6e878cde6 100644
--- a/pandas/core/ops/__init__.py
+++ b/pandas/core/ops/__init__.py
@@ -169,7 +169,7 @@ def maybe_upcast_for_op(obj, shape: Tuple[int, ...]):
# np.timedelta64(3, 'D') / 2 == np.timedelta64(1, 'D')
return Timedelta(obj)
- elif isinstance(obj, np.ndarray) and is_timedelta64_dtype(obj):
+ elif isinstance(obj, np.ndarray) and is_timedelta64_dtype(obj.dtype):
# GH#22390 Unfortunately we need to special-case right-hand
# timedelta64 dtypes because numpy casts integer dtypes to
# timedelta64 when operating with timedelta64
@@ -415,7 +415,7 @@ def should_extension_dispatch(left: ABCSeries, right: Any) -> bool:
):
return True
- if is_extension_array_dtype(right) and not is_scalar(right):
+ if not is_scalar(right) and is_extension_array_dtype(right):
# GH#22378 disallow scalar to exclude e.g. "category", "Int64"
return True
@@ -755,7 +755,7 @@ def na_op(x, y):
assert not isinstance(y, (list, ABCSeries, ABCIndexClass))
if isinstance(y, np.ndarray):
# bool-bool dtype operations should be OK, should not get here
- assert not (is_bool_dtype(x) and is_bool_dtype(y))
+ assert not (is_bool_dtype(x.dtype) and is_bool_dtype(y.dtype))
x = ensure_object(x)
y = ensure_object(y)
result = libops.vec_binop(x, y, op)
@@ -804,7 +804,7 @@ def wrapper(self, other):
else:
# scalars, list, tuple, np.array
- is_other_int_dtype = is_integer_dtype(np.asarray(other))
+ is_other_int_dtype = is_integer_dtype(np.asarray(other).dtype)
if is_list_like(other) and not isinstance(other, np.ndarray):
# TODO: Can we do this before the is_integer_dtype check?
# could the is_integer_dtype check be checking the wrong
@@ -988,10 +988,10 @@ def f(self, other, axis=default_axis, level=None, fill_value=None):
self, other, pass_op, fill_value=fill_value, axis=axis, level=level
)
else:
+ # in this case we always have `np.ndim(other) == 0`
if fill_value is not None:
self = self.fillna(fill_value)
- assert np.ndim(other) == 0
return self._combine_const(other, op)
f.__name__ = op_name
@@ -1032,7 +1032,7 @@ def f(self, other, axis=default_axis, level=None):
self, other, na_op, fill_value=None, axis=axis, level=level
)
else:
- assert np.ndim(other) == 0, other
+ # in this case we always have `np.ndim(other) == 0`
return self._combine_const(other, na_op)
f.__name__ = op_name
diff --git a/pandas/core/ops/array_ops.py b/pandas/core/ops/array_ops.py
index 523ba5d42a69c..f5f6d77676f1f 100644
--- a/pandas/core/ops/array_ops.py
+++ b/pandas/core/ops/array_ops.py
@@ -11,7 +11,7 @@
find_common_type,
maybe_upcast_putmask,
)
-from pandas.core.dtypes.common import is_object_dtype, is_period_dtype, is_scalar
+from pandas.core.dtypes.common import is_object_dtype, is_scalar
from pandas.core.dtypes.generic import ABCIndex, ABCSeries
from pandas.core.dtypes.missing import notna
@@ -57,9 +57,9 @@ def masked_arith_op(x, y, op):
dtype = find_common_type([x.dtype, y.dtype])
result = np.empty(x.size, dtype=dtype)
- # PeriodIndex.ravel() returns int64 dtype, so we have
- # to work around that case. See GH#19956
- yrav = y if is_period_dtype(y) else y.ravel()
+ # NB: ravel() is only safe since y is ndarray; for e.g. PeriodIndex
+ # we would get int64 dtype, see GH#19956
+ yrav = y.ravel()
mask = notna(xrav) & notna(yrav)
if yrav.shape != mask.shape:
@@ -82,9 +82,9 @@ def masked_arith_op(x, y, op):
mask = notna(xrav)
# 1 ** np.nan is 1. So we have to unmask those.
- if op == pow:
+ if op is pow:
mask = np.where(x == 1, False, mask)
- elif op == rpow:
+ elif op is rpow:
mask = np.where(y == 1, False, mask)
if mask.any():
diff --git a/pandas/core/ops/missing.py b/pandas/core/ops/missing.py
index 01bc345a40b83..45fa6a2830af6 100644
--- a/pandas/core/ops/missing.py
+++ b/pandas/core/ops/missing.py
@@ -40,7 +40,7 @@ def fill_zeros(result, x, y, name, fill):
Mask the nan's from x.
"""
- if fill is None or is_float_dtype(result):
+ if fill is None or is_float_dtype(result.dtype):
return result
if name.startswith(("r", "__r")):
@@ -55,7 +55,7 @@ def fill_zeros(result, x, y, name, fill):
if is_scalar_type:
y = np.array(y)
- if is_integer_dtype(y):
+ if is_integer_dtype(y.dtype):
if (y == 0).any():
diff --git a/pandas/core/sparse/frame.py b/pandas/core/sparse/frame.py
index f5add426297a7..8fe6850c84b8b 100644
--- a/pandas/core/sparse/frame.py
+++ b/pandas/core/sparse/frame.py
@@ -569,13 +569,13 @@ def _combine_frame(self, other, func, fill_value=None, level=None):
).__finalize__(self)
def _combine_match_index(self, other, func, level=None):
- new_data = {}
if level is not None:
raise NotImplementedError("'level' argument is not supported")
this, other = self.align(other, join="outer", axis=0, level=level, copy=False)
+ new_data = {}
for col, series in this.items():
new_data[col] = func(series.values, other.values)
| @jorisvandenbossche you've mentioned some ops optimizations; any suggestions for things to add here? | https://api.github.com/repos/pandas-dev/pandas/pulls/28036 | 2019-08-20T14:42:49Z | 2019-08-26T23:39:13Z | 2019-08-26T23:39:13Z | 2019-08-27T00:19:49Z |
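The masked-arithmetic pattern this diff touches — evaluate the op only where both operands are non-NA, and prefer identity checks like `op is pow` when comparing function objects — can be sketched standalone. `masked_arith_op` below is a simplified illustrative stand-in, not the actual pandas internal:

```python
import numpy as np

def masked_arith_op(x, y, op):
    # Simplified stand-in for the pandas helper in the diff above:
    # evaluate `op` only where both operands are non-NaN, and leave
    # NaN everywhere else.
    xrav, yrav = x.ravel(), y.ravel()  # safe: both are ndarrays here
    mask = ~np.isnan(xrav) & ~np.isnan(yrav)
    result = np.full(xrav.size, np.nan)
    if mask.any():
        result[mask] = op(xrav[mask], yrav[mask])
    return result.reshape(x.shape)

out = masked_arith_op(
    np.array([1.0, np.nan, 3.0]),
    np.array([2.0, 2.0, np.nan]),
    lambda a, b: a + b,
)
```

Only the first position has two non-NaN operands, so only it gets a computed value; the rest stay NaN.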
Backport PR #27481 on branch 0.25.x (Correctly re-instate Matplotlib converters) | diff --git a/doc/source/whatsnew/v0.25.1.rst b/doc/source/whatsnew/v0.25.1.rst
index dffe4ceb4a218..ac3d645684fda 100644
--- a/doc/source/whatsnew/v0.25.1.rst
+++ b/doc/source/whatsnew/v0.25.1.rst
@@ -112,6 +112,9 @@ Plotting
^^^^^^^^
- Added a pandas_plotting_backends entrypoint group for registering plot backends. See :ref:`extending.plotting-backends` for more (:issue:`26747`).
+- Fixed the re-instatement of Matplotlib datetime converters after calling
+ `pandas.plotting.deregister_matplotlib_converters()` (:issue:`27481`).
+-
- Fix compatibility issue with matplotlib when passing a pandas ``Index`` to a plot call (:issue:`27775`).
-
diff --git a/pandas/plotting/_matplotlib/converter.py b/pandas/plotting/_matplotlib/converter.py
index 15648d59c8f98..893854ab26e37 100644
--- a/pandas/plotting/_matplotlib/converter.py
+++ b/pandas/plotting/_matplotlib/converter.py
@@ -64,11 +64,12 @@ def register(explicit=True):
pairs = get_pairs()
for type_, cls in pairs:
- converter = cls()
- if type_ in units.registry:
+ # Cache previous converter if present
+ if type_ in units.registry and not isinstance(units.registry[type_], cls):
previous = units.registry[type_]
_mpl_units[type_] = previous
- units.registry[type_] = converter
+ # Replace with pandas converter
+ units.registry[type_] = cls()
def deregister():
diff --git a/pandas/tests/plotting/test_converter.py b/pandas/tests/plotting/test_converter.py
index 35d12706f0590..7001264c41c05 100644
--- a/pandas/tests/plotting/test_converter.py
+++ b/pandas/tests/plotting/test_converter.py
@@ -40,6 +40,21 @@ def test_initial_warning():
assert "Using an implicitly" in out
+def test_registry_mpl_resets():
+ # Check that Matplotlib converters are properly reset (see issue #27481)
+ code = (
+ "import matplotlib.units as units; "
+ "import matplotlib.dates as mdates; "
+ "n_conv = len(units.registry); "
+ "import pandas as pd; "
+ "pd.plotting.register_matplotlib_converters(); "
+ "pd.plotting.deregister_matplotlib_converters(); "
+ "assert len(units.registry) == n_conv"
+ )
+ call = [sys.executable, "-c", code]
+ subprocess.check_output(call)
+
+
def test_timtetonum_accepts_unicode():
assert converter.time2num("00:01") == converter.time2num("00:01")
| Backport PR #27481: Correctly re-instate Matplotlib converters | https://api.github.com/repos/pandas-dev/pandas/pulls/28035 | 2019-08-20T14:27:05Z | 2019-08-20T16:18:41Z | 2019-08-20T16:18:41Z | 2019-08-20T16:18:42Z |
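The caching fix in this backport can be illustrated without matplotlib. The names `registry`, `_cached`, and `OurConverter` below are stand-ins (for matplotlib's `units.registry`, pandas' `_mpl_units`, and the pandas converter classes), chosen purely for illustration:

```python
registry = {}   # stand-in for matplotlib's units.registry
_cached = {}    # stand-in for the _mpl_units cache in the diff above

class OurConverter:
    """Stand-in for the pandas datetime converter classes."""

def register(type_):
    # Cache the previous converter only when it is not already one of
    # ours -- calling register() twice must not clobber the cached
    # original, or deregister() can no longer restore it (GH#27481).
    if type_ in registry and not isinstance(registry[type_], OurConverter):
        _cached[type_] = registry[type_]
    registry[type_] = OurConverter()

def deregister():
    for type_, conv in list(registry.items()):
        if isinstance(conv, OurConverter):
            del registry[type_]
    registry.update(_cached)  # re-instate whatever was there before

registry[int] = "mpl-default-converter"
register(int)
register(int)   # second registration: the cached original must survive
deregister()
```

After the round trip, the original converter is back in the registry — exactly what the subprocess test in the diff asserts via `len(units.registry)`.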
DOC: Fix docstrings lack of punctuation | diff --git a/pandas/core/arrays/base.py b/pandas/core/arrays/base.py
index 5c121172d0e4f..0778b6726d104 100644
--- a/pandas/core/arrays/base.py
+++ b/pandas/core/arrays/base.py
@@ -514,7 +514,7 @@ def fillna(self, value=None, method=None, limit=None):
def dropna(self):
"""
- Return ExtensionArray without NA values
+ Return ExtensionArray without NA values.
Returns
-------
@@ -957,7 +957,7 @@ def _concat_same_type(
cls, to_concat: Sequence[ABCExtensionArray]
) -> ABCExtensionArray:
"""
- Concatenate multiple array
+ Concatenate multiple array.
Parameters
----------
diff --git a/pandas/core/arrays/datetimes.py b/pandas/core/arrays/datetimes.py
index 093334a815938..70df708d36b3b 100644
--- a/pandas/core/arrays/datetimes.py
+++ b/pandas/core/arrays/datetimes.py
@@ -1158,7 +1158,7 @@ def tz_localize(self, tz, ambiguous="raise", nonexistent="raise", errors=None):
def to_pydatetime(self):
"""
Return Datetime Array/Index as object ndarray of datetime.datetime
- objects
+ objects.
Returns
-------
@@ -1283,7 +1283,7 @@ def to_perioddelta(self, freq):
"""
Calculate TimedeltaArray of difference between index
values and index converted to PeriodArray at specified
- freq. Used for vectorized offsets
+ freq. Used for vectorized offsets.
Parameters
----------
diff --git a/pandas/core/arrays/period.py b/pandas/core/arrays/period.py
index 20ce11c70c344..f2d74794eadf5 100644
--- a/pandas/core/arrays/period.py
+++ b/pandas/core/arrays/period.py
@@ -426,7 +426,7 @@ def __array__(self, dtype=None):
@property
def is_leap_year(self):
"""
- Logical indicating if the date belongs to a leap year
+ Logical indicating if the date belongs to a leap year.
"""
return isleapyear_arr(np.asarray(self.year))
diff --git a/pandas/core/indexes/datetimes.py b/pandas/core/indexes/datetimes.py
index 51daad3b42649..272066d476ce3 100644
--- a/pandas/core/indexes/datetimes.py
+++ b/pandas/core/indexes/datetimes.py
@@ -661,7 +661,7 @@ def _get_time_micros(self):
def to_series(self, keep_tz=None, index=None, name=None):
"""
Create a Series with both index and values equal to the index keys
- useful with map for returning an indexer based on an index
+ useful with map for returning an indexer based on an index.
Parameters
----------
@@ -687,10 +687,10 @@ def to_series(self, keep_tz=None, index=None, name=None):
behaviour and silence the warning.
index : Index, optional
- index of resulting Series. If None, defaults to original index
- name : string, optional
- name of resulting Series. If None, defaults to name of original
- index
+ Index of resulting Series. If None, defaults to original index.
+ name : str, optional
+ Name of resulting Series. If None, defaults to name of original
+ index.
Returns
-------
@@ -735,7 +735,7 @@ def to_series(self, keep_tz=None, index=None, name=None):
def snap(self, freq="S"):
"""
- Snap time stamps to nearest occurring frequency
+ Snap time stamps to nearest occurring frequency.
Returns
-------
diff --git a/pandas/core/indexes/multi.py b/pandas/core/indexes/multi.py
index b614952ba1e04..761862b9f30e9 100644
--- a/pandas/core/indexes/multi.py
+++ b/pandas/core/indexes/multi.py
@@ -1250,7 +1250,7 @@ def _set_names(self, names, level=None, validate=True):
self.levels[l].rename(name, inplace=True)
names = property(
- fset=_set_names, fget=_get_names, doc="""\nNames of levels in MultiIndex\n"""
+ fset=_set_names, fget=_get_names, doc="""\nNames of levels in MultiIndex.\n"""
)
@Appender(_index_shared_docs["_get_grouper_for_level"])
@@ -1762,7 +1762,7 @@ def is_all_dates(self):
def is_lexsorted(self):
"""
- Return True if the codes are lexicographically sorted
+ Return True if the codes are lexicographically sorted.
Returns
-------
@@ -2246,7 +2246,7 @@ def swaplevel(self, i=-2, j=-1):
def reorder_levels(self, order):
"""
- Rearrange levels using input order. May not drop or duplicate levels
+ Rearrange levels using input order. May not drop or duplicate levels.
Parameters
----------
diff --git a/pandas/core/indexes/timedeltas.py b/pandas/core/indexes/timedeltas.py
index d06afa3daa792..8cf14e2ca777e 100644
--- a/pandas/core/indexes/timedeltas.py
+++ b/pandas/core/indexes/timedeltas.py
@@ -68,20 +68,20 @@ class TimedeltaIndex(
):
"""
Immutable ndarray of timedelta64 data, represented internally as int64, and
- which can be boxed to timedelta objects
+ which can be boxed to timedelta objects.
Parameters
----------
data : array-like (1-dimensional), optional
- Optional timedelta-like data to construct index with
+ Optional timedelta-like data to construct index with.
unit : unit of the arg (D,h,m,s,ms,us,ns) denote the unit, optional
- which is an integer/float number
- freq : string or pandas offset object, optional
+ Which is an integer/float number.
+ freq : str or pandas offset object, optional
One of pandas date offset strings or corresponding objects. The string
'infer' can be passed in order to set the frequency of the index as the
- inferred frequency upon creation
+ inferred frequency upon creation.
copy : bool
- Make a copy of input ndarray
+ Make a copy of input ndarray.
start : starting value, timedelta-like, optional
If data is None, start is used as the start point in generating regular
timedelta data.
@@ -90,24 +90,24 @@ class TimedeltaIndex(
periods : int, optional, > 0
Number of periods to generate, if generating index. Takes precedence
- over end argument
+ over end argument.
.. deprecated:: 0.24.0
end : end time, timedelta-like, optional
If periods is none, generated index will extend to first conforming
- time on or just past end argument
+ time on or just past end argument.
.. deprecated:: 0.24. 0
- closed : string or None, default None
+ closed : str or None, default None
Make the interval closed with respect to the given frequency to
- the 'left', 'right', or both sides (None)
+ the 'left', 'right', or both sides (None).
.. deprecated:: 0.24. 0
name : object
- Name to be stored in the index
+ Name to be stored in the index.
Attributes
----------
diff --git a/pandas/core/indexing.py b/pandas/core/indexing.py
index ea00737f776ee..9bdf312c917f0 100755
--- a/pandas/core/indexing.py
+++ b/pandas/core/indexing.py
@@ -49,7 +49,7 @@ def get_indexers_list():
# the public IndexSlicerMaker
class _IndexSlice:
"""
- Create an object to more easily perform multi-index slicing
+ Create an object to more easily perform multi-index slicing.
See Also
--------
| - [ ] xref #27977
- [ ] tests added / passed
- [ ] passes `black pandas`
- [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
Similar to issue #27979, and as pointed out by @datapythonista, the summaries of some docstrings don't end with a period.
I added the period to the following cases:
```
• pandas.IndexSlice
• pandas.MultiIndex.names
• pandas.MultiIndex.is_lexsorted
• pandas.MultiIndex.reorder_levels
• pandas.DatetimeIndex.snap
• pandas.DatetimeIndex.to_perioddelta
• pandas.DatetimeIndex.to_pydatetime
• pandas.DatetimeIndex.to_series
• pandas.TimedeltaIndex
• pandas.PeriodIndex.is_leap_year
• pandas.api.extensions.ExtensionArray._concat_same_type
• pandas.api.extensions.ExtensionArray.dropna
```
These are the outputs of validate_docstrings.py when evaluating each case:
**pandas.IndexSlice**
```
4 Errors found:
Examples do not pass tests
flake8 error: E231 missing whitespace after ',' (4 times)
flake8 error: E902 TokenError: EOF in multi-line statement
flake8 error: E999 SyntaxError: invalid syntax
1 Warnings found:
No extended summary found
```
**pandas.MultiIndex.names**
```
3 Warnings found:
No extended summary found
See Also section not found
No examples section found
```
**pandas.MultiIndex.is_lexsorted**
```
1 Errors found:
Return value has no description
3 Warnings found:
No extended summary found
See Also section not found
No examples section found
```
**pandas.MultiIndex.reorder_levels**
```
2 Errors found:
Parameters {order} not documented
Return value has no description
3 Warnings found:
No extended summary found
See Also section not found
No examples section found
```
**pandas.DatetimeIndex.snap**
```
2 Errors found:
Parameters {freq} not documented
Return value has no description
3 Warnings found:
No extended summary found
See Also section not found
No examples section found
```
**pandas.DatetimeIndex.to_perioddelta**
```
5 Errors found:
Summary should fit in a single line
Parameters {**kwargs, *args} not documented
Unknown parameters {freq}
Parameter "freq" has no description
Return value has no description
2 Warnings found:
See Also section not found
No examples section found
```
**pandas.DatetimeIndex.to_pydatetime**
```
4 Errors found:
Summary should fit in a single line
Parameters {**kwargs, *args} not documented
The first line of the Returns section should contain only the type, unless multiple values are being returned
Return value has no description
2 Warnings found:
See Also section not found
No examples section found
```
**pandas.DatetimeIndex.to_series**
```
2 Errors found:
Summary should fit in a single line
Return value has no description
2 Warnings found:
See Also section not found
No examples section found
```
**pandas.TimedeltaIndex**
```
3 Errors found:
Summary should fit in a single line
Parameters {dtype, verify_integrity, periods, copy, data} not documented
Unknown parameters {periods , copy , data }
1 Warnings found:
No examples section found
```
**pandas.PeriodIndex.is_leap_year**
```
3 Warnings found:
No extended summary found
See Also section not found
No examples section found
```
**pandas.api.extensions.ExtensionArray._concat_same_type**
```
2 Errors found:
Parameter "to_concat" has no description
Return value has no description
3 Warnings found:
No extended summary found
See Also section not found
No examples section found
```
**pandas.api.extensions.ExtensionArray.dropna**
```
2 Errors found:
The first line of the Returns section should contain only the type, unless multiple values are being returned
Return value has no description
3 Warnings found:
No extended summary found
See Also section not found
No examples section found
```
I would be happy to also work on the other errors/warnings in this PR or in another. | https://api.github.com/repos/pandas-dev/pandas/pulls/28031 | 2019-08-20T10:02:38Z | 2019-08-23T09:03:01Z | 2019-08-23T09:03:01Z | 2019-08-23T09:03:11Z
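The validator outputs quoted above come from `scripts/validate_docstrings.py`; a much-reduced toy version of the specific check these fixes address (summary line ending with a period) might look like this — an illustrative re-implementation, not the pandas validator itself:

```python
import inspect

def summary_ends_with_period(obj):
    # Toy version of one check performed by scripts/validate_docstrings.py:
    # a docstring's one-line summary should end with a period.
    doc = inspect.getdoc(obj)
    if not doc:
        return False
    summary = doc.strip().splitlines()[0].strip()
    return summary.endswith(".")

def good():
    """Return ExtensionArray without NA values."""

def bad():
    """Return ExtensionArray without NA values"""
```

`good` passes the check and `bad` fails it, mirroring the before/after of the diff.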
Backport PR #27916 on branch 0.25.x (BUG: fix to_timestamp out_of_bounds) | diff --git a/doc/source/whatsnew/v0.25.1.rst b/doc/source/whatsnew/v0.25.1.rst
index cec927d73edca..81ba462d83840 100644
--- a/doc/source/whatsnew/v0.25.1.rst
+++ b/doc/source/whatsnew/v0.25.1.rst
@@ -32,7 +32,7 @@ Categorical
Datetimelike
^^^^^^^^^^^^
- Bug in :func:`to_datetime` where passing a timezone-naive :class:`DatetimeArray` or :class:`DatetimeIndex` and ``utc=True`` would incorrectly return a timezone-naive result (:issue:`27733`)
--
+- Bug in :meth:`Period.to_timestamp` where a :class:`Period` outside the :class:`Timestamp` implementation bounds (roughly 1677-09-21 to 2262-04-11) would return an incorrect :class:`Timestamp` instead of raising ``OutOfBoundsDatetime`` (:issue:`19643`)
-
-
diff --git a/pandas/_libs/tslibs/period.pyx b/pandas/_libs/tslibs/period.pyx
index c68d686ff2bf2..98e55f50062a2 100644
--- a/pandas/_libs/tslibs/period.pyx
+++ b/pandas/_libs/tslibs/period.pyx
@@ -21,7 +21,8 @@ PyDateTime_IMPORT
from pandas._libs.tslibs.np_datetime cimport (
npy_datetimestruct, dtstruct_to_dt64, dt64_to_dtstruct,
- pandas_datetime_to_datetimestruct, NPY_DATETIMEUNIT, NPY_FR_D)
+ pandas_datetime_to_datetimestruct, check_dts_bounds,
+ NPY_DATETIMEUNIT, NPY_FR_D)
cdef extern from "src/datetime/np_datetime.h":
int64_t npy_datetimestruct_to_datetime(NPY_DATETIMEUNIT fr,
@@ -1011,7 +1012,7 @@ def dt64arr_to_periodarr(int64_t[:] dtarr, int freq, tz=None):
@cython.wraparound(False)
@cython.boundscheck(False)
-def periodarr_to_dt64arr(int64_t[:] periodarr, int freq):
+def periodarr_to_dt64arr(const int64_t[:] periodarr, int freq):
"""
Convert array to datetime64 values from a set of ordinals corresponding to
periods per period convention.
@@ -1024,9 +1025,8 @@ def periodarr_to_dt64arr(int64_t[:] periodarr, int freq):
out = np.empty(l, dtype='i8')
- with nogil:
- for i in range(l):
- out[i] = period_ordinal_to_dt64(periodarr[i], freq)
+ for i in range(l):
+ out[i] = period_ordinal_to_dt64(periodarr[i], freq)
return out.base # .base to access underlying np.ndarray
@@ -1179,7 +1179,7 @@ cpdef int64_t period_ordinal(int y, int m, int d, int h, int min,
return get_period_ordinal(&dts, freq)
-cpdef int64_t period_ordinal_to_dt64(int64_t ordinal, int freq) nogil:
+cdef int64_t period_ordinal_to_dt64(int64_t ordinal, int freq) except? -1:
cdef:
npy_datetimestruct dts
@@ -1187,6 +1187,7 @@ cpdef int64_t period_ordinal_to_dt64(int64_t ordinal, int freq) nogil:
return NPY_NAT
get_date_info(ordinal, freq, &dts)
+ check_dts_bounds(&dts)
return dtstruct_to_dt64(&dts)
diff --git a/pandas/tests/arrays/test_datetimelike.py b/pandas/tests/arrays/test_datetimelike.py
index d9646feaf661e..9c9f976a90f51 100644
--- a/pandas/tests/arrays/test_datetimelike.py
+++ b/pandas/tests/arrays/test_datetimelike.py
@@ -1,6 +1,8 @@
import numpy as np
import pytest
+from pandas._libs import OutOfBoundsDatetime
+
import pandas as pd
from pandas.core.arrays import DatetimeArray, PeriodArray, TimedeltaArray
import pandas.util.testing as tm
@@ -608,6 +610,15 @@ def test_to_timestamp(self, how, period_index):
# an EA-specific tm.assert_ function
tm.assert_index_equal(pd.Index(result), pd.Index(expected))
+ def test_to_timestamp_out_of_bounds(self):
+ # GH#19643 previously overflowed silently
+ pi = pd.period_range("1500", freq="Y", periods=3)
+ with pytest.raises(OutOfBoundsDatetime):
+ pi.to_timestamp()
+
+ with pytest.raises(OutOfBoundsDatetime):
+ pi._data.to_timestamp()
+
@pytest.mark.parametrize("propname", PeriodArray._bool_ops)
def test_bool_properties(self, period_index, propname):
# in this case _bool_ops is just `is_leap_year`
diff --git a/pandas/tests/scalar/period/test_asfreq.py b/pandas/tests/scalar/period/test_asfreq.py
index 4cff061cabc40..357274e724c68 100644
--- a/pandas/tests/scalar/period/test_asfreq.py
+++ b/pandas/tests/scalar/period/test_asfreq.py
@@ -30,11 +30,8 @@ def test_asfreq_near_zero_weekly(self):
assert week1.asfreq("D", "E") >= per1
assert week2.asfreq("D", "S") <= per2
- @pytest.mark.xfail(
- reason="GH#19643 period_helper asfreq functions fail to check for overflows"
- )
def test_to_timestamp_out_of_bounds(self):
- # GH#19643, currently gives Timestamp('1754-08-30 22:43:41.128654848')
+ # GH#19643, used to incorrectly give Timestamp in 1754
per = Period("0001-01-01", freq="B")
with pytest.raises(OutOfBoundsDatetime):
per.to_timestamp()
| Backport PR #27916: BUG: fix to_timestamp out_of_bounds | https://api.github.com/repos/pandas-dev/pandas/pulls/28030 | 2019-08-20T07:00:55Z | 2019-08-20T13:32:43Z | 2019-08-20T13:32:43Z | 2019-08-20T13:32:43Z |
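The bounds being enforced here come from `datetime64[ns]` storing epoch nanoseconds in an int64, which is why `Timestamp` is limited to roughly 1677-09-21 through 2262-04-11. A naive stdlib-only check in the spirit of `check_dts_bounds` (illustrative only — the real one operates on `npy_datetimestruct`, not `datetime`):

```python
import datetime

NS_PER_SECOND = 10 ** 9
I64_MIN, I64_MAX = -2 ** 63, 2 ** 63 - 1

def to_epoch_ns(dt):
    # Convert to epoch nanoseconds with an explicit int64 bounds check,
    # mirroring what the added check_dts_bounds call guards against.
    seconds = (dt - datetime.datetime(1970, 1, 1)).total_seconds()
    ns = int(seconds) * NS_PER_SECOND
    if not I64_MIN <= ns <= I64_MAX:
        raise OverflowError("out of bounds for datetime64[ns]: %s" % dt)
    return ns
```

A year-1500 date is about -1.5e19 ns from the epoch, below the int64 minimum of about -9.2e18 — which is why the old code silently wrapped into 1754 instead of raising.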
DOC: Change document code prun in a row | diff --git a/doc/source/user_guide/enhancingperf.rst b/doc/source/user_guide/enhancingperf.rst
index b77bfb9778837..a4eefadd54d8c 100644
--- a/doc/source/user_guide/enhancingperf.rst
+++ b/doc/source/user_guide/enhancingperf.rst
@@ -243,9 +243,9 @@ We've gotten another big improvement. Let's check again where the time is spent:
.. ipython:: python
- %prun -l 4 apply_integrate_f(df['a'].to_numpy(),
- df['b'].to_numpy(),
- df['N'].to_numpy())
+ %%prun -l 4 apply_integrate_f(df['a'].to_numpy(),
+ df['b'].to_numpy(),
+ df['N'].to_numpy())
As one might expect, the majority of the time is now spent in ``apply_integrate_f``,
so if we wanted to make anymore efficiencies we must continue to concentrate our
| - [x] closes #28026
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/28029 | 2019-08-20T06:51:22Z | 2019-08-21T06:43:04Z | 2019-08-21T06:43:03Z | 2019-08-21T06:43:26Z |
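`%prun` (line magic) and `%%prun` (cell magic) are IPython wrappers around the standard-library profiler; outside IPython, the limited view that `-l 4` produces corresponds roughly to the following sketch (not the IPython implementation, and `work` is just a placeholder function):

```python
import cProfile
import io
import pstats

def work():
    return sum(i * i for i in range(10_000))

pr = cProfile.Profile()
pr.enable()
work()
pr.disable()

buf = io.StringIO()
# print_stats(4) limits the listing to 4 entries, like `-l 4` in %prun
pstats.Stats(pr, stream=buf).sort_stats("cumulative").print_stats(4)
report = buf.getvalue()
```

The doc change itself is about spanning the profiled statement across multiple lines, which requires the cell-magic form `%%prun`.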
Backport PR #27926 on branch 0.25.x (Fix regression in .ix fallback with IntervalIndex) | diff --git a/doc/source/whatsnew/v0.25.1.rst b/doc/source/whatsnew/v0.25.1.rst
index cec927d73edca..a21955ea9aaac 100644
--- a/doc/source/whatsnew/v0.25.1.rst
+++ b/doc/source/whatsnew/v0.25.1.rst
@@ -85,6 +85,7 @@ Indexing
- Bug in partial-string indexing returning a NumPy array rather than a ``Series`` when indexing with a scalar like ``.loc['2015']`` (:issue:`27516`)
- Break reference cycle involving :class:`Index` and other index classes to allow garbage collection of index objects without running the GC. (:issue:`27585`, :issue:`27840`)
- Fix regression in assigning values to a single column of a DataFrame with a ``MultiIndex`` columns (:issue:`27841`).
+- Fix regression in ``.ix`` fallback with an ``IntervalIndex`` (:issue:`27865`).
-
Missing
diff --git a/pandas/core/indexing.py b/pandas/core/indexing.py
index 71985c0707095..b37d903699327 100755
--- a/pandas/core/indexing.py
+++ b/pandas/core/indexing.py
@@ -123,7 +123,7 @@ def __getitem__(self, key):
key = tuple(com.apply_if_callable(x, self.obj) for x in key)
try:
values = self.obj._get_value(*key)
- except (KeyError, TypeError, InvalidIndexError):
+ except (KeyError, TypeError, InvalidIndexError, AttributeError):
# TypeError occurs here if the key has non-hashable entries,
# generally slice or list.
# TODO(ix): most/all of the TypeError cases here are for ix,
@@ -131,6 +131,9 @@ def __getitem__(self, key):
# The InvalidIndexError is only catched for compatibility
# with geopandas, see
# https://github.com/pandas-dev/pandas/issues/27258
+ # TODO: The AttributeError is for IntervalIndex which
+ # incorrectly implements get_value, see
+ # https://github.com/pandas-dev/pandas/issues/27865
pass
else:
if is_scalar(values):
diff --git a/pandas/tests/indexing/test_ix.py b/pandas/tests/indexing/test_ix.py
index 45ccd8d1b8fb3..6029db8ed66f6 100644
--- a/pandas/tests/indexing/test_ix.py
+++ b/pandas/tests/indexing/test_ix.py
@@ -343,3 +343,13 @@ def test_ix_duplicate_returns_series(self):
r = df.ix[0.2, "a"]
e = df.loc[0.2, "a"]
tm.assert_series_equal(r, e)
+
+ def test_ix_intervalindex(self):
+ # https://github.com/pandas-dev/pandas/issues/27865
+ df = DataFrame(
+ np.random.randn(5, 2),
+ index=pd.IntervalIndex.from_breaks([-np.inf, 0, 1, 2, 3, np.inf]),
+ )
+ result = df.ix[0:2, 0]
+ expected = df.iloc[0:2, 0]
+ tm.assert_series_equal(result, expected)
| Backport PR #27926: Fix regression in .ix fallback with IntervalIndex | https://api.github.com/repos/pandas-dev/pandas/pulls/28028 | 2019-08-20T06:48:03Z | 2019-08-20T13:32:27Z | 2019-08-20T13:32:27Z | 2019-08-20T13:32:27Z |
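The fix above widens the set of exceptions the fast scalar path may swallow before falling back to full indexing; stripped of pandas specifics, the pattern looks like this (all names here are illustrative stand-ins):

```python
def lookup_with_fallback(fast_path, fallback, key):
    # Try the fast scalar accessor first, falling back to full label
    # indexing on known failure modes -- AttributeError was added to
    # this tuple because IntervalIndex's get_value raised it (GH#27865).
    try:
        return fast_path(key)
    except (KeyError, TypeError, AttributeError):
        return fallback(key)

def fast(key):
    # hypothetical stand-in for _get_value on an IntervalIndex-backed frame
    raise AttributeError("get_value not usable here")

def slow(key):
    return {"a": 1}[key]

result = lookup_with_fallback(fast, slow, "a")
```

As the TODO in the diff notes, catching `AttributeError` here is a workaround; the cleaner fix would be for `get_value` not to raise it in the first place.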
REF: standardize usage in DataFrame vs SparseDataFrame ops | diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 16fece1c7eb8b..6aa3690ef54ea 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -5306,7 +5306,6 @@ def reorder_levels(self, order, axis=0):
def _combine_frame(self, other, func, fill_value=None, level=None):
this, other = self.align(other, join="outer", level=level, copy=False)
- new_index, new_columns = this.index, this.columns
if fill_value is None:
# since _arith_op may be called in a loop, avoid function call
@@ -5324,14 +5323,12 @@ def _arith_op(left, right):
if ops.should_series_dispatch(this, other, func):
# iterate over columns
- return ops.dispatch_to_series(this, other, _arith_op)
+ new_data = ops.dispatch_to_series(this, other, _arith_op)
else:
with np.errstate(all="ignore"):
- result = _arith_op(this.values, other.values)
- result = dispatch_fill_zeros(func, this.values, other.values, result)
- return self._constructor(
- result, index=new_index, columns=new_columns, copy=False
- )
+ res_values = _arith_op(this.values, other.values)
+ new_data = dispatch_fill_zeros(func, this.values, other.values, res_values)
+ return this._construct_result(other, new_data, _arith_op)
def _combine_match_index(self, other, func, level=None):
left, right = self.align(other, join="outer", axis=0, level=level, copy=False)
@@ -5339,23 +5336,49 @@ def _combine_match_index(self, other, func, level=None):
if left._is_mixed_type or right._is_mixed_type:
# operate column-wise; avoid costly object-casting in `.values`
- return ops.dispatch_to_series(left, right, func)
+ new_data = ops.dispatch_to_series(left, right, func)
else:
# fastpath --> operate directly on values
with np.errstate(all="ignore"):
new_data = func(left.values.T, right.values).T
- return self._constructor(
- new_data, index=left.index, columns=self.columns, copy=False
- )
+ return left._construct_result(other, new_data, func)
def _combine_match_columns(self, other: Series, func, level=None):
left, right = self.align(other, join="outer", axis=1, level=level, copy=False)
# at this point we have `left.columns.equals(right.index)`
- return ops.dispatch_to_series(left, right, func, axis="columns")
+ new_data = ops.dispatch_to_series(left, right, func, axis="columns")
+ return left._construct_result(right, new_data, func)
def _combine_const(self, other, func):
# scalar other or np.ndim(other) == 0
- return ops.dispatch_to_series(self, other, func)
+ new_data = ops.dispatch_to_series(self, other, func)
+ return self._construct_result(other, new_data, func)
+
+ def _construct_result(self, other, result, func):
+ """
+ Wrap the result of an arithmetic, comparison, or logical operation.
+
+ Parameters
+ ----------
+ other : object
+ result : DataFrame
+ func : binary operator
+
+ Returns
+ -------
+ DataFrame
+
+ Notes
+ -----
+ `func` is included for compat with SparseDataFrame signature, is not
+ needed here.
+ """
+ out = self._constructor(result, index=self.index, copy=False)
+ # Pin columns instead of passing to constructor for compat with
+ # non-unique columns case
+ out.columns = self.columns
+ return out
+ # TODO: finalize? we do for SparseDataFrame
def combine(self, other, func, fill_value=None, overwrite=True):
"""
diff --git a/pandas/core/ops/__init__.py b/pandas/core/ops/__init__.py
index 016feff7e3beb..38faf3cea88fd 100644
--- a/pandas/core/ops/__init__.py
+++ b/pandas/core/ops/__init__.py
@@ -509,12 +509,7 @@ def column_op(a, b):
raise NotImplementedError(right)
new_data = expressions.evaluate(column_op, str_rep, left, right)
-
- result = left._constructor(new_data, index=left.index, copy=False)
- # Pin columns instead of passing to constructor for compat with
- # non-unique columns case
- result.columns = left.columns
- return result
+ return new_data
def dispatch_to_extension_op(
@@ -1055,7 +1050,8 @@ def f(self, other, axis=default_axis, level=None):
# Another DataFrame
if not self._indexed_same(other):
self, other = self.align(other, "outer", level=level, copy=False)
- return dispatch_to_series(self, other, na_op, str_rep)
+ new_data = dispatch_to_series(self, other, na_op, str_rep)
+ return self._construct_result(other, new_data, na_op)
elif isinstance(other, ABCSeries):
return _combine_series_frame(
@@ -1085,7 +1081,8 @@ def f(self, other):
raise ValueError(
"Can only compare identically-labeled DataFrame objects"
)
- return dispatch_to_series(self, other, func, str_rep)
+ new_data = dispatch_to_series(self, other, func, str_rep)
+ return self._construct_result(other, new_data, func)
elif isinstance(other, ABCSeries):
return _combine_series_frame(
diff --git a/pandas/core/sparse/frame.py b/pandas/core/sparse/frame.py
index 3d6ba0b8d9774..aaa99839144b4 100644
--- a/pandas/core/sparse/frame.py
+++ b/pandas/core/sparse/frame.py
@@ -534,19 +534,13 @@ def _set_value(self, index, col, value, takeable=False):
# Arithmetic-related methods
def _combine_frame(self, other, func, fill_value=None, level=None):
- if level is not None:
- raise NotImplementedError("'level' argument is not supported")
-
this, other = self.align(other, join="outer", level=level, copy=False)
- new_index, new_columns = this.index, this.columns
-
- if self.empty and other.empty:
- return self._constructor(index=new_index).__finalize__(self)
+ this._default_fill_value = self._default_fill_value
new_data = {}
if fill_value is not None:
# TODO: be a bit more intelligent here
- for col in new_columns:
+ for col in this.columns:
if col in this and col in other:
dleft = this[col].to_dense()
dright = other[col].to_dense()
@@ -555,38 +549,21 @@ def _combine_frame(self, other, func, fill_value=None, level=None):
new_data[col] = result
else:
- for col in new_columns:
+ for col in this.columns:
if col in this and col in other:
new_data[col] = func(this[col], other[col])
- new_fill_value = self._get_op_result_fill_value(other, func)
-
- return self._constructor(
- data=new_data,
- index=new_index,
- columns=new_columns,
- default_fill_value=new_fill_value,
- ).__finalize__(self)
+ return this._construct_result(other, new_data, func)
def _combine_match_index(self, other, func, level=None):
-
- if level is not None:
- raise NotImplementedError("'level' argument is not supported")
-
this, other = self.align(other, join="outer", axis=0, level=level, copy=False)
+ this._default_fill_value = self._default_fill_value
new_data = {}
for col in this.columns:
new_data[col] = func(this[col], other)
- fill_value = self._get_op_result_fill_value(other, func)
-
- return self._constructor(
- new_data,
- index=this.index,
- columns=self.columns,
- default_fill_value=fill_value,
- ).__finalize__(self)
+ return this._construct_result(other, new_data, func)
def _combine_match_columns(self, other, func, level=None):
# patched version of DataFrame._combine_match_columns to account for
@@ -594,27 +571,40 @@ def _combine_match_columns(self, other, func, level=None):
# where 3.0 is numpy.float64 and series is a SparseSeries. Still
# possible for this to happen, which is bothersome
- if level is not None:
- raise NotImplementedError("'level' argument is not supported")
-
left, right = self.align(other, join="outer", axis=1, level=level, copy=False)
assert left.columns.equals(right.index)
+ left._default_fill_value = self._default_fill_value
new_data = {}
-
for col in left.columns:
new_data[col] = func(left[col], right[col])
- return self._constructor(
- new_data,
- index=left.index,
- columns=left.columns,
- default_fill_value=self.default_fill_value,
- ).__finalize__(self)
+ # TODO: using this changed some behavior, see GH#28025
+ return left._construct_result(other, new_data, func)
def _combine_const(self, other, func):
return self._apply_columns(lambda x: func(x, other))
+ def _construct_result(self, other, result, func):
+ """
+ Wrap the result of an arithmetic, comparison, or logical operation.
+
+ Parameters
+ ----------
+ other : object
+ result : SparseDataFrame
+ func : binary operator
+
+ Returns
+ -------
+ SparseDataFrame
+ """
+ fill_value = self._get_op_result_fill_value(other, func)
+
+ out = self._constructor(result, index=self.index, default_fill_value=fill_value)
+ out.columns = self.columns
+ return out.__finalize__(self)
+
def _get_op_result_fill_value(self, other, func):
own_default = self.default_fill_value
@@ -643,6 +633,11 @@ def _get_op_result_fill_value(self, other, func):
else:
fill_value = func(np.float64(own_default), np.float64(other.fill_value))
fill_value = item_from_zerodim(fill_value)
+
+ elif isinstance(other, Series):
+ # reached via _combine_match_columns
+ fill_value = self.default_fill_value
+
else:
raise NotImplementedError(type(other))
| I _think_ that after this we're not far from being able to use the base class versions of _combine_frame, _combine_match_index, _combine_match_columns, and _combine_const. That'll be a good day.
The only actual behavior changed is in SparseDataFrame._combine_match_columns, which is changed to match the other methods. See #28025 | https://api.github.com/repos/pandas-dev/pandas/pulls/28027 | 2019-08-20T03:02:38Z | 2019-09-17T12:40:44Z | 2019-09-17T12:40:43Z | 2019-09-17T14:03:04Z |
BUG: rfloordiv with fill_value, closes #27464 | diff --git a/doc/source/whatsnew/v0.25.1.rst b/doc/source/whatsnew/v0.25.1.rst
index 463dcef9feab6..108ddb1cdeab5 100644
--- a/doc/source/whatsnew/v0.25.1.rst
+++ b/doc/source/whatsnew/v0.25.1.rst
@@ -54,7 +54,7 @@ Numeric
- Bug in :meth:`Series.interpolate` when using a timezone aware :class:`DatetimeIndex` (:issue:`27548`)
- Bug when printing negative floating point complex numbers would raise an ``IndexError`` (:issue:`27484`)
- Bug where :class:`DataFrame` arithmetic operators such as :meth:`DataFrame.mul` with a :class:`Series` with axis=1 would raise an ``AttributeError`` on :class:`DataFrame` larger than the minimum threshold to invoke numexpr (:issue:`27636`)
--
+- Bug in :class:`DataFrame` arithmetic where missing values in results were incorrectly masked with ``NaN`` instead of ``Inf`` (:issue:`27464`)
Conversion
^^^^^^^^^^
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 603a615c1f8cb..1be7e0736f9fe 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -108,6 +108,7 @@
sanitize_index,
to_arrays,
)
+from pandas.core.ops.missing import dispatch_fill_zeros
from pandas.core.series import Series
from pandas.io.formats import console, format as fmt
@@ -5305,7 +5306,9 @@ def _arith_op(left, right):
# iterate over columns
return ops.dispatch_to_series(this, other, _arith_op)
else:
- result = _arith_op(this.values, other.values)
+ with np.errstate(all="ignore"):
+ result = _arith_op(this.values, other.values)
+ result = dispatch_fill_zeros(func, this.values, other.values, result)
return self._constructor(
result, index=new_index, columns=new_columns, copy=False
)
diff --git a/pandas/tests/arithmetic/test_numeric.py b/pandas/tests/arithmetic/test_numeric.py
index 2b23790e4ccd3..d686d9f90a5a4 100644
--- a/pandas/tests/arithmetic/test_numeric.py
+++ b/pandas/tests/arithmetic/test_numeric.py
@@ -1227,3 +1227,36 @@ def test_addsub_arithmetic(self, dtype, delta):
tm.assert_index_equal(index + index, 2 * index)
tm.assert_index_equal(index - index, 0 * index)
assert not (index - index).empty
+
+
+def test_fill_value_inf_masking():
+ # GH #27464 make sure we mask 0/1 with Inf and not NaN
+ df = pd.DataFrame({"A": [0, 1, 2], "B": [1.1, None, 1.1]})
+
+ other = pd.DataFrame({"A": [1.1, 1.2, 1.3]}, index=[0, 2, 3])
+
+ result = df.rfloordiv(other, fill_value=1)
+
+ expected = pd.DataFrame(
+ {"A": [np.inf, 1.0, 0.0, 1.0], "B": [0.0, np.nan, 0.0, np.nan]}
+ )
+ tm.assert_frame_equal(result, expected)
+
+
+def test_dataframe_div_silenced():
+ # GH#26793
+ pdf1 = pd.DataFrame(
+ {
+ "A": np.arange(10),
+ "B": [np.nan, 1, 2, 3, 4] * 2,
+ "C": [np.nan] * 10,
+ "D": np.arange(10),
+ },
+ index=list("abcdefghij"),
+ columns=list("ABCD"),
+ )
+ pdf2 = pd.DataFrame(
+ np.random.randn(10, 4), index=list("abcdefghjk"), columns=list("ABCX")
+ )
+ with tm.assert_produces_warning(None):
+ pdf1.div(pdf2, fill_value=0)
| - [x] closes #27464, closes #26793
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
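For intuition on the masking rule being tested: with `fill_value=1`, the `A` column at index 0 computes `1.1 // 0`, and the fix ensures that yields `inf` (numpy division semantics) rather than `NaN`. A minimal plain-Python sketch of those semantics (the helper name `floordiv_fill` is made up here; pandas routes this through `dispatch_fill_zeros`):

```python
import math

def floordiv_fill(x, y):
    # Sketch of zero-division masking: nonzero // 0 -> signed infinity,
    # 0 // 0 -> NaN, matching numpy rather than Python's ZeroDivisionError.
    if y == 0:
        if x == 0:
            return float("nan")
        return math.copysign(float("inf"), x)
    return x // y

print(floordiv_fill(1.1, 0))  # inf  (the value the expected frame above contains)
```

This is only an illustration of the expected semantics, not the vectorized implementation in `pandas/core/ops/missing.py`.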
| https://api.github.com/repos/pandas-dev/pandas/pulls/28024 | 2019-08-20T01:04:19Z | 2019-08-20T17:45:06Z | 2019-08-20T17:45:06Z | 2019-08-20T18:42:02Z |
CLN: timeseries in plotting clean up | diff --git a/pandas/plotting/_matplotlib/timeseries.py b/pandas/plotting/_matplotlib/timeseries.py
index f3fcb090e9883..f160e50d8d99b 100644
--- a/pandas/plotting/_matplotlib/timeseries.py
+++ b/pandas/plotting/_matplotlib/timeseries.py
@@ -304,23 +304,6 @@ def _maybe_convert_index(ax, data):
# Do we need the rest for convenience?
-def format_timedelta_ticks(x, pos, n_decimals):
- """
- Convert seconds to 'D days HH:MM:SS.F'
- """
- s, ns = divmod(x, 1e9)
- m, s = divmod(s, 60)
- h, m = divmod(m, 60)
- d, h = divmod(h, 24)
- decimals = int(ns * 10 ** (n_decimals - 9))
- s = r"{:02d}:{:02d}:{:02d}".format(int(h), int(m), int(s))
- if n_decimals > 0:
- s += ".{{:0{:0d}d}}".format(n_decimals).format(decimals)
- if d != 0:
- s = "{:d} days ".format(int(d)) + s
- return s
-
-
def _format_coord(freq, t, y):
return "t = {0} y = {1:8f}".format(Period(ordinal=int(t), freq=freq), y)
diff --git a/pandas/tests/plotting/test_converter.py b/pandas/tests/plotting/test_converter.py
index 7001264c41c05..aabe16d5050f9 100644
--- a/pandas/tests/plotting/test_converter.py
+++ b/pandas/tests/plotting/test_converter.py
@@ -388,3 +388,21 @@ def test_convert_nested(self):
r1 = self.pc.convert([data, data], None, self.axis)
r2 = [self.pc.convert(data, None, self.axis) for _ in range(2)]
assert r1 == r2
+
+
+class TestTimeDeltaConverter:
+ """Test timedelta converter"""
+
+ @pytest.mark.parametrize(
+ "x, decimal, format_expected",
+ [
+ (0.0, 0, "00:00:00"),
+ (3972320000000, 1, "01:06:12.3"),
+ (713233432000000, 2, "8 days 06:07:13.43"),
+ (32423432000000, 4, "09:00:23.4320"),
+ ],
+ )
+ def test_format_timedelta_ticks(self, x, decimal, format_expected):
+ tdc = converter.TimeSeries_TimedeltaFormatter
+ result = tdc.format_timedelta_ticks(x, pos=None, n_decimals=decimal)
+ assert result == format_expected
| `format_timedelta_ticks` appears in both `timeseries.py` and `matplotlib.converter.py`, and there is no test for it. Looking through the code base, it is only used in `matplotlib.converter.py`, by `TimeSeries_TimedeltaFormatter`, so I removed the copy in `timeseries.py` and added a test for it.
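For reference, the conversion the new test exercises can be run standalone; this is a sketch mirroring the function body removed from `timeseries.py` (the kept version lives on `TimeSeries_TimedeltaFormatter`), with no matplotlib dependency:

```python
def format_timedelta_ticks(x, pos, n_decimals):
    """Convert a tick position in nanoseconds to 'D days HH:MM:SS.F'."""
    s, ns = divmod(x, 1e9)            # whole seconds and leftover nanoseconds
    m, s = divmod(s, 60)
    h, m = divmod(m, 60)
    d, h = divmod(h, 24)
    decimals = int(ns * 10 ** (n_decimals - 9))  # keep n_decimals fractional digits
    out = "{:02d}:{:02d}:{:02d}".format(int(h), int(m), int(s))
    if n_decimals > 0:
        out += ".{{:0{:0d}d}}".format(n_decimals).format(decimals)
    if d != 0:
        out = "{:d} days ".format(int(d)) + out
    return out

print(format_timedelta_ticks(713233432000000, None, 2))  # 8 days 06:07:13.43
```

The four parametrized cases in the new test all reduce to this arithmetic.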
- [ ] closes #xxxx
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/28020 | 2019-08-19T20:35:31Z | 2019-09-11T01:43:55Z | 2019-09-11T01:43:55Z | 2019-09-11T01:44:01Z |
Backport PR #28016 on branch 0.25.x (TST: fix flaky xfail) | diff --git a/pandas/tests/series/test_analytics.py b/pandas/tests/series/test_analytics.py
index 3a5a387b919be..1ddaa4692d741 100644
--- a/pandas/tests/series/test_analytics.py
+++ b/pandas/tests/series/test_analytics.py
@@ -1482,16 +1482,7 @@ def test_value_counts_with_nan(self):
@pytest.mark.parametrize(
"dtype",
- [
- "int_",
- "uint",
- "float_",
- "unicode_",
- "timedelta64[h]",
- pytest.param(
- "datetime64[D]", marks=pytest.mark.xfail(reason="GH#7996", strict=True)
- ),
- ],
+ ["int_", "uint", "float_", "unicode_", "timedelta64[h]", "datetime64[D]"],
)
def test_drop_duplicates_categorical_non_bool(self, dtype, ordered_fixture):
cat_array = np.array([1, 2, 3, 4, 5], dtype=np.dtype(dtype))
@@ -1499,6 +1490,10 @@ def test_drop_duplicates_categorical_non_bool(self, dtype, ordered_fixture):
# Test case 1
input1 = np.array([1, 2, 3, 3], dtype=np.dtype(dtype))
tc1 = Series(Categorical(input1, categories=cat_array, ordered=ordered_fixture))
+ if dtype == "datetime64[D]":
+ # pre-empt flaky xfail, tc1 values are seemingly random
+ if not (np.array(tc1) == input1).all():
+ pytest.xfail(reason="GH#7996")
expected = Series([False, False, False, True])
tm.assert_series_equal(tc1.duplicated(), expected)
@@ -1524,6 +1519,10 @@ def test_drop_duplicates_categorical_non_bool(self, dtype, ordered_fixture):
# Test case 2
input2 = np.array([1, 2, 3, 5, 3, 2, 4], dtype=np.dtype(dtype))
tc2 = Series(Categorical(input2, categories=cat_array, ordered=ordered_fixture))
+ if dtype == "datetime64[D]":
+ # pre-empt flaky xfail, tc2 values are seemingly random
+ if not (np.array(tc2) == input2).all():
+ pytest.xfail(reason="GH#7996")
expected = Series([False, False, False, False, True, True, False])
tm.assert_series_equal(tc2.duplicated(), expected)
| Backport PR #28016: TST: fix flaky xfail | https://api.github.com/repos/pandas-dev/pandas/pulls/28019 | 2019-08-19T18:29:18Z | 2019-08-20T06:45:49Z | 2019-08-20T06:45:49Z | 2019-08-20T06:45:49Z |
DOC/TST: Update the parquet (pyarrow >= 0.15) docs and tests regarding Categorical support | diff --git a/doc/source/user_guide/io.rst b/doc/source/user_guide/io.rst
index 2c8f66dd99e72..ee097c1f4d5e8 100644
--- a/doc/source/user_guide/io.rst
+++ b/doc/source/user_guide/io.rst
@@ -4710,7 +4710,8 @@ Several caveats.
indexes. This extra column can cause problems for non-Pandas consumers that are not expecting it. You can
force including or omitting indexes with the ``index`` argument, regardless of the underlying engine.
* Index level names, if specified, must be strings.
-* Categorical dtypes can be serialized to parquet, but will de-serialize as ``object`` dtype.
+* In the ``pyarrow`` engine, categorical dtypes for non-string types can be serialized to parquet, but will de-serialize as their primitive dtype.
+* The ``pyarrow`` engine preserves the ``ordered`` flag of categorical dtypes with string types. ``fastparquet`` does not preserve the ``ordered`` flag.
* Non supported types include ``Period`` and actual Python object types. These will raise a helpful error message
on an attempt at serialization.
@@ -4734,7 +4735,9 @@ See the documentation for `pyarrow <https://arrow.apache.org/docs/python/>`__ an
'd': np.arange(4.0, 7.0, dtype='float64'),
'e': [True, False, True],
'f': pd.date_range('20130101', periods=3),
- 'g': pd.date_range('20130101', periods=3, tz='US/Eastern')})
+ 'g': pd.date_range('20130101', periods=3, tz='US/Eastern'),
+ 'h': pd.Categorical(list('abc')),
+ 'i': pd.Categorical(list('abc'), ordered=True)})
df
df.dtypes
diff --git a/doc/source/whatsnew/v1.0.0.rst b/doc/source/whatsnew/v1.0.0.rst
index f8c4f9f3dc410..2b147f948adb1 100644
--- a/doc/source/whatsnew/v1.0.0.rst
+++ b/doc/source/whatsnew/v1.0.0.rst
@@ -176,6 +176,7 @@ Categorical
- Added test to assert the :func:`fillna` raises the correct ValueError message when the value isn't a value from categories (:issue:`13628`)
- Bug in :meth:`Categorical.astype` where ``NaN`` values were handled incorrectly when casting to int (:issue:`28406`)
- :meth:`Categorical.searchsorted` and :meth:`CategoricalIndex.searchsorted` now work on unordered categoricals also (:issue:`21667`)
+- Added test to assert roundtripping to parquet with :func:`DataFrame.to_parquet` or :func:`read_parquet` will preserve Categorical dtypes for string types (:issue:`27955`)
-
diff --git a/pandas/tests/io/test_parquet.py b/pandas/tests/io/test_parquet.py
index efc2b6d6c5b3d..2a95904d5668d 100644
--- a/pandas/tests/io/test_parquet.py
+++ b/pandas/tests/io/test_parquet.py
@@ -167,6 +167,7 @@ def compare(repeat):
df.to_parquet(path, **write_kwargs)
with catch_warnings(record=True):
actual = read_parquet(path, **read_kwargs)
+
tm.assert_frame_equal(expected, actual, check_names=check_names)
if path is None:
@@ -461,11 +462,26 @@ def test_unsupported(self, pa):
def test_categorical(self, pa):
# supported in >= 0.7.0
- df = pd.DataFrame({"a": pd.Categorical(list("abc"))})
+ df = pd.DataFrame()
+ df["a"] = pd.Categorical(list("abcdef"))
- # de-serialized as object
- expected = df.assign(a=df.a.astype(object))
- check_round_trip(df, pa, expected=expected)
+ # test for null, out-of-order values, and unobserved category
+ df["b"] = pd.Categorical(
+ ["bar", "foo", "foo", "bar", None, "bar"],
+ dtype=pd.CategoricalDtype(["foo", "bar", "baz"]),
+ )
+
+ # test for ordered flag
+ df["c"] = pd.Categorical(
+ ["a", "b", "c", "a", "c", "b"], categories=["b", "c", "d"], ordered=True
+ )
+
+ if LooseVersion(pyarrow.__version__) >= LooseVersion("0.15.0"):
+ check_round_trip(df, pa)
+ else:
+ # de-serialized as object for pyarrow < 0.15
+ expected = df.astype(object)
+ check_round_trip(df, pa, expected=expected)
def test_s3_roundtrip(self, df_compat, s3_resource, pa):
# GH #19134
| - [x] closes #27955
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/28018 | 2019-08-19T18:22:38Z | 2019-10-04T19:48:04Z | 2019-10-04T19:48:04Z | 2019-10-05T00:23:59Z |
Backport PR #27773 on branch 0.25.x (BUG: _can_use_numexpr fails when passed large Series) | diff --git a/doc/source/whatsnew/v0.25.1.rst b/doc/source/whatsnew/v0.25.1.rst
index 87e46f97d3157..cec927d73edca 100644
--- a/doc/source/whatsnew/v0.25.1.rst
+++ b/doc/source/whatsnew/v0.25.1.rst
@@ -54,7 +54,7 @@ Numeric
^^^^^^^
- Bug in :meth:`Series.interpolate` when using a timezone aware :class:`DatetimeIndex` (:issue:`27548`)
- Bug when printing negative floating point complex numbers would raise an ``IndexError`` (:issue:`27484`)
--
+- Bug where :class:`DataFrame` arithmetic operators such as :meth:`DataFrame.mul` with a :class:`Series` with axis=1 would raise an ``AttributeError`` on :class:`DataFrame` larger than the minimum threshold to invoke numexpr (:issue:`27636`)
-
Conversion
diff --git a/pandas/core/computation/expressions.py b/pandas/core/computation/expressions.py
index ea61467080291..9621fb1d65509 100644
--- a/pandas/core/computation/expressions.py
+++ b/pandas/core/computation/expressions.py
@@ -76,16 +76,17 @@ def _can_use_numexpr(op, op_str, a, b, dtype_check):
# required min elements (otherwise we are adding overhead)
if np.prod(a.shape) > _MIN_ELEMENTS:
-
# check for dtype compatibility
dtypes = set()
for o in [a, b]:
- if hasattr(o, "dtypes"):
+ # Series implements dtypes, check for dimension count as well
+ if hasattr(o, "dtypes") and o.ndim > 1:
s = o.dtypes.value_counts()
if len(s) > 1:
return False
dtypes |= set(s.index.astype(str))
- elif isinstance(o, np.ndarray):
+ # ndarray and Series Case
+ elif hasattr(o, "dtype"):
dtypes |= {o.dtype.name}
# allowed are a superset
diff --git a/pandas/tests/test_expressions.py b/pandas/tests/test_expressions.py
index 4070624985068..ca514f62f451d 100644
--- a/pandas/tests/test_expressions.py
+++ b/pandas/tests/test_expressions.py
@@ -66,7 +66,7 @@ def run_arithmetic(self, df, other, assert_func, check_dtype=False, test_flex=Tr
operator_name = "truediv"
if test_flex:
- op = lambda x, y: getattr(df, arith)(y)
+ op = lambda x, y: getattr(x, arith)(y)
op.__name__ = arith
else:
op = getattr(operator, operator_name)
@@ -318,7 +318,6 @@ def testit():
for f in [self.frame, self.frame2, self.mixed, self.mixed2]:
for cond in [True, False]:
-
c = np.empty(f.shape, dtype=np.bool_)
c.fill(cond)
result = expr.where(c, f.values, f.values + 1)
@@ -431,3 +430,29 @@ def test_bool_ops_column_name_dtype(self, test_input, expected):
# GH 22383 - .ne fails if columns containing column name 'dtype'
result = test_input.loc[:, ["a", "dtype"]].ne(test_input.loc[:, ["a", "dtype"]])
assert_frame_equal(result, expected)
+
+ @pytest.mark.parametrize(
+ "arith", ("add", "sub", "mul", "mod", "truediv", "floordiv")
+ )
+ @pytest.mark.parametrize("axis", (0, 1))
+ def test_frame_series_axis(self, axis, arith):
+ # GH#26736 Dataframe.floordiv(Series, axis=1) fails
+ if axis == 1 and arith == "floordiv":
+ pytest.xfail("'floordiv' does not succeed with axis=1 #27636")
+
+ df = self.frame
+ if axis == 1:
+ other = self.frame.iloc[0, :]
+ else:
+ other = self.frame.iloc[:, 0]
+
+ expr._MIN_ELEMENTS = 0
+
+ op_func = getattr(df, arith)
+
+ expr.set_use_numexpr(False)
+ expected = op_func(other, axis=axis)
+ expr.set_use_numexpr(True)
+
+ result = op_func(other, axis=axis)
+ assert_frame_equal(expected, result)
| Backport PR #27773: BUG: _can_use_numexpr fails when passed large Series | https://api.github.com/repos/pandas-dev/pandas/pulls/28017 | 2019-08-19T17:20:59Z | 2019-08-20T06:45:31Z | 2019-08-20T06:45:31Z | 2019-08-20T06:45:31Z |
TST: fix flaky xfail | diff --git a/pandas/tests/series/test_analytics.py b/pandas/tests/series/test_analytics.py
index 3a5a387b919be..1ddaa4692d741 100644
--- a/pandas/tests/series/test_analytics.py
+++ b/pandas/tests/series/test_analytics.py
@@ -1482,16 +1482,7 @@ def test_value_counts_with_nan(self):
@pytest.mark.parametrize(
"dtype",
- [
- "int_",
- "uint",
- "float_",
- "unicode_",
- "timedelta64[h]",
- pytest.param(
- "datetime64[D]", marks=pytest.mark.xfail(reason="GH#7996", strict=True)
- ),
- ],
+ ["int_", "uint", "float_", "unicode_", "timedelta64[h]", "datetime64[D]"],
)
def test_drop_duplicates_categorical_non_bool(self, dtype, ordered_fixture):
cat_array = np.array([1, 2, 3, 4, 5], dtype=np.dtype(dtype))
@@ -1499,6 +1490,10 @@ def test_drop_duplicates_categorical_non_bool(self, dtype, ordered_fixture):
# Test case 1
input1 = np.array([1, 2, 3, 3], dtype=np.dtype(dtype))
tc1 = Series(Categorical(input1, categories=cat_array, ordered=ordered_fixture))
+ if dtype == "datetime64[D]":
+ # pre-empt flaky xfail, tc1 values are seemingly random
+ if not (np.array(tc1) == input1).all():
+ pytest.xfail(reason="GH#7996")
expected = Series([False, False, False, True])
tm.assert_series_equal(tc1.duplicated(), expected)
@@ -1524,6 +1519,10 @@ def test_drop_duplicates_categorical_non_bool(self, dtype, ordered_fixture):
# Test case 2
input2 = np.array([1, 2, 3, 5, 3, 2, 4], dtype=np.dtype(dtype))
tc2 = Series(Categorical(input2, categories=cat_array, ordered=ordered_fixture))
+ if dtype == "datetime64[D]":
+ # pre-empty flaky xfail, tc2 values are seemingly-random
+ if not (np.array(tc2) == input2).all():
+ pytest.xfail(reason="GH#7996")
expected = Series([False, False, False, False, True, True, False])
tm.assert_series_equal(tc2.duplicated(), expected)
| xfail in a more specific circumstance
- [x] closes #28011
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/28016 | 2019-08-19T17:13:13Z | 2019-08-19T18:29:07Z | 2019-08-19T18:29:07Z | 2019-08-19T18:41:50Z |
WEB: Adding new pandas website | diff --git a/ci/code_checks.sh b/ci/code_checks.sh
index f839d86318e2e..b03c4f2238445 100755
--- a/ci/code_checks.sh
+++ b/ci/code_checks.sh
@@ -188,9 +188,9 @@ if [[ -z "$CHECK" || "$CHECK" == "patterns" ]]; then
set -o pipefail
if [[ "$AZURE" == "true" ]]; then
# we exclude all c/cpp files as the c/cpp files of pandas code base are tested when Linting .c and .h files
- ! grep -n '--exclude=*.'{svg,c,cpp,html} --exclude-dir=env -RI "\s$" * | awk -F ":" '{print "##vso[task.logissue type=error;sourcepath=" $1 ";linenumber=" $2 ";] Tailing whitespaces found: " $3}'
+ ! grep -n '--exclude=*.'{svg,c,cpp,html,js} --exclude-dir=env -RI "\s$" * | awk -F ":" '{print "##vso[task.logissue type=error;sourcepath=" $1 ";linenumber=" $2 ";] Tailing whitespaces found: " $3}'
else
- ! grep -n '--exclude=*.'{svg,c,cpp,html} --exclude-dir=env -RI "\s$" * | awk -F ":" '{print $1 ":" $2 ":Tailing whitespaces found: " $3}'
+ ! grep -n '--exclude=*.'{svg,c,cpp,html,js} --exclude-dir=env -RI "\s$" * | awk -F ":" '{print $1 ":" $2 ":Tailing whitespaces found: " $3}'
fi
RET=$(($RET + $?)) ; echo $MSG "DONE"
fi
diff --git a/environment.yml b/environment.yml
index 89089dcea8c99..6187321cd9242 100644
--- a/environment.yml
+++ b/environment.yml
@@ -36,6 +36,12 @@ dependencies:
- nbsphinx
- pandoc
+ # web (jinja2 is also needed, but it's also an optional pandas dependency)
+ - markdown
+ - feedparser
+ - pyyaml
+ - requests
+
# testing
- boto3
- botocore>=1.11
diff --git a/requirements-dev.txt b/requirements-dev.txt
index 3f4636043dbac..fd8e6378240b4 100644
--- a/requirements-dev.txt
+++ b/requirements-dev.txt
@@ -17,6 +17,10 @@ numpydoc>=0.9.0
nbconvert>=5.4.1
nbsphinx
pandoc
+markdown
+feedparser
+pyyaml
+requests
boto3
botocore>=1.11
hypothesis>=3.82
diff --git a/web/README.md b/web/README.md
new file mode 100644
index 0000000000000..7396fbd0833a1
--- /dev/null
+++ b/web/README.md
@@ -0,0 +1,12 @@
+Directory containing the pandas website (hosted at https://pandas.io).
+
+The website sources are in `web/pandas/`, which also includes a `config.yml` file
+containing the settings to build the website. The website is generated with the
+command `./pandas_web.py pandas`. See `./pandas_web.py --help` and the header of
+the script for more information and options.
+
+After building the website, serve it with an http server rather than opening
+the local files directly in the browser, since the links and the image sources
+are absolute to where they are served from. The easiest way to run an http
+server locally is to run `python -m http.server` from the
+`web/build/` directory.
diff --git a/web/pandas/_templates/layout.html b/web/pandas/_templates/layout.html
new file mode 100644
index 0000000000000..253318182f30c
--- /dev/null
+++ b/web/pandas/_templates/layout.html
@@ -0,0 +1,85 @@
+<!DOCTYPE html>
+<html>
+ <head>
+ <script type="text/javascript">
+ var _gaq = _gaq || []; _gaq.push(['_setAccount', 'UA-27880019-2']); _gaq.push(['_trackPageview']);
+ (function() {
+ var ga = document.createElement('script'); ga.type = 'text/javascript'; ga.async = true;
+ ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
+ var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(ga, s);
+ })();
+ </script>
+ <title>pandas - Python Data Analysis Library</title>
+ <meta charset="utf-8">
+ <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
+ <link rel="stylesheet"
+ href="https://maxcdn.bootstrapcdn.com/bootstrap/4.0.0/css/bootstrap.min.css"
+ integrity="sha384-Gn5384xqQ1aoWXA+058RXPxPg6fy4IWvTNh0E263XmFcJlSAwiGgFAW/dAiS6JXm"
+ crossorigin="anonymous">
+ {% for stylesheet in static.css %}
+ <link rel="stylesheet"
+ href="{{ base_url }}{{ stylesheet }}">
+ {% endfor %}
+ </head>
+ <body>
+ <header>
+ <nav class="navbar navbar-expand-md navbar-dark fixed-top bg-dark">
+ <button class="navbar-toggler" type="button" data-toggle="collapse" data-target="#nav-content" aria-controls="nav-content" aria-expanded="false" aria-label="Toggle navigation">
+ <span class="navbar-toggler-icon"></span>
+ </button>
+
+ {% if static.logo %}<a class="navbar-brand" href="{{ base_url }}/"><img alt="" src="{{ base_url }}{{ static.logo }}"/></a>{% endif %}
+
+ <div class="collapse navbar-collapse" id="nav-content">
+ <ul class="navbar-nav">
+ {% for item in navbar %}
+ {% if not item.has_subitems %}
+ <li class="nav-item">
+ <a class="nav-link" href="{% if not item.target.startswith("http") %}{{ base_url }}{% endif %}{{ item.target }}">{{ item.name }}</a>
+ </li>
+ {% else %}
+ <li class="nav-item dropdown">
+ <a class="nav-link dropdown-toggle"
+ data-toggle="dropdown"
+ id="{{ item.slug }}"
+ href="#"
+ role="button"
+ aria-haspopup="true"
+ aria-expanded="false">{{ item.name }}</a>
+ <div class="dropdown-menu" aria-labelledby="{{ item.slug }}">
+ {% for subitem in item.target %}
+ <a class="dropdown-item" href="{% if not subitem.target.startswith("http") %}{{ base_url }}{% endif %}{{ subitem.target }}">{{ subitem.name }}</a>
+ {% endfor %}
+ </div>
+ </li>
+ {% endif %}
+ {% endfor %}
+ </ul>
+ </div>
+ </nav>
+ </header>
+ <main role="main">
+ <div class="container">
+ {% block body %}{% endblock %}
+ </div>
+ </main>
+ <footer class="container pt-4 pt-md-5 border-top">
+ <p class="float-right">
+ <a href="#">Back to top</a>
+ </p>
+ <p>
+ © 2009 - 2019, pandas team
+ </p>
+ </footer>
+
+ <script src="https://code.jquery.com/jquery-3.2.1.slim.min.js"
+ integrity="sha384-KJ3o2DKtIkvYIK3UENzmM7KCkRr/rE9/Qpg6aAZGJwFDMVNA/GpGFF93hXpG5KkN"
+ crossorigin="anonymous"></script>
+ <script src="https://cdnjs.cloudflare.com/ajax/libs/popper.js/1.12.9/umd/popper.min.js"
+ integrity="sha384-ApNbgh9B+Y1QKtv3Rn7W3mgPxhU9K/ScQsAP7hUibX39j7fakFPskvXusvfa0b4Q"
+ crossorigin="anonymous"></script>
+ <script src="https://maxcdn.bootstrapcdn.com/bootstrap/4.0.0/js/bootstrap.min.js"
+ integrity="sha384-JZR6Spejh4U02d8jOt6vLEHfe/JQGiRRSQQxSfFWpi1MquVdAyjUar5+76PVCmYl"
+ crossorigin="anonymous"></script>
+ </body>
+</html>
diff --git a/web/pandas/blog.html b/web/pandas/blog.html
new file mode 100644
index 0000000000000..ffe6f97d679e4
--- /dev/null
+++ b/web/pandas/blog.html
@@ -0,0 +1,14 @@
+{% extends "layout.html" %}
+
+{% block body %}
+ {% for post in blog.posts %}
+ <div class="card">
+ <div class="card-body">
+ <h3 class="card-title"><a href="{{post.link }}" target="_blank">{{ post.title }}</a></h3>
+ <h6 class="card-subtitle">Source: {{ post.feed }} | Author: {{ post.author }} | Published: {{ post.published.strftime("%b %d, %Y") }}</h6>
+ <div class="card-text">{{ post.summary }}</div>
+ <a class="card-link" href="{{post.link }}" target="_blank">Read</a>
+ </div>
+ </div>
+ {% endfor %}
+{% endblock %}
diff --git a/web/pandas/community/about.md b/web/pandas/community/about.md
new file mode 100644
index 0000000000000..4e50d280d2a10
--- /dev/null
+++ b/web/pandas/community/about.md
@@ -0,0 +1,86 @@
+# About pandas
+
+## History of development
+
+In 2008, _pandas_ development began at [AQR Capital Management](http://www.aqr.com).
+By the end of 2009 it had been [open sourced](http://en.wikipedia.org/wiki/Open_source),
+and is actively supported today by a community of like-minded individuals around the world who
+contribute their valuable time and energy to help make open source _pandas_
+possible. Thank you to [all of our contributors](team.html).
+
+Since 2015, _pandas_ is a [NumFOCUS sponsored project](https://numfocus.org/sponsored-projects).
+This will help ensure the success of development of _pandas_ as a world-class open-source project.
+
+### Timeline
+
+- **2008**: Development of _pandas_ started
+- **2009**: _pandas_ becomes open source
+- **2012**: First edition of _Python for Data Analysis_ is published
+- **2015**: _pandas_ becomes a [NumFOCUS sponsored project](https://numfocus.org/sponsored-projects)
+- **2018**: First in-person core developer sprint
+
+## Library Highlights
+
+- A fast and efficient **DataFrame** object for data manipulation with
+ integrated indexing;
+
+- Tools for **reading and writing data** between in-memory data structures and
+ different formats: CSV and text files, Microsoft Excel, SQL databases, and
+ the fast HDF5 format;
+
+- Intelligent **data alignment** and integrated handling of **missing data**:
+ gain automatic label-based alignment in computations and easily manipulate
+ messy data into an orderly form;
+
+- Flexible **reshaping** and pivoting of data sets;
+
+- Intelligent label-based **slicing**, **fancy indexing**, and **subsetting**
+ of large data sets;
+
+- Columns can be inserted and deleted from data structures for **size
+ mutability**;
+
+- Aggregating or transforming data with a powerful **group by** engine
+ allowing split-apply-combine operations on data sets;
+
+- High performance **merging and joining** of data sets;
+
+- **Hierarchical axis indexing** provides an intuitive way of working with
+ high-dimensional data in a lower-dimensional data structure;
+
+- **Time series**-functionality: date range generation and frequency
+ conversion, moving window statistics, moving window linear regressions, date
+ shifting and lagging. Even create domain-specific time offsets and join time
+ series without losing data;
+
+- Highly **optimized for performance**, with critical code paths written in
+ [Cython](http://www.cython.org/) or C.
+
+- Python with *pandas* is in use in a wide variety of **academic and
+ commercial** domains, including Finance, Neuroscience, Economics,
+ Statistics, Advertising, Web Analytics, and more.
+
+## Mission
+
+_pandas_ aims to be the fundamental high-level building block for doing practical,
+real world data analysis in Python.
+Additionally, it has the broader goal of becoming the most powerful and flexible
+open source data analysis / manipulation tool available in any language.
+
+## Vision
+
+A world where data analytics and manipulation software is:
+
+- Accessible to everyone
+- Free for users to use and modify
+- Flexible
+- Powerful
+- Easy to use
+- Fast
+
+## Values
+
+It is core to _pandas_ to be respectful and welcoming to everybody:
+users, contributors and the broader community, regardless of level of experience,
+gender, gender identity and expression, sexual orientation, disability,
+personal appearance, body size, race, ethnicity, age, religion, or nationality.
diff --git a/web/pandas/community/citing.md b/web/pandas/community/citing.md
new file mode 100644
index 0000000000000..6bad948bb3736
--- /dev/null
+++ b/web/pandas/community/citing.md
@@ -0,0 +1,46 @@
+# Citing pandas
+
+## Citing
+
+If you use _pandas_ for a scientific publication, we would appreciate citations to one of the following papers:
+
+- [Data structures for statistical computing in python](http://conference.scipy.org/proceedings/scipy2010/pdfs/mckinney.pdf),
+ McKinney, Proceedings of the 9th Python in Science Conference, Volume 445, 2010.
+
+ @inproceedings{mckinney2010data,
+ title={Data structures for statistical computing in python},
+ author={Wes McKinney},
+ booktitle={Proceedings of the 9th Python in Science Conference},
+ volume={445},
+ pages={51--56},
+ year={2010},
+ organization={Austin, TX}
+ }
+
+
+- [pandas: a foundational Python library for data analysis and statistics](https://www.scribd.com/document/71048089/pandas-a-Foundational-Python-Library-for-Data-Analysis-and-Statistics),
+ McKinney, Python for High Performance and Scientific Computing, Volume 14, 2011.
+
+ @article{mckinney2011pandas,
+ title={pandas: a foundational Python library for data analysis and statistics},
+ author={Wes McKinney},
+ journal={Python for High Performance and Scientific Computing},
+ volume={14},
+ year={2011}
+ }
+
+## Brand and logo
+
+When using the project name _pandas_, please use it in lower case, even at the beginning of a sentence.
+
+The official logo of _pandas_ is:
+
+
+
+You can download a `svg` version of the logo [here]({{ base_url }}/static/img/pandas.svg).
+
+When using the logo, please follow the next directives:
+
+- Leave enough margin around the logo
+- Do not distort the logo by changing its proportions
+- Do not place text or other elements on top of the logo
diff --git a/web/pandas/community/coc.md b/web/pandas/community/coc.md
new file mode 100644
index 0000000000000..2841349fdb556
--- /dev/null
+++ b/web/pandas/community/coc.md
@@ -0,0 +1,63 @@
+# Contributor Code of Conduct
+
+As contributors and maintainers of this project, and in the interest of
+fostering an open and welcoming community, we pledge to respect all people who
+contribute through reporting issues, posting feature requests, updating
+documentation, submitting pull requests or patches, and other activities.
+
+We are committed to making participation in this project a harassment-free
+experience for everyone, regardless of level of experience, gender, gender
+identity and expression, sexual orientation, disability, personal appearance,
+body size, race, ethnicity, age, religion, or nationality.
+
+Examples of unacceptable behavior by participants include:
+
+* The use of sexualized language or imagery
+* Personal attacks
+* Trolling or insulting/derogatory comments
+* Public or private harassment
+* Publishing others' private information, such as physical or electronic
+ addresses, without explicit permission
+* Other unethical or unprofessional conduct
+
+Project maintainers have the right and responsibility to remove, edit, or
+reject comments, commits, code, wiki edits, issues, and other contributions
+that are not aligned to this Code of Conduct, or to ban temporarily or
+permanently any contributor for other behaviors that they deem inappropriate,
+threatening, offensive, or harmful.
+
+By adopting this Code of Conduct, project maintainers commit themselves to
+fairly and consistently applying these principles to every aspect of managing
+this project. Project maintainers who do not follow or enforce the Code of
+Conduct may be permanently removed from the project team.
+
+This Code of Conduct applies both within project spaces and in public spaces
+when an individual is representing the project or its community.
+
+A working group of community members is committed to promptly addressing any
+reported issues. The working group is made up of pandas contributors and users.
+Instances of abusive, harassing, or otherwise unacceptable behavior may be
+reported by contacting the working group by e-mail (pandas-coc@googlegroups.com).
+Messages sent to this e-mail address will not be publicly visible but only to
+the working group members. The working group currently includes
+
+<ul>
+ {% for person in maintainers.coc %}
+ <li>{{ person }}</li>
+ {% endfor %}
+</ul>
+
+All complaints will be reviewed and investigated and will result in a response
+that is deemed necessary and appropriate to the circumstances. Maintainers are
+obligated to maintain confidentiality with regard to the reporter of an
+incident.
+
+This Code of Conduct is adapted from the [Contributor Covenant][homepage],
+version 1.3.0, available at
+[http://contributor-covenant.org/version/1/3/0/][version],
+and the [Swift Code of Conduct][swift].
+
+[homepage]: http://contributor-covenant.org
+[version]: http://contributor-covenant.org/version/1/3/0/
+[swift]: https://swift.org/community/#code-of-conduct
+
diff --git a/web/pandas/community/ecosystem.md b/web/pandas/community/ecosystem.md
new file mode 100644
index 0000000000000..af27c31b52d50
--- /dev/null
+++ b/web/pandas/community/ecosystem.md
@@ -0,0 +1,370 @@
+# Pandas ecosystem
+
+Increasingly, packages are being built on top of pandas to address
+specific needs in data preparation, analysis and visualization. This is
+encouraging because it means pandas is not only helping users to handle
+their data tasks but also that it provides a better starting point for
+developers to build powerful and more focused data tools. The creation
+of libraries that complement pandas' functionality also allows pandas
+development to remain focused on its original requirements.
+
+This is a non-exhaustive list of projects that build on pandas in order
+to provide tools in the PyData space. For a list of projects that depend
+on pandas, see the [libraries.io usage page for
+pandas](https://libraries.io/pypi/pandas/usage) or [search pypi for
+pandas](https://pypi.org/search/?q=pandas).
+
+We'd like to make it easier for users to find these projects. If you
+know of other substantial projects that you feel should be on this list,
+please let us know.
+
+## Statistics and machine learning
+
+### [Statsmodels](https://www.statsmodels.org/)
+
+Statsmodels is the prominent Python "statistics and econometrics
+library" and it has a long-standing special relationship with pandas.
+Statsmodels provides powerful statistics, econometrics, analysis and
+modeling functionality that is out of pandas' scope. Statsmodels
+leverages pandas objects as the underlying data container for
+computation.
+
+### [sklearn-pandas](https://github.com/paulgb/sklearn-pandas)
+
+Use pandas DataFrames in your [scikit-learn](https://scikit-learn.org/)
+ML pipeline.
+
+### [Featuretools](https://github.com/featuretools/featuretools/)
+
+Featuretools is a Python library for automated feature engineering built
+on top of pandas. It excels at transforming temporal and relational
+datasets into feature matrices for machine learning using reusable
+feature engineering "primitives". Users can contribute their own
+primitives in Python and share them with the rest of the community.
+
+## Visualization
+
+### [Altair](https://altair-viz.github.io/)
+
+Altair is a declarative statistical visualization library for Python.
+With Altair, you can spend more time understanding your data and its
+meaning. Altair's API is simple, friendly and consistent and built on
+top of the powerful Vega-Lite JSON specification. This elegant
+simplicity produces beautiful and effective visualizations with a
+minimal amount of code. Altair works with Pandas DataFrames.
+
+### [Bokeh](https://bokeh.pydata.org)
+
+Bokeh is a Python interactive visualization library for large datasets
+that natively uses the latest web technologies. Its goal is to provide
+elegant, concise construction of novel graphics in the style of
+Protovis/D3, while delivering high-performance interactivity over large
+data to thin clients.
+
+[Pandas-Bokeh](https://github.com/PatrikHlobil/Pandas-Bokeh) provides a
+high level API for Bokeh that can be loaded as a native Pandas plotting
+backend via
+
+```
+pd.set_option("plotting.backend", "pandas_bokeh")
+```
+
+It is very similar to the matplotlib plotting backend, but provides
+interactive web-based charts and maps.
+
+### [seaborn](https://seaborn.pydata.org)
+
+Seaborn is a Python visualization library based on
+[matplotlib](https://matplotlib.org). It provides a high-level,
+dataset-oriented interface for creating attractive statistical graphics.
+The plotting functions in seaborn understand pandas objects and leverage
+pandas grouping operations internally to support concise specification
+of complex visualizations. Seaborn also goes beyond matplotlib and
+pandas with the option to perform statistical estimation while plotting,
+aggregating across observations and visualizing the fit of statistical
+models to emphasize patterns in a dataset.
+
+### [yhat/ggpy](https://github.com/yhat/ggpy)
+
+Hadley Wickham's [ggplot2](https://ggplot2.tidyverse.org/) is a
+foundational exploratory visualization package for the R language. Based
+on ["The Grammar of
+Graphics"](https://www.cs.uic.edu/~wilkinson/TheGrammarOfGraphics/GOG.html),
+it provides a powerful, declarative and extremely general way to
+generate bespoke plots of any kind of data. It's really quite
+incredible. Various implementations in other languages are available,
+but a faithful implementation for Python users has long been missing.
+Although still young (as of Jan-2014), the
+[yhat/ggpy](https://github.com/yhat/ggpy) project has been progressing
+quickly in that direction.
+
+### [IPython Vega](https://github.com/vega/ipyvega)
+
+[IPython Vega](https://github.com/vega/ipyvega) leverages
+[Vega](https://github.com/trifacta/vega) to create plots
+within Jupyter Notebook.
+
+### [Plotly](https://plot.ly/python)
+
+[Plotly's](https://plot.ly/) [Python API](https://plot.ly/python/)
+enables interactive figures and web shareability. Maps, 2D, 3D, and
+live-streaming graphs are rendered with WebGL and
+[D3.js](https://d3js.org/). The library supports plotting directly from
+a pandas DataFrame and cloud-based collaboration. Users of [matplotlib,
+ggplot for Python, and
+Seaborn](https://plot.ly/python/matplotlib-to-plotly-tutorial/) can
+convert figures into interactive web-based plots. Plots can be drawn in
+[IPython Notebooks](https://plot.ly/ipython-notebooks/), edited with R
+or MATLAB, modified in a GUI, or embedded in apps and dashboards. Plotly
+is free for unlimited sharing, and has
+[cloud](https://plot.ly/product/plans/),
+[offline](https://plot.ly/python/offline/), or
+[on-premise](https://plot.ly/product/enterprise/) accounts for private
+use.
+
+### [QtPandas](https://github.com/draperjames/qtpandas)
+
+Spun off from the main pandas library, the
+[qtpandas](https://github.com/draperjames/qtpandas) library enables
+DataFrame visualization and manipulation in PyQt4 and PySide
+applications.
+
+## IDE
+
+### [IPython](https://ipython.org/documentation.html)
+
+IPython is an interactive command shell and distributed computing
+environment. IPython tab completion works with Pandas methods and also
+attributes like DataFrame columns.
+
+### [Jupyter Notebook / Jupyter Lab](https://jupyter.org)
+
+Jupyter Notebook is a web application for creating Jupyter notebooks. A
+Jupyter notebook is a JSON document containing an ordered list of
+input/output cells which can contain code, text, mathematics, plots and
+rich media. Jupyter notebooks can be converted to a number of open
+standard output formats (HTML, HTML presentation slides, LaTeX, PDF,
+ReStructuredText, Markdown, Python) through 'Download As' in the web
+interface and `jupyter nbconvert` in a shell.
+
+Pandas DataFrames implement `_repr_html_` and `_repr_latex_` methods
+which are utilized by Jupyter Notebook for displaying (abbreviated) HTML
+or LaTeX tables. LaTeX output is properly escaped. (Note: HTML tables
+may or may not be compatible with non-HTML Jupyter output formats.)
+
+See [Options and Settings](https://pandas.pydata.org/pandas-docs/stable/user_guide/options.html)
+in the pandas documentation for the available `display.` settings.
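+
+The mechanism behind this display is plain Python rather than Jupyter-specific
+magic: the frontend looks for a `_repr_html_` method on an object and falls
+back to `repr` otherwise. A minimal sketch of that protocol (the `TinyTable`
+class and `display_html` helper below are invented for illustration; they are
+not part of pandas or Jupyter):

```python
# Sketch of the rich-display protocol Jupyter uses for pandas objects:
# prefer _repr_html_ when the object provides it, else plain repr().
class TinyTable:
    def __init__(self, rows):
        self.rows = rows

    def __repr__(self):
        # Plain-text fallback, one row per line.
        return "\n".join(" | ".join(map(str, r)) for r in self.rows)

    def _repr_html_(self):
        # HTML representation picked up by notebook frontends.
        body = "".join(
            "<tr>" + "".join(f"<td>{cell}</td>" for cell in row) + "</tr>"
            for row in self.rows
        )
        return f"<table>{body}</table>"


def display_html(obj):
    """Mimic the frontend: prefer HTML, fall back to plain text."""
    method = getattr(obj, "_repr_html_", None)
    return method() if method is not None else repr(obj)
```

+`display_html(TinyTable([[1, 2]]))` yields
+`<table><tr><td>1</td><td>2</td></tr></table>`, which is essentially what
+happens when a DataFrame is shown in a notebook cell.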
+
+### [quantopian/qgrid](https://github.com/quantopian/qgrid)
+
+qgrid is "an interactive grid for sorting and filtering DataFrames in
+IPython Notebook" built with SlickGrid.
+
+### [Spyder](https://www.spyder-ide.org/)
+
+Spyder is a cross-platform PyQt-based IDE combining the editing,
+analysis, debugging and profiling functionality of a software
+development tool with the data exploration, interactive execution, deep
+inspection and rich visualization capabilities of a scientific
+environment like MATLAB or Rstudio.
+
+Its [Variable
+Explorer](https://docs.spyder-ide.org/variableexplorer.html) allows
+users to view, manipulate and edit pandas `Index`, `Series`, and
+`DataFrame` objects like a "spreadsheet", including copying and
+modifying values, sorting, displaying a "heatmap", converting data
+types and more. Pandas objects can also be renamed, duplicated, have new
+columns added, be copied/pasted to/from the clipboard (as TSV), and
+saved/loaded to/from a file. Spyder can also import data from a variety
+of plain text and binary files or the clipboard into a new pandas
+DataFrame via a sophisticated import wizard.
+
+Most pandas classes, methods and data attributes can be autocompleted in
+Spyder's [Editor](https://docs.spyder-ide.org/editor.html) and [IPython
+Console](https://docs.spyder-ide.org/ipythonconsole.html), and Spyder's
+[Help pane](https://docs.spyder-ide.org/help.html) can retrieve and
+render Numpydoc documentation on pandas objects in rich text with Sphinx
+both automatically and on-demand.
+
+## API
+
+### [pandas-datareader](https://github.com/pydata/pandas-datareader)
+
+`pandas-datareader` is a remote data access library for pandas
+(PyPI:`pandas-datareader`). It is based on functionality that was
+located in `pandas.io.data` and `pandas.io.wb` but was split off in
+v0.19. See more in the [pandas-datareader
+docs](https://pandas-datareader.readthedocs.io/en/latest/).
+
+The following data feeds are available:
+
+- Google Finance
+- Tiingo
+- Morningstar
+- IEX
+- Robinhood
+- Enigma
+- Quandl
+- FRED
+- Fama/French
+- World Bank
+- OECD
+- Eurostat
+- TSP Fund Data
+- Nasdaq Trader Symbol Definitions
+- Stooq Index Data
+- MOEX Data
+
+### [quandl/Python](https://github.com/quandl/Python)
+
+Quandl API for Python wraps the Quandl REST API to return Pandas
+DataFrames with timeseries indexes.
+
+### [pydatastream](https://github.com/vfilimonov/pydatastream)
+
+PyDatastream is a Python interface to the [Thomson Dataworks Enterprise
+(DWE/Datastream)](http://dataworks.thomson.com/Dataworks/Enterprise/1.0/)
+SOAP API to return indexed Pandas DataFrames with financial data. This
+package requires valid credentials for this API (not free).
+
+### [pandaSDMX](https://pandasdmx.readthedocs.io)
+
+pandaSDMX is a library to retrieve and work with statistical data and
+metadata disseminated in [SDMX](https://www.sdmx.org) 2.1, an
+ISO-standard widely used by institutions such as statistics offices,
+central banks, and international organisations. pandaSDMX can expose
+datasets and related structural metadata including data flows,
+code-lists, and data structure definitions as pandas Series or
+MultiIndexed DataFrames.
+
+### [fredapi](https://github.com/mortada/fredapi)
+
+fredapi is a Python interface to the [Federal Reserve Economic Data
+(FRED)](https://fred.stlouisfed.org/) provided by the Federal Reserve
+Bank of St. Louis. It works with both the FRED database and the ALFRED
+database, which contains point-in-time data (i.e. historic data
+revisions). fredapi provides a wrapper in Python to the FRED HTTP API,
+and also provides several convenient methods for parsing and analyzing
+point-in-time data from ALFRED. fredapi makes use of pandas and returns
+data in a Series or DataFrame. This module requires a FRED API key that
+you can obtain for free on the FRED website.
+
+## Domain specific
+
+### [Geopandas](https://github.com/kjordahl/geopandas)
+
+Geopandas extends pandas data objects to include geographic information
+which supports geometric operations. If your work entails maps and
+geographical coordinates, and you love pandas, you should take a close
+look at Geopandas.
+
+### [xarray](https://github.com/pydata/xarray)
+
+xarray brings the labeled data power of pandas to the physical sciences
+by providing N-dimensional variants of the core pandas data structures.
+It aims to provide a pandas-like and pandas-compatible toolkit for
+analytics on multi-dimensional arrays, rather than the tabular data at
+which pandas excels.
+
+## Out-of-core
+
+### [Blaze](http://blaze.pydata.org/)
+
+Blaze provides a standard API for doing computations with various
+in-memory and on-disk backends: NumPy, Pandas, SQLAlchemy, MongoDB,
+PyTables, PySpark.
+
+### [Dask](https://dask.readthedocs.io/en/latest/)
+
+Dask is a flexible parallel computing library for analytics. Dask
+provides a familiar `DataFrame` interface for out-of-core, parallel and
+distributed computing.
+
+### [Dask-ML](https://dask-ml.readthedocs.io/en/latest/)
+
+Dask-ML enables parallel and distributed machine learning using Dask
+alongside existing machine learning libraries like Scikit-Learn,
+XGBoost, and TensorFlow.
+
+### [Koalas](https://koalas.readthedocs.io/en/latest/)
+
+Koalas provides a familiar pandas DataFrame interface on top of Apache
+Spark. It enables users to leverage multi-cores on one machine or a
+cluster of machines to speed up or scale their DataFrame code.
+
+### [Odo](http://odo.pydata.org)
+
+Odo provides a uniform API for moving data between different formats. It
+uses pandas' own `read_csv` for CSV IO and leverages many existing
+packages such as PyTables, h5py, and pymongo to move data between
+non-pandas formats. Its graph-based approach is also extensible by end
+users for custom formats that may be too specific for the core of odo.
+
+### [Ray](https://ray.readthedocs.io/en/latest/pandas_on_ray.html)
+
+Pandas on Ray is an early stage DataFrame library that wraps Pandas and
+transparently distributes the data and computation. The user does not
+need to know how many cores their system has, nor do they need to
+specify how to distribute the data. In fact, users can continue using
+their previous Pandas notebooks while experiencing a considerable
+speedup from Pandas on Ray, even on a single machine. Only a
+modification of the import statement is needed, as we demonstrate below.
+Once you've changed your import statement, you're ready to use Pandas on
+Ray just like you would Pandas.
+
+```
+# import pandas as pd
+import ray.dataframe as pd
+```
+
+### [Vaex](https://docs.vaex.io/)
+
+Vaex is a Python library for out-of-core DataFrames (similar to
+pandas), used to visualize and explore big tabular datasets. It can
+calculate statistics such as mean, sum, count, standard deviation etc.
+on an N-dimensional grid at up to a billion (10⁹) objects/rows per
+second. Visualization is done using histograms, density plots and 3D
+volume rendering, allowing interactive exploration of big data. Vaex
+uses memory mapping, a zero memory copy policy, and lazy computations
+for best performance (no memory wasted).
+
+Vaex provides functions to convert to and from pandas:
+
+- `vaex.from_pandas`
+- `vaex.to_pandas_df`
+
+## Data cleaning and validation
+
+### [pyjanitor](https://github.com/ericmjl/pyjanitor/)
+
+Pyjanitor provides a clean API for cleaning data, using method chaining.
+
+### [Engarde](https://engarde.readthedocs.io/en/latest/)
+
+Engarde is a lightweight library used to explicitly state your
+assumptions about your datasets and check that they're *actually* true.
+
+## Extension data types
+
+Pandas provides an interface for defining
+[extension types](https://pandas.pydata.org/pandas-docs/stable/development/extending.html#extension-types)
+to extend NumPy's type system. The following libraries implement that
+interface to provide types not found in NumPy or pandas, which work well
+with pandas' data containers.
+
+### [cyberpandas](https://cyberpandas.readthedocs.io/en/latest)
+
+Cyberpandas provides an extension type for storing arrays of IP
+addresses. These arrays can be stored inside pandas' Series and
+DataFrame.
+
+## Accessors
+
+A directory of projects providing
+[extension accessors](https://pandas.pydata.org/pandas-docs/stable/development/extending.html#registering-custom-accessors).
+This is for users to discover new accessors and for library authors to
+coordinate on the namespace.
+
+| Library                                                     | Accessor | Classes               |
+|-------------------------------------------------------------|----------|-----------------------|
+| [cyberpandas](https://cyberpandas.readthedocs.io/en/latest) | `ip`     | `Series`              |
+| [pdvega](https://altair-viz.github.io/pdvega/)              | `vgplot` | `Series`, `DataFrame` |
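+
+Registering such an accessor takes very little code. A hedged sketch using the
+public `pandas.api.extensions` hooks (the accessor name `stats` and its
+`bounds` method are made up for illustration; they are not taken from any
+library above):

```python
import pandas as pd


# Register a custom ``.stats`` accessor on Series; the name and the
# method below are illustrative only.
@pd.api.extensions.register_series_accessor("stats")
class StatsAccessor:
    def __init__(self, series):
        self._s = series

    def bounds(self):
        """Return (min, max) of the underlying values."""
        return (self._s.min(), self._s.max())


s = pd.Series([3.0, 1.0, 2.0])
print(s.stats.bounds())  # (1.0, 3.0)
```

+Coordinating on the namespace matters precisely so that, say, `cyberpandas`'
+`.ip` accessor and another library's accessor can coexist on the same
+`Series`.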
diff --git a/web/pandas/community/roadmap.md b/web/pandas/community/roadmap.md
new file mode 100644
index 0000000000000..8a5c2735b3d93
--- /dev/null
+++ b/web/pandas/community/roadmap.md
@@ -0,0 +1,195 @@
+# Roadmap
+
+This page provides an overview of the major themes in pandas'
+development. Each of these items requires a relatively large amount of
+effort to implement. These may be achieved more quickly with dedicated
+funding or interest from contributors.
+
+An item being on the roadmap does not mean that it will *necessarily*
+happen, even with unlimited funding. During the implementation period we
+may discover issues preventing the adoption of the feature.
+
+Additionally, an item *not* being on the roadmap does not exclude it
+from inclusion in pandas. The roadmap is intended for larger,
+fundamental changes to the project that are likely to take months or
+years of developer time. Smaller-scoped items will continue to be
+tracked on our [issue tracker](https://github.com/pandas-dev/pandas/issues).
+
+See [Roadmap evolution](#roadmap-evolution) for proposing
+changes to this document.
+
+## Extensibility
+
+Pandas extension types allow
+for extending NumPy types with custom data types and array storage.
+Pandas uses extension types internally, and provides an interface for
+3rd-party libraries to define their own custom data types.
+
+Many parts of pandas still unintentionally convert data to a NumPy
+array. These problems are especially pronounced for nested data.
+
+We'd like to improve the handling of extension arrays throughout the
+library, making their behavior more consistent with the handling of
+NumPy arrays. We'll do this by cleaning up pandas' internals and
+adding new methods to the extension array interface.
+
+## String data type
+
+Currently, pandas stores text data in an `object`-dtype NumPy array.
+The current implementation has two primary drawbacks: First,
+`object`-dtype is not specific to strings: any Python object can be
+stored in an `object`-dtype array, not just strings. Second, this is not
+efficient.
+The NumPy memory model isn't especially well-suited to variable width
+text data.
+
+To solve the first issue, we propose a new extension type for string
+data. This will initially be opt-in, with users explicitly requesting
+`dtype="string"`. The array backing this string dtype may initially be
+the current implementation: an `object`-dtype NumPy array of Python
+strings.
+
+To solve the second issue (performance), we'll explore alternative
+in-memory array libraries (for example, Apache Arrow). As part of the
+work, we may need to implement certain operations expected by pandas
+users (for example, the algorithm used in `Series.str.upper`). That work
+may be done outside of pandas.
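+
+As a sketch of what the proposed opt-in could look like for users (this
+mirrors the `StringDtype` that later shipped in pandas 1.0; treat it as
+illustrative of the proposal rather than a final API):

```python
import pandas as pd

# Opt in to the dedicated string dtype instead of the default
# object dtype; missing values become a dedicated NA scalar.
s = pd.Series(["pandas", "roadmap", None], dtype="string")

print(s.dtype)           # string
print(s.str.upper()[0])  # PANDAS
print(pd.isna(s[2]))     # True
```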
+
+## Apache Arrow interoperability
+
+[Apache Arrow](https://arrow.apache.org) is a cross-language development
+platform for in-memory data. The Arrow logical types are closely aligned
+with typical pandas use cases.
+
+We'd like to provide better-integrated support for Arrow memory and
+data types within pandas. This will let us take advantage of its I/O
+capabilities and provide for better interoperability with other
+languages and libraries using Arrow.
+
+## Block manager rewrite
+
+We'd like to replace pandas' current internal data structures (a
+collection of 1-D and 2-D arrays) with a simpler collection of 1-D
+arrays.
+
+Pandas' internal data model is quite complex. A DataFrame is made up of
+one or more 2-dimensional "blocks", with one or more blocks per dtype.
+This collection of 2-D arrays is managed by the BlockManager.
+
+The primary benefit of the BlockManager is improved performance on
+certain operations (construction from a 2D array, binary operations,
+reductions across the columns), especially for wide DataFrames. However,
+the BlockManager substantially increases the complexity and maintenance
+burden of pandas.
+
+By replacing the BlockManager we hope to achieve:
+
+- Substantially simpler code
+- Easier extensibility with new logical types
+- Better user control over memory use and layout
+- Improved micro-performance
+- Option to provide a C / Cython API to pandas' internals
+
+See [these design
+documents](https://dev.pandas.io/pandas2/internal-architecture.html#removal-of-blockmanager-new-dataframe-internals)
+for more.
+
+## Decoupling of indexing and internals
+
+The code for getting and setting values in pandas' data structures
+needs refactoring. In particular, we must clearly separate code that
+converts keys (e.g., the argument to `DataFrame.loc`) to positions from
+code that uses these positions to get or set values. This is related to
+the proposed BlockManager rewrite. Currently, the BlockManager sometimes
+uses label-based, rather than position-based, indexing. We propose that
+it should only work with positional indexing, and the translation of
+keys to positions should be entirely done at a higher level.
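+
+The label-to-position translation at issue is already visible in the public
+API, where `.loc` takes labels and `.iloc` takes positions:

```python
import pandas as pd

df = pd.DataFrame({"x": [10, 20, 30]}, index=["a", "b", "c"])

# Label-based access: the key "b" must first be translated to position 1.
by_label = df.loc["b", "x"]

# Positional access: no translation step is needed.
by_position = df.iloc[1, 0]

assert by_label == by_position == 20
```

+The proposal is that this translation happen entirely above the
+BlockManager, which would then deal only in positions.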
+
+Indexing is a complicated API with many subtleties. This refactor will
+require care and attention. More details are discussed at
+<https://github.com/pandas-dev/pandas/wiki/(Tentative)-rules-for-restructuring-indexing-code>.
+
+## Numba-accelerated operations
+
+[Numba](https://numba.pydata.org) is a JIT compiler for Python code.
+We'd like to provide ways for users to apply their own Numba-jitted
+functions where pandas accepts user-defined functions (for example,
+`Series.apply`,
+`DataFrame.apply`,
+`DataFrame.applymap`, and in groupby and
+window contexts). This will improve the performance of
+user-defined-functions in these operations by staying within compiled
+code.
+
+## Documentation improvements
+
+We'd like to improve the content, structure, and presentation of the
+pandas documentation. Some specific goals include
+
+- Overhaul the HTML theme with a modern, responsive design
+  ([GH 15556](https://github.com/pandas-dev/pandas/issues/15556))
+- Improve the "Getting Started" documentation, designing and writing
+  learning paths for users of different backgrounds (e.g. brand new to
+  programming, familiar with other languages like R, already familiar
+  with Python).
+- Improve the overall organization of the documentation and specific
+ subsections of the documentation to make navigation and finding
+ content easier.
+
+## Package docstring validation
+
+To improve the quality and consistency of pandas docstrings, we've
+developed tooling to check docstrings in a variety of ways.
+<https://github.com/pandas-dev/pandas/blob/master/scripts/validate_docstrings.py>
+contains the checks.
+
+Like many other projects, pandas uses the
+[numpydoc](https://numpydoc.readthedocs.io/en/latest/) style for writing
+docstrings. With the collaboration of the numpydoc maintainers, we'd
+like to move the checks to a package other than pandas so that other
+projects can easily use them as well.
+
+## Performance monitoring
+
+Pandas uses [airspeed velocity](https://asv.readthedocs.io/en/stable/)
+to monitor for performance regressions. ASV itself is a fabulous tool,
+but requires some additional work to be integrated into an open source
+project's workflow.
+
+The [asv-runner](https://github.com/asv-runner) organization, currently
+made up of pandas maintainers, provides tools built on top of ASV. We
+have a physical machine for running a number of projects' benchmarks,
+and tools for managing the benchmark runs and reporting on results.
+
+We'd like to fund improvements and maintenance of these tools to:
+
+- Be more stable. Currently, they're maintained on the nights and
+ weekends when a maintainer has free time.
+- Tune the system for benchmarks to improve stability, following
+ <https://pyperf.readthedocs.io/en/latest/system.html>
+- Build a GitHub bot to request ASV runs *before* a PR is merged.
+ Currently, the benchmarks are only run nightly.
+
+## Roadmap Evolution
+
+Pandas continues to evolve. The direction is primarily determined by
+community interest. Everyone is welcome to review existing items on the
+roadmap and to propose a new item.
+
+Each item on the roadmap should be a short summary of a larger design
+proposal. The proposal should include
+
+1. Short summary of the changes, which would be appropriate for
+ inclusion in the roadmap if accepted.
+2. Motivation for the changes.
+3. An explanation of why the change is in scope for pandas.
+4. Detailed design: preferably with example usage (even if not
+   implemented yet) and API documentation.
+5. API Change: Any API changes that may result from the proposal.
+
+That proposal may then be submitted as a GitHub issue, where the pandas
+maintainers can review and comment on the design. The [pandas mailing
+list](https://mail.python.org/mailman/listinfo/pandas-dev) should be
+notified of the proposal.
+
+When there's agreement that an implementation would be welcome, the
+roadmap should be updated to include the summary and a link to the
+discussion issue.
diff --git a/web/pandas/community/team.md b/web/pandas/community/team.md
new file mode 100644
index 0000000000000..c0a15081e1fa8
--- /dev/null
+++ b/web/pandas/community/team.md
@@ -0,0 +1,101 @@
+# Team
+
+## Contributors
+
+_pandas_ is made with love by more than [1,500 volunteer contributors](https://github.com/pandas-dev/pandas/graphs/contributors).
+
+If you want to support pandas development, you can find information on the [donations page](../donate.html).
+
+## Maintainers
+
+<div class="row maintainers">
+ {% for row in maintainers.people | batch(6, "") %}
+ <div class="card-group maintainers">
+ {% for person in row %}
+ {% if person %}
+ <div class="card">
+ <img class="card-img-top" alt="" src="{{ person.avatar_url }}"/>
+ <div class="card-body">
+ <h6 class="card-title">
+ {% if person.blog %}
+ <a href="{{ person.blog }}">
+ {{ person.name or person.login }}
+ </a>
+ {% else %}
+ {{ person.name or person.login }}
+ {% endif %}
+ </h6>
+ <p class="card-text small"><a href="{{ person.html_url }}">{{ person.login }}</a></p>
+ </div>
+ </div>
+ {% else %}
+ <div class="card border-0"></div>
+ {% endif %}
+ {% endfor %}
+ </div>
+ {% endfor %}
+</div>
+
+## BDFL
+
+Wes McKinney is the Benevolent Dictator for Life (BDFL).
+
+## Governance
+
+The project governance is available in the [project governance documents](https://github.com/pandas-dev/pandas-governance).
+
+## NumFOCUS
+
+
+
+_pandas_ is a Sponsored Project of [NumFOCUS](https://numfocus.org/), a 501(c)(3) nonprofit charity in the United States.
+NumFOCUS provides _pandas_ with fiscal, legal, and administrative support to help ensure the
+health and sustainability of the project. Visit numfocus.org for more information.
+
+Donations to _pandas_ are managed by NumFOCUS. For donors in the United States, your gift is tax-deductible
+to the extent provided by law. As with any donation, you should consult with your tax adviser about your particular tax situation.
+
+## Code of conduct committee
+
+<ul>
+ {% for person in maintainers.coc %}
+ <li>{{ person }}</li>
+ {% endfor %}
+</ul>
+
+## NumFOCUS committee
+
+<ul>
+ {% for person in maintainers.numfocus %}
+ <li>{{ person }}</li>
+ {% endfor %}
+</ul>
+
+## Institutional partners
+
+<ul>
+ {% for company in partners.active if company.employs %}
+ <li><a href="{{ company.url }}">{{ company.name }}</a> ({{ company.employs }})</li>
+ {% endfor %}
+</ul>
+
+In-kind sponsors
+
+- [Indeed](https://opensource.indeedeng.io/): Logo and website design
+- We are looking for a donor to sponsor the hosting costs (website, benchmarks, ...)
+
+## Emeritus maintainers
+
+<ul>
+ {% for person in maintainers.emeritus %}
+ <li>{{ person }}</li>
+ {% endfor %}
+</ul>
+
+## Past institutional partners
+
+<ul>
+ {% for company in partners.past %}
+ <li><a href="{{ company.url }}">{{ company.name }}</a></li>
+ {% endfor %}
+</ul>
diff --git a/web/pandas/config.yml b/web/pandas/config.yml
new file mode 100644
index 0000000000000..ba979e220f3bd
--- /dev/null
+++ b/web/pandas/config.yml
@@ -0,0 +1,129 @@
+main:
+ templates_path: _templates
+ base_template: "layout.html"
+ ignore:
+ - _templates/layout.html
+ - config.yml
+ - blog.html # blog will be added at a later stage
+ - try.md # the binder page will be added later
+ github_repo_url: pandas-dev/pandas
+ context_preprocessors:
+ - pandas_web.Preprocessors.navbar_add_info
+ # - pandas_web.Preprocessors.blog_add_posts
+ - pandas_web.Preprocessors.maintainers_add_info
+ - pandas_web.Preprocessors.home_add_releases
+ markdown_extensions:
+ - toc
+ - tables
+ - fenced_code
+static:
+ logo: # path to the logo when it's in the repo
+ css:
+ - /static/css/pandas.css
+navbar:
+ - name: "Install"
+ target: /install.html
+ - name: "Documentation"
+ target:
+ - name: "Getting started"
+ target: https://pandas.pydata.org/pandas-docs/stable/getting_started/index.html
+ - name: "User guide"
+ target: https://pandas.pydata.org/pandas-docs/stable/user_guide/index.html
+ - name: "API reference"
+ target: https://pandas.pydata.org/pandas-docs/stable/reference/index.html
+ - name: "Contributing to pandas"
+ target: https://pandas.pydata.org/pandas-docs/stable/development/index.html
+ - name: "Release notes"
+ target: https://pandas.pydata.org/pandas-docs/stable/whatsnew/index.html
+ - name: "Community"
+ target:
+ - name: "About pandas"
+ target: /community/about.html
+ - name: "Project roadmap"
+ target: /community/roadmap.html
+ - name: "Ecosystem"
+ target: /community/ecosystem.html
+ - name: "Ask a question (StackOverflow)"
+ target: https://stackoverflow.com/questions/tagged/pandas
+ - name: "Discuss (mailing list)"
+ target: https://groups.google.com/forum/#!forum/pydata
+ - name: "Team"
+ target: /community/team.html
+ - name: "Code of Conduct"
+ target: /community/coc.html
+ - name: "Citing pandas"
+ target: /community/citing.html
+ # - name: "Blog"
+ # target: /blog.html
+ - name: "Donate"
+ target: /donate.html
+blog:
+ num_posts: 8
+ feed:
+ - https://wesmckinney.com/feeds/pandas.atom.xml
+ - https://tomaugspurger.github.io/feed
+ - https://jorisvandenbossche.github.io/feeds/all.atom.xml
+ - https://datapythonista.github.io/blog/feeds/pandas.atom.xml
+ - https://numfocus.org/tag/pandas/feed/
+maintainers:
+ active:
+ - wesm
+ - jorisvandenbossche
+ - TomAugspurger
+ - shoyer
+ - jreback
+ - chris-b1
+ - sinhrks
+ - cpcloud
+ - gfyoung
+ - toobaz
+ - WillAyd
+ - mroeschke
+ - jschendel
+ - jbrockmendel
+ - datapythonista
+ - simonjayhawkins
+ - topper-123
+ emeritus:
+ - Wouter Overmeire
+ - Skipper Seabold
+ - Jeff Tratner
+ coc:
+ - Safia Abdalla
+ - Tom Augspurger
+ - Joris Van den Bossche
+ - Camille Scott
+ - Nathaniel Smith
+ numfocus:
+ - Phillip Cloud
+ - Stephan Hoyer
+ - Wes McKinney
+ - Jeff Reback
+ - Joris Van den Bossche
+partners:
+ active:
+ - name: "NumFOCUS"
+ url: https://numfocus.org/
+ logo: /static/img/partners/numfocus.svg
+ - name: "Anaconda"
+ url: https://www.anaconda.com/
+ logo: /static/img/partners/anaconda.svg
+ employs: "Tom Augspurger, Brock Mendel"
+ - name: "Two Sigma"
+ url: https://www.twosigma.com/
+ logo: /static/img/partners/two_sigma.svg
+ employs: "Phillip Cloud, Jeff Reback"
+ - name: "RStudio"
+ url: https://www.rstudio.com/
+ logo: /static/img/partners/r_studio.svg
+ employs: "Wes McKinney"
+ - name: "Ursa Labs"
+ url: https://ursalabs.org/
+ logo: /static/img/partners/ursa_labs.svg
+ employs: "Wes McKinney, Joris Van den Bossche"
+ - name: "Tidelift"
+ url: https://tidelift.com
+ logo: /static/img/partners/tidelift.svg
+ past:
+ - name: "Paris-Saclay Center for Data Science"
+ url: https://www.datascience-paris-saclay.fr/
diff --git a/web/pandas/donate.md b/web/pandas/donate.md
new file mode 100644
index 0000000000000..5badb4c5a2031
--- /dev/null
+++ b/web/pandas/donate.md
@@ -0,0 +1,25 @@
+# Donate to pandas
+
+_pandas_ is and always will be **free**. To make the development sustainable, we need _pandas_ users, corporate
+or individual, to support the development by providing their time and money.
+
+You can find more information about current developers and supporters in the [team page](community/team.html).
+Financial contributions will mainly be used to advance the [pandas roadmap](community/roadmap.html).
+
+- If your **company or organization** is interested in helping make pandas better, please contact us at [info@numfocus.org](mailto:info@numfocus.org)
+- If you want to contribute to _pandas_ with your **time**, please visit the [contributing page](https://pandas.pydata.org/pandas-docs/stable/development/index.html)
+- If you want to support _pandas_ with a **donation**, please use the form below:
+
+
+<div id="salsalabs-donate-container">
+</div>
+<script type="text/javascript"
+ src="https://default.salsalabs.org/api/widget/template/4ba4e328-1855-47c8-9a89-63e4757d2151/?tId=salsalabs-donate-container">
+</script>
+
+_pandas_ is a Sponsored Project of [NumFOCUS](https://numfocus.org/), a 501(c)(3) nonprofit charity in the United States.
+NumFOCUS provides _pandas_ with fiscal, legal, and administrative support to help ensure the
+health and sustainability of the project. Visit numfocus.org for more information.
+
+Donations to _pandas_ are managed by NumFOCUS. For donors in the United States, your gift is tax-deductible
+to the extent provided by law. As with any donation, you should consult with your tax adviser about your particular tax situation.
diff --git a/web/pandas/index.html b/web/pandas/index.html
new file mode 100644
index 0000000000000..696f0862aa109
--- /dev/null
+++ b/web/pandas/index.html
@@ -0,0 +1,114 @@
+{% extends "layout.html" %}
+{% block body %}
+ <div class="container">
+ <div class="row">
+ <div class="col-md-9">
+ <section class="jumbotron text-center">
+ <h1>pandas</h1>
+ <p>
+ <strong>pandas</strong> is a fast, powerful, flexible and easy to use open source data analysis and manipulation tool,<br/>
+ built on top of the <a href="http://www.python.org">Python</a> programming language.
+ </p>
+ <p>
+ <a class="btn btn-primary" href="{{ base_url }}/install.html">Install pandas now!</a>
+ </p>
+ </section>
+
+ <div class="row">
+ <div class="col-md-4">
+ <h5>Getting started</h5>
+ <ul>
+ <!-- <li><a href="{{ base_url }}/try.html">Try pandas online</a></li> -->
+ <li><a href="{{ base_url }}/install.html">Install pandas</a></li>
+ <li><a href="https://pandas.pydata.org/pandas-docs/stable/getting_started/index.html">Getting started</a></li>
+ </ul>
+ </div>
+ <div class="col-md-4">
+ <h5>Documentation</h5>
+ <ul>
+ <li><a href="https://pandas.pydata.org/pandas-docs/stable/user_guide/index.html">User guide</a></li>
+ <li><a href="https://pandas.pydata.org/pandas-docs/stable/reference/index.html">API reference</a></li>
+ <li><a href="https://pandas.pydata.org/pandas-docs/stable/development/index.html">Contributing to pandas</a></li>
+ <li><a href="https://pandas.pydata.org/pandas-docs/stable/whatsnew/index.html">Release notes</a></li>
+ </ul>
+ </div>
+ <div class="col-md-4">
+ <h5>Community</h5>
+ <ul>
+ <li><a href="{{ base_url }}/community/about.html">About pandas</a></li>
+ <li><a href="https://stackoverflow.com/questions/tagged/pandas">Ask a question</a></li>
+ <li><a href="{{ base_url }}/community/ecosystem.html">Ecosystem</a></li>
+ </ul>
+ </div>
+ </div>
+ <section>
+ <h5>With the support of:</h5>
+ <div class="row h-100">
+ {% for company in partners.active %}
+ <div class="col-sm-6 col-md-2 my-auto">
+ <a href="{{ company.url }}" target="_blank">
+ <img class="img-fluid" alt="{{ company.name }}" src="{{ base_url }}{{ company.logo }}"/>
+ </a>
+ </div>
+ {% endfor %}
+ </div>
+ </section>
+ </div>
+ <div class="col-md-3">
+ {% if releases %}
+ <h4>Latest version: {{ releases[0].name }}</h4>
+ <ul>
+ <li><a href="https://pandas.pydata.org/pandas-docs/stable/whatsnew/v0.25.0.html">What's new in {{ releases[0].name }}</a></li>
+ <li>Release date:<br/>{{ releases[0].published.strftime("%b %d, %Y") }}</li>
+ <li><a href="https://pandas.pydata.org/pandas-docs/stable/">Documentation (web)</a></li>
+ <li><a href="https://pandas.pydata.org/pandas-docs/stable/pandas.pdf">Documentation (pdf)</a></li>
+ <li><a href="{{ releases[0].url }}">Download source code</a></li>
+ </ul>
+ {% endif %}
+ <h4>Follow us</h4>
+ <div class="text-center">
+ <p>
+ <a href="https://twitter.com/pandas_dev?ref_src=twsrc%5Etfw" class="twitter-follow-button" data-show-count="false">Follow @pandas_dev</a><script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>
+ </p>
+ </div>
+ <h4>Get the book</h4>
+ <p class="book">
+ <a href="https://amzn.to/2KI5JJw">
+ <img class="img-fluid" alt="Python for Data Analysis" src="{{ base_url }}/static/img/pydata_book.gif"/>
+ </a>
+ </p>
+ {% if releases[1:5] %}
+ <h4>Previous versions</h4>
+ <ul>
+ {% for release in releases[1:5] %}
+ <li class="small">
+ {{ release.name }} ({{ release.published.strftime("%b %d, %Y") }})<br/>
+ <a href="https://pandas.pydata.org/pandas-docs/stable/whatsnew/{{ release.tag }}.html">changelog</a> |
+ <a href="https://pandas.pydata.org/pandas-docs/version/{{ release.name }}/">docs</a> |
+ <a href="https://pandas.pydata.org/pandas-docs/version/{{ release.name }}/pandas.pdf">pdf</a> |
+ <a href="{{ release.url }}">code</a>
+ </li>
+ {% endfor %}
+ </ul>
+ {% endif %}
+ {% if releases[5:] %}
+ <p class="text-center">
+ <a data-toggle="collapse" href="#show-more-releases" role="button" aria-expanded="false" aria-controls="show-more-releases">Show more</a>
+ </p>
+ <ul id="show-more-releases" class="collapse">
+ {% for release in releases[5:] %}
+ <li class="small">
+ {{ release.name }} ({{ release.published.strftime("%Y-%m-%d") }})<br/>
+ <a href="https://pandas.pydata.org/pandas-docs/stable/whatsnew/{{ release.tag }}.html">changelog</a> |
+ <a href="https://pandas.pydata.org/pandas-docs/version/{{ release.name }}/">docs</a> |
+ <a href="https://pandas.pydata.org/pandas-docs/version/{{ release.name }}/pandas.pdf">pdf</a> |
+ <a href="{{ release.url }}">code</a>
+ </li>
+ {% endfor %}
+ </ul>
+ {% endif %}
+ </div>
+ </div>
+ </div>
+
+{% endblock %}
diff --git a/web/pandas/install.md b/web/pandas/install.md
new file mode 100644
index 0000000000000..c6cccd803e33e
--- /dev/null
+++ b/web/pandas/install.md
@@ -0,0 +1,28 @@
+# Installation instructions
+
+The following steps provide the easiest and recommended way to set up your
+environment to use pandas. Other installation options can be found in
+the [advanced installation page](https://pandas.pydata.org/pandas-docs/stable/install.html).
+
+1. Download [Anaconda](https://www.anaconda.com/distribution/) for your operating system and
+ the latest Python version, run the installer, and follow the steps. Detailed instructions
+ on how to install Anaconda can be found in the
+   [Anaconda documentation](https://docs.anaconda.com/anaconda/install/).
+
+2. In the Anaconda prompt (or terminal on Linux or macOS), start JupyterLab:
+
+ <img class="img-fluid" alt="" src="{{ base_url }}/static/img/install/anaconda_prompt.png"/>
+
+3. In JupyterLab, create a new (Python 3) notebook:
+
+ <img class="img-fluid" alt="" src="{{ base_url }}/static/img/install/jupyterlab_home.png"/>
+
+4. In the first cell of the notebook, you can import pandas and check the version with:
+
+ <img class="img-fluid" alt="" src="{{ base_url }}/static/img/install/pandas_import_and_version.png"/>
+
+5. Now you are ready to use pandas, and you can write your code in the next cells.
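+   Step 4 above can be sketched in code (a minimal example; the exact version string
+   depends on your installation):
+
+   ```python
+   # Import pandas under its conventional alias and check the installed version,
+   # as shown in the screenshot for step 4 above.
+   import pandas as pd
+
+   print(pd.__version__)
+   ```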
+
+
+You can learn more about pandas in the [tutorials](#), and more about JupyterLab
+in the [JupyterLab documentation](https://jupyterlab.readthedocs.io/en/stable/user/interface.html).
diff --git a/web/pandas/static/css/pandas.css b/web/pandas/static/css/pandas.css
new file mode 100644
index 0000000000000..5911de96b5fa9
--- /dev/null
+++ b/web/pandas/static/css/pandas.css
@@ -0,0 +1,16 @@
+body {
+ padding-top: 5em;
+ padding-bottom: 3em;
+}
+code {
+ white-space: pre;
+}
+a.navbar-brand img {
+ max-height: 2em;
+}
+div.card {
+ margin: 0 0 .2em .2em !important;
+}
+.book {
+ padding: 0 20%;
+}
diff --git a/web/pandas/static/img/install/anaconda_prompt.png b/web/pandas/static/img/install/anaconda_prompt.png
new file mode 100644
index 0000000000000..7b547e4ebb02a
Binary files /dev/null and b/web/pandas/static/img/install/anaconda_prompt.png differ
diff --git a/web/pandas/static/img/install/jupyterlab_home.png b/web/pandas/static/img/install/jupyterlab_home.png
new file mode 100644
index 0000000000000..c62d33a5e0fc6
Binary files /dev/null and b/web/pandas/static/img/install/jupyterlab_home.png differ
diff --git a/web/pandas/static/img/install/pandas_import_and_version.png b/web/pandas/static/img/install/pandas_import_and_version.png
new file mode 100644
index 0000000000000..64c1303ac495c
Binary files /dev/null and b/web/pandas/static/img/install/pandas_import_and_version.png differ
diff --git a/web/pandas/static/img/pandas.svg b/web/pandas/static/img/pandas.svg
new file mode 120000
index 0000000000000..2e5d3872e4845
--- /dev/null
+++ b/web/pandas/static/img/pandas.svg
@@ -0,0 +1 @@
+../../../../doc/logo/pandas_logo.svg
\ No newline at end of file
diff --git a/web/pandas/static/img/partners/anaconda.svg b/web/pandas/static/img/partners/anaconda.svg
new file mode 100644
index 0000000000000..fcddf72ebaa28
--- /dev/null
+++ b/web/pandas/static/img/partners/anaconda.svg
@@ -0,0 +1,99 @@
+<?xml version="1.0" encoding="UTF-8" standalone="no"?>
+<svg
+ xmlns:dc="http://purl.org/dc/elements/1.1/"
+ xmlns:cc="http://creativecommons.org/ns#"
+ xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
+ xmlns:svg="http://www.w3.org/2000/svg"
+ xmlns="http://www.w3.org/2000/svg"
+ viewBox="0 0 530.44 90.053329"
+ height="90.053329"
+ width="530.44"
+ xml:space="preserve"
+ id="svg2"
+ version="1.1"><metadata
+ id="metadata8"><rdf:RDF><cc:Work
+ rdf:about=""><dc:format>image/svg+xml</dc:format><dc:type
+ rdf:resource="http://purl.org/dc/dcmitype/StillImage" /></cc:Work></rdf:RDF></metadata><defs
+ id="defs6" /><g
+ transform="matrix(1.3333333,0,0,-1.3333333,0,90.053333)"
+ id="g10"><g
+ transform="scale(0.1)"
+ id="g12"><path
+ id="path14"
+ style="fill:#43b53b;fill-opacity:1;fill-rule:nonzero;stroke:none"
+ d="m 958.313,274.5 53.637,120.406 h 1.64 L 1068.32,274.5 Z m 67.867,251.754 c -1.65,3.285 -3.83,6.027 -9.31,6.027 h -5.47 c -4.93,0 -7.66,-2.742 -9.31,-6.027 L 831.887,157.93 c -3.282,-7.117 1.097,-14.231 9.304,-14.231 h 47.618 c 8.754,0 13.679,5.473 15.867,10.942 l 26.82,59.113 h 163.644 l 26.81,-59.113 c 3.83,-7.657 7.66,-10.942 15.88,-10.942 h 47.61 c 8.21,0 12.59,7.114 9.3,14.231 l -168.56,368.324" /><path
+ id="path16"
+ style="fill:#43b53b;fill-opacity:1;fill-rule:nonzero;stroke:none"
+ d="m 1547.94,526.801 h -50.35 c -6.03,0 -10.4,-4.922 -10.4,-10.395 V 290.371 h -0.55 l -227.67,241.91 h -13.68 c -5.48,0 -10.4,-4.383 -10.4,-9.855 V 154.102 c 0,-5.481 4.92,-10.403 10.4,-10.403 h 49.8 c 6.02,0 10.4,4.922 10.4,10.403 v 235.332 h 0.54 L 1534.8,138.227 h 13.14 c 5.47,0 10.4,4.378 10.4,9.847 v 368.332 c 0,5.473 -4.93,10.395 -10.4,10.395" /><path
+ id="path18"
+ style="fill:#43b53b;fill-opacity:1;fill-rule:nonzero;stroke:none"
+ d="m 1725.97,274.5 53.64,120.406 h 1.64 L 1835.98,274.5 Z m 67.87,251.754 c -1.64,3.285 -3.83,6.027 -9.31,6.027 h -5.47 c -4.93,0 -7.66,-2.742 -9.31,-6.027 L 1599.55,157.93 c -3.29,-7.117 1.09,-14.231 9.3,-14.231 h 47.62 c 8.75,0 13.68,5.473 15.87,10.942 l 26.82,59.113 h 163.64 l 26.81,-59.113 c 3.83,-7.657 7.67,-10.942 15.88,-10.942 h 47.61 c 8.21,0 12.59,7.114 9.3,14.231 l -168.56,368.324" /><path
+ id="path20"
+ style="fill:#43b53b;fill-opacity:1;fill-rule:nonzero;stroke:none"
+ d="m 2261.6,241.117 c -3.29,3.285 -9.31,3.836 -13.69,0 -22.98,-18.605 -50.9,-31.191 -83.73,-31.191 -70.06,0 -122.6,58.008 -122.6,126.418 0,68.965 51.99,127.519 122.05,127.519 30.64,0 61.3,-12.039 84.28,-32.285 4.38,-4.379 9.85,-4.379 13.69,0 l 33.38,34.477 c 4.38,4.375 4.38,10.941 -0.55,15.328 -37.21,33.383 -77.17,50.898 -132.45,50.898 -109.45,0 -197.57,-88.117 -197.57,-197.574 0,-109.465 88.12,-196.48 197.57,-196.48 48.72,0 95.78,16.964 133,53.086 3.83,3.835 4.92,10.949 0.55,14.777 l -33.93,35.027" /><path
+ id="path22"
+ style="fill:#43b53b;fill-opacity:1;fill-rule:nonzero;stroke:none"
+ d="m 2520.21,209.379 c -68.95,0 -125.33,56.371 -125.33,125.328 0,68.957 56.38,126.426 125.33,126.426 68.96,0 125.88,-57.469 125.88,-126.426 0,-68.957 -56.92,-125.328 -125.88,-125.328 z m 0,322.902 c -109.46,0 -196.48,-88.117 -196.48,-197.574 0,-109.465 87.02,-196.48 196.48,-196.48 109.46,0 197.03,87.015 197.03,196.48 0,109.457 -87.57,197.574 -197.03,197.574" /><path
+ id="path24"
+ style="fill:#43b53b;fill-opacity:1;fill-rule:nonzero;stroke:none"
+ d="m 3090.17,526.801 h -50.35 c -6.02,0 -10.4,-4.922 -10.4,-10.395 V 290.371 h -0.54 l -227.68,241.91 h -13.68 c -5.47,0 -10.4,-4.383 -10.4,-9.855 V 154.102 c 0,-5.481 4.93,-10.403 10.4,-10.403 h 49.8 c 6.02,0 10.4,4.922 10.4,10.403 v 235.332 h 0.55 l 228.77,-251.207 h 13.13 c 5.47,0 10.4,4.378 10.4,9.847 v 368.332 c 0,5.473 -4.93,10.395 -10.4,10.395" /><path
+ id="path26"
+ style="fill:#43b53b;fill-opacity:1;fill-rule:nonzero;stroke:none"
+ d="m 3303.16,210.465 h -62.39 v 250.121 h 62.39 c 71.15,0 123.14,-53.641 123.14,-124.785 0,-71.696 -51.99,-125.336 -123.14,-125.336 z m 6.57,316.336 h -129.71 c -5.47,0 -9.85,-4.922 -9.85,-10.395 V 154.102 c 0,-5.481 4.38,-10.403 9.85,-10.403 h 129.71 c 105.63,0 192.1,85.926 192.1,192.102 0,105.082 -86.47,191 -192.1,191" /><path
+ id="path28"
+ style="fill:#43b53b;fill-opacity:1;fill-rule:nonzero;stroke:none"
+ d="m 3631.32,274.5 53.64,120.406 h 1.64 L 3741.33,274.5 Z m 236.43,-116.57 -168.57,368.324 c -1.64,3.285 -3.82,6.027 -9.29,6.027 h -5.48 c -4.93,0 -7.67,-2.742 -9.3,-6.027 L 3504.9,157.93 c -3.29,-7.117 1.09,-14.231 9.3,-14.231 h 47.62 c 8.76,0 13.68,5.473 15.87,10.942 l 26.82,59.113 h 163.63 l 26.83,-59.113 c 3.82,-7.657 7.66,-10.942 15.86,-10.942 h 47.62 c 8.21,0 12.59,7.114 9.3,14.231" /><path
+ id="path30"
+ style="fill:#43b53b;fill-opacity:1;fill-rule:nonzero;stroke:none"
+ d="m 3940.9,176.27 h 7.99 c 2.7,0 4.5,-1.793 4.5,-4.403 0,-2.422 -1.8,-4.394 -4.5,-4.394 h -7.99 z m -4.85,-26.582 h 3.33 c 0.99,0 1.7,0.808 1.7,1.707 v 10.148 h 5.57 l 4.49,-10.598 c 0.27,-0.629 0.9,-1.257 1.62,-1.257 h 4.04 c 1.26,0 2.16,1.257 1.53,2.425 -1.53,3.235 -3.15,6.645 -4.76,9.969 2.69,0.984 6.82,3.5 6.82,9.879 0,6.824 -5.48,10.594 -11.04,10.594 h -13.3 c -0.98,0 -1.7,-0.809 -1.7,-1.703 v -29.457 c 0,-0.899 0.72,-1.707 1.7,-1.707" /><path
+ id="path32"
+ style="fill:#43b53b;fill-opacity:1;fill-rule:nonzero;stroke:none"
+ d="m 3945.93,192.078 c 14.46,0 26.05,-11.586 26.05,-26.043 0,-14.371 -11.59,-26.047 -26.05,-26.047 -14.37,0 -26.04,11.676 -26.04,26.047 0,14.457 11.67,26.043 26.04,26.043 z m 0,-58.285 c 17.79,0 32.33,14.461 32.33,32.242 0,17.781 -14.54,32.328 -32.33,32.328 -17.78,0 -32.24,-14.547 -32.24,-32.328 0,-17.781 14.46,-32.242 32.24,-32.242" /><path
+ id="path34"
+ style="fill:#43b53b;fill-opacity:1;fill-rule:nonzero;stroke:none"
+ d="m 125.527,158.422 0.051,2.484 c 0.414,19.649 1.977,39.149 4.684,57.961 l 0.254,1.77 -1.668,0.679 c -17.871,7.305 -35.4574,15.782 -52.2699,25.219 l -2.1172,1.184 -1.0742,-2.16 C 62.3164,223.238 52.9844,199.707 45.6836,175.602 l -0.7031,-2.254 2.2812,-0.629 C 72.0234,165.91 97.5195,161.184 123.051,158.66 l 2.476,-0.238" /><path
+ id="path36"
+ style="fill:#43b53b;fill-opacity:1;fill-rule:nonzero;stroke:none"
+ d="m 177.781,500.941 c 0.032,0.196 0.063,0.395 0.094,0.59 -14.668,-0.258 -29.324,-1.265 -43.926,-2.965 1.891,-14.777 4.481,-29.437 7.828,-43.925 10.02,16.949 22.121,32.511 36.004,46.3" /><path
+ id="path38"
+ style="fill:#43b53b;fill-opacity:1;fill-rule:nonzero;stroke:none"
+ d="m 125.527,140.855 -0.039,2.051 -2.043,0.199 c -21.406,2.02 -43.2223,5.661 -64.8278,10.821 l -5.668,1.355 3.211,-4.855 C 75.5742,121.098 99.3125,95.0195 126.73,72.9258 l 4.43,-3.5899 -0.719,5.668 c -2.906,22.6719 -4.554,44.8321 -4.914,65.8511" /><path
+ id="path40"
+ style="fill:#43b53b;fill-opacity:1;fill-rule:nonzero;stroke:none"
+ d="m 230.566,657.227 c -26.32,-9.008 -51.164,-21.161 -74.101,-36.036 17.359,-3.07 34.469,-7.097 51.273,-12.027 6.696,16.375 14.297,32.426 22.828,48.063" /><path
+ id="path42"
+ style="fill:#43b53b;fill-opacity:1;fill-rule:nonzero;stroke:none"
+ d="m 339.918,675.43 c -13.023,0 -25.848,-0.813 -38.488,-2.25 17.925,-12.489 35.066,-26.145 51.238,-41.051 l 13.43,-12.391 -13.168,-12.672 c -10.899,-10.488 -21.559,-21.898 -31.688,-33.918 l -0.512,-0.585 c -0.117,-0.125 -2.003,-2.219 -5.152,-6.055 8,0.84 16.117,1.293 24.34,1.293 127.07,0 230.086,-103.016 230.086,-230.086 0,-127.074 -103.016,-230.086 -230.086,-230.086 -44.094,0 -85.277,12.426 -120.277,33.934 -17.27,-1.918 -34.629,-2.922 -52.012,-2.922 -8.074,0 -16.152,0.211 -24.227,0.629 0.524,-26.172 3.016,-53.3052 7.477,-81.438 C 204.82,21.3242 269.879,0 339.918,0 c 186.516,0 337.715,151.199 337.715,337.715 0,186.512 -151.199,337.715 -337.715,337.715" /><path
+ id="path44"
+ style="fill:#43b53b;fill-opacity:1;fill-rule:nonzero;stroke:none"
+ d="m 295.145,595.602 c 6.726,7.968 13.671,15.695 20.765,23.101 -15.824,13.469 -32.531,25.758 -50.004,36.856 -10.742,-18.161 -20.09,-36.977 -28.093,-56.282 15.195,-5.574 30.066,-11.953 44.589,-19.031 6.711,8.617 11.399,13.883 12.743,15.356" /><path
+ id="path46"
+ style="fill:#43b53b;fill-opacity:1;fill-rule:nonzero;stroke:none"
+ d="m 65.9219,402.934 1.289,-2.09 2.0118,1.433 c 15.6289,11.235 32.0823,21.594 48.9103,30.789 l 1.582,0.864 -0.449,1.738 c -5.028,19.227 -8.868,39.055 -11.414,58.941 l -0.305,2.399 -2.387,-0.434 C 80.168,492.027 55.4609,485.344 31.7383,476.703 l -2.2227,-0.816 0.8789,-2.188 c 9.7422,-24.562 21.6914,-48.363 35.5274,-70.765" /><path
+ id="path48"
+ style="fill:#43b53b;fill-opacity:1;fill-rule:nonzero;stroke:none"
+ d="M 62.0469,370.18 60.125,368.629 C 41.9492,353.844 24.7266,337.414 8.93359,319.797 L 7.375,318.066 9.13281,316.531 C 26.6641,301.188 45.5547,287.094 65.2734,274.645 l 2.0274,-1.293 1.2031,2.097 c 8.8828,15.781 18.8945,31.356 29.7695,46.278 l 1.0938,1.503 -1.2383,1.383 c -12.3281,13.746 -23.9883,28.395 -34.668,43.547 l -1.414,2.02" /><path
+ id="path50"
+ style="fill:#43b53b;fill-opacity:1;fill-rule:nonzero;stroke:none"
+ d="m 194.48,157.273 5.868,0.348 -4.559,3.723 c -17.976,14.715 -33.625,32.09 -46.453,51.656 l -0.106,0.621 -3.75,1.649 -0.433,-3.184 c -2.262,-16.856 -3.586,-34.566 -3.945,-52.625 l -0.039,-2.215 2.207,-0.129 c 8.003,-0.429 16.078,-0.644 24.171,-0.644 9.004,0 18.032,0.269 27.039,0.8" /><path
+ id="path52"
+ style="fill:#43b53b;fill-opacity:1;fill-rule:nonzero;stroke:none"
+ d="m 183.219,530.238 c 3.633,16.649 8.109,33.121 13.511,49.317 -21.125,6.078 -42.769,10.617 -64.789,13.523 -1.867,-22.047 -2.082,-44.082 -0.707,-65.941 17.278,1.988 34.629,3.011 51.985,3.101" /><path
+ id="path54"
+ style="fill:#43b53b;fill-opacity:1;fill-rule:nonzero;stroke:none"
+ d="m 215.813,531.414 c 14.707,9.441 30.539,17.266 47.281,23.195 -11.875,5.59 -24,10.661 -36.348,15.184 -4.219,-12.633 -7.863,-25.441 -10.933,-38.379" /><path
+ id="path56"
+ style="fill:#43b53b;fill-opacity:1;fill-rule:nonzero;stroke:none"
+ d="m 58.6914,257.121 -1.7773,1.113 C 39.4922,269.16 22.6055,281.363 6.74609,294.496 l -4.51953,3.742 0.76953,-5.812 C 7.30078,260.039 16.2734,228.496 29.6406,198.684 l 2.3672,-5.278 1.9024,5.465 c 6.6406,19.125 14.6601,38.102 23.8281,56.387 l 0.9531,1.863" /><path
+ id="path58"
+ style="fill:#43b53b;fill-opacity:1;fill-rule:nonzero;stroke:none"
+ d="M 102.133,577.48 C 81.9766,557.492 64.3555,534.969 49.7266,510.445 c 17.4804,5.215 35.1836,9.371 53.0194,12.528 -1.23,18.082 -1.465,36.273 -0.613,54.507" /><path
+ id="path60"
+ style="fill:#43b53b;fill-opacity:1;fill-rule:nonzero;stroke:none"
+ d="m 112.121,340.762 0.234,5.824 c 0.79,20.598 4.309,40.855 10.461,60.195 l 1.793,5.653 -5.129,-2.961 c -13.152,-7.59 -26.1792,-16.012 -38.7222,-25.047 l -1.8281,-1.328 1.293,-1.86 c 8.6992,-12.406 18.1562,-24.535 28.0973,-36.062 l 3.801,-4.414" /><path
+ id="path62"
+ style="fill:#43b53b;fill-opacity:1;fill-rule:nonzero;stroke:none"
+ d="m 114.383,305.906 -0.805,5.707 -3.34,-4.691 C 100.836,293.727 92.082,279.945 84.2227,265.961 l -1.1133,-1.992 1.9922,-1.133 c 14.1562,-7.984 29.0114,-15.305 44.1564,-21.762 l 5.402,-2.316 -2.406,5.363 c -8.863,19.668 -14.875,40.453 -17.871,61.785" /><path
+ id="path64"
+ style="fill:#43b53b;fill-opacity:1;fill-rule:nonzero;stroke:none"
+ d="m 48.6602,386.676 1.5976,1.273 -1.0781,1.735 c -10.5859,16.918 -20.1836,34.707 -28.5469,52.867 l -2.457,5.355 -1.8125,-5.605 C 6.51172,411.789 1.05859,379.887 0.160156,347.473 L 0,341.523 4.10938,345.82 c 14.01172,14.598 28.99612,28.34 44.55082,40.856" /></g></g></svg>
\ No newline at end of file
diff --git a/web/pandas/static/img/partners/numfocus.svg b/web/pandas/static/img/partners/numfocus.svg
new file mode 100644
index 0000000000000..fcdd87b41e475
--- /dev/null
+++ b/web/pandas/static/img/partners/numfocus.svg
@@ -0,0 +1,60 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!-- Generator: Adobe Illustrator 19.0.1, SVG Export Plug-In . SVG Version: 6.00 Build 0) -->
+<svg version="1.1" id="Layer_1" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" x="0px" y="0px" viewBox="0 0 432 135.7" style="enable-background:new 0 0 432 135.7;" xml:space="preserve">
+<style type="text/css">
+ .st0{fill:#F1563F;}
+ .st1{fill:#008896;}
+</style>
+<g>
+ <g>
+ <g>
+ <path class="st0" d="M97.9,12.2v51.9c0,12.7-6.8,19.7-19.1,19.7c-12.2,0-19-7-19-19.7V12.2h5v51.9c0,9.8,4.8,14.9,14,14.9 c9.2,0,14.1-5.2,14.1-14.9V12.2H97.9z"/>
+ </g>
+ <g>
+ <path class="st1" d="M329.8,29.8c0-0.3,0.1-0.7,0.1-1c0-8.3-6.9-16.7-20.1-16.7c-13.1,0-20.6,7.7-20.6,21.2v29.5 c0,13.5,7.4,21.2,20.3,21.2c13.4,0,20.4-8.4,20.4-16.7c0-0.3,0-0.8-0.1-1.4l-7.8,0c-0.7,4.6-1.7,10.4-12,10.4 c-9,0-13-4.1-13-13.4V33.3c0-9.2,4-13.4,12.7-13.4c7.7,0,11.8,3.4,12.2,10.1L329.8,29.8z"/>
+ </g>
+ <g>
+ <path class="st1" d="M376.2,12.4v50.3c0,13.6-7.3,21.2-20.5,21.2c-13.2,0-20.4-7.5-20.4-21.2V12.4h7.9v50.3 c0,9,4.1,13.4,12.5,13.4c8.4,0,12.6-4.5,12.6-13.4V12.4H376.2z"/>
+ </g>
+ <g>
+ <path class="st1" d="M414.9,22.6c-2-1-6-2.9-11.3-2.9c-8.4,0-12.6,3.4-12.6,10c0,7.1,4.8,9.1,12.5,11.8 c8.3,2.9,18.6,6.5,18.6,21.9c0,13-7.5,20.5-20.6,20.5c-8.2,0-14.2-2.7-17.3-6c-1.2-1.3-0.5-0.6-1.2-1.6l5.3-5.1 c1.9,2.2,5,5,12.8,5c8.7,0,13.2-4.1,13.2-12.3c0-9.9-6.6-12.3-14.3-15.1C392,46,383,42.8,383,30.1c0-11.3,7.7-18.1,20.6-18.1 c5.5,0,12.5,1.3,15.4,4.1L414.9,22.6z"/>
+ </g>
+ <g>
+ <path class="st1" d="M283.5,47.2c0-21.2-17.2-38.5-38.5-38.5c-21.2,0-38.5,17.2-38.5,38.5c0,21.2,17.2,38.5,38.5,38.5 C266.2,85.6,283.5,68.4,283.5,47.2z M213.1,47.2c0-17.6,14.3-31.9,31.9-31.9c17.6,0,31.9,14.3,31.9,31.9 c0,17.6-14.3,31.9-31.9,31.9C227.4,79.1,213.1,64.8,213.1,47.2z"/>
+ </g>
+ <g>
+ <path class="st0" d="M233.9,32.3c1.2,0,2.1-1,2.1-2.3c0-1.3-0.9-2.3-2.1-2.3h-7.3c-1.3,0-2.3,1-2.3,2.2v34.5c0,1.2,1,2.2,2.3,2.2 h7.3c1.2,0,2.1-1,2.1-2.3c0-1.3-1-2.3-2.1-2.3h-4.9V32.3H233.9z"/>
+ </g>
+ <g>
+ <path class="st0" d="M256.1,62c-1.2,0-2.2,1-2.2,2.3c0,1.3,1,2.3,2.2,2.3h7.3c1.3,0,2.3-1,2.3-2.2V29.9c0-1.2-1-2.2-2.3-2.2h-7.3 c-1.2,0-2.2,1-2.2,2.3c0,1.3,1,2.3,2.2,2.3h4.9V62H256.1z"/>
+ </g>
+ <polygon class="st1" points="208.7,19.8 208.7,12.1 171.8,12.1 171.8,83.7 179.7,83.7 179.7,51.5 196.2,51.5 196.2,43.9 179.7,43.9 179.7,19.8 "/>
+ <polygon class="st0" points="156.6,12.2 152.3,12.2 133.2,51.9 113.9,12.2 109.7,12.2 109.7,83.7 114.6,83.7 114.6,24.3 133.1,61.9 151.6,24.4 151.6,83.7 156.6,83.7 "/>
+ <polygon class="st0" points="44.6,83.7 48.1,83.7 48.1,12.1 43.1,12.1 43.2,70.5 14.2,12.2 10.1,12.2 10.1,83.7 15.1,83.7 14.9,23.1 "/>
+ </g>
+ <g id="XMLID_3_">
+ <path class="st1" d="M34.9,125.3c-1.2,0-2.3-0.2-3.2-0.5c-0.9-0.3-1.6-0.8-2.1-1.5c-0.5-0.7-0.9-1.4-1.1-2.3 c-0.2-0.9-0.4-1.9-0.4-3v-8.5c0-2.3,0.5-4.1,1.6-5.3c1.1-1.2,2.8-1.8,5.2-1.8c2.4,0,4.1,0.6,5.2,1.8c1.1,1.2,1.6,3,1.6,5.3v8.5 c0,2.3-0.5,4.1-1.6,5.4C39.1,124.7,37.3,125.3,34.9,125.3z M33.3,122.5c0.4,0.2,1,0.3,1.7,0.3c0.7,0,1.2-0.1,1.7-0.3 c0.4-0.2,0.8-0.5,1-0.9c0.2-0.4,0.4-0.8,0.5-1.3c0.1-0.5,0.1-1,0.1-1.7v-9.9c0-0.7,0-1.2-0.1-1.7c-0.1-0.5-0.2-0.9-0.5-1.2 c-0.2-0.4-0.6-0.7-1-0.8c-0.4-0.2-1-0.3-1.7-0.3c-0.7,0-1.2,0.1-1.7,0.3c-0.4,0.2-0.8,0.5-1,0.8c-0.2,0.4-0.4,0.8-0.5,1.2 c-0.1,0.5-0.1,1-0.1,1.7v9.9c0,0.7,0,1.3,0.1,1.7c0.1,0.5,0.2,0.9,0.5,1.3C32.5,122.1,32.8,122.4,33.3,122.5z"/>
+ <path class="st1" d="M46.7,125v-22.5h6.2c2.2,0,3.7,0.5,4.7,1.6c1,1.1,1.5,2.6,1.5,4.6c0,1.8-0.5,3.3-1.6,4.3 c-1,1-2.6,1.5-4.6,1.5h-2.7V125H46.7z M50.2,112.3h1.6c1.5,0,2.6-0.2,3.1-0.7c0.6-0.5,0.9-1.4,0.9-2.8c0-0.6,0-1,0-1.4 c0-0.4-0.1-0.7-0.2-1c-0.1-0.3-0.2-0.6-0.4-0.8c-0.2-0.2-0.4-0.3-0.7-0.5c-0.3-0.1-0.7-0.2-1.1-0.3c-0.4,0-0.9-0.1-1.5-0.1h-1.6 V112.3z"/>
+ <path class="st1" d="M63,125v-22.5h9.6v2.3h-6.2v7.4h5v2.2h-5v8.3h6.2v2.3H63z"/>
+ <path class="st1" d="M77.1,125v-22.5h2.4l7.1,15v-15h2.9V125h-2.2L80,109.7V125H77.1z"/>
+ <path class="st1" d="M109.6,125.3c-1,0-1.9-0.1-2.7-0.4c-0.8-0.3-1.4-0.6-1.9-1c-0.5-0.4-0.9-1-1.2-1.6c-0.3-0.6-0.5-1.3-0.7-2 c-0.1-0.7-0.2-1.5-0.2-2.4v-8c0-1,0.1-1.8,0.2-2.5c0.1-0.7,0.3-1.4,0.7-2.1c0.3-0.6,0.7-1.2,1.2-1.6c0.5-0.4,1.1-0.7,1.9-1 c0.8-0.2,1.7-0.4,2.7-0.4c2.2,0,3.8,0.5,4.8,1.6c1,1.1,1.5,2.7,1.5,4.8v1.8h-3.3V109c0-0.3,0-0.6,0-0.8c0-0.2,0-0.4,0-0.7 c0-0.3,0-0.5-0.1-0.7c0-0.2-0.1-0.4-0.2-0.6c-0.1-0.2-0.1-0.4-0.2-0.5c-0.1-0.1-0.2-0.3-0.4-0.4c-0.1-0.1-0.3-0.2-0.5-0.3 c-0.2-0.1-0.4-0.1-0.7-0.2c-0.3,0-0.6-0.1-0.9-0.1c-0.5,0-0.9,0-1.3,0.1c-0.4,0.1-0.7,0.2-0.9,0.4c-0.2,0.2-0.4,0.4-0.6,0.7 c-0.1,0.2-0.3,0.6-0.3,0.9c-0.1,0.4-0.1,0.8-0.1,1.1s0,0.8,0,1.3v8.9c0,1.7,0.2,2.9,0.7,3.5c0.5,0.7,1.3,1,2.5,1 c0.5,0,0.9,0,1.2-0.1c0.3-0.1,0.6-0.2,0.8-0.4c0.2-0.2,0.4-0.4,0.5-0.7c0.1-0.2,0.2-0.5,0.3-0.9c0.1-0.4,0.1-0.7,0.1-1.1 c0-0.3,0-0.8,0-1.3v-1.7h3.3v1.7c0,0.9-0.1,1.6-0.2,2.3c-0.1,0.7-0.3,1.3-0.6,1.9c-0.3,0.6-0.7,1.1-1.1,1.5 c-0.5,0.4-1.1,0.7-1.8,0.9C111.4,125.2,110.6,125.3,109.6,125.3z"/>
+ <path class="st1" d="M127.1,125.3c-1.2,0-2.3-0.2-3.2-0.5c-0.9-0.3-1.6-0.8-2.1-1.5c-0.5-0.7-0.9-1.4-1.1-2.3 c-0.2-0.9-0.4-1.9-0.4-3v-8.5c0-2.3,0.5-4.1,1.6-5.3c1.1-1.2,2.8-1.8,5.2-1.8c2.4,0,4.1,0.6,5.2,1.8c1.1,1.2,1.6,3,1.6,5.3v8.5 c0,2.3-0.5,4.1-1.6,5.4C131.2,124.7,129.5,125.3,127.1,125.3z M125.4,122.5c0.4,0.2,1,0.3,1.7,0.3c0.7,0,1.2-0.1,1.7-0.3 c0.4-0.2,0.8-0.5,1-0.9c0.2-0.4,0.4-0.8,0.5-1.3c0.1-0.5,0.1-1,0.1-1.7v-9.9c0-0.7,0-1.2-0.1-1.7c-0.1-0.5-0.2-0.9-0.5-1.2 c-0.2-0.4-0.6-0.7-1-0.8c-0.4-0.2-1-0.3-1.7-0.3c-0.7,0-1.2,0.1-1.7,0.3c-0.4,0.2-0.8,0.5-1,0.8c-0.2,0.4-0.4,0.8-0.5,1.2 c-0.1,0.5-0.1,1-0.1,1.7v9.9c0,0.7,0,1.3,0.1,1.7c0.1,0.5,0.2,0.9,0.5,1.3C124.6,122.1,125,122.4,125.4,122.5z"/>
+ <path class="st1" d="M138.9,125v-22.5h5.4c2.7,0,4.6,0.6,5.7,1.7c1.1,1.1,1.7,2.8,1.7,5.2v8.3c0,2.5-0.5,4.3-1.6,5.5 c-1.1,1.2-2.9,1.8-5.5,1.8H138.9z M142.3,122.8h2c0.5,0,1,0,1.4-0.1c0.4-0.1,0.7-0.2,1-0.3c0.3-0.1,0.5-0.3,0.7-0.6 c0.2-0.3,0.3-0.6,0.4-0.8c0.1-0.2,0.2-0.6,0.2-1.1c0-0.5,0.1-0.9,0.1-1.2c0-0.3,0-0.8,0-1.5v-7.3c0-0.5,0-1,0-1.4 c0-0.4-0.1-0.7-0.1-1.1c-0.1-0.4-0.2-0.7-0.3-0.9c-0.1-0.2-0.3-0.5-0.5-0.7c-0.2-0.2-0.4-0.4-0.7-0.5c-0.3-0.1-0.6-0.2-1-0.3 c-0.4-0.1-0.8-0.1-1.3-0.1h-1.9V122.8z"/>
+ <path class="st1" d="M156.6,125v-22.5h9.6v2.3h-6.2v7.4h5v2.2h-5v8.3h6.2v2.3H156.6z"/>
+ <path class="st1" d="M178.9,112.4v-2.3h9.4v2.3H178.9z M178.9,117.3v-2.3h9.4v2.3H178.9z"/>
+ <path class="st1" d="M202.1,125v-22.5h5.7c2.3,0,3.9,0.5,5,1.4c1.1,0.9,1.6,2.3,1.6,4.3c0,2.9-1.2,4.5-3.6,4.8 c1.5,0.3,2.5,1,3.2,1.9c0.7,0.9,1,2.2,1,3.8c0,2-0.5,3.6-1.6,4.7c-1,1.1-2.7,1.7-4.8,1.7H202.1z M205.6,111.9h2 c1.4,0,2.4-0.3,3-0.9c0.6-0.6,0.8-1.5,0.8-2.9c0-0.4,0-0.8-0.1-1.2s-0.2-0.6-0.3-0.8c-0.1-0.2-0.3-0.4-0.5-0.6 c-0.2-0.2-0.5-0.3-0.7-0.4c-0.2-0.1-0.5-0.2-0.9-0.2c-0.4,0-0.8-0.1-1.1-0.1c-0.4,0-0.8,0-1.4,0h-0.8V111.9z M205.6,122.8h2.3 c1.5,0,2.5-0.3,3.1-1c0.6-0.6,0.8-1.7,0.8-3.2c0-1.4-0.3-2.5-1-3.2c-0.7-0.7-1.7-1.1-3.2-1.1h-2.1V122.8z"/>
+ <path class="st1" d="M219.7,125v-22.5h9.6v2.3h-6.2v7.4h5v2.2h-5v8.3h6.2v2.3H219.7z"/>
+ <path class="st1" d="M236.4,125v-20.2h-4.6v-2.3h12.4v2.3h-4.4V125H236.4z"/>
+ <path class="st1" d="M250.2,125v-20.2h-4.6v-2.3h12.4v2.3h-4.4V125H250.2z"/>
+ <path class="st1" d="M261.5,125v-22.5h9.6v2.3H265v7.4h5v2.2h-5v8.3h6.2v2.3H261.5z"/>
+ <path class="st1" d="M275.6,125v-22.5h5c2.5,0,4.4,0.5,5.5,1.4c1.2,0.9,1.8,2.5,1.8,4.6c0,2.9-1,4.7-3.1,5.3l3.6,11.3H285 l-3.3-10.6H279V125H275.6z M279,112.2h1.3c1.5,0,2.6-0.3,3.2-0.8c0.6-0.5,1-1.5,1-2.9c0-1.4-0.3-2.3-0.8-2.9 c-0.6-0.6-1.6-0.8-3.1-0.8H279V112.2z"/>
+ <path class="st1" d="M307.5,125.3c-2.1,0-3.7-0.6-4.8-1.8c-1.1-1.2-1.7-2.8-1.8-4.8l3.1-0.8c0.2,3.2,1.4,4.8,3.5,4.8 c0.9,0,1.6-0.2,2-0.7c0.5-0.4,0.7-1.1,0.7-2c0-0.5-0.1-0.9-0.2-1.3c-0.1-0.4-0.3-0.8-0.6-1.1c-0.3-0.4-0.6-0.7-0.8-0.9 c-0.3-0.2-0.6-0.6-1.1-1l-4.2-3.4c-0.8-0.7-1.5-1.4-1.8-2.2c-0.4-0.8-0.6-1.7-0.6-2.8c0-1.6,0.5-2.9,1.6-3.8 c1-0.9,2.5-1.4,4.3-1.4c2,0,3.5,0.4,4.5,1.4c1,1,1.6,2.4,1.8,4.4l-2.9,0.7c0-0.5-0.1-0.9-0.2-1.3c-0.1-0.4-0.2-0.8-0.3-1.1 c-0.2-0.4-0.4-0.7-0.6-0.9c-0.2-0.2-0.5-0.4-0.9-0.6c-0.4-0.1-0.8-0.2-1.3-0.2c-1.8,0.1-2.7,0.9-2.7,2.5c0,0.7,0.1,1.2,0.4,1.7 c0.3,0.4,0.7,0.9,1.3,1.4l4.2,3.4c1.1,0.9,2,1.8,2.6,2.9c0.6,1,1,2.2,1,3.5c0,1.6-0.5,3-1.7,3.9 C310.7,124.8,309.3,125.3,307.5,125.3z"/>
+ <path class="st1" d="M323.9,125.3c-1,0-1.9-0.1-2.7-0.4c-0.8-0.3-1.4-0.6-1.9-1c-0.5-0.4-0.9-1-1.2-1.6c-0.3-0.6-0.5-1.3-0.7-2 c-0.1-0.7-0.2-1.5-0.2-2.4v-8c0-1,0.1-1.8,0.2-2.5c0.1-0.7,0.3-1.4,0.7-2.1c0.3-0.6,0.7-1.2,1.2-1.6c0.5-0.4,1.1-0.7,1.9-1 c0.8-0.2,1.7-0.4,2.7-0.4c2.2,0,3.9,0.5,4.8,1.6c1,1.1,1.5,2.7,1.5,4.8v1.8h-3.3V109c0-0.3,0-0.6,0-0.8c0-0.2,0-0.4,0-0.7 c0-0.3,0-0.5-0.1-0.7c0-0.2-0.1-0.4-0.2-0.6c-0.1-0.2-0.1-0.4-0.2-0.5c-0.1-0.1-0.2-0.3-0.4-0.4c-0.1-0.1-0.3-0.2-0.5-0.3 c-0.2-0.1-0.4-0.1-0.7-0.2c-0.3,0-0.5-0.1-0.9-0.1c-0.5,0-0.9,0-1.3,0.1c-0.4,0.1-0.7,0.2-0.9,0.4c-0.2,0.2-0.4,0.4-0.6,0.7 c-0.1,0.2-0.3,0.6-0.3,0.9c-0.1,0.4-0.1,0.8-0.1,1.1c0,0.4,0,0.8,0,1.3v8.9c0,1.7,0.2,2.9,0.7,3.5c0.5,0.7,1.3,1,2.5,1 c0.5,0,0.9,0,1.2-0.1c0.3-0.1,0.6-0.2,0.8-0.4c0.2-0.2,0.4-0.4,0.5-0.7c0.1-0.2,0.2-0.5,0.3-0.9c0.1-0.4,0.1-0.7,0.1-1.1 c0-0.3,0-0.8,0-1.3v-1.7h3.3v1.7c0,0.9-0.1,1.6-0.2,2.3c-0.1,0.7-0.3,1.3-0.6,1.9c-0.3,0.6-0.7,1.1-1.1,1.5 c-0.5,0.4-1.1,0.7-1.8,0.9C325.7,125.2,324.9,125.3,323.9,125.3z"/>
+ <path class="st1" d="M335.2,125v-22.5h3.4V125H335.2z"/>
+ <path class="st1" d="M344.2,125v-22.5h9.6v2.3h-6.2v7.4h5v2.2h-5v8.3h6.2v2.3H344.2z"/>
+ <path class="st1" d="M358.3,125v-22.5h2.4l7.1,15v-15h2.9V125h-2.2l-7.2-15.4V125H358.3z"/>
+ <path class="st1" d="M382.3,125.3c-1,0-1.9-0.1-2.7-0.4c-0.8-0.3-1.4-0.6-1.9-1c-0.5-0.4-0.9-1-1.2-1.6c-0.3-0.6-0.5-1.3-0.7-2 c-0.1-0.7-0.2-1.5-0.2-2.4v-8c0-1,0.1-1.8,0.2-2.5c0.1-0.7,0.3-1.4,0.7-2.1c0.3-0.6,0.7-1.2,1.2-1.6c0.5-0.4,1.1-0.7,1.9-1 c0.8-0.2,1.7-0.4,2.7-0.4c2.2,0,3.9,0.5,4.8,1.6c1,1.1,1.5,2.7,1.5,4.8v1.8h-3.3V109c0-0.3,0-0.6,0-0.8c0-0.2,0-0.4,0-0.7 c0-0.3,0-0.5-0.1-0.7c0-0.2-0.1-0.4-0.2-0.6c-0.1-0.2-0.1-0.4-0.2-0.5c-0.1-0.1-0.2-0.3-0.4-0.4c-0.1-0.1-0.3-0.2-0.5-0.3 c-0.2-0.1-0.4-0.1-0.7-0.2c-0.3,0-0.5-0.1-0.9-0.1c-0.5,0-0.9,0-1.3,0.1c-0.4,0.1-0.7,0.2-0.9,0.4c-0.2,0.2-0.4,0.4-0.6,0.7 c-0.1,0.2-0.3,0.6-0.3,0.9c-0.1,0.4-0.1,0.8-0.1,1.1c0,0.4,0,0.8,0,1.3v8.9c0,1.7,0.2,2.9,0.7,3.5c0.5,0.7,1.3,1,2.5,1 c0.5,0,0.9,0,1.2-0.1c0.3-0.1,0.6-0.2,0.8-0.4c0.2-0.2,0.4-0.4,0.5-0.7c0.1-0.2,0.2-0.5,0.3-0.9c0.1-0.4,0.1-0.7,0.1-1.1 c0-0.3,0-0.8,0-1.3v-1.7h3.3v1.7c0,0.9-0.1,1.6-0.2,2.3c-0.1,0.7-0.3,1.3-0.6,1.9c-0.3,0.6-0.7,1.1-1.1,1.5 c-0.5,0.4-1.1,0.7-1.8,0.9C384.1,125.2,383.2,125.3,382.3,125.3z"/>
+ <path class="st1" d="M393.4,125v-22.5h9.6v2.3h-6.2v7.4h5v2.2h-5v8.3h6.2v2.3H393.4z"/>
+ </g>
+</g>
+</svg>
\ No newline at end of file
diff --git a/web/pandas/static/img/partners/r_studio.svg b/web/pandas/static/img/partners/r_studio.svg
new file mode 100644
index 0000000000000..15a1d2a30ff30
--- /dev/null
+++ b/web/pandas/static/img/partners/r_studio.svg
@@ -0,0 +1,50 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!-- Generator: Adobe Illustrator 22.1.0, SVG Export Plug-In . SVG Version: 6.00 Build 0) -->
+<svg version="1.1" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" x="0px" y="0px" viewBox="0 0 1784.1 625.9" style="enable-background:new 0 0 1784.1 625.9;" xml:space="preserve">
+<style type="text/css">
+ .st0{fill:#75AADB;}
+ .st1{fill:#4D4D4D;}
+ .st2{fill:#FFFFFF;}
+ .st3{fill:url(#SVGID_1_);}
+ .st4{fill:url(#SVGID_2_);}
+ .st5{fill:url(#SVGID_3_);}
+ .st6{fill:url(#SVGID_4_);}
+ .st7{fill:url(#SVGID_5_);}
+ .st8{fill:url(#SVGID_6_);}
+ .st9{fill:url(#SVGID_7_);}
+ .st10{fill:url(#SVGID_8_);}
+ .st11{fill:url(#SVGID_9_);}
+ .st12{fill:url(#SVGID_10_);}
+ .st13{opacity:0.18;fill:url(#SVGID_11_);}
+ .st14{opacity:0.3;}
+</style>
+<g id="Gray_Logo">
+</g>
+<g id="Black_Letters">
+</g>
+<g id="Blue_Gradient_Letters">
+ <g>
+
+ <ellipse transform="matrix(0.7071 -0.7071 0.7071 0.7071 -127.9265 317.0317)" class="st0" cx="318.7" cy="312.9" rx="309.8" ry="309.8"/>
+ <g>
+ <path class="st1" d="M694.4,404.8c16.1,10.3,39.1,18.1,63.9,18.1c36.7,0,58.1-19.4,58.1-47.4c0-25.5-14.8-40.8-52.3-54.8 c-45.3-16.5-73.3-40.4-73.3-79.1c0-43.3,35.8-75.4,89.8-75.4c28,0,49,6.6,61,13.6l-9.9,29.3c-8.7-5.4-27.2-13.2-52.3-13.2 c-37.9,0-52.3,22.7-52.3,41.6c0,26,16.9,38.7,55.2,53.6c47,18.1,70.5,40.8,70.5,81.6c0,42.8-31.3,80.3-96.8,80.3 c-26.8,0-56-8.2-70.9-18.1L694.4,404.8z"/>
+ <path class="st1" d="M943.3,201.3v47.8h51.9v27.6h-51.9v107.5c0,24.7,7,38.7,27.2,38.7c9.9,0,15.7-0.8,21-2.5l1.6,27.6 c-7,2.5-18.1,4.9-32.1,4.9c-16.9,0-30.5-5.8-39.1-15.2c-9.9-11.1-14-28.8-14-52.3V276.7h-30.9v-27.6h30.9V212L943.3,201.3z"/>
+ <path class="st1" d="M1202.8,393.7c0,21,0.4,39.1,1.6,54.8h-32.1l-2.1-32.5h-0.8c-9.1,16.1-30.5,37.1-65.9,37.1 c-31.3,0-68.8-17.7-68.8-87.3V249.1h36.3v110c0,37.9,11.9,63.9,44.5,63.9c24.3,0,41.2-16.9,47.8-33.4c2.1-4.9,3.3-11.5,3.3-18.5 v-122h36.3V393.7z"/>
+ <path class="st1" d="M1434.8,156v241c0,17.7,0.8,37.9,1.6,51.5h-32.1l-1.6-34.6h-1.2c-10.7,22.2-34.6,39.1-67.2,39.1 c-48.2,0-85.7-40.8-85.7-101.4c-0.4-66.3,41.2-106.7,89.4-106.7c30.9,0,51.1,14.4,60.2,30.1h0.8V156H1434.8z M1398.9,330.2 c0-4.5-0.4-10.7-1.6-15.2c-5.4-22.7-25.1-41.6-52.3-41.6c-37.5,0-59.7,33-59.7,76.6c0,40.4,20.2,73.8,58.9,73.8 c24.3,0,46.6-16.5,53.1-43.3c1.2-4.9,1.6-9.9,1.6-15.7V330.2z"/>
+ <path class="st1" d="M1535.7,193c0,12.4-8.7,22.2-23.1,22.2c-13.2,0-21.8-9.9-21.8-22.2c0-12.4,9.1-22.7,22.7-22.7 C1526.6,170.4,1535.7,180.3,1535.7,193z M1495.3,448.5V249.1h36.3v199.4H1495.3z"/>
+ <path class="st1" d="M1772.2,347.1c0,73.7-51.5,105.9-99.3,105.9c-53.6,0-95.6-39.6-95.6-102.6c0-66.3,44.1-105.5,98.9-105.5 C1733.5,245,1772.2,286.6,1772.2,347.1z M1614.4,349.2c0,43.7,24.7,76.6,60.2,76.6c34.6,0,60.6-32.5,60.6-77.5 c0-33.8-16.9-76.2-59.7-76.2C1632.9,272.1,1614.4,311.7,1614.4,349.2z"/>
+ </g>
+ <g>
+ <path class="st2" d="M424.7,411.8h33.6v26.1h-51.3L322,310.5h-45.3v101.3h44.3v26.1H209.5v-26.1h38.3V187.3l-38.3-4.7v-24.7 c14.5,3.3,27.1,5.6,42.9,5.6c23.8,0,48.1-5.6,71.9-5.6c46.2,0,89.1,21,89.1,72.3c0,39.7-23.8,64.9-60.7,75.6L424.7,411.8z M276.7,285.3l24.3,0.5c59.3,0.9,82.1-21.9,82.1-52.3c0-35.5-25.7-49.5-58.3-49.5c-15.4,0-31.3,1.4-48.1,3.3V285.3z"/>
+ </g>
+ <g>
+ <path class="st1" d="M1751.8,170.4c-12.9,0-23.4,10.5-23.4,23.4c0,12.9,10.5,23.4,23.4,23.4c12.9,0,23.4-10.5,23.4-23.4 C1775.2,180.9,1764.7,170.4,1751.8,170.4z M1771.4,193.8c0,10.8-8.8,19.5-19.5,19.5c-10.8,0-19.5-8.8-19.5-19.5 c0-10.8,8.8-19.5,19.5-19.5C1762.6,174.2,1771.4,183,1771.4,193.8z"/>
+ <path class="st1" d="M1760.1,203.3l-5.8-8.5c3.3-1.2,5-3.6,5-7c0-5.1-4.3-6.9-8.4-6.9c-1.1,0-2.2,0.1-3.2,0.3 c-1,0.1-2.1,0.2-3.1,0.2c-1.4,0-2.5-0.2-3.7-0.5l-0.6-0.1v3.3l3.4,0.4v18.8h-3.4v3.4h10.9v-3.4h-3.9v-7.9h3.2l7.3,11l0.2,0.2h5.3 v-3.4H1760.1z M1755.6,188.1c0,1.2-0.5,2.2-1.4,2.9c-1.1,0.8-2.8,1.2-5,1.2l-1.9,0v-7.7c1.4-0.1,2.6-0.2,3.7-0.2 C1753.1,184.3,1755.6,185,1755.6,188.1z"/>
+ </g>
+ </g>
+</g>
+<g id="White_Letters">
+</g>
+<g id="R_Ball">
+</g>
+</svg>
\ No newline at end of file
diff --git a/web/pandas/static/img/partners/tidelift.svg b/web/pandas/static/img/partners/tidelift.svg
new file mode 100644
index 0000000000000..af12d68417235
--- /dev/null
+++ b/web/pandas/static/img/partners/tidelift.svg
@@ -0,0 +1,33 @@
+<?xml version="1.0" encoding="utf-8"?>
+<!-- Generator: Adobe Illustrator 21.1.0, SVG Export Plug-In . SVG Version: 6.00 Build 0) -->
+<svg version="1.1" id="Artwork" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" x="0px" y="0px"
+ viewBox="0 0 190.1 33" style="enable-background:new 0 0 190.1 33;" xml:space="preserve">
+<style type="text/css">
+ .st0{fill:#4B5168;}
+ .st1{fill:#F6914D;}
+</style>
+<g>
+ <path class="st0" d="M33.4,27.7V5.3c0-2.3,0-2.3,2.4-2.3c2.4,0,2.4,0,2.4,2.3v22.4c0,2.3,0,2.3-2.4,2.3
+ C33.4,29.9,33.4,29.9,33.4,27.7z"/>
+ <path class="st0" d="M45,26.4V6.6c0-3.6,0-3.6,3.6-3.6h5.8c7.8,0,12.5,3.9,13,10.2c0.2,2.2,0.2,3.4,0,5.5
+ c-0.5,6.3-5.3,11.2-13,11.2h-5.8C45,29.9,45,29.9,45,26.4z M54.3,25.4c5.3,0,8-3,8.3-7.1c0.1-1.8,0.1-2.8,0-4.6
+ c-0.3-4.2-3-6.1-8.3-6.1h-4.5v17.8H54.3z"/>
+ <path class="st0" d="M73.8,26.4V6.6c0-3.6,0-3.6,3.6-3.6h13.5c2.3,0,2.3,0,2.3,2.2c0,2.2,0,2.2-2.3,2.2H78.6v6.9h11
+ c2.2,0,2.2,0,2.2,2.1c0,2.1,0,2.1-2.2,2.1h-11v6.9h12.3c2.3,0,2.3,0,2.3,2.2c0,2.3,0,2.3-2.3,2.3H77.4
+ C73.8,29.9,73.8,29.9,73.8,26.4z"/>
+ <path class="st0" d="M100,26.4v-21c0-2.3,0-2.3,2.4-2.3c2.4,0,2.4,0,2.4,2.3v20.2h11.9c2.4,0,2.4,0,2.4,2.2c0,2.2,0,2.2-2.4,2.2
+ h-13.1C100,29.9,100,29.9,100,26.4z"/>
+ <path class="st0" d="M125.8,27.7V5.3c0-2.3,0-2.3,2.4-2.3c2.4,0,2.4,0,2.4,2.3v22.4c0,2.3,0,2.3-2.4,2.3
+ C125.8,29.9,125.8,29.9,125.8,27.7z"/>
+ <path class="st0" d="M137.4,27.7V6.6c0-3.6,0-3.6,3.6-3.6h13.5c2.3,0,2.3,0,2.3,2.2c0,2.2,0,2.2-2.3,2.2h-12.2v7.2h11.3
+ c2.3,0,2.3,0,2.3,2.2c0,2.2,0,2.2-2.3,2.2h-11.3v8.6c0,2.3,0,2.3-2.4,2.3S137.4,29.9,137.4,27.7z"/>
+ <path class="st0" d="M24.2,3.1H5.5c-2.4,0-2.4,0-2.4,2.2c0,2.2,0,2.2,2.4,2.2h7v4.7v3.2l4.8-3.7v-1.1V7.5h7c2.4,0,2.4,0,2.4-2.2
+ C26.6,3.1,26.6,3.1,24.2,3.1z"/>
+ <path class="st1" d="M12.5,20v7.6c0,2.3,0,2.3,2.4,2.3c2.4,0,2.4,0,2.4-2.3V16.3L12.5,20z"/>
+ <g>
+ <path class="st0" d="M165.9,3.1h18.7c2.4,0,2.4,0,2.4,2.2c0,2.2,0,2.2-2.4,2.2h-7v4.7v3.2l-4.8-3.7v-1.1V7.5h-7
+ c-2.4,0-2.4,0-2.4-2.2C163.5,3.1,163.5,3.1,165.9,3.1z"/>
+ <path class="st1" d="M177.6,20v7.6c0,2.3,0,2.3-2.4,2.3c-2.4,0-2.4,0-2.4-2.3V16.3L177.6,20z"/>
+ </g>
+</g>
+</svg>
diff --git a/web/pandas/static/img/partners/two_sigma.svg b/web/pandas/static/img/partners/two_sigma.svg
new file mode 100644
index 0000000000000..d38df12766ed6
--- /dev/null
+++ b/web/pandas/static/img/partners/two_sigma.svg
@@ -0,0 +1 @@
+<svg width="230" height="42" viewBox="0 0 230 42" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink"><title>Logo</title><defs><path id="a" d="M19.436 21.668V1.025H0v20.643h19.435z"></path></defs><g fill="none" fill-rule="evenodd"><path fill="#2D2D2D" d="M59.06 13.464h-7.137v-3.155h17.811v3.155H62.6V30.95h-3.54zm14.01-3.155h3.745l4.747 15.66h.06l4.483-15.66h3.301l4.454 15.66h.059l4.777-15.66h3.716L95.895 30.95H92.09l-4.335-15.127h-.059L83.361 30.95h-3.804zm41.214-.355c5.986 0 10.527 4.158 10.527 10.556 0 6.55-4.541 10.794-10.527 10.794-5.985 0-10.558-4.245-10.558-10.794 0-6.398 4.573-10.556 10.558-10.556m0 18.285c3.892 0 6.93-2.89 6.93-7.729 0-4.658-3.007-7.518-6.93-7.518-3.922 0-6.93 2.86-6.93 7.518 0 4.839 3.038 7.73 6.93 7.73m40.846-17.931h3.539V30.95h-3.54V19.41zm18.744-.355c2.832 0 5.222.885 7.313 2.33 0 0-2.026 2.374-2.128 2.311-1.56-1-3.21-1.574-5.096-1.574-4.247 0-7.048 3.068-7.048 7.433 0 4.746 2.624 7.785 7.048 7.785 1.534 0 3.067-.385 4.13-1.003v-4.897h-5.19v-2.623h8.462v9.347c-2.007 1.416-4.63 2.24-7.49 2.24-6.46 0-10.587-4.363-10.587-10.85 0-6.075 4.187-10.499 10.586-10.499m12.506.355h3.57l6.812 9.701 6.811-9.701h3.541V30.95h-3.421V15.558l-6.962 9.73-6.958-9.73V30.95h-3.392z"></path><g transform="translate(210.418 9.283)"><mask id="b" fill="#fff"><use xlink:href="#a"></use></mask><path d="M7.639 1.025h4.158l7.64 20.643H15.63l-1.561-4.454H5.368l-1.533 4.454H0L7.639 1.025zM6.34 14.354h6.725L9.734 4.74h-.06L6.34 14.354z" fill="#2D2D2D" mask="url(#b)"></path></g><path d="M136.826 26.498c1.861 1.007 3.618 1.68 5.887 1.68 2.715 0 4.069-1.18 4.069-2.83 0-4.66-11.616-1.594-11.616-9.466 0-3.303 2.74-5.928 7.37-5.928 2.714 0 5.443.653 7.579 1.902l-2.314 2.361c-1.68-.72-3.11-1.137-5.146-1.137-2.389 0-3.806 1.21-3.806 2.744 0 4.63 11.62 1.473 11.62 9.494 0 3.393-2.567 5.985-7.756 5.985-3.035 0-6.33-1.076-8.273-2.419l2.386-2.386z" fill="#2D2D2D"></path><path fill="#009AA6" d="M20.625 0L0 20.63l20.625 20.628 20.63-20.628z"></path><path 
d="M9.748 26.478c-.16-6.605 7.789-5.746 7.789-9.13 0-1.1-.863-2.041-2.784-2.041-1.401 0-2.743.701-3.724 1.602l-1.46-1.463c1.259-1.18 3.223-2.14 5.284-2.14 3.304 0 4.986 1.842 4.986 4.003 0 4.986-7.728 4.104-7.728 8.27h7.607v1.98h-9.95l-.02-1.081zm15.937-.5c-1.521 0-2.423-.98-2.423-2.862 0-2.404 1.602-4.566 3.525-4.566 1.5 0 2.402.981 2.402 2.883 0 2.401-1.582 4.545-3.504 4.545zm9.713-9.25h-8.444v.003c-3.437.005-6.033 2.745-6.033 6.403 0 2.905 1.881 4.666 4.544 4.666 3.464 0 6.067-2.743 6.067-6.386 0-1.182-.313-2.173-.856-2.935h2.947l1.775-1.75z" fill="#FFF"></path></g></svg>
diff --git a/web/pandas/static/img/partners/ursa_labs.svg b/web/pandas/static/img/partners/ursa_labs.svg
new file mode 100644
index 0000000000000..cacc80e337d25
--- /dev/null
+++ b/web/pandas/static/img/partners/ursa_labs.svg
@@ -0,0 +1,106 @@
+<?xml version="1.0" encoding="utf-8"?>
+<!-- Generator: Adobe Illustrator 23.0.3, SVG Export Plug-In . SVG Version: 6.00 Build 0) -->
+<svg version="1.1" id="Layer_1" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" x="0px" y="0px"
+ viewBox="0 0 359 270" style="enable-background:new 0 0 359 270;" xml:space="preserve">
+<style type="text/css">
+ .st0{fill-rule:evenodd;clip-rule:evenodd;fill:#404040;}
+ .st1{filter:url(#Adobe_OpacityMaskFilter);}
+ .st2{fill-rule:evenodd;clip-rule:evenodd;fill:#FFFFFF;}
+ .st3{mask:url(#mask-2_1_);}
+</style>
+<title>HOME 1 Copy 8</title>
+<desc>Created with Sketch.</desc>
+<g id="HOME-1-Copy-8">
+ <g id="Group" transform="translate(20.000000, 20.000000)">
+ <path id="URSA-LABS-Copy" class="st0" d="M0,158.4h9.1V214c0,0.3,0,0.7,0.1,1.1c0,0.3,0,0.9,0.1,1.6s0.2,1.5,0.6,2.3
+ c0.3,0.8,0.9,1.5,1.6,2.1c0.7,0.6,1.8,0.9,3.3,0.9c0.3,0,0.9,0,1.6-0.1c0.7-0.1,1.4-0.4,2.1-0.9c1-0.9,1.6-2,1.8-3.3
+ s0.3-3.2,0.4-5.5v-53.8h9.2v54.4c0,0.6,0,1.3-0.1,2.1c-0.1,0.8-0.2,1.7-0.3,2.6s-0.3,1.8-0.5,2.6c-0.7,2.3-1.7,4.1-3,5.4
+ c-1.3,1.3-2.7,2.3-4.2,2.9c-1.5,0.7-2.9,1.1-4.2,1.2c-1.3,0.1-2.3,0.2-3,0.2c-0.6,0-1.5-0.1-2.7-0.2c-1.2-0.1-2.5-0.5-3.8-1
+ s-2.6-1.4-3.8-2.5c-1.2-1.1-2.2-2.7-3-4.6c-0.4-1-0.7-2.1-0.9-3.3c-0.2-1.2-0.3-2.9-0.4-5V158.4z M44,158.4h17
+ c0.6,0,1.2,0,1.7,0.1c0.6,0.1,1.3,0.2,2.2,0.3c0.9,0.1,1.7,0.4,2.6,0.8c0.8,0.4,1.6,1.1,2.3,2c0.7,0.9,1.2,2.1,1.6,3.7
+ c0.4,1.8,0.6,5.1,0.6,10.1c0,1.3,0,2.7-0.1,4.1c0,1.4-0.1,2.8-0.2,4.2c-0.1,0.9-0.3,1.9-0.4,2.9s-0.4,1.9-0.7,2.7
+ c-0.4,0.9-0.9,1.6-1.6,2.1s-1.3,0.8-2,1c-0.7,0.2-1.3,0.3-1.9,0.3H64v0.5c1.3,0.1,2.4,0.3,3.3,0.6c0.9,0.3,1.8,1,2.5,2.1
+ c0.8,1.3,1.3,2.7,1.5,4.3c0.2,1.6,0.3,3.9,0.3,6.8v7.7c0,2,0,3.6,0.1,4.9c0.1,1.3,0.2,2.4,0.3,3.3c0.1,0.9,0.3,1.8,0.5,2.7
+ c0.2,0.9,0.6,1.8,1,2.9h-9.7c-0.3-1.7-0.6-3-0.8-4.1s-0.3-2.2-0.4-3.2c-0.1-1-0.2-2.1-0.2-3.2c0-1.1-0.1-2.5-0.1-4.2v-5
+ c-0.1-1.2-0.1-2.4-0.2-3.6c0-1.2-0.1-2.4-0.3-3.6c-0.1-0.9-0.3-1.7-0.5-2.5c-0.2-0.8-0.6-1.5-1.2-2c-0.5-0.3-1-0.5-1.5-0.6
+ s-1-0.2-1.6-0.2h-3.8v32.4H44V158.4z M53.4,166.9v21.7h4.4c1.2,0,2.2-0.2,2.9-0.6c0.7-0.4,1.2-1.2,1.6-2.5
+ c0.2-0.9,0.3-2.3,0.4-4.2s0.1-4.1,0.1-6.6c0-0.7,0-1.5-0.1-2.2c0-0.8-0.1-1.5-0.2-2.2c-0.1-1.4-0.4-2.3-1-2.8
+ c-0.3-0.3-0.8-0.5-1.3-0.5c-0.5,0-1.2,0-2.2,0H53.4z M110.6,169.1v12.4h-8.5v-12.4c0-0.2,0-0.6-0.1-1.1c0-0.5-0.2-1.1-0.4-1.6
+ c-0.2-0.5-0.6-1-1.1-1.4s-1.3-0.6-2.3-0.6c-1.1,0-2,0.2-2.6,0.6c-0.6,0.4-1.1,1-1.4,1.7c-0.3,0.7-0.5,1.5-0.6,2.3
+ c-0.1,0.9-0.1,1.7-0.1,2.5c0,1.5,0.1,2.8,0.3,4c0.2,1.2,0.5,2.3,0.9,3.4s0.9,2.2,1.5,3.2s1.3,2.2,2.1,3.4c0.7,1.1,1.3,2.1,2,3.1
+ c0.7,1,1.4,2,2.1,3.1c1,1.4,2,2.9,3.1,4.6c1.2,1.9,2.2,3.7,2.9,5.3c0.7,1.6,1.3,3.1,1.7,4.5c0.4,1.4,0.7,2.7,0.8,3.9
+ c0.1,1.2,0.2,2.3,0.2,3.3c0,0.4,0,1.3-0.1,2.6c-0.1,1.3-0.4,2.8-0.9,4.4c-0.5,1.6-1.3,3.3-2.3,4.9c-1,1.6-2.6,2.9-4.6,3.7
+ c-0.6,0.3-1.4,0.5-2.4,0.7c-1,0.2-2.3,0.3-3.8,0.3c-2.9,0-5.1-0.5-6.8-1.4s-2.8-1.9-3.6-2.8c-1.5-1.7-2.3-3.4-2.6-5.3
+ s-0.4-3.8-0.5-5.9V203h8.6v12.8c0,1.7,0.2,3,0.5,3.8c0.3,0.8,0.8,1.5,1.6,2c0.2,0.1,0.5,0.3,1,0.5c0.5,0.2,1.1,0.3,1.8,0.3
+ c1.1,0,2-0.3,2.7-0.8c0.6-0.6,1.1-1.3,1.4-2.1c0.3-0.8,0.4-1.7,0.5-2.7c0-1,0.1-1.8,0.1-2.6c0-2.5-0.3-4.6-0.8-6.4
+ c-0.5-1.7-1.4-3.7-2.7-5.9c-1.3-2.3-2.8-4.5-4.3-6.6s-2.9-4.3-4.3-6.5c-0.4-0.6-0.9-1.4-1.5-2.4c-0.6-1-1.2-2.2-1.8-3.6
+ c-0.6-1.4-1.1-3-1.5-4.7s-0.7-3.7-0.7-5.7c0-3.9,0.7-6.9,2.1-9s2.8-3.7,4.3-4.5c0.7-0.5,1.8-0.9,3.1-1.3c1.3-0.4,3-0.6,5-0.6
+ c0.5,0,1.2,0,2.3,0.1c1,0.1,2.1,0.3,3.3,0.7c1.1,0.4,2.2,1.1,3.3,2c1.1,0.9,1.9,2.3,2.4,4c0.2,0.7,0.4,1.4,0.5,2.1
+ C110.5,166.6,110.5,167.7,110.6,169.1z M140.1,158.4l10.9,70.3h-9.1l-1.8-12.9h-10.6l-1.6,12.9h-9.1l10-70.3H140.1z M133.5,183
+ l-3,24.2h8.4l-3.5-24.4c-0.1-0.6-0.2-1.2-0.3-1.8c0-0.6-0.1-1.2-0.2-1.8c-0.1-1.3-0.1-2.6-0.1-3.8c0-1.3,0-2.5-0.1-3.8H134
+ c-0.1,1.9-0.1,3.8-0.2,5.7C133.7,179.2,133.6,181.1,133.5,183z M190.2,158.4V220h15.4v8.7h-24.7v-70.3H190.2z M232,158.4
+ l10.9,70.3h-9.1l-1.8-12.9h-10.6l-1.6,12.9h-9.1l10-70.3H232z M225.4,183l-3,24.2h8.4l-3.5-24.4c-0.1-0.6-0.2-1.2-0.3-1.8
+ c0-0.6-0.1-1.2-0.2-1.8c-0.1-1.3-0.1-2.6-0.1-3.8c0-1.3,0-2.5-0.1-3.8h-0.8c-0.1,1.9-0.1,3.8-0.2,5.7
+ C225.6,179.2,225.5,181.1,225.4,183z M251.9,158.4h16.5c1.5,0,2.9,0.1,4.4,0.2s2.8,0.8,3.9,1.8c1.3,1.2,2,2.7,2.2,4.5
+ c0.2,1.8,0.3,4.3,0.4,7.4c0,0.6,0,1.2,0.1,1.8c0,0.6,0.1,1.2,0.1,1.8c0,1.1,0,2.2-0.1,3.3c0,1.1-0.1,2.2-0.2,3.3
+ c0,0.2,0,0.9-0.1,2.1c-0.1,1.2-0.3,2.3-0.8,3.3c-0.4,0.7-1,1.3-1.7,1.8c-0.7,0.5-1.4,0.8-2.2,1c-0.4,0.1-0.8,0.2-1.3,0.2
+ c-0.5,0-0.8,0-0.9,0.1v0.5c1.3,0.1,2.4,0.4,3.5,0.7c1,0.4,1.9,1.1,2.6,2.2c0.5,1,0.8,2.2,0.9,3.7c0.1,1.5,0.1,3.4,0.1,5.9
+ c0.1,0.9,0.1,1.9,0.1,2.8v7c0,1.4-0.1,2.8-0.2,4.3c0,0.2,0,0.6-0.1,1.2c0,0.6-0.2,1.3-0.4,2.1c-0.2,0.8-0.5,1.6-0.9,2.5
+ s-1,1.6-1.7,2.3c-1.4,1.1-3,1.8-4.9,1.9s-3.6,0.2-5.3,0.2h-14.2V158.4z M260.9,166.8v21.1h3.6c1.5-0.1,2.7-0.2,3.7-0.5
+ c1-0.3,1.6-1.3,1.8-3c0.2-1.4,0.3-3.8,0.3-7.1c0-2.2-0.1-4.4-0.3-6.6c-0.1-1.7-0.4-2.8-1-3.3c-0.3-0.3-0.8-0.5-1.3-0.5
+ c-0.5,0-1.2,0-2.1,0H260.9z M260.9,195.5V220h4.8c0.5,0,1,0,1.5,0c0.5,0,0.9-0.1,1.3-0.2c0.4-0.1,0.7-0.3,1-0.6
+ c0.3-0.3,0.5-0.8,0.6-1.4c0-0.3,0-0.7,0.1-1.4c0-0.7,0.1-1.5,0.1-2.4c0-0.9,0.1-1.9,0.1-2.9c0-1.1,0.1-2.1,0.1-3.1
+ c0-1.2,0-2.4-0.1-3.5c0-1.2-0.1-2.3-0.2-3.5c-0.1-0.7-0.2-1.4-0.3-2.3c-0.1-0.9-0.4-1.6-1-2.1c-0.4-0.3-0.9-0.5-1.4-0.6
+ s-1-0.1-1.5-0.1H260.9z M318.4,169.1v12.4h-8.5v-12.4c0-0.2,0-0.6-0.1-1.1c0-0.5-0.2-1.1-0.4-1.6c-0.2-0.5-0.6-1-1.1-1.4
+ c-0.5-0.4-1.3-0.6-2.3-0.6c-1.1,0-2,0.2-2.6,0.6s-1.1,1-1.4,1.7s-0.5,1.5-0.6,2.3c-0.1,0.9-0.1,1.7-0.1,2.5c0,1.5,0.1,2.8,0.3,4
+ s0.5,2.3,0.9,3.4s0.9,2.2,1.5,3.2c0.6,1.1,1.3,2.2,2.1,3.4c0.7,1.1,1.3,2.1,2,3.1c0.7,1,1.4,2,2.1,3.1c1,1.4,2,2.9,3.1,4.6
+ c1.2,1.9,2.2,3.7,2.9,5.3c0.7,1.6,1.3,3.1,1.7,4.5c0.4,1.4,0.7,2.7,0.8,3.9c0.1,1.2,0.2,2.3,0.2,3.3c0,0.4,0,1.3-0.1,2.6
+ c-0.1,1.3-0.4,2.8-0.9,4.4c-0.5,1.6-1.3,3.3-2.3,4.9c-1,1.6-2.6,2.9-4.6,3.7c-0.6,0.3-1.4,0.5-2.4,0.7c-1,0.2-2.3,0.3-3.8,0.3
+ c-2.9,0-5.1-0.5-6.8-1.4c-1.6-0.9-2.8-1.9-3.6-2.8c-1.5-1.7-2.3-3.4-2.6-5.3c-0.3-1.9-0.4-3.8-0.5-5.9V203h8.6v12.8
+ c0,1.7,0.2,3,0.5,3.8c0.3,0.8,0.8,1.5,1.6,2c0.2,0.1,0.5,0.3,1,0.5c0.5,0.2,1.1,0.3,1.8,0.3c1.1,0,2-0.3,2.7-0.8
+ c0.6-0.6,1.1-1.3,1.4-2.1c0.3-0.8,0.4-1.7,0.5-2.7c0-1,0.1-1.8,0.1-2.6c0-2.5-0.3-4.6-0.8-6.4c-0.5-1.7-1.4-3.7-2.7-5.9
+ c-1.3-2.3-2.8-4.5-4.3-6.6c-1.5-2.1-2.9-4.3-4.3-6.5c-0.4-0.6-0.9-1.4-1.5-2.4c-0.6-1-1.2-2.2-1.8-3.6c-0.6-1.4-1.1-3-1.5-4.7
+ c-0.4-1.8-0.7-3.7-0.7-5.7c0-3.9,0.7-6.9,2.1-9s2.8-3.7,4.3-4.5c0.7-0.5,1.8-0.9,3.1-1.3c1.3-0.4,3-0.6,5-0.6c0.5,0,1.2,0,2.3,0.1
+ c1,0.1,2.1,0.3,3.3,0.7s2.2,1.1,3.3,2s1.9,2.3,2.4,4c0.2,0.7,0.4,1.4,0.5,2.1C318.2,166.6,318.3,167.7,318.4,169.1z"/>
+ <g id="Group-3-Copy" transform="translate(47.000000, 0.000000)">
+ <g id="Clip-2">
+ </g>
+ <defs>
+ <filter id="Adobe_OpacityMaskFilter" filterUnits="userSpaceOnUse" x="0" y="0" width="225.8" height="123.9">
+ <feColorMatrix type="matrix" values="1 0 0 0 0 0 1 0 0 0 0 0 1 0 0 0 0 0 1 0"/>
+ </filter>
+ </defs>
+ <mask maskUnits="userSpaceOnUse" x="0" y="0" width="225.8" height="123.9" id="mask-2_1_">
+ <g class="st1">
+ <polygon id="path-1_1_" class="st2" points="0,0 225.9,0 225.9,123.9 0,123.9 "/>
+ </g>
+ </mask>
+ <g id="Page-1" class="st3">
+ <g id="Mask">
+ <path class="st0" d="M177.2,54.3c6.1,21.2,19.4,48.5,24,54.7c5.3-1.2,9.1,1.2,12.4,5.1c-1.2,0.9-2.7,1.5-3.4,2.6
+ c-2.7,4.4-6.9,3-10.7,3.2c-2.8,0.2-5.6,0.3-8.4,0.3c-0.9,0-1.8-0.3-2.7-0.5c-1-0.3-1.9-1-2.8-1c-2.5,0.1-4.7,0-7.1-1.1
+ c-1-0.5-2.6,0.9-3.6-0.8c-1.1-1.8-2.2-3.6-3.4-5.5c-1.2,0.2-2.2,0.4-3.4,0.6c-2.4-3-3.4-14.8-6.1-17.7
+ c-0.6-0.7-2.1-2.2-3.8-2.7c-0.3-0.9-5.4-7.2-5.9-8.7c-0.2-0.5-0.3-1.2-0.7-1.4c-3.1-2-4.2-4.9-4-8.5c0-0.4-0.2-0.7-0.4-1.7
+ c-1.2,2.7-2.2,4.8-3.2,7.1c-0.6,1.4-1,2.9-1.8,4.3c-0.5,0.9-1.3,1.6-2,2.3c-2.4,2.2-1.8,0.9-3.2,3.6c-1.1,2-2,4-3,6.1
+ c-0.5,1.1-0.9,2.2-1.1,3.3c-0.7,4.1-3.2,7.6-1.5,11.2c3,0.6,6.3,0.5,8.6,2c2.2,1.5,3.5,4.5,5,6.7c-3.1,0.5-5.9,1.2-8.7,1.4
+ c-3.8,0.3-7.6,0.2-11.3,0.2c-5,0-10.1-0.1-15.1-0.1c-2.6,0-3.9-1.5-5.4-3.7c-2.1-3.1-1.1-6-0.8-9.1c0.1-0.8,0-3.3-0.1-4.2
+ c-0.1-0.9-0.1-1.9,0-2.9c0.2-1.3,0.8-2.6,0.9-3.9c0.1-1.5-0.4-3-0.4-4.5c0-1.5,0.1-3.1,0.5-4.6c0.7-2.7-0.1,0,0.7-2.7
+ c0.1-0.2,0-0.7,0-0.8c-0.9-3.6,1.8-6,2.8-8.8c0-0.1,0-0.1-0.1-0.5c-1.8,1.8-4.1,0.8-6.1,1.2c-2.9,0.6-5.7,2.1-8,3
+ c-1.4-0.1-2.5-0.4-3.5-0.2c-2,0.5-3.9,1.1-6.2,0.9c-2.5-0.2-5.1,0.6-7.7,0.8c-2.2,0.2-4.8,0.9-6.5,0c-1.5-0.7-2.8-0.9-4.4-1
+ c-1.6-0.1-2.4,0.7-2.6,2.1c-1.1,6.3-2.3,12.7-3.1,19.1c-0.4,3.3-0.2,6.6-0.2,9.9c0,1.5,0.6,2.5,1.9,3.5
+ c1.5,1.1,2.6,2.7,3.6,4.3c0.8,1.3,0.6,2.6-1.5,2.7c-7.3,0.2-14.6,0.5-21.9,0.4c-2.1,0-4.2-1.5-6.2-2.5
+ c-0.3-0.2-0.4-1.1-0.4-1.7c0-4.4,0-13.5,0-18.4c-1,0.6-1.3,0.8-1.6,1c-2.5,2.3-4.9,4.1-7.3,6.4c-1.9,1.8-1.6,3.3,0.2,5.4
+ c2.4,2.7,4.4,5.7,4.4,9.5c0,2.5-2.2,3.2-3.8,3.3c-5.7,0.4-11.5,0.4-17.2,0.4c-2.8,0-3.8-1.5-4.4-4.2
+ c-1.2-5.4-2.2-10.8-4.3-16.1c-1.6-4.1-2-8.9,1.5-13c5.1-5.9,9.5-12.3,12.8-19.5c1-2.2,1.4-3.8,0.4-6.1c-4.9-1-7.1-3.7-8.2-8.7
+ c-1-4.6-0.2-8.9,1-13.2c2.3-7.8,4.1-11,8.4-18c5.6-9,13.4-15.5,22.8-20.2c11.3-5.6,23.3-5.5,35.3-4.2
+ c16.2,1.6,32.4,3.6,48.6,5.3c1.3,0.1,2.9-0.2,4.1-0.8c7.7-3.9,15.5-4.2,23.6-1.4c5.6,1.9,11.4,3.6,17.1,5.2
+ c2,0.6,4.1,0.8,6.2,1.1c5.7,0.9,11.5,1.8,17.3,2.4c2.9,0.3,5.9,0.1,8.8,0.3c0.7,0,1.5,0.3,2.1,0.7c2.6,1.8,5.1,3.7,7.5,5.6
+ c1.6,1.2,3.2,2.3,4.5,3.8c0.6,0.7,0.7,1.9,0.9,2.9c0.3,1.1,0.3,2.6,0.9,3.4c2.6,3.1,5.3,6,8.1,8.9c0.9,1,1.1,1.7,0.3,2.9
+ c-1.2,1.6-1.8,3.7-3.3,4.8c-3.1,2.2-6.3,4.3-10.7,3.2c-2.5-0.6-5.5,0.5-8.2,0.8c-2.1,0.3-4.3,0.2-6.2,0.9
+ c-4.1,1.6-8.5,1.1-12.5,2.3c-1.5,0.4-2.8,1.2-4.3,1.6C179.2,54.8,178.3,54.5,177.2,54.3"/>
+ </g>
+ </g>
+ </g>
+ </g>
+</g>
+</svg>
diff --git a/web/pandas/static/img/pydata_book.gif b/web/pandas/static/img/pydata_book.gif
new file mode 100644
index 0000000000000..db05c209704a2
Binary files /dev/null and b/web/pandas/static/img/pydata_book.gif differ
diff --git a/web/pandas/try.md b/web/pandas/try.md
new file mode 100644
index 0000000000000..20e119759df6f
--- /dev/null
+++ b/web/pandas/try.md
@@ -0,0 +1,21 @@
+# Try pandas online
+
+<section>
+ <pre data-executable>
+import pandas
+fibonacci = pandas.Series([1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144])
+fibonacci.sum()
+ </pre>
+ <script src="https://combinatronics.com/ines/juniper/v0.1.0/dist/juniper.min.js"></script>
+ <script>new Juniper({ repo: 'datapythonista/pandas-web' })</script>
+</section>
+
+## Interactive tutorials
+
+You can also try _pandas_ on [Binder](https://mybinder.org/) for one of the next topics:
+
+- Exploratory analysis of US presidents
+- Preprocessing the Titanic dataset to train a machine learning model
+- Forecasting the stock market
+
+_(links will be added soon)_
diff --git a/web/pandas_web.py b/web/pandas_web.py
new file mode 100644
index 0000000000000..d515d8a0e1cd7
--- /dev/null
+++ b/web/pandas_web.py
@@ -0,0 +1,286 @@
+#!/usr/bin/env python
+"""
+Simple static site generator for the pandas web.
+
+pandas_web.py takes a directory as a parameter, and copies all the files into the
+target directory after converting markdown files into html and rendering both
+markdown and html files with a context. The context is obtained by parsing
+the file ``config.yml`` in the root of the source directory.
+
+The file should contain:
+```
+main:
+ templates_path: <path_to_the_jinja2_templates_directory>
+ base_template: <template_file_all_other_files_will_extend>
+ ignore:
+ - <list_of_files_in_the_source_that_will_not_be_copied>
+ github_repo_url: <organization/repo-name>
+ context_preprocessors:
+ - <list_of_functions_that_will_enrich_the_context_parsed_in_this_file>
+ markdown_extensions:
+ - <list_of_markdown_extensions_that_will_be_loaded>
+```
+
+The rest of the items in the file will be added directly to the context.
+"""
+import argparse
+import datetime
+import importlib
+import operator
+import os
+import shutil
+import sys
+import time
+import typing
+
+import feedparser
+import markdown
+import jinja2
+import requests
+import yaml
+
+
+class Preprocessors:
+ """
+ Built-in context preprocessors.
+
+ Context preprocessors are functions that receive the context used to
+    render the templates, and enrich it with additional information.
+
+ The original context is obtained by parsing ``config.yml``, and
+    anything else needed can just be added with context preprocessors.
+ """
+
+ @staticmethod
+ def navbar_add_info(context):
+ """
+ Items in the main navigation bar can be direct links, or dropdowns with
+ subitems. This context preprocessor adds a boolean field
+ ``has_subitems`` that tells which one of them every element is. It
+ also adds a ``slug`` field to be used as a CSS id.
+ """
+ for i, item in enumerate(context["navbar"]):
+ context["navbar"][i] = dict(
+ item,
+ has_subitems=isinstance(item["target"], list),
+ slug=(item["name"].replace(" ", "-").lower()),
+ )
+ return context
+
+ @staticmethod
+ def blog_add_posts(context):
+ """
+ Given the blog feed defined in the configuration yaml, this context
+ preprocessor fetches the posts in the feeds, and returns the relevant
+ information for them (sorted from newest to oldest).
+ """
+ posts = []
+ for feed_url in context["blog"]["feed"]:
+ feed_data = feedparser.parse(feed_url)
+ for entry in feed_data.entries:
+ published = datetime.datetime.fromtimestamp(
+ time.mktime(entry.published_parsed)
+ )
+ posts.append(
+ {
+ "title": entry.title,
+ "author": entry.author,
+ "published": published,
+ "feed": feed_data["feed"]["title"],
+ "link": entry.link,
+ "description": entry.description,
+ "summary": entry.summary,
+ }
+ )
+ posts.sort(key=operator.itemgetter("published"), reverse=True)
+ context["blog"]["posts"] = posts[: context["blog"]["num_posts"]]
+ return context
+
+ @staticmethod
+ def maintainers_add_info(context):
+ """
+ Given the active maintainers defined in the yaml file, it fetches
+ the GitHub user information for them.
+ """
+ context["maintainers"]["people"] = []
+ for user in context["maintainers"]["active"]:
+ resp = requests.get(f"https://api.github.com/users/{user}")
+ if context["ignore_io_errors"] and resp.status_code == 403:
+ return context
+ resp.raise_for_status()
+ context["maintainers"]["people"].append(resp.json())
+ return context
+
+ @staticmethod
+ def home_add_releases(context):
+ context["releases"] = []
+
+ github_repo_url = context["main"]["github_repo_url"]
+ resp = requests.get(f"https://api.github.com/repos/{github_repo_url}/releases")
+ if context["ignore_io_errors"] and resp.status_code == 403:
+ return context
+ resp.raise_for_status()
+
+ for release in resp.json():
+ if release["prerelease"]:
+ continue
+ published = datetime.datetime.strptime(
+ release["published_at"], "%Y-%m-%dT%H:%M:%SZ"
+ )
+ context["releases"].append(
+ {
+ "name": release["tag_name"].lstrip("v"),
+ "tag": release["tag_name"],
+ "published": published,
+ "url": (
+ release["assets"][0]["browser_download_url"]
+ if release["assets"]
+ else ""
+ ),
+ }
+ )
+ return context
+
+
+def get_callable(obj_as_str: str) -> object:
+ """
+ Get a Python object from its string representation.
+
+    For example, ``sys.stdout.write`` would import the module ``sys``
+ and return the ``write`` function.
+ """
+ components = obj_as_str.split(".")
+ attrs = []
+ while components:
+ try:
+ obj = importlib.import_module(".".join(components))
+ except ImportError:
+ attrs.insert(0, components.pop())
+ else:
+ break
+
+ if not obj:
+ raise ImportError(f'Could not import "{obj_as_str}"')
+
+ for attr in attrs:
+ obj = getattr(obj, attr)
+
+ return obj
+
+
+def get_context(config_fname: str, ignore_io_errors: bool, **kwargs):
+ """
+ Load the config yaml as the base context, and enrich it with the
+ information added by the context preprocessors defined in the file.
+ """
+ with open(config_fname) as f:
+ context = yaml.safe_load(f)
+
+ context["ignore_io_errors"] = ignore_io_errors
+ context.update(kwargs)
+
+ preprocessors = (
+ get_callable(context_prep)
+ for context_prep in context["main"]["context_preprocessors"]
+ )
+ for preprocessor in preprocessors:
+ context = preprocessor(context)
+ msg = f"{preprocessor.__name__} is missing the return statement"
+ assert context is not None, msg
+
+ return context
+
+
+def get_source_files(source_path: str) -> typing.Generator[str, None, None]:
+ """
+ Generate the list of files present in the source directory.
+ """
+ for root, dirs, fnames in os.walk(source_path):
+ root = os.path.relpath(root, source_path)
+ for fname in fnames:
+ yield os.path.join(root, fname)
+
+
+def extend_base_template(content: str, base_template: str) -> str:
+ """
+ Wrap document to extend the base template, before it is rendered with
+ Jinja2.
+ """
+ result = '{% extends "' + base_template + '" %}'
+ result += "{% block body %}"
+ result += content
+ result += "{% endblock %}"
+ return result
+
+
+def main(
+ source_path: str, target_path: str, base_url: str, ignore_io_errors: bool
+) -> int:
+ """
+ Copy every file in the source directory to the target directory.
+
+ For ``.md`` and ``.html`` files, render them with the context
+    before copying them. ``.md`` files are transformed to HTML.
+ """
+ config_fname = os.path.join(source_path, "config.yml")
+
+ shutil.rmtree(target_path, ignore_errors=True)
+ os.makedirs(target_path, exist_ok=True)
+
+ sys.stderr.write("Generating context...\n")
+ context = get_context(config_fname, ignore_io_errors, base_url=base_url)
+ sys.stderr.write("Context generated\n")
+
+ templates_path = os.path.join(source_path, context["main"]["templates_path"])
+ jinja_env = jinja2.Environment(loader=jinja2.FileSystemLoader(templates_path))
+
+ for fname in get_source_files(source_path):
+ if os.path.normpath(fname) in context["main"]["ignore"]:
+ continue
+
+ sys.stderr.write(f"Processing {fname}\n")
+ dirname = os.path.dirname(fname)
+ os.makedirs(os.path.join(target_path, dirname), exist_ok=True)
+
+ extension = os.path.splitext(fname)[-1]
+ if extension in (".html", ".md"):
+ with open(os.path.join(source_path, fname)) as f:
+ content = f.read()
+ if extension == ".md":
+ body = markdown.markdown(
+ content, extensions=context["main"]["markdown_extensions"]
+ )
+ content = extend_base_template(body, context["main"]["base_template"])
+ content = jinja_env.from_string(content).render(**context)
+ fname = os.path.splitext(fname)[0] + ".html"
+ with open(os.path.join(target_path, fname), "w") as f:
+ f.write(content)
+ else:
+ shutil.copy(
+ os.path.join(source_path, fname), os.path.join(target_path, dirname)
+ )
+
+
+if __name__ == "__main__":
+ parser = argparse.ArgumentParser(description="Documentation builder.")
+ parser.add_argument(
+ "source_path", help="path to the source directory (must contain config.yml)"
+ )
+ parser.add_argument(
+ "--target-path", default="build", help="directory where to write the output"
+ )
+ parser.add_argument(
+ "--base-url", default="", help="base url where the website is served from"
+ )
+ parser.add_argument(
+ "--ignore-io-errors",
+ action="store_true",
+ help="do not fail if errors happen when fetching "
+        "data from http sources "
+ "(mostly useful to allow github quota errors "
+ "when running the script locally)",
+ )
+ args = parser.parse_args()
+ sys.exit(
+ main(args.source_path, args.target_path, args.base_url, args.ignore_io_errors)
+ )
| The website can be seen here: https://datapythonista.github.io/pandas-web/
The styles are still pending polish from the designer, and the content can surely be improved with everybody's feedback. But I think it will be useful to merge this once we iterate a bit on people's feedback, so we can start working on integrating the docs.
I'm coordinating with the devs of numpy/scipy, scikit-learn and matplotlib to see if we can share the things implemented here. Ideally I'd like to have the `pysuerga.py` script in a separate repo (that's why it's not named `pandas_web.py`), and also the layout and pages that can be shared (the code of conduct, the donate page, the blog, ...). For now I'm adding everything here for simplicity, and will create a new project and have it as a dependency here if they are interested.
CC: @pandas-dev/pandas-core | https://api.github.com/repos/pandas-dev/pandas/pulls/28014 | 2019-08-19T14:34:12Z | 2019-09-18T13:25:08Z | 2019-09-18T13:25:08Z | 2019-09-18T13:38:56Z |
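The `get_callable` helper added in `pandas_web.py` above resolves the dotted-path strings listed under `context_preprocessors` in `config.yml` into Python objects: it imports the longest importable module prefix, then walks the remaining components as attributes. A minimal standalone sketch of that lookup (simplified from the diff, with the unbound-variable case guarded explicitly):

```python
import importlib


def resolve_dotted(path: str):
    """Resolve a dotted string like 'os.path.join' to the object it names.

    Try importing progressively shorter prefixes of the dotted path as a
    module; whatever components are left over are looked up as attributes.
    """
    components = path.split(".")
    attrs = []
    obj = None
    while components:
        try:
            obj = importlib.import_module(".".join(components))
        except ImportError:
            # Not importable as a module: treat the last component as an
            # attribute and retry with a shorter prefix.
            attrs.insert(0, components.pop())
        else:
            break

    if obj is None:
        raise ImportError(f'Could not import "{path}"')

    for attr in attrs:
        obj = getattr(obj, attr)
    return obj
```

For example, `resolve_dotted("os.path.join")` imports `os.path` and returns its `join` attribute, which is how the site generator turns config strings like `pandas_web.Preprocessors.blog_add_posts` into callables.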
Backport PR #27882 on branch 0.25.x | diff --git a/doc/source/whatsnew/v0.25.1.rst b/doc/source/whatsnew/v0.25.1.rst
index d1324bc759ea1..87e46f97d3157 100644
--- a/doc/source/whatsnew/v0.25.1.rst
+++ b/doc/source/whatsnew/v0.25.1.rst
@@ -103,7 +103,6 @@ MultiIndex
I/O
^^^
-
- Avoid calling ``S3File.s3`` when reading parquet, as this was removed in s3fs version 0.3.0 (:issue:`27756`)
- Better error message when a negative header is passed in :func:`pandas.read_csv` (:issue:`27779`)
-
@@ -160,6 +159,14 @@ Other
-
-
+I/O and LZMA
+~~~~~~~~~~~~
+
+Some users may unknowingly have an incomplete Python installation, which lacks the `lzma` module from the standard library. In this case, `import pandas` failed due to an `ImportError` (:issue:`27575`).
+Pandas will now warn, rather than raising an `ImportError` if the `lzma` module is not present. Any subsequent attempt to use `lzma` methods will raise a `RuntimeError`.
+A possible fix for the lack of the `lzma` module is to ensure you have the necessary libraries and then re-install Python.
+For example, on MacOS installing Python with `pyenv` may lead to an incomplete Python installation due to unmet system dependencies at compilation time (like `xz`). Compilation will succeed, but Python might fail at run time. The issue can be solved by installing the necessary dependencies and then re-installing Python.
+
.. _whatsnew_0.251.contributors:
Contributors
diff --git a/pandas/_libs/parsers.pyx b/pandas/_libs/parsers.pyx
index cafc31dad3568..6cc9dd22ce7c9 100644
--- a/pandas/_libs/parsers.pyx
+++ b/pandas/_libs/parsers.pyx
@@ -2,7 +2,6 @@
# See LICENSE for the license
import bz2
import gzip
-import lzma
import os
import sys
import time
@@ -59,9 +58,12 @@ from pandas.core.arrays import Categorical
from pandas.core.dtypes.concat import union_categoricals
import pandas.io.common as icom
+from pandas.compat import _import_lzma, _get_lzma_file
from pandas.errors import (ParserError, DtypeWarning,
EmptyDataError, ParserWarning)
+lzma = _import_lzma()
+
# Import CParserError as alias of ParserError for backwards compatibility.
# Ultimately, we want to remove this import. See gh-12665 and gh-14479.
CParserError = ParserError
@@ -645,9 +647,9 @@ cdef class TextReader:
'zip file %s', str(zip_names))
elif self.compression == 'xz':
if isinstance(source, str):
- source = lzma.LZMAFile(source, 'rb')
+ source = _get_lzma_file(lzma)(source, 'rb')
else:
- source = lzma.LZMAFile(filename=source)
+ source = _get_lzma_file(lzma)(filename=source)
else:
raise ValueError('Unrecognized compression type: %s' %
self.compression)
diff --git a/pandas/compat/__init__.py b/pandas/compat/__init__.py
index 5ecd641fc68be..b32da8da3a1fb 100644
--- a/pandas/compat/__init__.py
+++ b/pandas/compat/__init__.py
@@ -10,6 +10,7 @@
import platform
import struct
import sys
+import warnings
PY35 = sys.version_info[:2] == (3, 5)
PY36 = sys.version_info >= (3, 6)
@@ -65,3 +66,32 @@ def is_platform_mac():
def is_platform_32bit():
return struct.calcsize("P") * 8 < 64
+
+
+def _import_lzma():
+ """Attempts to import lzma, warning the user when lzma is not available.
+ """
+ try:
+ import lzma
+
+ return lzma
+ except ImportError:
+ msg = (
+ "Could not import the lzma module. "
+ "Your installed Python is incomplete. "
+ "Attempting to use lzma compression will result in a RuntimeError."
+ )
+ warnings.warn(msg)
+
+
+def _get_lzma_file(lzma):
+ """Returns the lzma method LZMAFile when the module was correctly imported.
+ Otherwise, raises a RuntimeError.
+ """
+ if lzma is None:
+ raise RuntimeError(
+ "lzma module not available. "
+ "A Python re-install with the proper "
+ "dependencies might be required to solve this issue."
+ )
+ return lzma.LZMAFile
diff --git a/pandas/io/common.py b/pandas/io/common.py
index 9a9620e2d0663..57e710fdfc2ec 100644
--- a/pandas/io/common.py
+++ b/pandas/io/common.py
@@ -6,7 +6,6 @@
import gzip
from http.client import HTTPException # noqa
from io import BytesIO
-import lzma
import mmap
import os
import pathlib
@@ -22,6 +21,7 @@
from urllib.request import pathname2url, urlopen
import zipfile
+from pandas.compat import _get_lzma_file, _import_lzma
from pandas.errors import ( # noqa
AbstractMethodError,
DtypeWarning,
@@ -32,6 +32,8 @@
from pandas.core.dtypes.common import is_file_like
+lzma = _import_lzma()
+
# gh-12665: Alias for now and remove later.
CParserError = ParserError
@@ -382,7 +384,7 @@ def _get_handle(
# XZ Compression
elif compression == "xz":
- f = lzma.LZMAFile(path_or_buf, mode)
+ f = _get_lzma_file(lzma)(path_or_buf, mode)
# Unrecognized Compression
else:
diff --git a/pandas/tests/io/test_compression.py b/pandas/tests/io/test_compression.py
index ce459ab24afe0..16ca1109f266c 100644
--- a/pandas/tests/io/test_compression.py
+++ b/pandas/tests/io/test_compression.py
@@ -1,5 +1,7 @@
import contextlib
import os
+import subprocess
+import textwrap
import warnings
import pytest
@@ -125,3 +127,33 @@ def test_compression_warning(compression_only):
with tm.assert_produces_warning(RuntimeWarning, check_stacklevel=False):
with f:
df.to_csv(f, compression=compression_only)
+
+
+def test_with_missing_lzma():
+ """Tests if import pandas works when lzma is not present."""
+ # https://github.com/pandas-dev/pandas/issues/27575
+ code = textwrap.dedent(
+ """\
+ import sys
+ sys.modules['lzma'] = None
+ import pandas
+ """
+ )
+ subprocess.check_output(["python", "-c", code])
+
+
+def test_with_missing_lzma_runtime():
+ """Tests if RuntimeError is hit when calling lzma without
+ having the module available."""
+ code = textwrap.dedent(
+ """
+ import sys
+ import pytest
+ sys.modules['lzma'] = None
+ import pandas
+ df = pandas.DataFrame()
+ with pytest.raises(RuntimeError, match='lzma module'):
+ df.to_csv('foo.csv', compression='xz')
+ """
+ )
+ subprocess.check_output(["python", "-c", code])
diff --git a/pandas/tests/io/test_pickle.py b/pandas/tests/io/test_pickle.py
index 076d0c9f947c7..30555508f0998 100644
--- a/pandas/tests/io/test_pickle.py
+++ b/pandas/tests/io/test_pickle.py
@@ -13,7 +13,6 @@
import bz2
import glob
import gzip
-import lzma
import os
import pickle
import shutil
@@ -22,7 +21,7 @@
import pytest
-from pandas.compat import is_platform_little_endian
+from pandas.compat import _get_lzma_file, _import_lzma, is_platform_little_endian
import pandas as pd
from pandas import Index
@@ -30,6 +29,8 @@
from pandas.tseries.offsets import Day, MonthEnd
+lzma = _import_lzma()
+
@pytest.fixture(scope="module")
def current_pickle_data():
@@ -270,7 +271,7 @@ def compress_file(self, src_path, dest_path, compression):
with zipfile.ZipFile(dest_path, "w", compression=zipfile.ZIP_DEFLATED) as f:
f.write(src_path, os.path.basename(src_path))
elif compression == "xz":
- f = lzma.LZMAFile(dest_path, "w")
+ f = _get_lzma_file(lzma)(dest_path, "w")
else:
msg = "Unrecognized compression type: {}".format(compression)
raise ValueError(msg)
diff --git a/pandas/util/testing.py b/pandas/util/testing.py
index cf8452cdd0c59..a8f0d0da52e1f 100644
--- a/pandas/util/testing.py
+++ b/pandas/util/testing.py
@@ -5,7 +5,6 @@
from functools import wraps
import gzip
import http.client
-import lzma
import os
import re
from shutil import rmtree
@@ -26,7 +25,7 @@
)
import pandas._libs.testing as _testing
-from pandas.compat import raise_with_traceback
+from pandas.compat import _get_lzma_file, _import_lzma, raise_with_traceback
from pandas.core.dtypes.common import (
is_bool,
@@ -70,6 +69,8 @@
from pandas.io.common import urlopen
from pandas.io.formats.printing import pprint_thing
+lzma = _import_lzma()
+
N = 30
K = 4
_RAISE_NETWORK_ERROR_DEFAULT = False
@@ -211,7 +212,7 @@ def decompress_file(path, compression):
elif compression == "bz2":
f = bz2.BZ2File(path, "rb")
elif compression == "xz":
- f = lzma.LZMAFile(path, "rb")
+ f = _get_lzma_file(lzma)(path, "rb")
elif compression == "zip":
zip_file = zipfile.ZipFile(path)
zip_names = zip_file.namelist()
@@ -264,9 +265,7 @@ def write_to_compressed(compression, path, data, dest="test"):
compress_method = bz2.BZ2File
elif compression == "xz":
- import lzma
-
- compress_method = lzma.LZMAFile
+ compress_method = _get_lzma_file(lzma)
else:
msg = "Unrecognized compression type: {}".format(compression)
raise ValueError(msg)
| https://api.github.com/repos/pandas-dev/pandas/pulls/28012 | 2019-08-19T13:11:42Z | 2019-08-19T14:45:17Z | 2019-08-19T14:45:17Z | 2019-08-19T14:45:21Z | |
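The compat shim in the diff above splits the problem into two stages: import once at module load (only warning when lzma is missing) and fail loudly only when the file class is actually requested. A minimal stdlib-only sketch of the same pattern — the helper names here are illustrative, not pandas' actual API:

```python
import warnings


def import_optional(name):
    """Try to import a module; warn instead of failing when it is missing."""
    try:
        return __import__(name)
    except ImportError:
        warnings.warn(
            f"Could not import the {name} module. "
            f"Attempting to use it will raise a RuntimeError."
        )
        return None


def require_attr(module, attr, name):
    """Resolve an attribute of an optional module, raising only at use time."""
    if module is None:
        raise RuntimeError(f"{name} module not available.")
    return getattr(module, attr)


lzma = import_optional("lzma")
if lzma is not None:
    # Deferred lookup: a missing module only surfaces here, at use time.
    LZMAFile = require_attr(lzma, "LZMAFile", "lzma")
```

This keeps `import pandas` working on Pythons built without lzma, while any attempt to actually read or write xz-compressed data raises a clear RuntimeError.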
BUG: Avoid duplicating entire exploded column when joining back with origi… | diff --git a/doc/source/whatsnew/v1.0.0.rst b/doc/source/whatsnew/v1.0.0.rst
index 5b9e3a7dbad06..f00393e2e1ea1 100644
--- a/doc/source/whatsnew/v1.0.0.rst
+++ b/doc/source/whatsnew/v1.0.0.rst
@@ -169,6 +169,7 @@ Indexing
^^^^^^^^
- Bug in assignment using a reverse slicer (:issue:`26939`)
+- Bug in :meth:`DataFrame.explode` would duplicate the frame in the presence of duplicates in the index (:issue:`28010`)
- Bug in reindexing a :meth:`PeriodIndex` with another type of index that contained a `Period` (:issue:`28323`) (:issue:`28337`)
- Fix assignment of column via `.loc` with numpy non-ns datetime type (:issue:`27395`)
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index f1ed3a125f60c..d3656f881a886 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -6267,12 +6267,13 @@ def explode(self, column: Union[str, Tuple]) -> "DataFrame":
if not self.columns.is_unique:
raise ValueError("columns must be unique")
- result = self[column].explode()
- return (
- self.drop([column], axis=1)
- .join(result)
- .reindex(columns=self.columns, copy=False)
- )
+ df = self.reset_index(drop=True)
+ result = df[column].explode()
+ result = df.drop([column], axis=1).join(result)
+ result.index = self.index.take(result.index)
+ result = result.reindex(columns=self.columns, copy=False)
+
+ return result
def unstack(self, level=-1, fill_value=None):
"""
diff --git a/pandas/tests/frame/test_explode.py b/pandas/tests/frame/test_explode.py
index b4330aadbfba3..c07de35f8bf34 100644
--- a/pandas/tests/frame/test_explode.py
+++ b/pandas/tests/frame/test_explode.py
@@ -118,3 +118,47 @@ def test_usecase():
index=[0, 0, 1, 1],
)
tm.assert_frame_equal(result, expected)
+
+
+@pytest.mark.parametrize(
+ "input_dict, input_index, expected_dict, expected_index",
+ [
+ (
+ {"col1": [[1, 2], [3, 4]], "col2": ["foo", "bar"]},
+ [0, 0],
+ {"col1": [1, 2, 3, 4], "col2": ["foo", "foo", "bar", "bar"]},
+ [0, 0, 0, 0],
+ ),
+ (
+ {"col1": [[1, 2], [3, 4]], "col2": ["foo", "bar"]},
+ pd.Index([0, 0], name="my_index"),
+ {"col1": [1, 2, 3, 4], "col2": ["foo", "foo", "bar", "bar"]},
+ pd.Index([0, 0, 0, 0], name="my_index"),
+ ),
+ (
+ {"col1": [[1, 2], [3, 4]], "col2": ["foo", "bar"]},
+ pd.MultiIndex.from_arrays(
+ [[0, 0], [1, 1]], names=["my_first_index", "my_second_index"]
+ ),
+ {"col1": [1, 2, 3, 4], "col2": ["foo", "foo", "bar", "bar"]},
+ pd.MultiIndex.from_arrays(
+ [[0, 0, 0, 0], [1, 1, 1, 1]],
+ names=["my_first_index", "my_second_index"],
+ ),
+ ),
+ (
+ {"col1": [[1, 2], [3, 4]], "col2": ["foo", "bar"]},
+ pd.MultiIndex.from_arrays([[0, 0], [1, 1]], names=["my_index", None]),
+ {"col1": [1, 2, 3, 4], "col2": ["foo", "foo", "bar", "bar"]},
+ pd.MultiIndex.from_arrays(
+ [[0, 0, 0, 0], [1, 1, 1, 1]], names=["my_index", None]
+ ),
+ ),
+ ],
+)
+def test_duplicate_index(input_dict, input_index, expected_dict, expected_index):
+ # GH 28005
+ df = pd.DataFrame(input_dict, index=input_index)
+ result = df.explode("col1")
+ expected = pd.DataFrame(expected_dict, index=expected_index, dtype=object)
+ tm.assert_frame_equal(result, expected)
diff --git a/pandas/tests/series/test_explode.py b/pandas/tests/series/test_explode.py
index 331546f7dc73d..e4974bd0af145 100644
--- a/pandas/tests/series/test_explode.py
+++ b/pandas/tests/series/test_explode.py
@@ -111,3 +111,11 @@ def test_nested_EA():
pd.date_range("20170101", periods=6, tz="UTC"), index=[0, 0, 0, 1, 1, 1]
)
tm.assert_series_equal(result, expected)
+
+
+def test_duplicate_index():
+ # GH 28005
+ s = pd.Series([[1, 2], [3, 4]], index=[0, 0])
+ result = s.explode()
+ expected = pd.Series([1, 2, 3, 4], index=[0, 0, 0, 0], dtype=object)
+ tm.assert_series_equal(result, expected)
| …nal frame
- [x] closes #28005
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/28010 | 2019-08-19T12:44:12Z | 2019-09-17T13:38:14Z | 2019-09-17T13:38:14Z | 2019-09-17T13:43:38Z |
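The core of the fix is to explode against a throwaway positional index and then map the positions back through `Index.take`, so duplicated index labels no longer cause a many-to-many join. A sketch of the patched logic, assuming pandas 0.25+ (for `Series.explode`):

```python
import pandas as pd

# A frame whose index contains duplicates -- the case that used to
# duplicate the whole frame on join.
df = pd.DataFrame({"col1": [[1, 2], [3, 4]], "col2": ["foo", "bar"]},
                  index=[0, 0])

# Mirror the patched logic: explode against a positional RangeIndex,
# then map the positions back onto the original (duplicated) index.
tmp = df.reset_index(drop=True)
result = tmp.drop(columns=["col1"]).join(tmp["col1"].explode())
result.index = df.index.take(result.index)
result = result.reindex(columns=df.columns)
```

Because the join happens on the unique RangeIndex, each exploded element lines up with exactly one source row, and `Index.take` restores the original labels afterwards.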
DOC: Corrected file description in read_fwf() | diff --git a/doc/source/user_guide/io.rst b/doc/source/user_guide/io.rst
index 7ba103c5ff996..1d49dbdee9c03 100644
--- a/doc/source/user_guide/io.rst
+++ b/doc/source/user_guide/io.rst
@@ -28,7 +28,7 @@ The pandas I/O API is a set of top level ``reader`` functions accessed like
:delim: ;
text;`CSV <https://en.wikipedia.org/wiki/Comma-separated_values>`__;:ref:`read_csv<io.read_csv_table>`;:ref:`to_csv<io.store_in_csv>`
- text;`TXT <https://www.oracle.com/webfolder/technetwork/data-quality/edqhelp/Content/introduction/getting_started/configuring_fixed_width_text_file_formats.htm>`__;:ref:`read_fwf<io.fwf_reader>`
+ text;Fixed-Width Text File;:ref:`read_fwf<io.fwf_reader>`
text;`JSON <https://www.json.org/>`__;:ref:`read_json<io.json_reader>`;:ref:`to_json<io.json_writer>`
text;`HTML <https://en.wikipedia.org/wiki/HTML>`__;:ref:`read_html<io.read_html>`;:ref:`to_html<io.html>`
text; Local clipboard;:ref:`read_clipboard<io.clipboard>`;:ref:`to_clipboard<io.clipboard>`
| - [ ] closes #xxxx
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/28009 | 2019-08-19T12:16:53Z | 2019-08-20T18:40:21Z | 2019-08-20T18:40:21Z | 2019-08-20T18:40:25Z |
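For context on the format the corrected table entry describes: `read_fwf` parses files whose fields occupy fixed character positions rather than being separated by a delimiter. A small hedged example (the column widths are chosen for this made-up file):

```python
import io

import pandas as pd

# A fixed-width text file: fields live at fixed character positions,
# with no delimiter character at all.
data = io.StringIO(
    "id    name      score\n"
    "1     alice      10.5\n"
    "2     bob         7.0\n"
)
# Widths give the character span of each field: 6 + 10 + 5 per line.
df = pd.read_fwf(data, widths=[6, 10, 5])
```

With explicit `widths` (or `colspecs`) the parser slices each line by position; omitting both makes `read_fwf` infer the column boundaries from the first rows.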
[CI] Add pip dependence explicitly | diff --git a/ci/deps/azure-macos-35.yaml b/ci/deps/azure-macos-35.yaml
index cb2ac08cbf758..39315b15a018b 100644
--- a/ci/deps/azure-macos-35.yaml
+++ b/ci/deps/azure-macos-35.yaml
@@ -22,6 +22,7 @@ dependencies:
- xlrd
- xlsxwriter
- xlwt
+ - pip
- pip:
- pyreadstat
# universal
| Close #27374. Get rid of the conda warning.
| https://api.github.com/repos/pandas-dev/pandas/pulls/28008 | 2019-08-19T06:14:58Z | 2019-08-21T14:21:45Z | 2019-08-21T14:21:45Z | 2019-08-21T14:21:49Z |
ENH: Add support for dataclasses in the DataFrame constructor | diff --git a/doc/source/user_guide/dsintro.rst b/doc/source/user_guide/dsintro.rst
index 200d567a62732..d7f7690f8c3d0 100644
--- a/doc/source/user_guide/dsintro.rst
+++ b/doc/source/user_guide/dsintro.rst
@@ -397,6 +397,28 @@ The result will be a DataFrame with the same index as the input Series, and
with one column whose name is the original name of the Series (only if no other
column name provided).
+.. _basics.dataframe.from_list_dataclasses:
+
+From a list of dataclasses
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+.. versionadded:: 1.1.0
+
+Data Classes, as introduced in `PEP557 <https://www.python.org/dev/peps/pep-0557>`__,
+can be passed into the DataFrame constructor.
+Passing a list of dataclasses is equivalent to passing a list of dictionaries.
+
+Please be aware that all values in the list should be dataclasses; mixing
+types in the list will result in a TypeError.
+
+.. ipython:: python
+
+ from dataclasses import make_dataclass
+
+ Point = make_dataclass("Point", [("x", int), ("y", int)])
+
+ pd.DataFrame([Point(0, 0), Point(0, 3), Point(2, 3)])
+
**Missing data**
Much more will be said on this topic in the :ref:`Missing data <missing_data>`
diff --git a/pandas/core/dtypes/common.py b/pandas/core/dtypes/common.py
index 1afe7edf2641b..f5997a13e785d 100644
--- a/pandas/core/dtypes/common.py
+++ b/pandas/core/dtypes/common.py
@@ -24,6 +24,7 @@
is_array_like,
is_bool,
is_complex,
+ is_dataclass,
is_decimal,
is_dict_like,
is_file_like,
diff --git a/pandas/core/dtypes/inference.py b/pandas/core/dtypes/inference.py
index 56b880dca1241..d1607b5ede6c3 100644
--- a/pandas/core/dtypes/inference.py
+++ b/pandas/core/dtypes/inference.py
@@ -386,3 +386,39 @@ def is_sequence(obj) -> bool:
return not isinstance(obj, (str, bytes))
except (TypeError, AttributeError):
return False
+
+
+def is_dataclass(item):
+ """
+ Checks if the object is a data-class instance
+
+ Parameters
+ ----------
+ item : object
+
+ Returns
+ --------
+ is_dataclass : bool
+ True if the item is an instance of a data-class,
+ will return false if you pass the data class itself
+
+ Examples
+ --------
+ >>> from dataclasses import dataclass
+ >>> @dataclass
+ ... class Point:
+ ... x: int
+ ... y: int
+
+ >>> is_dataclass(Point)
+ False
+ >>> is_dataclass(Point(0,2))
+ True
+
+ """
+ try:
+ from dataclasses import is_dataclass
+
+ return is_dataclass(item) and not isinstance(item, type)
+ except ImportError:
+ return False
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 72d9ef7d0d35f..9b140238a9389 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -77,6 +77,7 @@
ensure_platform_int,
infer_dtype_from_object,
is_bool_dtype,
+ is_dataclass,
is_datetime64_any_dtype,
is_dict_like,
is_dtype_equal,
@@ -117,6 +118,7 @@
from pandas.core.internals import BlockManager
from pandas.core.internals.construction import (
arrays_to_mgr,
+ dataclasses_to_dicts,
get_names_from_index,
init_dict,
init_ndarray,
@@ -474,6 +476,8 @@ def __init__(
if not isinstance(data, (abc.Sequence, ExtensionArray)):
data = list(data)
if len(data) > 0:
+ if is_dataclass(data[0]):
+ data = dataclasses_to_dicts(data)
if is_list_like(data[0]) and getattr(data[0], "ndim", 1) == 1:
if is_named_tuple(data[0]) and columns is None:
columns = data[0]._fields
diff --git a/pandas/core/internals/construction.py b/pandas/core/internals/construction.py
index ab363e10eb098..c4416472d451c 100644
--- a/pandas/core/internals/construction.py
+++ b/pandas/core/internals/construction.py
@@ -429,6 +429,33 @@ def _get_axes(N, K, index, columns):
return index, columns
+def dataclasses_to_dicts(data):
+ """ Converts a list of dataclass instances to a list of dictionaries
+
+ Parameters
+ ----------
+ data : List[Type[dataclass]]
+
+ Returns
+ --------
+ list_dict : List[dict]
+
+ Examples
+ --------
+ >>> @dataclass
+    ... class Point:
+ ... x: int
+ ... y: int
+
+ >>> dataclasses_to_dicts([Point(1,2), Point(2,3)])
+ [{"x":1,"y":2},{"x":2,"y":3}]
+
+ """
+ from dataclasses import asdict
+
+ return list(map(asdict, data))
+
+
# ---------------------------------------------------------------------
# Conversion of Inputs to Arrays
diff --git a/pandas/tests/frame/test_constructors.py b/pandas/tests/frame/test_constructors.py
index d938c0f6f1066..058b706cfe3aa 100644
--- a/pandas/tests/frame/test_constructors.py
+++ b/pandas/tests/frame/test_constructors.py
@@ -9,7 +9,7 @@
import pytest
import pytz
-from pandas.compat import is_platform_little_endian
+from pandas.compat import PY37, is_platform_little_endian
from pandas.compat.numpy import _is_numpy_dev
from pandas.core.dtypes.common import is_integer_dtype
@@ -1364,6 +1364,46 @@ def test_constructor_list_of_namedtuples(self):
result = DataFrame(tuples, columns=["y", "z"])
tm.assert_frame_equal(result, expected)
+ @pytest.mark.skipif(not PY37, reason="Requires Python >= 3.7")
+ def test_constructor_list_of_dataclasses(self):
+ # GH21910
+ from dataclasses import make_dataclass
+
+ Point = make_dataclass("Point", [("x", int), ("y", int)])
+
+ datas = [Point(0, 3), Point(1, 3)]
+ expected = DataFrame({"x": [0, 1], "y": [3, 3]})
+ result = DataFrame(datas)
+ tm.assert_frame_equal(result, expected)
+
+ @pytest.mark.skipif(not PY37, reason="Requires Python >= 3.7")
+ def test_constructor_list_of_dataclasses_with_varying_types(self):
+ # GH21910
+ from dataclasses import make_dataclass
+
+ # varying types
+ Point = make_dataclass("Point", [("x", int), ("y", int)])
+ HLine = make_dataclass("HLine", [("x0", int), ("x1", int), ("y", int)])
+
+ datas = [Point(0, 3), HLine(1, 3, 3)]
+
+ expected = DataFrame(
+ {"x": [0, np.nan], "y": [3, 3], "x0": [np.nan, 1], "x1": [np.nan, 3]}
+ )
+ result = DataFrame(datas)
+ tm.assert_frame_equal(result, expected)
+
+ @pytest.mark.skipif(not PY37, reason="Requires Python >= 3.7")
+ def test_constructor_list_of_dataclasses_error_thrown(self):
+ # GH21910
+ from dataclasses import make_dataclass
+
+ Point = make_dataclass("Point", [("x", int), ("y", int)])
+
+ # expect TypeError
+ with pytest.raises(TypeError):
+ DataFrame([Point(0, 0), {"x": 1, "y": 0}])
+
def test_constructor_list_of_dict_order(self):
# GH10056
data = [
| Added support for data classes in the construction of a new DataFrame.
- [X] closes #21910
- [X] tests added / passed
- [X] passes `black pandas`
- [X] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
Simply put, added support to use dataclasses in the following way:
```python
from dataclasses import dataclass
from pandas import DataFrame

@dataclass
class Person:
    name: str
    age: int

df = DataFrame([Person("me", 25), Person("you", 35)])
``` | https://api.github.com/repos/pandas-dev/pandas/pulls/27999 | 2019-08-18T16:54:11Z | 2020-03-15T01:09:41Z | 2020-03-15T01:09:41Z | 2020-04-09T23:34:29Z |
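Under stated assumptions (Python 3.7+ for `dataclasses`), the conversion step this PR adds boils down to `asdict` applied over the list, after which the constructor can treat the input exactly like a list of dictionaries. A self-contained sketch of that idea, with a stand-in for the helper:

```python
from dataclasses import asdict, make_dataclass

Point = make_dataclass("Point", [("x", int), ("y", int)])


def dataclasses_to_dicts(data):
    # Same idea as the helper in the diff: each instance becomes a dict
    # keyed by its field names.
    return [asdict(item) for item in data]


rows = dataclasses_to_dicts([Point(0, 0), Point(0, 3), Point(2, 3)])
# rows == [{'x': 0, 'y': 0}, {'x': 0, 'y': 3}, {'x': 2, 'y': 3}]
```

Mixing field sets across dataclasses then falls out naturally: just like a list of dicts, missing keys become NaN columns when the result is fed to the DataFrame constructor.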
PLT: plot('line') or plot('area') produces wrong xlim in xaxis in 0.25.0 | diff --git a/doc/source/whatsnew/v1.0.0.rst b/doc/source/whatsnew/v1.0.0.rst
index 0be4ebc627b30..c5bc865e59fa5 100644
--- a/doc/source/whatsnew/v1.0.0.rst
+++ b/doc/source/whatsnew/v1.0.0.rst
@@ -168,6 +168,7 @@ Plotting
-
- Bug in :meth:`DataFrame.plot` producing incorrect legend markers when plotting multiple series on the same axis (:issue:`18222`)
- Bug in :meth:`DataFrame.plot` when ``kind='box'`` and data contains datetime or timedelta data. These types are now automatically dropped (:issue:`22799`)
+- Bug in :meth:`DataFrame.plot.line` and :meth:`DataFrame.plot.area` producing wrong xlim on the x-axis (:issue:`27686`, :issue:`25160`, :issue:`24784`)
Groupby/resample/rolling
^^^^^^^^^^^^^^^^^^^^^^^^
diff --git a/pandas/plotting/_matplotlib/core.py b/pandas/plotting/_matplotlib/core.py
index 287cc2f4130f4..fbca57206e163 100644
--- a/pandas/plotting/_matplotlib/core.py
+++ b/pandas/plotting/_matplotlib/core.py
@@ -33,8 +33,6 @@
from pandas.plotting._matplotlib.style import _get_standard_colors
from pandas.plotting._matplotlib.tools import (
_flatten,
- _get_all_lines,
- _get_xlim,
_handle_shared_axes,
_subplots,
format_date_labels,
@@ -1101,9 +1099,8 @@ def _make_plot(self):
)
self._add_legend_handle(newlines[0], label, index=i)
- lines = _get_all_lines(ax)
- left, right = _get_xlim(lines)
- ax.set_xlim(left, right)
+ # GH27686 set_xlim will truncate xaxis to fixed space
+ ax.relim()
@classmethod
def _plot(cls, ax, x, y, style=None, column_num=None, stacking_id=None, **kwds):
diff --git a/pandas/plotting/_matplotlib/tools.py b/pandas/plotting/_matplotlib/tools.py
index 8472eb3a3d887..fd2913ca51ac3 100644
--- a/pandas/plotting/_matplotlib/tools.py
+++ b/pandas/plotting/_matplotlib/tools.py
@@ -343,27 +343,6 @@ def _flatten(axes):
return np.array(axes)
-def _get_all_lines(ax):
- lines = ax.get_lines()
-
- if hasattr(ax, "right_ax"):
- lines += ax.right_ax.get_lines()
-
- if hasattr(ax, "left_ax"):
- lines += ax.left_ax.get_lines()
-
- return lines
-
-
-def _get_xlim(lines):
- left, right = np.inf, -np.inf
- for l in lines:
- x = l.get_xdata(orig=False)
- left = min(np.nanmin(x), left)
- right = max(np.nanmax(x), right)
- return left, right
-
-
def _set_ticks_props(axes, xlabelsize=None, xrot=None, ylabelsize=None, yrot=None):
import matplotlib.pyplot as plt
diff --git a/pandas/tests/plotting/test_datetimelike.py b/pandas/tests/plotting/test_datetimelike.py
index 69070ea11e478..be87929b4545a 100644
--- a/pandas/tests/plotting/test_datetimelike.py
+++ b/pandas/tests/plotting/test_datetimelike.py
@@ -419,6 +419,8 @@ def test_get_finder(self):
assert conv.get_finder("A") == conv._annual_finder
assert conv.get_finder("W") == conv._daily_finder
+ # TODO: The finder should be retested due to wrong xlim values on x-axis
+ @pytest.mark.xfail(reason="TODO: check details in GH28021")
@pytest.mark.slow
def test_finder_daily(self):
day_lst = [10, 40, 252, 400, 950, 2750, 10000]
@@ -442,6 +444,8 @@ def test_finder_daily(self):
assert rs1 == xpl1
assert rs2 == xpl2
+ # TODO: The finder should be retested due to wrong xlim values on x-axis
+ @pytest.mark.xfail(reason="TODO: check details in GH28021")
@pytest.mark.slow
def test_finder_quarterly(self):
yrs = [3.5, 11]
@@ -465,6 +469,8 @@ def test_finder_quarterly(self):
assert rs1 == xpl1
assert rs2 == xpl2
+ # TODO: The finder should be retested due to wrong xlim values on x-axis
+ @pytest.mark.xfail(reason="TODO: check details in GH28021")
@pytest.mark.slow
def test_finder_monthly(self):
yrs = [1.15, 2.5, 4, 11]
@@ -498,6 +504,8 @@ def test_finder_monthly_long(self):
xp = Period("1989Q1", "M").ordinal
assert rs == xp
+ # TODO: The finder should be retested due to wrong xlim values on x-axis
+ @pytest.mark.xfail(reason="TODO: check details in GH28021")
@pytest.mark.slow
def test_finder_annual(self):
xp = [1987, 1988, 1990, 1990, 1995, 2020, 2070, 2170]
@@ -522,7 +530,7 @@ def test_finder_minutely(self):
_, ax = self.plt.subplots()
ser.plot(ax=ax)
xaxis = ax.get_xaxis()
- rs = xaxis.get_majorticklocs()[0]
+ rs = xaxis.get_majorticklocs()[1]
xp = Period("1/1/1999", freq="Min").ordinal
assert rs == xp
@@ -534,7 +542,7 @@ def test_finder_hourly(self):
_, ax = self.plt.subplots()
ser.plot(ax=ax)
xaxis = ax.get_xaxis()
- rs = xaxis.get_majorticklocs()[0]
+ rs = xaxis.get_majorticklocs()[1]
xp = Period("1/1/1999", freq="H").ordinal
assert rs == xp
@@ -1410,7 +1418,9 @@ def test_plot_outofbounds_datetime(self):
def test_format_timedelta_ticks_narrow(self):
- expected_labels = ["00:00:00.0000000{:0>2d}".format(i) for i in range(10)]
+ expected_labels = [
+ "00:00:00.0000000{:0>2d}".format(i) for i in np.arange(0, 10, 2)
+ ]
rng = timedelta_range("0", periods=10, freq="ns")
df = DataFrame(np.random.randn(len(rng), 3), rng)
@@ -1420,8 +1430,8 @@ def test_format_timedelta_ticks_narrow(self):
labels = ax.get_xticklabels()
result_labels = [x.get_text() for x in labels]
- assert len(result_labels) == len(expected_labels)
- assert result_labels == expected_labels
+ assert (len(result_labels) - 2) == len(expected_labels)
+ assert result_labels[1:-1] == expected_labels
def test_format_timedelta_ticks_wide(self):
expected_labels = [
@@ -1444,8 +1454,8 @@ def test_format_timedelta_ticks_wide(self):
labels = ax.get_xticklabels()
result_labels = [x.get_text() for x in labels]
- assert len(result_labels) == len(expected_labels)
- assert result_labels == expected_labels
+ assert (len(result_labels) - 2) == len(expected_labels)
+ assert result_labels[1:-1] == expected_labels
def test_timedelta_plot(self):
# test issue #8711
diff --git a/pandas/tests/plotting/test_frame.py b/pandas/tests/plotting/test_frame.py
index 7fdc0252b71e3..f672cd3a6aa58 100644
--- a/pandas/tests/plotting/test_frame.py
+++ b/pandas/tests/plotting/test_frame.py
@@ -3177,6 +3177,58 @@ def test_x_multiindex_values_ticks(self):
assert labels_position["(2013, 1)"] == 2.0
assert labels_position["(2013, 2)"] == 3.0
+ @pytest.mark.parametrize("kind", ["line", "area"])
+ def test_xlim_plot_line(self, kind):
+ # test if xlim is set correctly in plot.line and plot.area
+ # GH 27686
+ df = pd.DataFrame([2, 4], index=[1, 2])
+ ax = df.plot(kind=kind)
+ xlims = ax.get_xlim()
+ assert xlims[0] < 1
+ assert xlims[1] > 2
+
+ def test_xlim_plot_line_correctly_in_mixed_plot_type(self):
+ # test if xlim is set correctly when ax contains multiple different kinds
+ # of plots, GH 27686
+ fig, ax = self.plt.subplots()
+
+ indexes = ["k1", "k2", "k3", "k4"]
+ df = pd.DataFrame(
+ {
+ "s1": [1000, 2000, 1500, 2000],
+ "s2": [900, 1400, 2000, 3000],
+ "s3": [1500, 1500, 1600, 1200],
+ "secondary_y": [1, 3, 4, 3],
+ },
+ index=indexes,
+ )
+ df[["s1", "s2", "s3"]].plot.bar(ax=ax, stacked=False)
+ df[["secondary_y"]].plot(ax=ax, secondary_y=True)
+
+ xlims = ax.get_xlim()
+ assert xlims[0] < 0
+ assert xlims[1] > 3
+
+ # make sure axis labels are plotted correctly as well
+ xticklabels = [t.get_text() for t in ax.get_xticklabels()]
+ assert xticklabels == indexes
+
+ def test_subplots_sharex_false(self):
+ # test when sharex is set to False, two plots should have different
+ # labels, GH 25160
+ df = pd.DataFrame(np.random.rand(10, 2))
+ df.iloc[5:, 1] = np.nan
+ df.iloc[:5, 0] = np.nan
+
+ figs, axs = self.plt.subplots(2, 1)
+ df.plot.line(ax=axs, subplots=True, sharex=False)
+
+ expected_ax1 = np.arange(4.5, 10, 0.5)
+ expected_ax2 = np.arange(-0.5, 5, 0.5)
+
+ tm.assert_numpy_array_equal(axs[0].get_xticks(), expected_ax1)
+ tm.assert_numpy_array_equal(axs[1].get_xticks(), expected_ax2)
+
def _generate_4_axes_via_gridspec():
import matplotlib.pyplot as plt
diff --git a/pandas/tests/plotting/test_series.py b/pandas/tests/plotting/test_series.py
index 111c3a70fc09c..2c4c8aa7461a3 100644
--- a/pandas/tests/plotting/test_series.py
+++ b/pandas/tests/plotting/test_series.py
@@ -897,3 +897,15 @@ def test_plot_accessor_updates_on_inplace(self):
_, ax = self.plt.subplots()
after = ax.xaxis.get_ticklocs()
tm.assert_numpy_array_equal(before, after)
+
+ @pytest.mark.parametrize("kind", ["line", "area"])
+ def test_plot_xlim_for_series(self, kind):
+ # test if xlim is also correctly plotted in Series for line and area
+ # GH 27686
+ s = Series([2, 3])
+ _, ax = self.plt.subplots()
+ s.plot(kind=kind, ax=ax)
+ xlims = ax.get_xlim()
+
+ assert xlims[0] < 0
+ assert xlims[1] > 1
| Due to previous PRs, there was an issue with xlim being set incorrectly for lines. I dug into the codebase a bit and found it also affects `plot(kind='area')`, because both share the same `LinePlot`. The issue is reproducible for both DataFrame and Series. Now the issue is gone:

The duplicate (closed) issue #27796 is solved as well:

The issue in #25160 is gone as well.

Likewise for #24784, the issue is also gone:

- [x] closes #27686 , #25160, #24784
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/27993 | 2019-08-18T11:41:09Z | 2019-08-20T14:18:56Z | 2019-08-20T14:18:55Z | 2019-08-20T14:18:59Z |
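The replacement of the hard-coded `set_xlim` with `ax.relim()` leans on matplotlib's own autoscaling, which reapplies the usual margins around the data. A minimal matplotlib-only sketch of the difference, using the headless Agg backend:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend; the sketch only inspects limits
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([1, 2], [2, 4])

# Old behaviour: pin the limits to the data extremes -> no margin at all.
ax.set_xlim(1, 2)
pinned = ax.get_xlim()  # (1.0, 2.0)

# Fixed behaviour: let matplotlib recompute the limits from the artists.
ax.autoscale(enable=True, axis="x")  # set_xlim disabled x autoscaling
ax.relim()                            # rebuild data limits from all artists
ax.autoscale_view()                   # reapply the default margins
relimmed = ax.get_xlim()
plt.close(fig)
```

With default rcParams (`axes.xmargin = 0.05`) the recomputed limits extend a little beyond the data on both sides, which is why the tests above assert `xlims[0] < 1` and `xlims[1] > 2`.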
BUG/TST: fix and test for timezone drop in GroupBy.shift/bfill/ffill | diff --git a/doc/source/whatsnew/v0.25.2.rst b/doc/source/whatsnew/v0.25.2.rst
index 1cdf213d81a74..69f324211e5b2 100644
--- a/doc/source/whatsnew/v0.25.2.rst
+++ b/doc/source/whatsnew/v0.25.2.rst
@@ -78,6 +78,7 @@ Groupby/resample/rolling
^^^^^^^^^^^^^^^^^^^^^^^^
- Bug incorrectly raising an ``IndexError`` when passing a list of quantiles to :meth:`pandas.core.groupby.DataFrameGroupBy.quantile` (:issue:`28113`).
+- Bug in :meth:`pandas.core.groupby.GroupBy.shift`, :meth:`pandas.core.groupby.GroupBy.bfill` and :meth:`pandas.core.groupby.GroupBy.ffill` where timezone information would be dropped (:issue:`19995`, :issue:`27992`)
-
-
-
diff --git a/pandas/core/groupby/groupby.py b/pandas/core/groupby/groupby.py
index 55def024cb1d4..e010e615e176e 100644
--- a/pandas/core/groupby/groupby.py
+++ b/pandas/core/groupby/groupby.py
@@ -2263,26 +2263,28 @@ def _get_cythonized_result(
base_func = getattr(libgroupby, how)
for name, obj in self._iterate_slices():
+ values = obj._data._values
+
if aggregate:
result_sz = ngroups
else:
- result_sz = len(obj.values)
+ result_sz = len(values)
if not cython_dtype:
- cython_dtype = obj.values.dtype
+ cython_dtype = values.dtype
result = np.zeros(result_sz, dtype=cython_dtype)
func = partial(base_func, result, labels)
inferences = None
if needs_values:
- vals = obj.values
+ vals = values
if pre_processing:
vals, inferences = pre_processing(vals)
func = partial(func, vals)
if needs_mask:
- mask = isna(obj.values).view(np.uint8)
+ mask = isna(values).view(np.uint8)
func = partial(func, mask)
if needs_ngroups:
@@ -2291,7 +2293,7 @@ def _get_cythonized_result(
func(**kwargs) # Call func to modify indexer values in place
if result_is_index:
- result = algorithms.take_nd(obj.values, result)
+ result = algorithms.take_nd(values, result)
if post_processing:
result = post_processing(result, inferences)
diff --git a/pandas/tests/groupby/test_groupby.py b/pandas/tests/groupby/test_groupby.py
index 4556b22b57279..bec5cbc5fecb8 100644
--- a/pandas/tests/groupby/test_groupby.py
+++ b/pandas/tests/groupby/test_groupby.py
@@ -1882,3 +1882,69 @@ def test_groupby_axis_1(group_name):
results = df.groupby(group_name, axis=1).sum()
expected = df.T.groupby(group_name).sum().T
assert_frame_equal(results, expected)
+
+
+@pytest.mark.parametrize(
+ "op, expected",
+ [
+ (
+ "shift",
+ {
+ "time": [
+ None,
+ None,
+ Timestamp("2019-01-01 12:00:00"),
+ Timestamp("2019-01-01 12:30:00"),
+ None,
+ None,
+ ]
+ },
+ ),
+ (
+ "bfill",
+ {
+ "time": [
+ Timestamp("2019-01-01 12:00:00"),
+ Timestamp("2019-01-01 12:30:00"),
+ Timestamp("2019-01-01 14:00:00"),
+ Timestamp("2019-01-01 14:30:00"),
+ Timestamp("2019-01-01 14:00:00"),
+ Timestamp("2019-01-01 14:30:00"),
+ ]
+ },
+ ),
+ (
+ "ffill",
+ {
+ "time": [
+ Timestamp("2019-01-01 12:00:00"),
+ Timestamp("2019-01-01 12:30:00"),
+ Timestamp("2019-01-01 12:00:00"),
+ Timestamp("2019-01-01 12:30:00"),
+ Timestamp("2019-01-01 14:00:00"),
+ Timestamp("2019-01-01 14:30:00"),
+ ]
+ },
+ ),
+ ],
+)
+def test_shift_bfill_ffill_tz(tz_naive_fixture, op, expected):
+ # GH19995, GH27992: Check that timezone does not drop in shift, bfill, and ffill
+ tz = tz_naive_fixture
+ data = {
+ "id": ["A", "B", "A", "B", "A", "B"],
+ "time": [
+ Timestamp("2019-01-01 12:00:00"),
+ Timestamp("2019-01-01 12:30:00"),
+ None,
+ None,
+ Timestamp("2019-01-01 14:00:00"),
+ Timestamp("2019-01-01 14:30:00"),
+ ],
+ }
+ df = DataFrame(data).assign(time=lambda x: x.time.dt.tz_localize(tz))
+
+ grouped = df.groupby("id")
+ result = getattr(grouped, op)()
+ expected = DataFrame(expected).assign(time=lambda x: x.time.dt.tz_localize(tz))
+ assert_frame_equal(result, expected)
| closes #19995
Timezone info is dropped in GroupBy.shift, bfill, and ffill
because the indexer calculated by the Cythonized functions is applied to
the NumPy representation of the values, which carries no timezone.
Could you please review this fix?
Many thanks,
-noritada
```
In [1]: import pandas as pd
...: pd.__version__
Out[1]: '0.24.2'
In [2]: df = pd.DataFrame([
...: ['a', pd.Timestamp('2019-01-01 00:00:00+09:00')],
...: ['b', pd.Timestamp('2019-01-01 00:05:00+09:00')],
...: ['a', None],
...: ['b', None],
...: ['a', pd.Timestamp('2019-01-01 00:20:00+09:00')],
...: ['b', pd.Timestamp('2019-01-01 00:25:00+09:00')],
...: ], columns=['id', 'time'])
In [3]: df['time'].shift(1)
Out[3]:
0 NaT
1 2019-01-01 00:00:00+09:00
2 2019-01-01 00:05:00+09:00
3 NaT
4 NaT
5 2019-01-01 00:20:00+09:00
Name: time, dtype: datetime64[ns, pytz.FixedOffset(540)]
In [4]: df.groupby('id')['time'].shift(1)
Out[4]:
0 NaT
1 NaT
2 2018-12-31 15:00:00
3 2018-12-31 15:05:00
4 NaT
5 NaT
Name: time, dtype: datetime64[ns]
In [5]: df.groupby('id')['time'].bfill()
Out[5]:
0 2018-12-31 15:00:00
1 2018-12-31 15:05:00
2 2018-12-31 15:20:00
3 2018-12-31 15:25:00
4 2018-12-31 15:20:00
5 2018-12-31 15:25:00
Name: time, dtype: datetime64[ns]
In [6]: df.groupby('id')['time'].ffill()
Out[6]:
0 2018-12-31 15:00:00
1 2018-12-31 15:05:00
2 2018-12-31 15:00:00
3 2018-12-31 15:05:00
4 2018-12-31 15:20:00
5 2018-12-31 15:25:00
Name: time, dtype: datetime64[ns]
```
- [ ] closes #xxxx
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/27992 | 2019-08-18T11:25:59Z | 2019-09-09T12:06:01Z | 2019-09-09T12:06:01Z | 2019-09-09T15:24:05Z |
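For reference, a minimal sketch of the behaviour this PR targets. On a pandas build containing the fix, the grouped `shift` keeps the timezone-aware dtype instead of silently converting to naive UTC; the column values here are illustrative, not taken from the PR's tests:

```python
import pandas as pd

df = pd.DataFrame(
    [
        ["a", pd.Timestamp("2019-01-01 00:00:00+09:00")],
        ["b", pd.Timestamp("2019-01-01 00:05:00+09:00")],
        ["a", pd.Timestamp("2019-01-01 00:20:00+09:00")],
    ],
    columns=["id", "time"],
)
shifted = df.groupby("id")["time"].shift(1)

# With the fix applied, the timezone-aware dtype survives the grouped shift
# instead of being flattened to a naive datetime64[ns] representation.
assert str(shifted.dtype).startswith("datetime64[ns,")
# The first row of each group has nothing to shift in, so it becomes NaT.
assert shifted.isna().tolist() == [True, True, False]
```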
DataFrame html repr: also follow min_rows setting | diff --git a/doc/source/whatsnew/v0.25.1.rst b/doc/source/whatsnew/v0.25.1.rst
index 34b149a6b8261..3116d4be30cbf 100644
--- a/doc/source/whatsnew/v0.25.1.rst
+++ b/doc/source/whatsnew/v0.25.1.rst
@@ -105,6 +105,7 @@ I/O
- Avoid calling ``S3File.s3`` when reading parquet, as this was removed in s3fs version 0.3.0 (:issue:`27756`)
- Better error message when a negative header is passed in :func:`pandas.read_csv` (:issue:`27779`)
+- Follow the ``min_rows`` display option (introduced in v0.25.0) correctly in the html repr in the notebook (:issue:`27991`).
-
Plotting
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 603a615c1f8cb..b377e571a5abc 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -669,15 +669,18 @@ def _repr_html_(self):
if get_option("display.notebook_repr_html"):
max_rows = get_option("display.max_rows")
+ min_rows = get_option("display.min_rows")
max_cols = get_option("display.max_columns")
show_dimensions = get_option("display.show_dimensions")
- return self.to_html(
+ formatter = fmt.DataFrameFormatter(
+ self,
max_rows=max_rows,
+ min_rows=min_rows,
max_cols=max_cols,
show_dimensions=show_dimensions,
- notebook=True,
)
+ return formatter.to_html(notebook=True)
else:
return None
diff --git a/pandas/tests/io/formats/test_format.py b/pandas/tests/io/formats/test_format.py
index a048e3bb867bd..c0451a0672c89 100644
--- a/pandas/tests/io/formats/test_format.py
+++ b/pandas/tests/io/formats/test_format.py
@@ -471,28 +471,35 @@ def test_repr_min_rows(self):
# default setting no truncation even if above min_rows
assert ".." not in repr(df)
+ assert ".." not in df._repr_html_()
df = pd.DataFrame({"a": range(61)})
# default of max_rows 60 triggers truncation if above
assert ".." in repr(df)
+ assert ".." in df._repr_html_()
with option_context("display.max_rows", 10, "display.min_rows", 4):
# truncated after first two rows
assert ".." in repr(df)
assert "2 " not in repr(df)
+ assert "..." in df._repr_html_()
+ assert "<td>2</td>" not in df._repr_html_()
with option_context("display.max_rows", 12, "display.min_rows", None):
# when set to None, follow value of max_rows
assert "5 5" in repr(df)
+ assert "<td>5</td>" in df._repr_html_()
with option_context("display.max_rows", 10, "display.min_rows", 12):
# when set value higher as max_rows, use the minimum
assert "5 5" not in repr(df)
+ assert "<td>5</td>" not in df._repr_html_()
with option_context("display.max_rows", None, "display.min_rows", 12):
# max_rows of None -> never truncate
assert ".." not in repr(df)
+ assert ".." not in df._repr_html_()
def test_str_max_colwidth(self):
# GH 7856
| Follow-up on https://github.com/pandas-dev/pandas/pull/27095, where I forgot to apply this setting in the html repr as well.
Thoughts on including this in 0.25.1 or not? It's kind of an oversight of the 0.25 feature, but of course also an actual change in the user experience in the notebook. | https://api.github.com/repos/pandas-dev/pandas/pulls/27991 | 2019-08-18T09:20:31Z | 2019-08-20T19:24:47Z | 2019-08-20T19:24:47Z | 2019-08-20T19:24:49Z |
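As an illustrative sketch of the option being wired up here: once `display.max_rows` triggers truncation, `display.min_rows` controls how many rows the truncated repr actually shows (two head plus two tail rows for `min_rows=4`):

```python
import pandas as pd

df = pd.DataFrame({"a": range(61)})

with pd.option_context("display.max_rows", 10, "display.min_rows", 4):
    text = repr(df)

# 61 rows exceed max_rows=10, so the repr is truncated ("..") and,
# with min_rows=4, only the first two and last two rows are rendered.
assert ".." in text
assert "60" in text  # the last row label is still shown in the tail
```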
DOC: remove savefig references from the docs v0.7.3 | diff --git a/doc/source/whatsnew/v0.7.3.rst b/doc/source/whatsnew/v0.7.3.rst
index a8697f60d7467..020cf3bdc2d59 100644
--- a/doc/source/whatsnew/v0.7.3.rst
+++ b/doc/source/whatsnew/v0.7.3.rst
@@ -25,8 +25,6 @@ New features
from pandas.tools.plotting import scatter_matrix
scatter_matrix(df, alpha=0.2) # noqa F821
-.. image:: ../savefig/scatter_matrix_kde.png
- :width: 5in
- Add ``stacked`` argument to Series and DataFrame's ``plot`` method for
:ref:`stacked bar plots <visualization.barplot>`.
@@ -35,15 +33,11 @@ New features
df.plot(kind='bar', stacked=True) # noqa F821
-.. image:: ../savefig/bar_plot_stacked_ex.png
- :width: 4in
.. code-block:: python
df.plot(kind='barh', stacked=True) # noqa F821
-.. image:: ../savefig/barh_plot_stacked_ex.png
- :width: 4in
- Add log x and y :ref:`scaling options <visualization.basic>` to
``DataFrame.plot`` and ``Series.plot``
| - [x] closes #27971
- [ ] tests added / passed
- [ ] passes `black pandas`
- [ ] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/27990 | 2019-08-18T06:04:01Z | 2019-08-18T08:23:48Z | 2019-08-18T08:23:48Z | 2019-08-18T08:23:49Z |
DOC: Fix GL01 and GL02 errors in the docstrings | diff --git a/pandas/conftest.py b/pandas/conftest.py
index 2cf7bf6a6df41..b032e14d8f7e1 100644
--- a/pandas/conftest.py
+++ b/pandas/conftest.py
@@ -123,18 +123,22 @@ def ip():
@pytest.fixture(params=[True, False, None])
def observed(request):
- """ pass in the observed keyword to groupby for [True, False]
+ """
+ Pass in the observed keyword to groupby for [True, False]
This indicates whether categoricals should return values for
values which are not in the grouper [False / None], or only values which
appear in the grouper [True]. [None] is supported for future compatibility
if we decide to change the default (and would need to warn if this
- parameter is not passed)"""
+ parameter is not passed).
+ """
return request.param
@pytest.fixture(params=[True, False, None])
def ordered_fixture(request):
- """Boolean 'ordered' parameter for Categorical."""
+ """
+ Boolean 'ordered' parameter for Categorical.
+ """
return request.param
@@ -234,7 +238,8 @@ def cython_table_items(request):
def _get_cython_table_params(ndframe, func_names_and_expected):
- """combine frame, functions from SelectionMixin._cython_table
+ """
+ Combine frame, functions from SelectionMixin._cython_table
keys and expected result.
Parameters
@@ -242,7 +247,7 @@ def _get_cython_table_params(ndframe, func_names_and_expected):
ndframe : DataFrame or Series
func_names_and_expected : Sequence of two items
The first item is a name of a NDFrame method ('sum', 'prod') etc.
- The second item is the expected return value
+ The second item is the expected return value.
Returns
-------
@@ -341,7 +346,8 @@ def strict_data_files(pytestconfig):
@pytest.fixture
def datapath(strict_data_files):
- """Get the path to a data file.
+ """
+ Get the path to a data file.
Parameters
----------
@@ -375,7 +381,9 @@ def deco(*args):
@pytest.fixture
def iris(datapath):
- """The iris dataset as a DataFrame."""
+ """
+ The iris dataset as a DataFrame.
+ """
return pd.read_csv(datapath("data", "iris.csv"))
@@ -504,7 +512,8 @@ def tz_aware_fixture(request):
@pytest.fixture(params=STRING_DTYPES)
def string_dtype(request):
- """Parametrized fixture for string dtypes.
+ """
+ Parametrized fixture for string dtypes.
* str
* 'str'
@@ -515,7 +524,8 @@ def string_dtype(request):
@pytest.fixture(params=BYTES_DTYPES)
def bytes_dtype(request):
- """Parametrized fixture for bytes dtypes.
+ """
+ Parametrized fixture for bytes dtypes.
* bytes
* 'bytes'
@@ -525,7 +535,8 @@ def bytes_dtype(request):
@pytest.fixture(params=OBJECT_DTYPES)
def object_dtype(request):
- """Parametrized fixture for object dtypes.
+ """
+ Parametrized fixture for object dtypes.
* object
* 'object'
@@ -535,7 +546,8 @@ def object_dtype(request):
@pytest.fixture(params=DATETIME64_DTYPES)
def datetime64_dtype(request):
- """Parametrized fixture for datetime64 dtypes.
+ """
+ Parametrized fixture for datetime64 dtypes.
* 'datetime64[ns]'
* 'M8[ns]'
@@ -545,7 +557,8 @@ def datetime64_dtype(request):
@pytest.fixture(params=TIMEDELTA64_DTYPES)
def timedelta64_dtype(request):
- """Parametrized fixture for timedelta64 dtypes.
+ """
+ Parametrized fixture for timedelta64 dtypes.
* 'timedelta64[ns]'
* 'm8[ns]'
diff --git a/pandas/io/html.py b/pandas/io/html.py
index 9d2647f226f00..490c574463b9b 100644
--- a/pandas/io/html.py
+++ b/pandas/io/html.py
@@ -1,4 +1,5 @@
-""":mod:`pandas.io.html` is a module containing functionality for dealing with
+"""
+:mod:`pandas.io.html` is a module containing functionality for dealing with
HTML IO.
"""
@@ -58,7 +59,8 @@ def _importers():
def _remove_whitespace(s, regex=_RE_WHITESPACE):
- """Replace extra whitespace inside of a string with a single space.
+ """
+ Replace extra whitespace inside of a string with a single space.
Parameters
----------
@@ -77,7 +79,8 @@ def _remove_whitespace(s, regex=_RE_WHITESPACE):
def _get_skiprows(skiprows):
- """Get an iterator given an integer, slice or container.
+ """
+ Get an iterator given an integer, slice or container.
Parameters
----------
@@ -107,7 +110,8 @@ def _get_skiprows(skiprows):
def _read(obj):
- """Try to read from a url, file or string.
+ """
+ Try to read from a url, file or string.
Parameters
----------
@@ -136,7 +140,8 @@ def _read(obj):
class _HtmlFrameParser:
- """Base class for parsers that parse HTML into DataFrames.
+ """
+ Base class for parsers that parse HTML into DataFrames.
Parameters
----------
@@ -515,7 +520,8 @@ def _handle_hidden_tables(self, tbl_list, attr_name):
class _BeautifulSoupHtml5LibFrameParser(_HtmlFrameParser):
- """HTML to DataFrame parser that uses BeautifulSoup under the hood.
+ """
+ HTML to DataFrame parser that uses BeautifulSoup under the hood.
See Also
--------
@@ -622,7 +628,8 @@ def _build_xpath_expr(attrs):
class _LxmlFrameParser(_HtmlFrameParser):
- """HTML to DataFrame parser that uses lxml under the hood.
+ """
+ HTML to DataFrame parser that uses lxml under the hood.
Warning
-------
@@ -937,7 +944,8 @@ def read_html(
keep_default_na=True,
displayed_only=True,
):
- r"""Read HTML tables into a ``list`` of ``DataFrame`` objects.
+ r"""
+ Read HTML tables into a ``list`` of ``DataFrame`` objects.
Parameters
----------
| - [x] closes #27986
- [ ] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
Fixed docstrings whose text did not start on the line after the opening `"""`, or whose closing `"""` was not placed on its own line immediately after the text. Issue #27986 | https://api.github.com/repos/pandas-dev/pandas/pulls/27988 | 2019-08-18T04:54:18Z | 2019-08-24T09:32:55Z | 2019-08-24T09:32:55Z | 2019-08-24T09:34:15Z |
DOC: Added periods to end of docstrings in explode function | diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 603a615c1f8cb..45a9869268cb8 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -6177,14 +6177,14 @@ def stack(self, level=-1, dropna=True):
def explode(self, column: Union[str, Tuple]) -> "DataFrame":
"""
- Transform each element of a list-like to a row, replicating the
- index values.
+ Transform each element of a list-like to a row, replicating index values.
.. versionadded:: 0.25.0
Parameters
----------
column : str or tuple
+ Column to explode.
Returns
-------
@@ -6200,8 +6200,8 @@ def explode(self, column: Union[str, Tuple]) -> "DataFrame":
See Also
--------
DataFrame.unstack : Pivot a level of the (necessarily hierarchical)
- index labels
- DataFrame.melt : Unpivot a DataFrame from wide format to long format
+ index labels.
+ DataFrame.melt : Unpivot a DataFrame from wide format to long format.
Series.explode : Explode a DataFrame from list-like columns to long format.
Notes
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 3f04970ee4e58..441b59aabb704 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -3620,7 +3620,7 @@ def explode(self) -> "Series":
Series.str.split : Split string values on specified separator.
Series.unstack : Unstack, a.k.a. pivot, Series with MultiIndex
to produce DataFrame.
- DataFrame.melt : Unpivot a DataFrame from wide format to long format
+ DataFrame.melt : Unpivot a DataFrame from wide format to long format.
DataFrame.explode : Explode a DataFrame from list-like
columns to long format.
| xref #23630 | https://api.github.com/repos/pandas-dev/pandas/pulls/27973 | 2019-08-17T19:52:49Z | 2019-08-26T02:21:36Z | 2019-08-26T02:21:36Z | 2019-08-30T15:37:19Z |
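Since the docstring change above only describes the `column` parameter, a quick runnable sketch of what `DataFrame.explode` does — each list element becomes its own row, with the original index labels replicated:

```python
import pandas as pd

df = pd.DataFrame({"A": [[1, 2], [3, 4]], "B": ["x", "y"]})
exploded = df.explode("A")

# List elements are unpacked into rows; index values repeat per source row,
# and the other columns are broadcast alongside.
assert list(exploded["A"]) == [1, 2, 3, 4]
assert list(exploded.index) == [0, 0, 1, 1]
assert list(exploded["B"]) == ["x", "x", "y", "y"]
```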
added names, fastpath parameters explanation to pandas.Series | diff --git a/pandas/core/series.py b/pandas/core/series.py
index 3f04970ee4e58..b6fc9ae82048b 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -170,6 +170,8 @@ class Series(base.IndexOpsMixin, generic.NDFrame):
Data type for the output Series. If not specified, this will be
inferred from `data`.
See the :ref:`user guide <basics.dtypes>` for more usages.
+ name : str, optional
+ The name to give to the Series.
copy : bool, default False
Copy input data.
"""
| - [x] closes #xxxx
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/27964 | 2019-08-17T16:56:23Z | 2019-11-08T16:42:42Z | 2019-11-08T16:42:41Z | 2019-11-08T16:42:47Z |
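A minimal sketch of the `name` parameter documented above — the name is attached to the Series and carries through to derived objects:

```python
import pandas as pd

s = pd.Series([1, 2, 3], name="prices")

# The name is stored on the Series and becomes the column label
# when the Series is converted to a one-column DataFrame.
assert s.name == "prices"
assert list(s.to_frame().columns) == ["prices"]
```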
BUG: TimedeltaArray - Index result.name | diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index b983117478c61..415255cdbad06 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -2325,7 +2325,10 @@ def __sub__(self, other):
return Index(np.array(self) - other)
def __rsub__(self, other):
- return Index(other - np.array(self))
+ # wrap Series to ensure we pin name correctly
+ from pandas import Series
+
+ return Index(other - Series(self))
def __and__(self, other):
return self.intersection(other)
diff --git a/pandas/core/ops/array_ops.py b/pandas/core/ops/array_ops.py
index a3bfb2e10bb66..523ba5d42a69c 100644
--- a/pandas/core/ops/array_ops.py
+++ b/pandas/core/ops/array_ops.py
@@ -74,8 +74,9 @@ def masked_arith_op(x, y, op):
result[mask] = op(xrav[mask], yrav[mask])
else:
- assert is_scalar(y), type(y)
- assert isinstance(x, np.ndarray), type(x)
+ if not is_scalar(y):
+ raise TypeError(type(y))
+
# mask is only meaningful for x
result = np.empty(x.size, dtype=x.dtype)
mask = notna(xrav)
diff --git a/pandas/tests/arithmetic/test_timedelta64.py b/pandas/tests/arithmetic/test_timedelta64.py
index 33a5d45df3885..6d6b85a1e81e1 100644
--- a/pandas/tests/arithmetic/test_timedelta64.py
+++ b/pandas/tests/arithmetic/test_timedelta64.py
@@ -1378,8 +1378,12 @@ def test_td64arr_add_offset_array(self, box):
@pytest.mark.parametrize(
"names", [(None, None, None), ("foo", "bar", None), ("foo", "foo", "foo")]
)
- def test_td64arr_sub_offset_index(self, names, box):
+ def test_td64arr_sub_offset_index(self, names, box_with_array):
# GH#18824, GH#19744
+ box = box_with_array
+ xbox = box if box is not tm.to_array else pd.Index
+ exname = names[2] if box is not tm.to_array else names[1]
+
if box is pd.DataFrame and names[1] == "bar":
pytest.skip(
"Name propagation for DataFrame does not behave like "
@@ -1390,11 +1394,11 @@ def test_td64arr_sub_offset_index(self, names, box):
other = pd.Index([pd.offsets.Hour(n=1), pd.offsets.Minute(n=-2)], name=names[1])
expected = TimedeltaIndex(
- [tdi[n] - other[n] for n in range(len(tdi))], freq="infer", name=names[2]
+ [tdi[n] - other[n] for n in range(len(tdi))], freq="infer", name=exname
)
tdi = tm.box_expected(tdi, box)
- expected = tm.box_expected(expected, box)
+ expected = tm.box_expected(expected, xbox)
# The DataFrame operation is transposed and so operates as separate
# scalar operations, which do not issue a PerformanceWarning
| - [ ] closes #xxxx
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/27962 | 2019-08-17T02:59:37Z | 2019-08-19T21:01:51Z | 2019-08-19T21:01:50Z | 2019-08-19T21:04:50Z |
REF: use dispatch_to_extension_op for bool ops | diff --git a/pandas/core/ops/__init__.py b/pandas/core/ops/__init__.py
index dbcf09a401f27..e917a5c999238 100644
--- a/pandas/core/ops/__init__.py
+++ b/pandas/core/ops/__init__.py
@@ -818,6 +818,11 @@ def wrapper(self, other):
# Defer to DataFrame implementation; fail early
return NotImplemented
+ elif should_extension_dispatch(self, other):
+ # e.g. SparseArray
+ res_values = dispatch_to_extension_op(op, self, other)
+ return _construct_result(self, res_values, index=self.index, name=res_name)
+
elif isinstance(other, (ABCSeries, ABCIndexClass)):
is_other_int_dtype = is_integer_dtype(other.dtype)
other = fill_int(other) if is_other_int_dtype else fill_bool(other)
| Along with #27912 this completes the process of doing this for all Series ops. From there we can move the array-specific components to array_ops and define the PandasArray ops appropriately. | https://api.github.com/repos/pandas-dev/pandas/pulls/27959 | 2019-08-17T00:26:59Z | 2019-09-02T21:26:54Z | 2019-09-02T21:26:53Z | 2019-09-03T01:50:14Z |
Backport PR #27956 on branch 0.25.x (TST: xfail on 37, win) | diff --git a/pandas/tests/groupby/test_categorical.py b/pandas/tests/groupby/test_categorical.py
index 9b8c8e6d8a077..99cc4cf0ffbd1 100644
--- a/pandas/tests/groupby/test_categorical.py
+++ b/pandas/tests/groupby/test_categorical.py
@@ -4,6 +4,8 @@
import numpy as np
import pytest
+from pandas.compat import PY37, is_platform_windows
+
import pandas as pd
from pandas import (
Categorical,
@@ -208,6 +210,9 @@ def test_level_get_group(observed):
# GH#21636 previously flaky on py37
+@pytest.mark.xfail(
+ is_platform_windows() and PY37, reason="Flaky, GH-27902", strict=False
+)
@pytest.mark.parametrize("ordered", [True, False])
def test_apply(ordered):
# GH 10138
| Backport PR #27956: TST: xfail on 37, win | https://api.github.com/repos/pandas-dev/pandas/pulls/27957 | 2019-08-16T20:07:19Z | 2019-08-17T20:20:30Z | 2019-08-17T20:20:30Z | 2019-08-17T20:20:31Z |
TST: xfail on 37, win | diff --git a/pandas/tests/groupby/test_categorical.py b/pandas/tests/groupby/test_categorical.py
index 756de3edd33dd..b5c2de267869d 100644
--- a/pandas/tests/groupby/test_categorical.py
+++ b/pandas/tests/groupby/test_categorical.py
@@ -4,6 +4,8 @@
import numpy as np
import pytest
+from pandas.compat import PY37, is_platform_windows
+
import pandas as pd
from pandas import (
Categorical,
@@ -208,6 +210,9 @@ def test_level_get_group(observed):
# GH#21636 previously flaky on py37
+@pytest.mark.xfail(
+ is_platform_windows() and PY37, reason="Flaky, GH-27902", strict=False
+)
@pytest.mark.parametrize("ordered", [True, False])
def test_apply(ordered):
# GH 10138
| Closes https://github.com/pandas-dev/pandas/issues/27902 | https://api.github.com/repos/pandas-dev/pandas/pulls/27956 | 2019-08-16T19:16:26Z | 2019-08-16T20:06:36Z | 2019-08-16T20:06:35Z | 2019-08-16T20:06:39Z |
BUG: Remove null values before sorting during groupby nunique calculation | diff --git a/doc/source/whatsnew/v1.0.0.rst b/doc/source/whatsnew/v1.0.0.rst
index 58892b316c940..2f72de25c579b 100644
--- a/doc/source/whatsnew/v1.0.0.rst
+++ b/doc/source/whatsnew/v1.0.0.rst
@@ -97,7 +97,7 @@ Datetimelike
- Bug in :meth:`Series.__setitem__` incorrectly casting ``np.timedelta64("NaT")`` to ``np.datetime64("NaT")`` when inserting into a :class:`Series` with datetime64 dtype (:issue:`27311`)
- Bug in :meth:`Series.dt` property lookups when the underlying data is read-only (:issue:`27529`)
- Bug in ``HDFStore.__getitem__`` incorrectly reading tz attribute created in Python 2 (:issue:`26443`)
--
+- Bug in :meth:`pandas.core.groupby.SeriesGroupBy.nunique` where ``NaT`` values were interfering with the count of unique values (:issue:`27951`)
Timedelta
diff --git a/pandas/core/groupby/generic.py b/pandas/core/groupby/generic.py
index c0436e9389078..e514162f84c37 100644
--- a/pandas/core/groupby/generic.py
+++ b/pandas/core/groupby/generic.py
@@ -1147,6 +1147,10 @@ def nunique(self, dropna=True):
val = self.obj._internal_get_values()
+ # GH 27951
+ # temporary fix while we wait for NumPy bug 12629 to be fixed
+ val[isna(val)] = np.datetime64("NaT")
+
try:
sorter = np.lexsort((val, ids))
except TypeError: # catches object dtypes
diff --git a/pandas/tests/groupby/test_function.py b/pandas/tests/groupby/test_function.py
index d89233f2fd603..afb22a732691c 100644
--- a/pandas/tests/groupby/test_function.py
+++ b/pandas/tests/groupby/test_function.py
@@ -1,4 +1,5 @@
import builtins
+import datetime as dt
from io import StringIO
from itertools import product
from string import ascii_lowercase
@@ -9,7 +10,16 @@
from pandas.errors import UnsupportedFunctionCall
import pandas as pd
-from pandas import DataFrame, Index, MultiIndex, Series, Timestamp, date_range, isna
+from pandas import (
+ DataFrame,
+ Index,
+ MultiIndex,
+ NaT,
+ Series,
+ Timestamp,
+ date_range,
+ isna,
+)
import pandas.core.nanops as nanops
from pandas.util import _test_decorators as td, testing as tm
@@ -1015,6 +1025,42 @@ def test_nunique_with_timegrouper():
tm.assert_series_equal(result, expected)
+@pytest.mark.parametrize(
+ "key, data, dropna, expected",
+ [
+ (
+ ["x", "x", "x"],
+ [Timestamp("2019-01-01"), NaT, Timestamp("2019-01-01")],
+ True,
+ Series([1], index=pd.Index(["x"], name="key"), name="data"),
+ ),
+ (
+ ["x", "x", "x"],
+ [dt.date(2019, 1, 1), NaT, dt.date(2019, 1, 1)],
+ True,
+ Series([1], index=pd.Index(["x"], name="key"), name="data"),
+ ),
+ (
+ ["x", "x", "x", "y", "y"],
+ [dt.date(2019, 1, 1), NaT, dt.date(2019, 1, 1), NaT, dt.date(2019, 1, 1)],
+ False,
+ Series([2, 2], index=pd.Index(["x", "y"], name="key"), name="data"),
+ ),
+ (
+ ["x", "x", "x", "x", "y"],
+ [dt.date(2019, 1, 1), NaT, dt.date(2019, 1, 1), NaT, dt.date(2019, 1, 1)],
+ False,
+ Series([2, 1], index=pd.Index(["x", "y"], name="key"), name="data"),
+ ),
+ ],
+)
+def test_nunique_with_NaT(key, data, dropna, expected):
+ # GH 27951
+ df = pd.DataFrame({"key": key, "data": data})
+ result = df.groupby(["key"])["data"].nunique(dropna=dropna)
+ tm.assert_series_equal(result, expected)
+
+
def test_nunique_preserves_column_level_names():
# GH 23222
test = pd.DataFrame([1, 2, 2], columns=pd.Index(["A"], name="level_0"))
| - [x] Closes #27904
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/27951 | 2019-08-16T16:52:28Z | 2019-09-07T11:29:41Z | 2019-09-07T11:29:40Z | 2019-09-07T12:17:38Z |
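A short sketch of the behaviour the fix and its new tests pin down — with the default `dropna=True`, a `NaT` in the group no longer inflates the unique count:

```python
import pandas as pd

df = pd.DataFrame(
    {
        "key": ["x", "x", "x"],
        "data": [pd.Timestamp("2019-01-01"), pd.NaT, pd.Timestamp("2019-01-01")],
    }
)

# On a build containing the fix, NaT is excluded before counting,
# leaving a single unique timestamp for group "x".
result = df.groupby("key")["data"].nunique()
assert result.loc["x"] == 1
```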
TST: parametrize and de-duplicate arith tests | diff --git a/pandas/tests/arithmetic/conftest.py b/pandas/tests/arithmetic/conftest.py
index f047154f2c636..774ff14398bdb 100644
--- a/pandas/tests/arithmetic/conftest.py
+++ b/pandas/tests/arithmetic/conftest.py
@@ -190,7 +190,12 @@ def box(request):
@pytest.fixture(
- params=[pd.Index, pd.Series, pytest.param(pd.DataFrame, marks=pytest.mark.xfail)],
+ params=[
+ pd.Index,
+ pd.Series,
+ pytest.param(pd.DataFrame, marks=pytest.mark.xfail),
+ tm.to_array,
+ ],
ids=id_func,
)
def box_df_fail(request):
@@ -206,6 +211,7 @@ def box_df_fail(request):
(pd.Series, False),
(pd.DataFrame, False),
pytest.param((pd.DataFrame, True), marks=pytest.mark.xfail),
+ (tm.to_array, False),
],
ids=id_func,
)
diff --git a/pandas/tests/arithmetic/test_datetime64.py b/pandas/tests/arithmetic/test_datetime64.py
index 5931cd93cc8c5..bc7b979d2c7d0 100644
--- a/pandas/tests/arithmetic/test_datetime64.py
+++ b/pandas/tests/arithmetic/test_datetime64.py
@@ -348,28 +348,6 @@ def test_dt64arr_timestamp_equality(self, box_with_array):
expected = tm.box_expected([False, False], xbox)
tm.assert_equal(result, expected)
- @pytest.mark.parametrize(
- "op",
- [operator.eq, operator.ne, operator.gt, operator.ge, operator.lt, operator.le],
- )
- def test_comparison_tzawareness_compat(self, op):
- # GH#18162
- dr = pd.date_range("2016-01-01", periods=6)
- dz = dr.tz_localize("US/Pacific")
-
- # Check that there isn't a problem aware-aware and naive-naive do not
- # raise
- naive_series = Series(dr)
- aware_series = Series(dz)
- msg = "Cannot compare tz-naive and tz-aware"
- with pytest.raises(TypeError, match=msg):
- op(dz, naive_series)
- with pytest.raises(TypeError, match=msg):
- op(dr, aware_series)
-
- # TODO: implement _assert_tzawareness_compat for the reverse
- # comparison with the Series on the left-hand side
-
class TestDatetimeIndexComparisons:
@@ -599,15 +577,18 @@ def test_comparison_tzawareness_compat(self, op, box_df_fail):
with pytest.raises(TypeError, match=msg):
op(dz, np.array(list(dr), dtype=object))
- # Check that there isn't a problem aware-aware and naive-naive do not
- # raise
+ # The aware==aware and naive==naive comparisons should *not* raise
assert_all(dr == dr)
- assert_all(dz == dz)
+ assert_all(dr == list(dr))
+ assert_all(list(dr) == dr)
+ assert_all(np.array(list(dr), dtype=object) == dr)
+ assert_all(dr == np.array(list(dr), dtype=object))
- # FIXME: DataFrame case fails to raise for == and !=, wrong
- # message for inequalities
- assert (dr == list(dr)).all()
- assert (dz == list(dz)).all()
+ assert_all(dz == dz)
+ assert_all(dz == list(dz))
+ assert_all(list(dz) == dz)
+ assert_all(np.array(list(dz), dtype=object) == dz)
+ assert_all(dz == np.array(list(dz), dtype=object))
@pytest.mark.parametrize(
"op",
@@ -844,6 +825,7 @@ def test_dt64arr_isub_timedeltalike_scalar(
rng -= two_hours
tm.assert_equal(rng, expected)
+ # TODO: redundant with test_dt64arr_add_timedeltalike_scalar
def test_dt64arr_add_td64_scalar(self, box_with_array):
# scalar timedeltas/np.timedelta64 objects
# operate with np.timedelta64 correctly
@@ -1709,14 +1691,12 @@ def test_operators_datetimelike(self):
dt1 - dt2
dt2 - dt1
- # ## datetime64 with timetimedelta ###
+ # datetime64 with timetimedelta
dt1 + td1
td1 + dt1
dt1 - td1
- # TODO: Decide if this ought to work.
- # td1 - dt1
- # ## timetimedelta with datetime64 ###
+ # timetimedelta with datetime64
td1 + dt1
dt1 + td1
@@ -1914,7 +1894,7 @@ def test_dt64_series_add_intlike(self, tz, op):
with pytest.raises(TypeError, match=msg):
method(other)
with pytest.raises(TypeError, match=msg):
- method(other.values)
+ method(np.array(other))
with pytest.raises(TypeError, match=msg):
method(pd.Index(other))
@@ -2380,34 +2360,34 @@ def test_ufunc_coercions(self):
idx = date_range("2011-01-01", periods=3, freq="2D", name="x")
delta = np.timedelta64(1, "D")
+ exp = date_range("2011-01-02", periods=3, freq="2D", name="x")
for result in [idx + delta, np.add(idx, delta)]:
assert isinstance(result, DatetimeIndex)
- exp = date_range("2011-01-02", periods=3, freq="2D", name="x")
tm.assert_index_equal(result, exp)
assert result.freq == "2D"
+ exp = date_range("2010-12-31", periods=3, freq="2D", name="x")
for result in [idx - delta, np.subtract(idx, delta)]:
assert isinstance(result, DatetimeIndex)
- exp = date_range("2010-12-31", periods=3, freq="2D", name="x")
tm.assert_index_equal(result, exp)
assert result.freq == "2D"
delta = np.array(
[np.timedelta64(1, "D"), np.timedelta64(2, "D"), np.timedelta64(3, "D")]
)
+ exp = DatetimeIndex(
+ ["2011-01-02", "2011-01-05", "2011-01-08"], freq="3D", name="x"
+ )
for result in [idx + delta, np.add(idx, delta)]:
assert isinstance(result, DatetimeIndex)
- exp = DatetimeIndex(
- ["2011-01-02", "2011-01-05", "2011-01-08"], freq="3D", name="x"
- )
tm.assert_index_equal(result, exp)
assert result.freq == "3D"
+ exp = DatetimeIndex(
+ ["2010-12-31", "2011-01-01", "2011-01-02"], freq="D", name="x"
+ )
for result in [idx - delta, np.subtract(idx, delta)]:
assert isinstance(result, DatetimeIndex)
- exp = DatetimeIndex(
- ["2010-12-31", "2011-01-01", "2011-01-02"], freq="D", name="x"
- )
tm.assert_index_equal(result, exp)
assert result.freq == "D"
diff --git a/pandas/tests/arithmetic/test_numeric.py b/pandas/tests/arithmetic/test_numeric.py
index 2b23790e4ccd3..06ef1a30532ee 100644
--- a/pandas/tests/arithmetic/test_numeric.py
+++ b/pandas/tests/arithmetic/test_numeric.py
@@ -561,9 +561,9 @@ def test_div_int(self, numeric_idx):
tm.assert_index_equal(result, expected)
@pytest.mark.parametrize("op", [operator.mul, ops.rmul, operator.floordiv])
- def test_mul_int_identity(self, op, numeric_idx, box):
+ def test_mul_int_identity(self, op, numeric_idx, box_with_array):
idx = numeric_idx
- idx = tm.box_expected(idx, box)
+ idx = tm.box_expected(idx, box_with_array)
result = op(idx, 1)
tm.assert_equal(result, idx)
@@ -615,8 +615,9 @@ def test_mul_size_mismatch_raises(self, numeric_idx):
idx * np.array([1, 2])
@pytest.mark.parametrize("op", [operator.pow, ops.rpow])
- def test_pow_float(self, op, numeric_idx, box):
+ def test_pow_float(self, op, numeric_idx, box_with_array):
# test power calculations both ways, GH#14973
+ box = box_with_array
idx = numeric_idx
expected = pd.Float64Index(op(idx.values, 2.0))
@@ -626,8 +627,9 @@ def test_pow_float(self, op, numeric_idx, box):
result = op(idx, 2.0)
tm.assert_equal(result, expected)
- def test_modulo(self, numeric_idx, box):
+ def test_modulo(self, numeric_idx, box_with_array):
# GH#9244
+ box = box_with_array
idx = numeric_idx
expected = Index(idx.values % 2)
@@ -1041,7 +1043,8 @@ class TestObjectDtypeEquivalence:
# Tests that arithmetic operations match operations executed elementwise
@pytest.mark.parametrize("dtype", [None, object])
- def test_numarr_with_dtype_add_nan(self, dtype, box):
+ def test_numarr_with_dtype_add_nan(self, dtype, box_with_array):
+ box = box_with_array
ser = pd.Series([1, 2, 3], dtype=dtype)
expected = pd.Series([np.nan, np.nan, np.nan], dtype=dtype)
@@ -1055,7 +1058,8 @@ def test_numarr_with_dtype_add_nan(self, dtype, box):
tm.assert_equal(result, expected)
@pytest.mark.parametrize("dtype", [None, object])
- def test_numarr_with_dtype_add_int(self, dtype, box):
+ def test_numarr_with_dtype_add_int(self, dtype, box_with_array):
+ box = box_with_array
ser = pd.Series([1, 2, 3], dtype=dtype)
expected = pd.Series([2, 3, 4], dtype=dtype)
diff --git a/pandas/tests/arithmetic/test_object.py b/pandas/tests/arithmetic/test_object.py
index fd9db80671360..f9c1de115b3a4 100644
--- a/pandas/tests/arithmetic/test_object.py
+++ b/pandas/tests/arithmetic/test_object.py
@@ -89,7 +89,7 @@ def test_pow_ops_object(self):
@pytest.mark.parametrize("op", [operator.add, ops.radd])
@pytest.mark.parametrize("other", ["category", "Int64"])
- def test_add_extension_scalar(self, other, box, op):
+ def test_add_extension_scalar(self, other, box_with_array, op):
# GH#22378
# Check that scalars satisfying is_extension_array_dtype(obj)
# do not incorrectly try to dispatch to an ExtensionArray operation
@@ -97,8 +97,8 @@ def test_add_extension_scalar(self, other, box, op):
arr = pd.Series(["a", "b", "c"])
expected = pd.Series([op(x, other) for x in arr])
- arr = tm.box_expected(arr, box)
- expected = tm.box_expected(expected, box)
+ arr = tm.box_expected(arr, box_with_array)
+ expected = tm.box_expected(expected, box_with_array)
result = op(arr, other)
tm.assert_equal(result, expected)
@@ -133,16 +133,17 @@ def test_objarr_radd_str(self, box):
],
)
@pytest.mark.parametrize("dtype", [None, object])
- def test_objarr_radd_str_invalid(self, dtype, data, box):
+ def test_objarr_radd_str_invalid(self, dtype, data, box_with_array):
ser = Series(data, dtype=dtype)
- ser = tm.box_expected(ser, box)
+ ser = tm.box_expected(ser, box_with_array)
with pytest.raises(TypeError):
"foo_" + ser
@pytest.mark.parametrize("op", [operator.add, ops.radd, operator.sub, ops.rsub])
- def test_objarr_add_invalid(self, op, box):
+ def test_objarr_add_invalid(self, op, box_with_array):
# invalid ops
+ box = box_with_array
obj_ser = tm.makeObjectSeries()
obj_ser.name = "objects"
diff --git a/pandas/tests/arithmetic/test_timedelta64.py b/pandas/tests/arithmetic/test_timedelta64.py
index 33a5d45df3885..d28f05b4ab247 100644
--- a/pandas/tests/arithmetic/test_timedelta64.py
+++ b/pandas/tests/arithmetic/test_timedelta64.py
@@ -968,71 +968,37 @@ def test_td64arr_add_datetime64_nat(self, box_with_array):
# ------------------------------------------------------------------
# Operations with int-like others
- def test_td64arr_add_int_series_invalid(self, box):
- tdser = pd.Series(["59 Days", "59 Days", "NaT"], dtype="m8[ns]")
- tdser = tm.box_expected(tdser, box)
- err = TypeError if box is not pd.Index else NullFrequencyError
- int_ser = Series([2, 3, 4])
-
- with pytest.raises(err):
- tdser + int_ser
- with pytest.raises(err):
- int_ser + tdser
- with pytest.raises(err):
- tdser - int_ser
- with pytest.raises(err):
- int_ser - tdser
-
- def test_td64arr_add_intlike(self, box_with_array):
- # GH#19123
- tdi = TimedeltaIndex(["59 days", "59 days", "NaT"])
- ser = tm.box_expected(tdi, box_with_array)
-
- err = TypeError
- if box_with_array in [pd.Index, tm.to_array]:
- err = NullFrequencyError
-
- other = Series([20, 30, 40], dtype="uint8")
-
- # TODO: separate/parametrize
- with pytest.raises(err):
- ser + 1
- with pytest.raises(err):
- ser - 1
-
- with pytest.raises(err):
- ser + other
- with pytest.raises(err):
- ser - other
-
- with pytest.raises(err):
- ser + np.array(other)
- with pytest.raises(err):
- ser - np.array(other)
-
- with pytest.raises(err):
- ser + pd.Index(other)
- with pytest.raises(err):
- ser - pd.Index(other)
-
- @pytest.mark.parametrize("scalar", [1, 1.5, np.array(2)])
- def test_td64arr_add_sub_numeric_scalar_invalid(self, box_with_array, scalar):
+ @pytest.mark.parametrize(
+ "other",
+ [
+ # GH#19123
+ 1,
+ Series([20, 30, 40], dtype="uint8"),
+ np.array([20, 30, 40], dtype="uint8"),
+ pd.UInt64Index([20, 30, 40]),
+ pd.Int64Index([20, 30, 40]),
+ Series([2, 3, 4]),
+ 1.5,
+ np.array(2),
+ ],
+ )
+ def test_td64arr_addsub_numeric_invalid(self, box_with_array, other):
box = box_with_array
-
tdser = pd.Series(["59 Days", "59 Days", "NaT"], dtype="m8[ns]")
tdser = tm.box_expected(tdser, box)
+
err = TypeError
- if box in [pd.Index, tm.to_array] and not isinstance(scalar, float):
+ if box in [pd.Index, tm.to_array] and not isinstance(other, float):
err = NullFrequencyError
with pytest.raises(err):
- tdser + scalar
+ tdser + other
with pytest.raises(err):
- scalar + tdser
+ other + tdser
with pytest.raises(err):
- tdser - scalar
+ tdser - other
with pytest.raises(err):
- scalar - tdser
+ other - tdser
@pytest.mark.parametrize(
"dtype",
@@ -1059,11 +1025,12 @@ def test_td64arr_add_sub_numeric_scalar_invalid(self, box_with_array, scalar):
],
ids=lambda x: type(x).__name__,
)
- def test_td64arr_add_sub_numeric_arr_invalid(self, box, vec, dtype):
+ def test_td64arr_add_sub_numeric_arr_invalid(self, box_with_array, vec, dtype):
+ box = box_with_array
tdser = pd.Series(["59 Days", "59 Days", "NaT"], dtype="m8[ns]")
tdser = tm.box_expected(tdser, box)
err = TypeError
- if box is pd.Index and not dtype.startswith("float"):
+ if box in [pd.Index, tm.to_array] and not dtype.startswith("float"):
err = NullFrequencyError
vector = vec.astype(dtype)
@@ -1080,14 +1047,6 @@ def test_td64arr_add_sub_numeric_arr_invalid(self, box, vec, dtype):
# Operations with timedelta-like others
# TODO: this was taken from tests.series.test_ops; de-duplicate
- @pytest.mark.parametrize(
- "scalar_td",
- [
- timedelta(minutes=5, seconds=4),
- Timedelta(minutes=5, seconds=4),
- Timedelta("5m4s").to_timedelta64(),
- ],
- )
def test_operators_timedelta64_with_timedelta(self, scalar_td):
# smoke tests
td1 = Series([timedelta(minutes=5, seconds=3)] * 3)
@@ -1141,7 +1100,8 @@ def test_timedelta64_operations_with_timedeltas(self):
# roundtrip
tm.assert_series_equal(result + td2, td1)
- def test_td64arr_add_td64_array(self, box):
+ def test_td64arr_add_td64_array(self, box_with_array):
+ box = box_with_array
dti = pd.date_range("2016-01-01", periods=3)
tdi = dti - dti.shift(1)
tdarr = tdi.values
@@ -1155,7 +1115,8 @@ def test_td64arr_add_td64_array(self, box):
result = tdarr + tdi
tm.assert_equal(result, expected)
- def test_td64arr_sub_td64_array(self, box):
+ def test_td64arr_sub_td64_array(self, box_with_array):
+ box = box_with_array
dti = pd.date_range("2016-01-01", periods=3)
tdi = dti - dti.shift(1)
tdarr = tdi.values
@@ -1229,8 +1190,9 @@ def test_td64arr_add_sub_tdi(self, box, names):
else:
assert result.dtypes[0] == "timedelta64[ns]"
- def test_td64arr_add_sub_td64_nat(self, box):
+ def test_td64arr_add_sub_td64_nat(self, box_with_array):
# GH#23320 special handling for timedelta64("NaT")
+ box = box_with_array
tdi = pd.TimedeltaIndex([NaT, Timedelta("1s")])
other = np.timedelta64("NaT")
expected = pd.TimedeltaIndex(["NaT"] * 2)
@@ -1247,8 +1209,9 @@ def test_td64arr_add_sub_td64_nat(self, box):
result = other - obj
tm.assert_equal(result, expected)
- def test_td64arr_sub_NaT(self, box):
+ def test_td64arr_sub_NaT(self, box_with_array):
# GH#18808
+ box = box_with_array
ser = Series([NaT, Timedelta("1s")])
expected = Series([NaT, NaT], dtype="timedelta64[ns]")
@@ -1258,8 +1221,9 @@ def test_td64arr_sub_NaT(self, box):
res = ser - pd.NaT
tm.assert_equal(res, expected)
- def test_td64arr_add_timedeltalike(self, two_hours, box):
+ def test_td64arr_add_timedeltalike(self, two_hours, box_with_array):
# only test adding/sub offsets as + is now numeric
+ box = box_with_array
rng = timedelta_range("1 days", "10 days")
expected = timedelta_range("1 days 02:00:00", "10 days 02:00:00", freq="D")
rng = tm.box_expected(rng, box)
@@ -1268,8 +1232,9 @@ def test_td64arr_add_timedeltalike(self, two_hours, box):
result = rng + two_hours
tm.assert_equal(result, expected)
- def test_td64arr_sub_timedeltalike(self, two_hours, box):
+ def test_td64arr_sub_timedeltalike(self, two_hours, box_with_array):
# only test adding/sub offsets as - is now numeric
+ box = box_with_array
rng = timedelta_range("1 days", "10 days")
expected = timedelta_range("0 days 22:00:00", "9 days 22:00:00")
@@ -1352,8 +1317,9 @@ def test_td64arr_add_offset_index(self, names, box):
# TODO: combine with test_td64arr_add_offset_index by parametrizing
# over second box?
- def test_td64arr_add_offset_array(self, box):
+ def test_td64arr_add_offset_array(self, box_with_array):
# GH#18849
+ box = box_with_array
tdi = TimedeltaIndex(["1 days 00:00:00", "3 days 04:00:00"])
other = np.array([pd.offsets.Hour(n=1), pd.offsets.Minute(n=-2)])
@@ -1429,13 +1395,12 @@ def test_td64arr_with_offset_series(self, names, box_df_fail):
# GH#18849
box = box_df_fail
box2 = Series if box in [pd.Index, tm.to_array] else box
+ exname = names[2] if box is not tm.to_array else names[1]
tdi = TimedeltaIndex(["1 days 00:00:00", "3 days 04:00:00"], name=names[0])
other = Series([pd.offsets.Hour(n=1), pd.offsets.Minute(n=-2)], name=names[1])
- expected_add = Series(
- [tdi[n] + other[n] for n in range(len(tdi))], name=names[2]
- )
+ expected_add = Series([tdi[n] + other[n] for n in range(len(tdi))], name=exname)
tdi = tm.box_expected(tdi, box)
expected_add = tm.box_expected(expected_add, box2)
@@ -1448,9 +1413,7 @@ def test_td64arr_with_offset_series(self, names, box_df_fail):
tm.assert_equal(res2, expected_add)
# TODO: separate/parametrize add/sub test?
- expected_sub = Series(
- [tdi[n] - other[n] for n in range(len(tdi))], name=names[2]
- )
+ expected_sub = Series([tdi[n] - other[n] for n in range(len(tdi))], name=exname)
expected_sub = tm.box_expected(expected_sub, box2)
with tm.assert_produces_warning(PerformanceWarning):
@@ -2051,6 +2014,8 @@ def test_td64arr_div_numeric_array(self, box_with_array, vector, dtype):
def test_td64arr_mul_int_series(self, box_df_fail, names):
# GH#19042 test for correct name attachment
box = box_df_fail # broadcasts along wrong axis, but doesn't raise
+ exname = names[2] if box is not tm.to_array else names[1]
+
tdi = TimedeltaIndex(
["0days", "1day", "2days", "3days", "4days"], name=names[0]
)
@@ -2060,11 +2025,11 @@ def test_td64arr_mul_int_series(self, box_df_fail, names):
expected = Series(
["0days", "1day", "4days", "9days", "16days"],
dtype="timedelta64[ns]",
- name=names[2],
+ name=exname,
)
tdi = tm.box_expected(tdi, box)
- box = Series if (box is pd.Index and type(ser) is Series) else box
+ box = Series if (box is pd.Index or box is tm.to_array) else box
expected = tm.box_expected(expected, box)
result = ser * tdi
@@ -2115,7 +2080,11 @@ def test_float_series_rdiv_td64arr(self, box_with_array, names):
tm.assert_equal(result, expected)
-class TestTimedeltaArraylikeInvalidArithmeticOps:
+class TestTimedelta64ArrayLikeArithmetic:
+ # Arithmetic tests for timedelta64[ns] vectors fully parametrized over
+ # DataFrame/Series/TimedeltaIndex/TimedeltaArray. Ideally all arithmetic
+ # tests will eventually end up here.
+
def test_td64arr_pow_invalid(self, scalar_td, box_with_array):
td1 = Series([timedelta(minutes=5, seconds=3)] * 3)
td1.iloc[2] = np.nan
| Hopefully before long we can get rid of `box` fixture altogether, then rename `box_with_array` to `box` | https://api.github.com/repos/pandas-dev/pandas/pulls/27950 | 2019-08-16T16:25:21Z | 2019-09-02T21:31:29Z | 2019-09-02T21:31:29Z | 2019-09-02T22:53:49Z |
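The PR above migrates tests from a `box` fixture to `box_with_array`. The underlying pattern, boxing the same values into each container type so a single test body covers `Series`, `Index`, and `DataFrame`, can be sketched roughly as below. The `box_expected` helper here is a simplified stand-in for pandas' internal `tm.box_expected`, not the real implementation:

```python
# Sketch of the "box" parametrization pattern used in pandas'
# arithmetic tests: wrap both the input values and the expected
# values in the same container type, then run one op under test.
import pandas as pd

def box_expected(values, box):
    # Simplified stand-in for pandas' internal tm.box_expected helper.
    # DataFrame needs special handling: box the values as a one-column frame.
    if box is pd.DataFrame:
        return pd.Series(values).to_frame()
    return box(values)

results = []
for box in [pd.Series, pd.Index, pd.DataFrame]:
    arr = box_expected(["a", "b", "c"], box)
    expected = box_expected(["a_x", "b_x", "c_x"], box)
    result = arr + "_x"  # the op under test, applied to each box type
    assert result.equals(expected)
    results.append(type(result).__name__)

print(results)
```

In the real test suite the loop is replaced by a pytest fixture parametrized over the container types, which is what the `box` → `box_with_array` rename in this diff is consolidating.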
API: Add string extension type | diff --git a/ci/code_checks.sh b/ci/code_checks.sh
index e13738b98833a..45be38b40b658 100755
--- a/ci/code_checks.sh
+++ b/ci/code_checks.sh
@@ -266,6 +266,10 @@ if [[ -z "$CHECK" || "$CHECK" == "doctests" ]]; then
-k"-from_arrays -from_breaks -from_intervals -from_tuples -set_closed -to_tuples -interval_range"
RET=$(($RET + $?)) ; echo $MSG "DONE"
+ MSG='Doctests arrays/string_.py' ; echo $MSG
+ pytest -q --doctest-modules pandas/core/arrays/string_.py
+ RET=$(($RET + $?)) ; echo $MSG "DONE"
+
fi
### DOCSTRINGS ###
diff --git a/doc/source/getting_started/basics.rst b/doc/source/getting_started/basics.rst
index 802ffadf2a81e..36a7166f350e5 100644
--- a/doc/source/getting_started/basics.rst
+++ b/doc/source/getting_started/basics.rst
@@ -986,7 +986,7 @@ not noted for a particular column will be ``NaN``:
tsdf.agg({'A': ['mean', 'min'], 'B': 'sum'})
-.. _basics.aggregation.mixed_dtypes:
+.. _basics.aggregation.mixed_string:
Mixed dtypes
++++++++++++
@@ -1704,7 +1704,8 @@ built-in string methods. For example:
.. ipython:: python
- s = pd.Series(['A', 'B', 'C', 'Aaba', 'Baca', np.nan, 'CABA', 'dog', 'cat'])
+ s = pd.Series(['A', 'B', 'C', 'Aaba', 'Baca', np.nan, 'CABA', 'dog', 'cat'],
+ dtype="string")
s.str.lower()
Powerful pattern-matching methods are provided as well, but note that
@@ -1712,6 +1713,12 @@ pattern-matching generally uses `regular expressions
<https://docs.python.org/3/library/re.html>`__ by default (and in some cases
always uses them).
+.. note::
+
+ Prior to pandas 1.0, string methods were only available on ``object`` -dtype
+ ``Series``. Pandas 1.0 added the :class:`StringDtype` which is dedicated
+ to strings. See :ref:`text.types` for more.
+
Please see :ref:`Vectorized String Methods <text.string_methods>` for a complete
description.
@@ -1925,9 +1932,15 @@ period (time spans) :class:`PeriodDtype` :class:`Period` :class:`arrays.
sparse :class:`SparseDtype` (none) :class:`arrays.SparseArray` :ref:`sparse`
intervals :class:`IntervalDtype` :class:`Interval` :class:`arrays.IntervalArray` :ref:`advanced.intervalindex`
nullable integer :class:`Int64Dtype`, ... (none) :class:`arrays.IntegerArray` :ref:`integer_na`
+strings :class:`StringDtype` :class:`str` :class:`arrays.StringArray` :ref:`text`
=================== ========================= ================== ============================= =============================
-Pandas uses the ``object`` dtype for storing strings.
+Pandas has two ways to store strings.
+
+1. ``object`` dtype, which can hold any Python object, including strings.
+2. :class:`StringDtype`, which is dedicated to strings.
+
+Generally, we recommend using :class:`StringDtype`. See :ref:`text.types` for more.
Finally, arbitrary objects may be stored using the ``object`` dtype, but should
be avoided to the extent possible (for performance and interoperability with
diff --git a/doc/source/reference/arrays.rst b/doc/source/reference/arrays.rst
index 7f464bf952bfb..0c435e06ac57f 100644
--- a/doc/source/reference/arrays.rst
+++ b/doc/source/reference/arrays.rst
@@ -24,6 +24,7 @@ Intervals :class:`IntervalDtype` :class:`Interval` :ref:`api.array
Nullable Integer :class:`Int64Dtype`, ... (none) :ref:`api.arrays.integer_na`
Categorical :class:`CategoricalDtype` (none) :ref:`api.arrays.categorical`
Sparse :class:`SparseDtype` (none) :ref:`api.arrays.sparse`
+Strings :class:`StringDtype` :class:`str` :ref:`api.arrays.string`
=================== ========================= ================== =============================
Pandas and third-party libraries can extend NumPy's type system (see :ref:`extending.extension-types`).
@@ -460,6 +461,29 @@ and methods if the :class:`Series` contains sparse values. See
:ref:`api.series.sparse` for more.
+.. _api.arrays.string:
+
+Text data
+---------
+
+When working with text data, where each valid element is a string or missing,
+we recommend using :class:`StringDtype` (with the alias ``"string"``).
+
+.. autosummary::
+ :toctree: api/
+ :template: autosummary/class_without_autosummary.rst
+
+ arrays.StringArray
+
+.. autosummary::
+ :toctree: api/
+ :template: autosummary/class_without_autosummary.rst
+
+ StringDtype
+
+The ``Series.str`` accessor is available for ``Series`` backed by a :class:`arrays.StringArray`.
+See :ref:`api.series.str` for more.
+
.. Dtype attributes which are manually listed in their docstrings: including
.. it here to make sure a docstring page is built for them
@@ -471,4 +495,4 @@ and methods if the :class:`Series` contains sparse values. See
DatetimeTZDtype.unit
DatetimeTZDtype.tz
PeriodDtype.freq
- IntervalDtype.subtype
\ No newline at end of file
+ IntervalDtype.subtype
diff --git a/doc/source/user_guide/text.rst b/doc/source/user_guide/text.rst
index acb5810e5252a..789ff2a65355b 100644
--- a/doc/source/user_guide/text.rst
+++ b/doc/source/user_guide/text.rst
@@ -6,8 +6,71 @@
Working with text data
======================
+.. _text.types:
+
+Text Data Types
+---------------
+
+.. versionadded:: 1.0.0
+
+There are two main ways to store text data:
+
+1. ``object`` -dtype NumPy array.
+2. :class:`StringDtype` extension type.
+
+We recommend using :class:`StringDtype` to store text data.
+
+Prior to pandas 1.0, ``object`` dtype was the only option. This was unfortunate
+for many reasons:
+
+1. You can accidentally store a *mixture* of strings and non-strings in an
+ ``object`` dtype array. It's better to have a dedicated dtype.
+2. ``object`` dtype breaks dtype-specific operations like :meth:`DataFrame.select_dtypes`.
+ There isn't a clear way to select *just* text while excluding non-text
+ but still object-dtype columns.
+3. When reading code, the contents of an ``object`` dtype array is less clear
+ than ``'string'``.
+
+Currently, the performance of ``object`` dtype arrays of strings and
+:class:`arrays.StringArray` is about the same. We expect future enhancements
+to significantly increase the performance and lower the memory overhead of
+:class:`~arrays.StringArray`.
+
+.. warning::
+
+ ``StringArray`` is currently considered experimental. The implementation
+ and parts of the API may change without warning.
+
+For backwards-compatibility, ``object`` dtype remains the default type we
+infer a list of strings to
+
+.. ipython:: python
+
+ pd.Series(['a', 'b', 'c'])
+
+To explicitly request ``string`` dtype, specify the ``dtype``
+
+.. ipython:: python
+
+ pd.Series(['a', 'b', 'c'], dtype="string")
+ pd.Series(['a', 'b', 'c'], dtype=pd.StringDtype())
+
+Or ``astype`` after the ``Series`` or ``DataFrame`` is created
+
+.. ipython:: python
+
+ s = pd.Series(['a', 'b', 'c'])
+ s
+ s.astype("string")
+
+Everything that follows in the rest of this document applies equally to
+``string`` and ``object`` dtype.
+
.. _text.string_methods:
+String Methods
+--------------
+
Series and Index are equipped with a set of string processing methods
that make it easy to operate on each element of the array. Perhaps most
importantly, these methods exclude missing/NA values automatically. These are
@@ -16,7 +79,8 @@ the equivalent (scalar) built-in string methods:
.. ipython:: python
- s = pd.Series(['A', 'B', 'C', 'Aaba', 'Baca', np.nan, 'CABA', 'dog', 'cat'])
+ s = pd.Series(['A', 'B', 'C', 'Aaba', 'Baca', np.nan, 'CABA', 'dog', 'cat'],
+ dtype="string")
s.str.lower()
s.str.upper()
s.str.len()
@@ -90,7 +154,7 @@ Methods like ``split`` return a Series of lists:
.. ipython:: python
- s2 = pd.Series(['a_b_c', 'c_d_e', np.nan, 'f_g_h'])
+ s2 = pd.Series(['a_b_c', 'c_d_e', np.nan, 'f_g_h'], dtype="string")
s2.str.split('_')
Elements in the split lists can be accessed using ``get`` or ``[]`` notation:
@@ -106,6 +170,9 @@ It is easy to expand this to return a DataFrame using ``expand``.
s2.str.split('_', expand=True)
+When original ``Series`` has :class:`StringDtype`, the output columns will all
+be :class:`StringDtype` as well.
+
It is also possible to limit the number of splits:
.. ipython:: python
@@ -125,7 +192,8 @@ i.e., from the end of the string to the beginning of the string:
.. ipython:: python
s3 = pd.Series(['A', 'B', 'C', 'Aaba', 'Baca',
- '', np.nan, 'CABA', 'dog', 'cat'])
+ '', np.nan, 'CABA', 'dog', 'cat'],
+ dtype="string")
s3
s3.str.replace('^.a|dog', 'XX-XX ', case=False)
@@ -136,7 +204,7 @@ following code will cause trouble because of the regular expression meaning of
.. ipython:: python
# Consider the following badly formatted financial data
- dollars = pd.Series(['12', '-$10', '$10,000'])
+ dollars = pd.Series(['12', '-$10', '$10,000'], dtype="string")
# This does what you'd naively expect:
dollars.str.replace('$', '')
@@ -174,7 +242,8 @@ positional argument (a regex object) and return a string.
def repl(m):
return m.group(0)[::-1]
- pd.Series(['foo 123', 'bar baz', np.nan]).str.replace(pat, repl)
+ pd.Series(['foo 123', 'bar baz', np.nan],
+ dtype="string").str.replace(pat, repl)
# Using regex groups
pat = r"(?P<one>\w+) (?P<two>\w+) (?P<three>\w+)"
@@ -182,7 +251,8 @@ positional argument (a regex object) and return a string.
def repl(m):
return m.group('two').swapcase()
- pd.Series(['Foo Bar Baz', np.nan]).str.replace(pat, repl)
+ pd.Series(['Foo Bar Baz', np.nan],
+ dtype="string").str.replace(pat, repl)
.. versionadded:: 0.20.0
@@ -221,7 +291,7 @@ The content of a ``Series`` (or ``Index``) can be concatenated:
.. ipython:: python
- s = pd.Series(['a', 'b', 'c', 'd'])
+ s = pd.Series(['a', 'b', 'c', 'd'], dtype="string")
s.str.cat(sep=',')
If not specified, the keyword ``sep`` for the separator defaults to the empty string, ``sep=''``:
@@ -234,7 +304,7 @@ By default, missing values are ignored. Using ``na_rep``, they can be given a re
.. ipython:: python
- t = pd.Series(['a', 'b', np.nan, 'd'])
+ t = pd.Series(['a', 'b', np.nan, 'd'], dtype="string")
t.str.cat(sep=',')
t.str.cat(sep=',', na_rep='-')
@@ -279,7 +349,8 @@ the ``join``-keyword.
.. ipython:: python
:okwarning:
- u = pd.Series(['b', 'd', 'a', 'c'], index=[1, 3, 0, 2])
+ u = pd.Series(['b', 'd', 'a', 'c'], index=[1, 3, 0, 2],
+ dtype="string")
s
u
s.str.cat(u)
@@ -295,7 +366,8 @@ In particular, alignment also means that the different lengths do not need to co
.. ipython:: python
- v = pd.Series(['z', 'a', 'b', 'd', 'e'], index=[-1, 0, 1, 3, 4])
+ v = pd.Series(['z', 'a', 'b', 'd', 'e'], index=[-1, 0, 1, 3, 4],
+ dtype="string")
s
v
s.str.cat(v, join='left', na_rep='-')
@@ -351,7 +423,8 @@ of the string, the result will be a ``NaN``.
.. ipython:: python
s = pd.Series(['A', 'B', 'C', 'Aaba', 'Baca', np.nan,
- 'CABA', 'dog', 'cat'])
+ 'CABA', 'dog', 'cat'],
+ dtype="string")
s.str[0]
s.str[1]
@@ -382,7 +455,8 @@ DataFrame with one column per group.
.. ipython:: python
- pd.Series(['a1', 'b2', 'c3']).str.extract(r'([ab])(\d)', expand=False)
+ pd.Series(['a1', 'b2', 'c3'],
+ dtype="string").str.extract(r'([ab])(\d)', expand=False)
Elements that do not match return a row filled with ``NaN``. Thus, a
Series of messy strings can be "converted" into a like-indexed Series
@@ -395,14 +469,16 @@ Named groups like
.. ipython:: python
- pd.Series(['a1', 'b2', 'c3']).str.extract(r'(?P<letter>[ab])(?P<digit>\d)',
- expand=False)
+ pd.Series(['a1', 'b2', 'c3'],
+ dtype="string").str.extract(r'(?P<letter>[ab])(?P<digit>\d)',
+ expand=False)
and optional groups like
.. ipython:: python
- pd.Series(['a1', 'b2', '3']).str.extract(r'([ab])?(\d)', expand=False)
+ pd.Series(['a1', 'b2', '3'],
+ dtype="string").str.extract(r'([ab])?(\d)', expand=False)
can also be used. Note that any capture group names in the regular
expression will be used for column names; otherwise capture group
@@ -413,20 +489,23 @@ with one column if ``expand=True``.
.. ipython:: python
- pd.Series(['a1', 'b2', 'c3']).str.extract(r'[ab](\d)', expand=True)
+ pd.Series(['a1', 'b2', 'c3'],
+ dtype="string").str.extract(r'[ab](\d)', expand=True)
It returns a Series if ``expand=False``.
.. ipython:: python
- pd.Series(['a1', 'b2', 'c3']).str.extract(r'[ab](\d)', expand=False)
+ pd.Series(['a1', 'b2', 'c3'],
+ dtype="string").str.extract(r'[ab](\d)', expand=False)
Calling on an ``Index`` with a regex with exactly one capture group
returns a ``DataFrame`` with one column if ``expand=True``.
.. ipython:: python
- s = pd.Series(["a1", "b2", "c3"], ["A11", "B22", "C33"])
+ s = pd.Series(["a1", "b2", "c3"], ["A11", "B22", "C33"],
+ dtype="string")
s
s.index.str.extract("(?P<letter>[a-zA-Z])", expand=True)
@@ -471,7 +550,8 @@ Unlike ``extract`` (which returns only the first match),
.. ipython:: python
- s = pd.Series(["a1a2", "b1", "c1"], index=["A", "B", "C"])
+ s = pd.Series(["a1a2", "b1", "c1"], index=["A", "B", "C"],
+ dtype="string")
s
two_groups = '(?P<letter>[a-z])(?P<digit>[0-9])'
s.str.extract(two_groups, expand=True)
@@ -489,7 +569,7 @@ When each subject string in the Series has exactly one match,
.. ipython:: python
- s = pd.Series(['a3', 'b3', 'c2'])
+ s = pd.Series(['a3', 'b3', 'c2'], dtype="string")
s
then ``extractall(pat).xs(0, level='match')`` gives the same result as
@@ -510,7 +590,7 @@ same result as a ``Series.str.extractall`` with a default index (starts from 0).
pd.Index(["a1a2", "b1", "c1"]).str.extractall(two_groups)
- pd.Series(["a1a2", "b1", "c1"]).str.extractall(two_groups)
+ pd.Series(["a1a2", "b1", "c1"], dtype="string").str.extractall(two_groups)
Testing for Strings that match or contain a pattern
@@ -521,13 +601,15 @@ You can check whether elements contain a pattern:
.. ipython:: python
pattern = r'[0-9][a-z]'
- pd.Series(['1', '2', '3a', '3b', '03c']).str.contains(pattern)
+ pd.Series(['1', '2', '3a', '3b', '03c'],
+ dtype="string").str.contains(pattern)
Or whether elements match a pattern:
.. ipython:: python
- pd.Series(['1', '2', '3a', '3b', '03c']).str.match(pattern)
+ pd.Series(['1', '2', '3a', '3b', '03c'],
+ dtype="string").str.match(pattern)
The distinction between ``match`` and ``contains`` is strictness: ``match``
relies on strict ``re.match``, while ``contains`` relies on ``re.search``.
@@ -537,7 +619,8 @@ an extra ``na`` argument so missing values can be considered True or False:
.. ipython:: python
- s4 = pd.Series(['A', 'B', 'C', 'Aaba', 'Baca', np.nan, 'CABA', 'dog', 'cat'])
+ s4 = pd.Series(['A', 'B', 'C', 'Aaba', 'Baca', np.nan, 'CABA', 'dog', 'cat'],
+ dtype="string")
s4.str.contains('A', na=False)
.. _text.indicator:
@@ -550,7 +633,7 @@ For example if they are separated by a ``'|'``:
.. ipython:: python
- s = pd.Series(['a', 'a|b', np.nan, 'a|c'])
+ s = pd.Series(['a', 'a|b', np.nan, 'a|c'], dtype="string")
s.str.get_dummies(sep='|')
String ``Index`` also supports ``get_dummies`` which returns a ``MultiIndex``.
diff --git a/doc/source/whatsnew/v1.0.0.rst b/doc/source/whatsnew/v1.0.0.rst
index 16d23d675a8bb..3f7c1a8a5222e 100644
--- a/doc/source/whatsnew/v1.0.0.rst
+++ b/doc/source/whatsnew/v1.0.0.rst
@@ -50,14 +50,56 @@ including other versions of pandas.
Enhancements
~~~~~~~~~~~~
-- :meth:`DataFrame.to_string` added the ``max_colwidth`` parameter to control when wide columns are truncated (:issue:`9784`)
--
+.. _whatsnew_100.string:
+
+Dedicated string data type
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+We've added :class:`StringDtype`, an extension type dedicated to string data.
+Previously, strings were typically stored in object-dtype NumPy arrays.
+
+.. warning::
+
+ ``StringDtype`` is currently considered experimental. The implementation
+ and parts of the API may change without warning.
+
+The text extension type solves several issues with object-dtype NumPy arrays:
+
+1. You can accidentally store a *mixture* of strings and non-strings in an
+ ``object`` dtype array. A ``StringArray`` can only store strings.
+2. ``object`` dtype breaks dtype-specific operations like :meth:`DataFrame.select_dtypes`.
+ There isn't a clear way to select *just* text while excluding non-text,
+ but still object-dtype columns.
+3. When reading code, the contents of an ``object`` dtype array is less clear
+ than ``string``.
+
+
+.. ipython:: python
+
+ pd.Series(['abc', None, 'def'], dtype=pd.StringDtype())
+
+You can use the alias ``"string"`` as well.
+
+.. ipython:: python
+
+ s = pd.Series(['abc', None, 'def'], dtype="string")
+ s
+
+The usual string accessor methods work. Where appropriate, the return type
+of the Series or columns of a DataFrame will also have string dtype.
+
+ s.str.upper()
+ s.str.split('b', expand=True).dtypes
+
+We recommend explicitly using the ``string`` data type when working with strings.
+See :ref:`text.types` for more.
.. _whatsnew_1000.enhancements.other:
Other enhancements
^^^^^^^^^^^^^^^^^^
+- :meth:`DataFrame.to_string` added the ``max_colwidth`` parameter to control when wide columns are truncated (:issue:`9784`)
- :meth:`MultiIndex.from_product` infers level names from inputs if not explicitly provided (:issue:`27292`)
- :meth:`DataFrame.to_latex` now accepts ``caption`` and ``label`` arguments (:issue:`25436`)
- The :ref:`integer dtype <integer_na>` with support for missing values can now be converted to
diff --git a/pandas/__init__.py b/pandas/__init__.py
index 6d0c55a45ed46..5d163e411c0ac 100644
--- a/pandas/__init__.py
+++ b/pandas/__init__.py
@@ -66,6 +66,7 @@
PeriodDtype,
IntervalDtype,
DatetimeTZDtype,
+ StringDtype,
# missing
isna,
isnull,
diff --git a/pandas/arrays/__init__.py b/pandas/arrays/__init__.py
index db01f2a0c674f..9870b5bed076d 100644
--- a/pandas/arrays/__init__.py
+++ b/pandas/arrays/__init__.py
@@ -11,6 +11,7 @@
PandasArray,
PeriodArray,
SparseArray,
+ StringArray,
TimedeltaArray,
)
@@ -22,5 +23,6 @@
"PandasArray",
"PeriodArray",
"SparseArray",
+ "StringArray",
"TimedeltaArray",
]
diff --git a/pandas/core/api.py b/pandas/core/api.py
index bd2a57a15bdd2..04f2f84c92a15 100644
--- a/pandas/core/api.py
+++ b/pandas/core/api.py
@@ -10,6 +10,7 @@
)
from pandas.core.dtypes.missing import isna, isnull, notna, notnull
+# TODO: Remove get_dummies import when statsmodels updates #18264
from pandas.core.algorithms import factorize, unique, value_counts
from pandas.core.arrays import Categorical
from pandas.core.arrays.integer import (
@@ -22,12 +23,9 @@
UInt32Dtype,
UInt64Dtype,
)
+from pandas.core.arrays.string_ import StringDtype
from pandas.core.construction import array
-
from pandas.core.groupby import Grouper, NamedAgg
-
-# DataFrame needs to be imported after NamedAgg to avoid a circular import
-from pandas.core.frame import DataFrame # isort:skip
from pandas.core.index import (
CategoricalIndex,
DatetimeIndex,
@@ -47,9 +45,7 @@
from pandas.core.indexes.period import Period, period_range
from pandas.core.indexes.timedeltas import Timedelta, timedelta_range
from pandas.core.indexing import IndexSlice
-from pandas.core.reshape.reshape import (
- get_dummies,
-) # TODO: Remove get_dummies import when statsmodels updates #18264
+from pandas.core.reshape.reshape import get_dummies
from pandas.core.series import Series
from pandas.core.tools.datetimes import to_datetime
from pandas.core.tools.numeric import to_numeric
@@ -57,3 +53,6 @@
from pandas.io.formats.format import set_eng_float_format
from pandas.tseries.offsets import DateOffset
+
+# DataFrame needs to be imported after NamedAgg to avoid a circular import
+from pandas.core.frame import DataFrame # isort:skip
diff --git a/pandas/core/arrays/__init__.py b/pandas/core/arrays/__init__.py
index 5c83ed8cf5e24..868118bac6a7b 100644
--- a/pandas/core/arrays/__init__.py
+++ b/pandas/core/arrays/__init__.py
@@ -10,4 +10,5 @@
from .numpy_ import PandasArray, PandasDtype # noqa: F401
from .period import PeriodArray, period_array # noqa: F401
from .sparse import SparseArray # noqa: F401
+from .string_ import StringArray # noqa: F401
from .timedeltas import TimedeltaArray # noqa: F401
diff --git a/pandas/core/arrays/numpy_.py b/pandas/core/arrays/numpy_.py
index 32da0199e28f8..bf7404e8997c6 100644
--- a/pandas/core/arrays/numpy_.py
+++ b/pandas/core/arrays/numpy_.py
@@ -10,7 +10,7 @@
from pandas.core.dtypes.dtypes import ExtensionDtype
from pandas.core.dtypes.generic import ABCIndexClass, ABCSeries
-from pandas.core.dtypes.inference import is_array_like, is_list_like
+from pandas.core.dtypes.inference import is_array_like
from pandas.core.dtypes.missing import isna
from pandas import compat
@@ -229,13 +229,15 @@ def __getitem__(self, item):
def __setitem__(self, key, value):
value = extract_array(value, extract_numpy=True)
- if not lib.is_scalar(key) and is_list_like(key):
+ scalar_key = lib.is_scalar(key)
+ scalar_value = lib.is_scalar(value)
+
+ if not scalar_key and scalar_value:
key = np.asarray(key)
- if not lib.is_scalar(value):
- value = np.asarray(value)
+ if not scalar_value:
+ value = np.asarray(value, dtype=self._ndarray.dtype)
- value = np.asarray(value, dtype=self._ndarray.dtype)
self._ndarray[key] = value
def __len__(self) -> int:
diff --git a/pandas/core/arrays/string_.py b/pandas/core/arrays/string_.py
new file mode 100644
index 0000000000000..87649ac651127
--- /dev/null
+++ b/pandas/core/arrays/string_.py
@@ -0,0 +1,281 @@
+import operator
+from typing import TYPE_CHECKING, Type
+
+import numpy as np
+
+from pandas._libs import lib
+
+from pandas.core.dtypes.base import ExtensionDtype
+from pandas.core.dtypes.common import pandas_dtype
+from pandas.core.dtypes.dtypes import register_extension_dtype
+from pandas.core.dtypes.generic import ABCDataFrame, ABCIndexClass, ABCSeries
+from pandas.core.dtypes.inference import is_array_like
+
+from pandas import compat
+from pandas.core import ops
+from pandas.core.arrays import PandasArray
+from pandas.core.construction import extract_array
+from pandas.core.missing import isna
+
+if TYPE_CHECKING:
+ from pandas._typing import Scalar
+
+
+@register_extension_dtype
+class StringDtype(ExtensionDtype):
+ """
+ Extension dtype for string data.
+
+ .. versionadded:: 1.0.0
+
+ .. warning::
+
+ StringDtype is considered experimental. The implementation and
+ parts of the API may change without warning.
+
+ In particular, StringDtype.na_value may change to no longer be
+ ``numpy.nan``.
+
+ Attributes
+ ----------
+ None
+
+ Methods
+ -------
+ None
+
+ Examples
+ --------
+ >>> pd.StringDtype()
+ StringDtype
+ """
+
+ @property
+ def na_value(self) -> "Scalar":
+ """
+ StringDtype uses :attr:`numpy.nan` as the missing NA value.
+
+ .. warning::
+
+ `na_value` may change in a future release.
+ """
+ return np.nan
+
+ @property
+ def type(self) -> Type:
+ return str
+
+ @property
+ def name(self) -> str:
+ """
+ The alias for StringDtype is ``'string'``.
+ """
+ return "string"
+
+ @classmethod
+ def construct_from_string(cls, string: str) -> ExtensionDtype:
+ if string == "string":
+ return cls()
+ return super().construct_from_string(string)
+
+ @classmethod
+ def construct_array_type(cls) -> "Type[StringArray]":
+ return StringArray
+
+ def __repr__(self) -> str:
+ return "StringDtype"
+
+
+class StringArray(PandasArray):
+ """
+ Extension array for string data.
+
+ .. versionadded:: 1.0.0
+
+ .. warning::
+
+ StringArray is considered experimental. The implementation and
+ parts of the API may change without warning.
+
+ In particular, the NA value used may change to no longer be
+ ``numpy.nan``.
+
+ Parameters
+ ----------
+ values : array-like
+ The array of data.
+
+ .. warning::
+
+ Currently, this expects an object-dtype ndarray
+ where the elements are Python strings. This may
+ change without warning in the future.
+ copy : bool, default False
+ Whether to copy the array of data.
+
+ Attributes
+ ----------
+ None
+
+ Methods
+ -------
+ None
+
+ See Also
+ --------
+ Series.str
+ The string methods are available on Series backed by
+ a StringArray.
+
+ Examples
+ --------
+ >>> pd.array(['This is', 'some text', None, 'data.'], dtype="string")
+ <StringArray>
+ ['This is', 'some text', nan, 'data.']
+ Length: 4, dtype: string
+
+ Unlike ``object`` dtype arrays, ``StringArray`` doesn't allow non-string
+ values.
+
+ >>> pd.array(['1', 1], dtype="string")
+ Traceback (most recent call last):
+ ...
+ ValueError: StringArray requires a sequence of strings or missing values.
+ """
+
+ # undo the PandasArray hack
+ _typ = "extension"
+
+ def __init__(self, values, copy=False):
+ values = extract_array(values)
+ skip_validation = isinstance(values, type(self))
+
+ super().__init__(values, copy=copy)
+ self._dtype = StringDtype()
+ if not skip_validation:
+ self._validate()
+
+ def _validate(self):
+ """Validate that we only store NA or strings."""
+ if len(self._ndarray) and not lib.is_string_array(self._ndarray, skipna=True):
+ raise ValueError(
+ "StringArray requires a sequence of strings or missing values."
+ )
+ if self._ndarray.dtype != "object":
+ raise ValueError(
+ "StringArray requires a sequence of strings. Got "
+ "'{}' dtype instead.".format(self._ndarray.dtype)
+ )
+
+ @classmethod
+ def _from_sequence(cls, scalars, dtype=None, copy=False):
+ if dtype:
+ assert dtype == "string"
+ result = super()._from_sequence(scalars, dtype=object, copy=copy)
+ # convert None to np.nan
+ # TODO: it would be nice to do this in _validate / lib.is_string_array
+ # We are already doing a scan over the values there.
+ result[result.isna()] = np.nan
+ return result
+
+ @classmethod
+ def _from_sequence_of_strings(cls, strings, dtype=None, copy=False):
+ return cls._from_sequence(strings, dtype=dtype, copy=copy)
+
+ def __setitem__(self, key, value):
+ value = extract_array(value, extract_numpy=True)
+ if isinstance(value, type(self)):
+ # extract_array doesn't extract PandasArray subclasses
+ value = value._ndarray
+
+ scalar_key = lib.is_scalar(key)
+ scalar_value = lib.is_scalar(value)
+ if scalar_key and not scalar_value:
+ raise ValueError("setting an array element with a sequence.")
+
+ # validate new items
+ if scalar_value:
+ if value is None:
+ value = np.nan
+ elif not (isinstance(value, str) or np.isnan(value)):
+ raise ValueError(
+ "Cannot set non-string value '{}' into a StringArray.".format(value)
+ )
+ else:
+ if not is_array_like(value):
+ value = np.asarray(value, dtype=object)
+ if len(value) and not lib.is_string_array(value, skipna=True):
+ raise ValueError("Must provide strings.")
+
+ super().__setitem__(key, value)
+
+ def fillna(self, value=None, method=None, limit=None):
+ # TODO: validate dtype
+ return super().fillna(value, method, limit)
+
+ def astype(self, dtype, copy=True):
+ dtype = pandas_dtype(dtype)
+ if isinstance(dtype, StringDtype):
+ if copy:
+ return self.copy()
+ return self
+ return super().astype(dtype, copy)
+
+ def _reduce(self, name, skipna=True, **kwargs):
+ raise TypeError("Cannot perform reduction '{}' with string dtype".format(name))
+
+ def value_counts(self, dropna=False):
+ from pandas import value_counts
+
+ return value_counts(self._ndarray, dropna=dropna)
+
+ # Override parent because we have different return types.
+ @classmethod
+ def _create_arithmetic_method(cls, op):
+ def method(self, other):
+ if isinstance(other, (ABCIndexClass, ABCSeries, ABCDataFrame)):
+ return NotImplemented
+
+ elif isinstance(other, cls):
+ other = other._ndarray
+
+ mask = isna(self) | isna(other)
+ valid = ~mask
+
+ if not lib.is_scalar(other):
+ if len(other) != len(self):
+ # prevent improper broadcasting when other is 2D
+ raise ValueError(
+ "Lengths of operands do not match: {} != {}".format(
+ len(self), len(other)
+ )
+ )
+
+ other = np.asarray(other)
+ other = other[valid]
+
+ result = np.empty_like(self._ndarray, dtype="object")
+ result[mask] = np.nan
+ result[valid] = op(self._ndarray[valid], other)
+
+ if op.__name__ in {"add", "radd", "mul", "rmul"}:
+ return StringArray(result)
+ else:
+ dtype = "object" if mask.any() else "bool"
+ return np.asarray(result, dtype=dtype)
+
+ return compat.set_function_name(method, "__{}__".format(op.__name__), cls)
+
+ @classmethod
+ def _add_arithmetic_ops(cls):
+ cls.__add__ = cls._create_arithmetic_method(operator.add)
+ cls.__radd__ = cls._create_arithmetic_method(ops.radd)
+
+ cls.__mul__ = cls._create_arithmetic_method(operator.mul)
+ cls.__rmul__ = cls._create_arithmetic_method(ops.rmul)
+
+ _create_comparison_method = _create_arithmetic_method
+
+
+StringArray._add_arithmetic_ops()
+StringArray._add_comparison_ops()
diff --git a/pandas/core/dtypes/missing.py b/pandas/core/dtypes/missing.py
index cd87fbef02e4f..56bfbefdbf248 100644
--- a/pandas/core/dtypes/missing.py
+++ b/pandas/core/dtypes/missing.py
@@ -128,6 +128,7 @@ def isna(obj):
def _isna_new(obj):
+
if is_scalar(obj):
return libmissing.checknull(obj)
# hack (for now) because MI registers as ndarray
diff --git a/pandas/core/strings.py b/pandas/core/strings.py
index 25350119f9df5..888d2ae6f9473 100644
--- a/pandas/core/strings.py
+++ b/pandas/core/strings.py
@@ -763,6 +763,16 @@ def f(x):
return f
+def _result_dtype(arr):
+ # workaround #27953
+ # ideally we just pass `dtype=arr.dtype` unconditionally, but this fails
+ # when the list of values is empty.
+ if arr.dtype.name == "string":
+ return "string"
+ else:
+ return object
+
+
def _str_extract_noexpand(arr, pat, flags=0):
"""
Find groups in each string in the Series using passed regular
@@ -817,11 +827,12 @@ def _str_extract_frame(arr, pat, flags=0):
result_index = arr.index
except AttributeError:
result_index = None
+ dtype = _result_dtype(arr)
return DataFrame(
[groups_or_na(val) for val in arr],
columns=columns,
index=result_index,
- dtype=object,
+ dtype=dtype,
)
@@ -1019,8 +1030,11 @@ def str_extractall(arr, pat, flags=0):
from pandas import MultiIndex
index = MultiIndex.from_tuples(index_list, names=arr.index.names + ["match"])
+ dtype = _result_dtype(arr)
- result = arr._constructor_expanddim(match_list, index=index, columns=columns)
+ result = arr._constructor_expanddim(
+ match_list, index=index, columns=columns, dtype=dtype
+ )
return result
@@ -1073,7 +1087,7 @@ def str_get_dummies(arr, sep="|"):
for i, t in enumerate(tags):
pat = sep + t + sep
- dummies[:, i] = lib.map_infer(arr.values, lambda x: pat in x)
+ dummies[:, i] = lib.map_infer(arr.to_numpy(), lambda x: pat in x)
return dummies, tags
@@ -1858,11 +1872,18 @@ def wrapper(self, *args, **kwargs):
return _forbid_nonstring_types
-def _noarg_wrapper(f, name=None, docstring=None, forbidden_types=["bytes"], **kargs):
+def _noarg_wrapper(
+ f,
+ name=None,
+ docstring=None,
+ forbidden_types=["bytes"],
+ returns_string=True,
+ **kargs
+):
@forbid_nonstring_types(forbidden_types, name=name)
def wrapper(self):
result = _na_map(f, self._parent, **kargs)
- return self._wrap_result(result)
+ return self._wrap_result(result, returns_string=returns_string)
wrapper.__name__ = f.__name__ if name is None else name
if docstring is not None:
@@ -1874,22 +1895,28 @@ def wrapper(self):
def _pat_wrapper(
- f, flags=False, na=False, name=None, forbidden_types=["bytes"], **kwargs
+ f,
+ flags=False,
+ na=False,
+ name=None,
+ forbidden_types=["bytes"],
+ returns_string=True,
+ **kwargs
):
@forbid_nonstring_types(forbidden_types, name=name)
def wrapper1(self, pat):
result = f(self._parent, pat)
- return self._wrap_result(result)
+ return self._wrap_result(result, returns_string=returns_string)
@forbid_nonstring_types(forbidden_types, name=name)
def wrapper2(self, pat, flags=0, **kwargs):
result = f(self._parent, pat, flags=flags, **kwargs)
- return self._wrap_result(result)
+ return self._wrap_result(result, returns_string=returns_string)
@forbid_nonstring_types(forbidden_types, name=name)
def wrapper3(self, pat, na=np.nan):
result = f(self._parent, pat, na=na)
- return self._wrap_result(result)
+ return self._wrap_result(result, returns_string=returns_string)
wrapper = wrapper3 if na else wrapper2 if flags else wrapper1
@@ -1926,6 +1953,7 @@ class StringMethods(NoNewAttributesMixin):
def __init__(self, data):
self._inferred_dtype = self._validate(data)
self._is_categorical = is_categorical_dtype(data)
+ self._is_string = data.dtype.name == "string"
# .values.categories works for both Series/Index
self._parent = data.values.categories if self._is_categorical else data
@@ -1956,6 +1984,8 @@ def _validate(data):
-------
dtype : inferred dtype of data
"""
+ from pandas import StringDtype
+
if isinstance(data, ABCMultiIndex):
raise AttributeError(
"Can only use .str accessor with Index, not MultiIndex"
@@ -1967,6 +1997,10 @@ def _validate(data):
values = getattr(data, "values", data) # Series / Index
values = getattr(values, "categories", values) # categorical / normal
+ # explicitly allow StringDtype
+ if isinstance(values.dtype, StringDtype):
+ return "string"
+
try:
inferred_dtype = lib.infer_dtype(values, skipna=True)
except ValueError:
@@ -1992,7 +2026,13 @@ def __iter__(self):
g = self.get(i)
def _wrap_result(
- self, result, use_codes=True, name=None, expand=None, fill_value=np.nan
+ self,
+ result,
+ use_codes=True,
+ name=None,
+ expand=None,
+ fill_value=np.nan,
+ returns_string=True,
):
from pandas import Index, Series, MultiIndex
@@ -2012,6 +2052,15 @@ def _wrap_result(
return result
assert result.ndim < 3
+ # We can be wrapping a string / object / categorical result, in which
+ # case we'll want to return the same dtype as the input.
+ # Or we can be wrapping a numeric output, in which case we don't want
+ # to return a StringArray.
+ if self._is_string and returns_string:
+ dtype = "string"
+ else:
+ dtype = None
+
if expand is None:
# infer from ndim if expand is not specified
expand = result.ndim != 1
@@ -2069,11 +2118,12 @@ def cons_row(x):
index = self._orig.index
if expand:
cons = self._orig._constructor_expanddim
- return cons(result, columns=name, index=index)
+ result = cons(result, columns=name, index=index, dtype=dtype)
else:
# Must be a Series
cons = self._orig._constructor
- return cons(result, name=name, index=index)
+ result = cons(result, name=name, index=index, dtype=dtype)
+ return result
def _get_series_list(self, others):
"""
@@ -2338,9 +2388,12 @@ def cat(self, others=None, sep=None, na_rep=None, join="left"):
# add dtype for case that result is all-NA
result = Index(result, dtype=object, name=self._orig.name)
else: # Series
- result = Series(
- result, dtype=object, index=data.index, name=self._orig.name
- )
+ if is_categorical_dtype(self._orig.dtype):
+ # We need to infer the new categories.
+ dtype = None
+ else:
+ dtype = self._orig.dtype
+ result = Series(result, dtype=dtype, index=data.index, name=self._orig.name)
return result
_shared_docs[
@@ -2479,13 +2532,13 @@ def cat(self, others=None, sep=None, na_rep=None, join="left"):
@forbid_nonstring_types(["bytes"])
def split(self, pat=None, n=-1, expand=False):
result = str_split(self._parent, pat, n=n)
- return self._wrap_result(result, expand=expand)
+ return self._wrap_result(result, expand=expand, returns_string=expand)
@Appender(_shared_docs["str_split"] % {"side": "end", "method": "rsplit"})
@forbid_nonstring_types(["bytes"])
def rsplit(self, pat=None, n=-1, expand=False):
result = str_rsplit(self._parent, pat, n=n)
- return self._wrap_result(result, expand=expand)
+ return self._wrap_result(result, expand=expand, returns_string=expand)
_shared_docs[
"str_partition"
@@ -2586,7 +2639,7 @@ def rsplit(self, pat=None, n=-1, expand=False):
def partition(self, sep=" ", expand=True):
f = lambda x: x.partition(sep)
result = _na_map(f, self._parent)
- return self._wrap_result(result, expand=expand)
+ return self._wrap_result(result, expand=expand, returns_string=expand)
@Appender(
_shared_docs["str_partition"]
@@ -2602,7 +2655,7 @@ def partition(self, sep=" ", expand=True):
def rpartition(self, sep=" ", expand=True):
f = lambda x: x.rpartition(sep)
result = _na_map(f, self._parent)
- return self._wrap_result(result, expand=expand)
+ return self._wrap_result(result, expand=expand, returns_string=expand)
@copy(str_get)
def get(self, i):
@@ -2621,13 +2674,13 @@ def contains(self, pat, case=True, flags=0, na=np.nan, regex=True):
result = str_contains(
self._parent, pat, case=case, flags=flags, na=na, regex=regex
)
- return self._wrap_result(result, fill_value=na)
+ return self._wrap_result(result, fill_value=na, returns_string=False)
@copy(str_match)
@forbid_nonstring_types(["bytes"])
def match(self, pat, case=True, flags=0, na=np.nan):
result = str_match(self._parent, pat, case=case, flags=flags, na=na)
- return self._wrap_result(result, fill_value=na)
+ return self._wrap_result(result, fill_value=na, returns_string=False)
@copy(str_replace)
@forbid_nonstring_types(["bytes"])
@@ -2762,13 +2815,14 @@ def slice_replace(self, start=None, stop=None, repl=None):
def decode(self, encoding, errors="strict"):
# need to allow bytes here
result = str_decode(self._parent, encoding, errors)
- return self._wrap_result(result)
+ # TODO: Not sure how to handle this.
+ return self._wrap_result(result, returns_string=False)
@copy(str_encode)
@forbid_nonstring_types(["bytes"])
def encode(self, encoding, errors="strict"):
result = str_encode(self._parent, encoding, errors)
- return self._wrap_result(result)
+ return self._wrap_result(result, returns_string=False)
_shared_docs[
"str_strip"
@@ -2869,7 +2923,11 @@ def get_dummies(self, sep="|"):
data = self._orig.astype(str) if self._is_categorical else self._parent
result, name = str_get_dummies(data, sep)
return self._wrap_result(
- result, use_codes=(not self._is_categorical), name=name, expand=True
+ result,
+ use_codes=(not self._is_categorical),
+ name=name,
+ expand=True,
+ returns_string=False,
)
@copy(str_translate)
@@ -2878,10 +2936,16 @@ def translate(self, table):
result = str_translate(self._parent, table)
return self._wrap_result(result)
- count = _pat_wrapper(str_count, flags=True, name="count")
- startswith = _pat_wrapper(str_startswith, na=True, name="startswith")
- endswith = _pat_wrapper(str_endswith, na=True, name="endswith")
- findall = _pat_wrapper(str_findall, flags=True, name="findall")
+ count = _pat_wrapper(str_count, flags=True, name="count", returns_string=False)
+ startswith = _pat_wrapper(
+ str_startswith, na=True, name="startswith", returns_string=False
+ )
+ endswith = _pat_wrapper(
+ str_endswith, na=True, name="endswith", returns_string=False
+ )
+ findall = _pat_wrapper(
+ str_findall, flags=True, name="findall", returns_string=False
+ )
@copy(str_extract)
@forbid_nonstring_types(["bytes"])
@@ -2929,7 +2993,7 @@ def extractall(self, pat, flags=0):
@forbid_nonstring_types(["bytes"])
def find(self, sub, start=0, end=None):
result = str_find(self._parent, sub, start=start, end=end, side="left")
- return self._wrap_result(result)
+ return self._wrap_result(result, returns_string=False)
@Appender(
_shared_docs["find"]
@@ -2942,7 +3006,7 @@ def find(self, sub, start=0, end=None):
@forbid_nonstring_types(["bytes"])
def rfind(self, sub, start=0, end=None):
result = str_find(self._parent, sub, start=start, end=end, side="right")
- return self._wrap_result(result)
+ return self._wrap_result(result, returns_string=False)
@forbid_nonstring_types(["bytes"])
def normalize(self, form):
@@ -3004,7 +3068,7 @@ def normalize(self, form):
@forbid_nonstring_types(["bytes"])
def index(self, sub, start=0, end=None):
result = str_index(self._parent, sub, start=start, end=end, side="left")
- return self._wrap_result(result)
+ return self._wrap_result(result, returns_string=False)
@Appender(
_shared_docs["index"]
@@ -3018,7 +3082,7 @@ def index(self, sub, start=0, end=None):
@forbid_nonstring_types(["bytes"])
def rindex(self, sub, start=0, end=None):
result = str_index(self._parent, sub, start=start, end=end, side="right")
- return self._wrap_result(result)
+ return self._wrap_result(result, returns_string=False)
_shared_docs[
"len"
@@ -3067,7 +3131,11 @@ def rindex(self, sub, start=0, end=None):
dtype: float64
"""
len = _noarg_wrapper(
- len, docstring=_shared_docs["len"], forbidden_types=None, dtype=int
+ len,
+ docstring=_shared_docs["len"],
+ forbidden_types=None,
+ dtype=int,
+ returns_string=False,
)
_shared_docs[
@@ -3339,46 +3407,55 @@ def rindex(self, sub, start=0, end=None):
lambda x: x.isalnum(),
name="isalnum",
docstring=_shared_docs["ismethods"] % _doc_args["isalnum"],
+ returns_string=False,
)
isalpha = _noarg_wrapper(
lambda x: x.isalpha(),
name="isalpha",
docstring=_shared_docs["ismethods"] % _doc_args["isalpha"],
+ returns_string=False,
)
isdigit = _noarg_wrapper(
lambda x: x.isdigit(),
name="isdigit",
docstring=_shared_docs["ismethods"] % _doc_args["isdigit"],
+ returns_string=False,
)
isspace = _noarg_wrapper(
lambda x: x.isspace(),
name="isspace",
docstring=_shared_docs["ismethods"] % _doc_args["isspace"],
+ returns_string=False,
)
islower = _noarg_wrapper(
lambda x: x.islower(),
name="islower",
docstring=_shared_docs["ismethods"] % _doc_args["islower"],
+ returns_string=False,
)
isupper = _noarg_wrapper(
lambda x: x.isupper(),
name="isupper",
docstring=_shared_docs["ismethods"] % _doc_args["isupper"],
+ returns_string=False,
)
istitle = _noarg_wrapper(
lambda x: x.istitle(),
name="istitle",
docstring=_shared_docs["ismethods"] % _doc_args["istitle"],
+ returns_string=False,
)
isnumeric = _noarg_wrapper(
lambda x: x.isnumeric(),
name="isnumeric",
docstring=_shared_docs["ismethods"] % _doc_args["isnumeric"],
+ returns_string=False,
)
isdecimal = _noarg_wrapper(
lambda x: x.isdecimal(),
name="isdecimal",
docstring=_shared_docs["ismethods"] % _doc_args["isdecimal"],
+ returns_string=False,
)
@classmethod
diff --git a/pandas/tests/api/test_api.py b/pandas/tests/api/test_api.py
index 2f24bbd6f0c85..6c50159663574 100644
--- a/pandas/tests/api/test_api.py
+++ b/pandas/tests/api/test_api.py
@@ -68,6 +68,7 @@ class TestPDApi(Base):
"Series",
"SparseArray",
"SparseDtype",
+ "StringDtype",
"Timedelta",
"TimedeltaIndex",
"Timestamp",
diff --git a/pandas/tests/arrays/string_/__init__.py b/pandas/tests/arrays/string_/__init__.py
new file mode 100644
index 0000000000000..e69de29bb2d1d
diff --git a/pandas/tests/arrays/string_/test_string.py b/pandas/tests/arrays/string_/test_string.py
new file mode 100644
index 0000000000000..40221c34116ae
--- /dev/null
+++ b/pandas/tests/arrays/string_/test_string.py
@@ -0,0 +1,160 @@
+import operator
+
+import numpy as np
+import pytest
+
+import pandas as pd
+import pandas.util.testing as tm
+
+
+def test_none_to_nan():
+ a = pd.arrays.StringArray._from_sequence(["a", None, "b"])
+ assert a[1] is not None
+ assert np.isnan(a[1])
+
+
+def test_setitem_validates():
+ a = pd.arrays.StringArray._from_sequence(["a", "b"])
+ with pytest.raises(ValueError, match="10"):
+ a[0] = 10
+
+ with pytest.raises(ValueError, match="strings"):
+ a[:] = np.array([1, 2])
+
+
+@pytest.mark.parametrize(
+ "input, method",
+ [
+ (["a", "b", "c"], operator.methodcaller("capitalize")),
+ (["a", "b", "c"], operator.methodcaller("capitalize")),
+ (["a b", "a bc. de"], operator.methodcaller("capitalize")),
+ ],
+)
+def test_string_methods(input, method):
+ a = pd.Series(input, dtype="string")
+ b = pd.Series(input, dtype="object")
+ result = method(a.str)
+ expected = method(b.str)
+
+ assert result.dtype.name == "string"
+ tm.assert_series_equal(result.astype(object), expected)
+
+
+def test_astype_roundtrip():
+ s = pd.Series(pd.date_range("2000", periods=12))
+ s[0] = None
+
+ result = s.astype("string").astype("datetime64[ns]")
+ tm.assert_series_equal(result, s)
+
+
+def test_add():
+ a = pd.Series(["a", "b", "c", None, None], dtype="string")
+ b = pd.Series(["x", "y", None, "z", None], dtype="string")
+
+ result = a + b
+ expected = pd.Series(["ax", "by", None, None, None], dtype="string")
+ tm.assert_series_equal(result, expected)
+
+ result = a.add(b)
+ tm.assert_series_equal(result, expected)
+
+ result = a.radd(b)
+ expected = pd.Series(["xa", "yb", None, None, None], dtype="string")
+ tm.assert_series_equal(result, expected)
+
+ result = a.add(b, fill_value="-")
+ expected = pd.Series(["ax", "by", "c-", "-z", None], dtype="string")
+ tm.assert_series_equal(result, expected)
+
+
+def test_add_2d():
+ a = pd.array(["a", "b", "c"], dtype="string")
+ b = np.array([["a", "b", "c"]], dtype=object)
+ with pytest.raises(ValueError, match="3 != 1"):
+ a + b
+
+ s = pd.Series(a)
+ with pytest.raises(ValueError, match="3 != 1"):
+ s + b
+
+
+def test_add_sequence():
+ a = pd.array(["a", "b", None, None], dtype="string")
+ other = ["x", None, "y", None]
+
+ result = a + other
+ expected = pd.array(["ax", None, None, None], dtype="string")
+ tm.assert_extension_array_equal(result, expected)
+
+ result = other + a
+ expected = pd.array(["xa", None, None, None], dtype="string")
+ tm.assert_extension_array_equal(result, expected)
+
+
+def test_mul():
+ a = pd.array(["a", "b", None], dtype="string")
+ result = a * 2
+ expected = pd.array(["aa", "bb", None], dtype="string")
+ tm.assert_extension_array_equal(result, expected)
+
+ result = 2 * a
+ tm.assert_extension_array_equal(result, expected)
+
+
+@pytest.mark.xfail(reason="GH-28527")
+def test_add_strings():
+ array = pd.array(["a", "b", "c", "d"], dtype="string")
+ df = pd.DataFrame([["t", "u", "v", "w"]])
+ assert array.__add__(df) is NotImplemented
+
+ result = array + df
+ expected = pd.DataFrame([["at", "bu", "cv", "dw"]]).astype("string")
+ tm.assert_frame_equal(result, expected)
+
+ result = df + array
+ expected = pd.DataFrame([["ta", "ub", "vc", "wd"]]).astype("string")
+ tm.assert_frame_equal(result, expected)
+
+
+@pytest.mark.xfail(reason="GH-28527")
+def test_add_frame():
+ array = pd.array(["a", "b", np.nan, np.nan], dtype="string")
+ df = pd.DataFrame([["x", np.nan, "y", np.nan]])
+
+ assert array.__add__(df) is NotImplemented
+
+ result = array + df
+ expected = pd.DataFrame([["ax", np.nan, np.nan, np.nan]]).astype("string")
+ tm.assert_frame_equal(result, expected)
+
+ result = df + array
+ expected = pd.DataFrame([["xa", np.nan, np.nan, np.nan]]).astype("string")
+ tm.assert_frame_equal(result, expected)
+
+
+def test_constructor_raises():
+ with pytest.raises(ValueError, match="sequence of strings"):
+ pd.arrays.StringArray(np.array(["a", "b"], dtype="S1"))
+
+ with pytest.raises(ValueError, match="sequence of strings"):
+ pd.arrays.StringArray(np.array([]))
+
+
+@pytest.mark.parametrize("skipna", [True, False])
+@pytest.mark.xfail(reason="Not implemented StringArray.sum")
+def test_reduce(skipna):
+ arr = pd.Series(["a", "b", "c"], dtype="string")
+ result = arr.sum(skipna=skipna)
+ assert result == "abc"
+
+
+@pytest.mark.parametrize("skipna", [True, False])
+@pytest.mark.xfail(reason="Not implemented StringArray.sum")
+def test_reduce_missing(skipna):
+ arr = pd.Series([None, "a", None, "b", "c", None], dtype="string")
+ result = arr.sum(skipna=skipna)
+ if skipna:
+ assert result == "abc"
+ else:
+ assert pd.isna(result)
diff --git a/pandas/tests/dtypes/test_common.py b/pandas/tests/dtypes/test_common.py
index 266f7ac50c663..466b724f98770 100644
--- a/pandas/tests/dtypes/test_common.py
+++ b/pandas/tests/dtypes/test_common.py
@@ -291,6 +291,8 @@ def test_is_string_dtype():
assert com.is_string_dtype(str)
assert com.is_string_dtype(object)
assert com.is_string_dtype(np.array(["a", "b"]))
+ assert com.is_string_dtype(pd.StringDtype())
+ assert com.is_string_dtype(pd.array(["a", "b"], dtype="string"))
def test_is_period_arraylike():
diff --git a/pandas/tests/extension/test_string.py b/pandas/tests/extension/test_string.py
new file mode 100644
index 0000000000000..5b872d5b72227
--- /dev/null
+++ b/pandas/tests/extension/test_string.py
@@ -0,0 +1,112 @@
+import string
+
+import numpy as np
+import pytest
+
+import pandas as pd
+from pandas.core.arrays.string_ import StringArray, StringDtype
+from pandas.tests.extension import base
+
+
+@pytest.fixture
+def dtype():
+ return StringDtype()
+
+
+@pytest.fixture
+def data():
+ strings = np.random.choice(list(string.ascii_letters), size=100)
+ while strings[0] == strings[1]:
+ strings = np.random.choice(list(string.ascii_letters), size=100)
+
+ return StringArray._from_sequence(strings)
+
+
+@pytest.fixture
+def data_missing():
+ """Length 2 array with [NA, Valid]"""
+ return StringArray._from_sequence([np.nan, "A"])
+
+
+@pytest.fixture
+def data_for_sorting():
+ return StringArray._from_sequence(["B", "C", "A"])
+
+
+@pytest.fixture
+def data_missing_for_sorting():
+ return StringArray._from_sequence(["B", np.nan, "A"])
+
+
+@pytest.fixture
+def na_value():
+ return np.nan
+
+
+@pytest.fixture
+def data_for_grouping():
+ return StringArray._from_sequence(["B", "B", np.nan, np.nan, "A", "A", "B", "C"])
+
+
+class TestDtype(base.BaseDtypeTests):
+ pass
+
+
+class TestInterface(base.BaseInterfaceTests):
+ pass
+
+
+class TestConstructors(base.BaseConstructorsTests):
+ pass
+
+
+class TestReshaping(base.BaseReshapingTests):
+ pass
+
+
+class TestGetitem(base.BaseGetitemTests):
+ pass
+
+
+class TestSetitem(base.BaseSetitemTests):
+ pass
+
+
+class TestMissing(base.BaseMissingTests):
+ pass
+
+
+class TestNoReduce(base.BaseNoReduceTests):
+ pass
+
+
+class TestMethods(base.BaseMethodsTests):
+ pass
+
+
+class TestCasting(base.BaseCastingTests):
+ pass
+
+
+class TestComparisonOps(base.BaseComparisonOpsTests):
+ def _compare_other(self, s, data, op_name, other):
+ result = getattr(s, op_name)(other)
+ expected = getattr(s.astype(object), op_name)(other)
+ self.assert_series_equal(result, expected)
+
+ def test_compare_scalar(self, data, all_compare_operators):
+ op_name = all_compare_operators
+ s = pd.Series(data)
+ self._compare_other(s, data, op_name, "abc")
+
+
+class TestParsing(base.BaseParsingTests):
+ pass
+
+
+class TestPrinting(base.BasePrintingTests):
+ pass
+
+
+class TestGroupBy(base.BaseGroupbyTests):
+ pass
diff --git a/pandas/tests/test_strings.py b/pandas/tests/test_strings.py
index bc8dc7272a83a..b50f1a0fd2f2a 100644
--- a/pandas/tests/test_strings.py
+++ b/pandas/tests/test_strings.py
@@ -6,6 +6,8 @@
from numpy.random import randint
import pytest
+from pandas._libs import lib
+
from pandas import DataFrame, Index, MultiIndex, Series, concat, isna, notna
import pandas.core.strings as strings
import pandas.util.testing as tm
@@ -3269,3 +3271,25 @@ def test_casefold(self):
result = s.str.casefold()
tm.assert_series_equal(result, expected)
+
+
+def test_string_array(any_string_method):
+ data = ["a", "bb", np.nan, "ccc"]
+ a = Series(data, dtype=object)
+ b = Series(data, dtype="string")
+ method_name, args, kwargs = any_string_method
+
+ expected = getattr(a.str, method_name)(*args, **kwargs)
+ result = getattr(b.str, method_name)(*args, **kwargs)
+
+ if isinstance(expected, Series):
+ if expected.dtype == "object" and lib.is_string_array(
+ expected.values, skipna=True
+ ):
+ assert result.dtype == "string"
+ result = result.astype(object)
+ elif isinstance(expected, DataFrame):
+ columns = expected.select_dtypes(include="object").columns
+ assert all(result[columns].dtypes == "string")
+ result[columns] = result[columns].astype(object)
+ tm.assert_equal(result, expected)
diff --git a/pandas/util/testing.py b/pandas/util/testing.py
index 1c0a8dbc19ccd..5cd39e79199e6 100644
--- a/pandas/util/testing.py
+++ b/pandas/util/testing.py
@@ -1431,6 +1431,9 @@ def assert_equal(left, right, **kwargs):
assert_extension_array_equal(left, right, **kwargs)
elif isinstance(left, np.ndarray):
assert_numpy_array_equal(left, right, **kwargs)
+ elif isinstance(left, str):
+ assert kwargs == {}
+ return left == right
else:
raise NotImplementedError(type(left))
| This adds a new extension type 'string' for storing string data.
The data model is essentially unchanged from master. Strings are still
stored in an object-dtype ndarray. Scalar elements are still Python
strs, and `np.nan` is still used as the missing value.
Things are pretty well contained. The major changes outside the new array are
1. docs
2. `core/strings.py` to handle things correctly (mostly, returning `string` dtype when there's a `string` input).
No rush on reviewing this. Just parking it here for now. | https://api.github.com/repos/pandas-dev/pandas/pulls/27949 | 2019-08-16T15:22:23Z | 2019-10-05T23:17:41Z | 2019-10-05T23:17:41Z | 2019-10-07T11:33:37Z |
Backport PR #27946 on branch 0.25.x (BUG: Merge with readonly arrays) | diff --git a/doc/source/whatsnew/v0.25.1.rst b/doc/source/whatsnew/v0.25.1.rst
index 245aa42b0de14..d1324bc759ea1 100644
--- a/doc/source/whatsnew/v0.25.1.rst
+++ b/doc/source/whatsnew/v0.25.1.rst
@@ -127,9 +127,9 @@ Reshaping
^^^^^^^^^
- A ``KeyError`` is now raised if ``.unstack()`` is called on a :class:`Series` or :class:`DataFrame` with a flat :class:`Index` passing a name which is not the correct one (:issue:`18303`)
-- Bug in :meth:`DataFrame.crosstab` when ``margins`` set to ``True`` and ``normalize`` is not ``False``, an error is raised. (:issue:`27500`)
+- Bug in :meth:`DataFrame.crosstab` when ``margins`` set to ``True`` and ``normalize`` is not ``False``, an error is raised. (:issue:`27500`)
- :meth:`DataFrame.join` now suppresses the ``FutureWarning`` when the sort parameter is specified (:issue:`21952`)
--
+- Bug in :meth:`DataFrame.join` raising with readonly arrays (:issue:`27943`)
Sparse
^^^^^^
diff --git a/pandas/_libs/hashtable.pyx b/pandas/_libs/hashtable.pyx
index 3e620f5934d5e..b8df78e600a46 100644
--- a/pandas/_libs/hashtable.pyx
+++ b/pandas/_libs/hashtable.pyx
@@ -108,7 +108,7 @@ cdef class Int64Factorizer:
def get_count(self):
return self.count
- def factorize(self, int64_t[:] values, sort=False,
+ def factorize(self, const int64_t[:] values, sort=False,
na_sentinel=-1, na_value=None):
"""
Factorize values with nans replaced by na_sentinel
diff --git a/pandas/tests/reshape/merge/test_merge.py b/pandas/tests/reshape/merge/test_merge.py
index b6c6f967333a8..a04f093ee7818 100644
--- a/pandas/tests/reshape/merge/test_merge.py
+++ b/pandas/tests/reshape/merge/test_merge.py
@@ -1340,6 +1340,18 @@ def test_merge_take_missing_values_from_index_of_other_dtype(self):
expected = expected.reindex(columns=["a", "key", "b"])
tm.assert_frame_equal(result, expected)
+ def test_merge_readonly(self):
+ # https://github.com/pandas-dev/pandas/issues/27943
+ data1 = pd.DataFrame(
+ np.arange(20).reshape((4, 5)) + 1, columns=["a", "b", "c", "d", "e"]
+ )
+ data2 = pd.DataFrame(
+ np.arange(20).reshape((5, 4)) + 1, columns=["a", "b", "x", "y"]
+ )
+
+ data1._data.blocks[0].values.flags.writeable = False
+ data1.merge(data2) # no error
+
def _check_merge(x, y):
for how in ["inner", "left", "outer"]:
| Backport PR #27946: BUG: Merge with readonly arrays | https://api.github.com/repos/pandas-dev/pandas/pulls/27948 | 2019-08-16T12:08:27Z | 2019-08-16T18:51:23Z | 2019-08-16T18:51:23Z | 2019-08-16T18:51:23Z |
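The one-line Cython change above (`int64_t[:]` → `const int64_t[:]`) lets the factorizer accept read-only buffers. A sketch of the triggering scenario, mirroring the regression test in the diff (touching the `_mgr`/`_data` internals is for illustration only and may differ across pandas versions):

```python
import numpy as np
import pandas as pd

left = pd.DataFrame(np.arange(20).reshape(4, 5) + 1, columns=list("abcde"))
right = pd.DataFrame(np.arange(20).reshape(5, 4) + 1, columns=list("abxy"))

# Flag left's underlying block as read-only, as the regression test does.
mgr = left._mgr if hasattr(left, "_mgr") else left._data
for blk in mgr.blocks:
    if isinstance(blk.values, np.ndarray):
        blk.values.flags.writeable = False

# Before the fix this raised "buffer source array is read-only";
# with the const memoryview it merges normally on the shared keys.
merged = left.merge(right)
print(merged.shape)
```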
BUG: infer-objects raising for bytes Series | diff --git a/pandas/core/internals/blocks.py b/pandas/core/internals/blocks.py
index 6aac736abf10a..0999a9af3f72a 100644
--- a/pandas/core/internals/blocks.py
+++ b/pandas/core/internals/blocks.py
@@ -1969,6 +1969,9 @@ def convert(
attempt to cast any object types to better types return a copy of
the block (if copy = True) by definition we ARE an ObjectBlock!!!!!
"""
+ if self.dtype != object:
+ return [self]
+
values = self.values
if values.ndim == 2:
# maybe_split ensures we only get here with values.shape[0] == 1,
diff --git a/pandas/tests/series/methods/test_infer_objects.py b/pandas/tests/series/methods/test_infer_objects.py
index 4417bdb8e1687..25bff46d682be 100644
--- a/pandas/tests/series/methods/test_infer_objects.py
+++ b/pandas/tests/series/methods/test_infer_objects.py
@@ -1,6 +1,9 @@
import numpy as np
-from pandas import interval_range
+from pandas import (
+ Series,
+ interval_range,
+)
import pandas._testing as tm
@@ -44,3 +47,10 @@ def test_infer_objects_interval(self, index_or_series):
result = obj.astype(object).infer_objects()
tm.assert_equal(result, obj)
+
+ def test_infer_objects_bytes(self):
+ # GH#49650
+ ser = Series([b"a"], dtype="bytes")
+ expected = ser.copy()
+ result = ser.infer_objects()
+ tm.assert_series_equal(result, expected)
| - [x] closes #49650
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
cc @jbrockmendel | https://api.github.com/repos/pandas-dev/pandas/pulls/50067 | 2022-12-05T15:31:36Z | 2022-12-08T00:38:22Z | 2022-12-08T00:38:22Z | 2022-12-08T13:19:12Z |
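A short sketch of what `infer_objects` does and why the early return helps, assuming pandas 2.0+ for the bytes case: the method upgrades object-dtype data to better dtypes, and after this fix it is a harmless no-op for non-object blocks such as NumPy bytes.

```python
import pandas as pd

# The normal path: object dtype is inferred down to a concrete dtype.
obj = pd.Series([1, 2, 3], dtype=object)
inferred = obj.infer_objects()  # int64

# The fixed path: a non-object (bytes) block previously raised; it is
# now returned unchanged instead of going through object conversion.
byte_ser = pd.Series([b"a"], dtype="bytes")
roundtrip = byte_ser.infer_objects()
```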
DEPR: Enforce DataFrameGroupBy.__iter__ returning tuples of length 1 | diff --git a/doc/source/whatsnew/v2.0.0.rst b/doc/source/whatsnew/v2.0.0.rst
index b70dcb0ae99fa..b1400be59b3a1 100644
--- a/doc/source/whatsnew/v2.0.0.rst
+++ b/doc/source/whatsnew/v2.0.0.rst
@@ -596,7 +596,7 @@ Removal of prior version deprecations/changes
- Enforced deprecation of silently dropping nuisance columns in groupby and resample operations when ``numeric_only=False`` (:issue:`41475`)
- Changed default of ``numeric_only`` in various :class:`.DataFrameGroupBy` methods; all methods now default to ``numeric_only=False`` (:issue:`46072`)
- Changed default of ``numeric_only`` to ``False`` in :class:`.Resampler` methods (:issue:`47177`)
--
+- When providing a list of columns of length one to :meth:`DataFrame.groupby`, the keys that are returned by iterating over the resulting :class:`DataFrameGroupBy` object will now be tuples of length one (:issue:`47761`)
.. ---------------------------------------------------------------------------
.. _whatsnew_200.performance:
diff --git a/pandas/core/groupby/groupby.py b/pandas/core/groupby/groupby.py
index 659ca228bdcb0..c7fb40e855ef7 100644
--- a/pandas/core/groupby/groupby.py
+++ b/pandas/core/groupby/groupby.py
@@ -70,7 +70,6 @@ class providing the base-class of operations.
cache_readonly,
doc,
)
-from pandas.util._exceptions import find_stack_level
from pandas.core.dtypes.cast import ensure_dtype_can_hold_na
from pandas.core.dtypes.common import (
@@ -832,19 +831,11 @@ def __iter__(self) -> Iterator[tuple[Hashable, NDFrameT]]:
for each group
"""
keys = self.keys
+ result = self.grouper.get_iterator(self._selected_obj, axis=self.axis)
if isinstance(keys, list) and len(keys) == 1:
- warnings.warn(
- (
- "In a future version of pandas, a length 1 "
- "tuple will be returned when iterating over a "
- "groupby with a grouper equal to a list of "
- "length 1. Don't supply a list with a single grouper "
- "to avoid this warning."
- ),
- FutureWarning,
- stacklevel=find_stack_level(),
- )
- return self.grouper.get_iterator(self._selected_obj, axis=self.axis)
+ # GH#42795 - when keys is a list, return tuples even when length is 1
+ result = (((key,), group) for key, group in result)
+ return result
# To track operations that expand dimensions, like ohlc
diff --git a/pandas/tests/groupby/test_groupby.py b/pandas/tests/groupby/test_groupby.py
index a7104c2e21049..667656cb4de02 100644
--- a/pandas/tests/groupby/test_groupby.py
+++ b/pandas/tests/groupby/test_groupby.py
@@ -2743,18 +2743,10 @@ def test_groupby_none_column_name():
def test_single_element_list_grouping():
# GH 42795
- df = DataFrame(
- {"a": [np.nan, 1], "b": [np.nan, 5], "c": [np.nan, 2]}, index=["x", "y"]
- )
- msg = (
- "In a future version of pandas, a length 1 "
- "tuple will be returned when iterating over "
- "a groupby with a grouper equal to a list of "
- "length 1. Don't supply a list with a single grouper "
- "to avoid this warning."
- )
- with tm.assert_produces_warning(FutureWarning, match=msg):
- values, _ = next(iter(df.groupby(["a"])))
+ df = DataFrame({"a": [1, 2], "b": [np.nan, 5], "c": [np.nan, 2]}, index=["x", "y"])
+ result = [key for key, _ in df.groupby(["a"])]
+ expected = [(1,), (2,)]
+ assert result == expected
@pytest.mark.parametrize("func", ["sum", "cumsum", "cumprod", "prod"])
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/50064 | 2022-12-05T04:01:45Z | 2022-12-05T09:18:22Z | 2022-12-05T09:18:22Z | 2022-12-13T03:16:40Z |
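A sketch of the consistency this change enforces. Grouping by a list of several columns has always yielded tuple keys when iterating; as of pandas 2.0 a list of length one does too, while grouping by a bare label still yields scalar keys.

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 1, 2], "b": [3, 3, 4], "c": [5, 6, 7]})

# Multi-column list grouper: keys are tuples (unchanged behavior).
keys_two = [key for key, _ in df.groupby(["a", "b"])]

# Bare-label grouper: keys are scalars (unchanged behavior).
keys_scalar = [key for key, _ in df.groupby("a")]

# As of pandas 2.0, df.groupby(["a"]) yields (1,) and (2,) instead of
# scalar keys, matching the multi-column case.
```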
DEPR: Enforce groupby.transform aligning with input index | diff --git a/doc/source/user_guide/groupby.rst b/doc/source/user_guide/groupby.rst
index d8a36b1711b6e..fb8462f7f58e4 100644
--- a/doc/source/user_guide/groupby.rst
+++ b/doc/source/user_guide/groupby.rst
@@ -776,12 +776,11 @@ as the one being grouped. The transform function must:
* (Optionally) operates on the entire group chunk. If this is supported, a
fast path is used starting from the *second* chunk.
-.. deprecated:: 1.5.0
+.. versionchanged:: 2.0.0
When using ``.transform`` on a grouped DataFrame and the transformation function
- returns a DataFrame, currently pandas does not align the result's index
- with the input's index. This behavior is deprecated and alignment will
- be performed in a future version of pandas. You can apply ``.to_numpy()`` to the
+ returns a DataFrame, pandas now aligns the result's index
+ with the input's index. You can call ``.to_numpy()`` on the
result of the transformation function to avoid alignment.
Similar to :ref:`groupby.aggregate.udfs`, the resulting dtype will reflect that of the
diff --git a/doc/source/whatsnew/v2.0.0.rst b/doc/source/whatsnew/v2.0.0.rst
index b1400be59b3a1..5b57baa9ec39a 100644
--- a/doc/source/whatsnew/v2.0.0.rst
+++ b/doc/source/whatsnew/v2.0.0.rst
@@ -596,7 +596,9 @@ Removal of prior version deprecations/changes
- Enforced deprecation of silently dropping nuisance columns in groupby and resample operations when ``numeric_only=False`` (:issue:`41475`)
- Changed default of ``numeric_only`` in various :class:`.DataFrameGroupBy` methods; all methods now default to ``numeric_only=False`` (:issue:`46072`)
- Changed default of ``numeric_only`` to ``False`` in :class:`.Resampler` methods (:issue:`47177`)
+- Using the method :meth:`DataFrameGroupBy.transform` with a callable that returns DataFrames will align to the input's index (:issue:`47244`)
- When providing a list of columns of length one to :meth:`DataFrame.groupby`, the keys that are returned by iterating over the resulting :class:`DataFrameGroupBy` object will now be tuples of length one (:issue:`47761`)
+-
.. ---------------------------------------------------------------------------
.. _whatsnew_200.performance:
diff --git a/pandas/core/groupby/generic.py b/pandas/core/groupby/generic.py
index d3e37a40614b3..819220d13566b 100644
--- a/pandas/core/groupby/generic.py
+++ b/pandas/core/groupby/generic.py
@@ -24,7 +24,6 @@
Union,
cast,
)
-import warnings
import numpy as np
@@ -51,7 +50,6 @@
Substitution,
doc,
)
-from pandas.util._exceptions import find_stack_level
from pandas.core.dtypes.common import (
ensure_int64,
@@ -1392,33 +1390,15 @@ def _transform_general(self, func, *args, **kwargs):
applied.append(res)
# Compute and process with the remaining groups
- emit_alignment_warning = False
for name, group in gen:
if group.size == 0:
continue
object.__setattr__(group, "name", name)
res = path(group)
- if (
- not emit_alignment_warning
- and res.ndim == 2
- and not res.index.equals(group.index)
- ):
- emit_alignment_warning = True
res = _wrap_transform_general_frame(self.obj, group, res)
applied.append(res)
- if emit_alignment_warning:
- # GH#45648
- warnings.warn(
- "In a future version of pandas, returning a DataFrame in "
- "groupby.transform will align with the input's index. Apply "
- "`.to_numpy()` to the result in the transform function to keep "
- "the current behavior and silence this warning.",
- FutureWarning,
- stacklevel=find_stack_level(),
- )
-
concat_index = obj.columns if self.axis == 0 else obj.index
other_axis = 1 if self.axis == 0 else 0 # switches between 0 & 1
concatenated = concat(applied, axis=self.axis, verify_integrity=False)
@@ -2336,5 +2316,7 @@ def _wrap_transform_general_frame(
)
assert isinstance(res_frame, DataFrame)
return res_frame
+ elif isinstance(res, DataFrame) and not res.index.is_(group.index):
+ return res._align_frame(group)[0]
else:
return res
diff --git a/pandas/core/groupby/groupby.py b/pandas/core/groupby/groupby.py
index c7fb40e855ef7..6cb9bb7f23a06 100644
--- a/pandas/core/groupby/groupby.py
+++ b/pandas/core/groupby/groupby.py
@@ -471,12 +471,11 @@ class providing the base-class of operations.
The resulting dtype will reflect the return value of the passed ``func``,
see the examples below.
-.. deprecated:: 1.5.0
+.. versionchanged:: 2.0.0
When using ``.transform`` on a grouped DataFrame and the transformation function
- returns a DataFrame, currently pandas does not align the result's index
- with the input's index. This behavior is deprecated and alignment will
- be performed in a future version of pandas. You can apply ``.to_numpy()`` to the
+ returns a DataFrame, pandas now aligns the result's index
+ with the input's index. You can call ``.to_numpy()`` on the
result of the transformation function to avoid alignment.
Examples
diff --git a/pandas/tests/groupby/transform/test_transform.py b/pandas/tests/groupby/transform/test_transform.py
index 8bdbc86d8659c..d0c8b53f13399 100644
--- a/pandas/tests/groupby/transform/test_transform.py
+++ b/pandas/tests/groupby/transform/test_transform.py
@@ -1466,8 +1466,8 @@ def test_null_group_str_transformer_series(request, dropna, transformation_func)
@pytest.mark.parametrize(
"func, series, expected_values",
[
- (Series.sort_values, False, [4, 5, 3, 1, 2]),
- (lambda x: x.head(1), False, ValueError),
+ (Series.sort_values, False, [5, 4, 3, 2, 1]),
+ (lambda x: x.head(1), False, [5.0, np.nan, 3, 2, np.nan]),
# SeriesGroupBy already has correct behavior
(Series.sort_values, True, [5, 4, 3, 2, 1]),
(lambda x: x.head(1), True, [5.0, np.nan, 3.0, 2.0, np.nan]),
@@ -1475,7 +1475,7 @@ def test_null_group_str_transformer_series(request, dropna, transformation_func)
)
@pytest.mark.parametrize("keys", [["a1"], ["a1", "a2"]])
@pytest.mark.parametrize("keys_in_index", [True, False])
-def test_transform_aligns_depr(func, series, expected_values, keys, keys_in_index):
+def test_transform_aligns(func, series, expected_values, keys, keys_in_index):
# GH#45648 - transform should align with the input's index
df = DataFrame({"a1": [1, 1, 3, 2, 2], "b": [5, 4, 3, 2, 1]})
if "a2" in keys:
@@ -1487,19 +1487,11 @@ def test_transform_aligns_depr(func, series, expected_values, keys, keys_in_inde
if series:
gb = gb["b"]
- warn = None if series else FutureWarning
- msg = "returning a DataFrame in groupby.transform will align"
- if expected_values is ValueError:
- with tm.assert_produces_warning(warn, match=msg):
- with pytest.raises(ValueError, match="Length mismatch"):
- gb.transform(func)
- else:
- with tm.assert_produces_warning(warn, match=msg):
- result = gb.transform(func)
- expected = DataFrame({"b": expected_values}, index=df.index)
- if series:
- expected = expected["b"]
- tm.assert_equal(result, expected)
+ result = gb.transform(func)
+ expected = DataFrame({"b": expected_values}, index=df.index)
+ if series:
+ expected = expected["b"]
+ tm.assert_equal(result, expected)
@pytest.mark.parametrize("keys", ["A", ["A", "B"]])
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/50063 | 2022-12-05T02:54:46Z | 2022-12-05T11:48:23Z | 2022-12-05T11:48:23Z | 2022-12-06T22:22:34Z |
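A small sketch of the alignment semantics being enforced. The function below preserves each group's index, so it is aligned the same way before and after this change; functions that reorder rows (e.g. `Series.sort_values`, as in the test above) are the cases that pandas now realigns to the input's index.

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 1, 2], "b": [1.0, 3.0, 5.0]})

# Demean within each group; the per-group result keeps the group's own
# index, so the transformed values line up with df's original rows.
result = df.groupby("a").transform(lambda g: g - g.mean())
```

Group `a == 1` has mean 2.0 and group `a == 2` has mean 5.0, so the transformed `b` column is `[-1.0, 1.0, 0.0]`.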
TST: add test for nested OrderedDict in constructor | diff --git a/pandas/tests/frame/test_constructors.py b/pandas/tests/frame/test_constructors.py
index ef80cc847a5b8..fcbca5e7fa4e7 100644
--- a/pandas/tests/frame/test_constructors.py
+++ b/pandas/tests/frame/test_constructors.py
@@ -1395,6 +1395,16 @@ def test_constructor_list_of_dicts(self):
expected = DataFrame(index=[0])
tm.assert_frame_equal(result, expected)
+ def test_constructor_ordered_dict_nested_preserve_order(self):
+ # see gh-18166
+ nested1 = OrderedDict([("b", 1), ("a", 2)])
+ nested2 = OrderedDict([("b", 2), ("a", 5)])
+ data = OrderedDict([("col2", nested1), ("col1", nested2)])
+ result = DataFrame(data)
+ data = {"col2": [1, 2], "col1": [2, 5]}
+ expected = DataFrame(data=data, index=["b", "a"])
+ tm.assert_frame_equal(result, expected)
+
@pytest.mark.parametrize("dict_type", [dict, OrderedDict])
def test_constructor_ordered_dict_preserve_order(self, dict_type):
# see gh-13304
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
closes #18166 | https://api.github.com/repos/pandas-dev/pandas/pulls/50060 | 2022-12-04T22:49:56Z | 2022-12-06T17:15:09Z | 2022-12-06T17:15:09Z | 2022-12-06T20:48:55Z |
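The behavior the new test pins down can be sketched with plain dicts, which preserve insertion order since Python 3.7 and so behave like `OrderedDict` here: outer keys become columns, inner keys become the index, and both keep their insertion order.

```python
import pandas as pd

# Nested mapping: neither the column order nor the index order is sorted.
data = {"col2": {"b": 1, "a": 2}, "col1": {"b": 2, "a": 5}}
df = pd.DataFrame(data)
```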
DOC: Remove all versionchanged / versionadded directives < 1.0.0 | diff --git a/doc/source/user_guide/dsintro.rst b/doc/source/user_guide/dsintro.rst
index 571f8980070af..9aa6423908cfd 100644
--- a/doc/source/user_guide/dsintro.rst
+++ b/doc/source/user_guide/dsintro.rst
@@ -712,10 +712,8 @@ The ufunc is applied to the underlying array in a :class:`Series`.
ser = pd.Series([1, 2, 3, 4])
np.exp(ser)
-.. versionchanged:: 0.25.0
-
- When multiple :class:`Series` are passed to a ufunc, they are aligned before
- performing the operation.
+When multiple :class:`Series` are passed to a ufunc, they are aligned before
+performing the operation.
Like other parts of the library, pandas will automatically align labeled inputs
as part of a ufunc with multiple inputs. For example, using :meth:`numpy.remainder`
diff --git a/doc/source/user_guide/groupby.rst b/doc/source/user_guide/groupby.rst
index d8a36b1711b6e..1e2b6d6fe4c20 100644
--- a/doc/source/user_guide/groupby.rst
+++ b/doc/source/user_guide/groupby.rst
@@ -629,8 +629,6 @@ For a grouped ``DataFrame``, you can rename in a similar manner:
Named aggregation
~~~~~~~~~~~~~~~~~
-.. versionadded:: 0.25.0
-
To support column-specific aggregation *with control over the output column names*, pandas
accepts the special syntax in :meth:`DataFrameGroupBy.agg` and :meth:`SeriesGroupBy.agg`, known as "named aggregation", where
diff --git a/doc/source/user_guide/io.rst b/doc/source/user_guide/io.rst
index f1c212b53a87a..d6a00022efb79 100644
--- a/doc/source/user_guide/io.rst
+++ b/doc/source/user_guide/io.rst
@@ -294,8 +294,6 @@ cache_dates : boolean, default True
conversion. May produce significant speed-up when parsing duplicate
date strings, especially ones with timezone offsets.
- .. versionadded:: 0.25.0
-
Iteration
+++++++++
@@ -3829,8 +3827,6 @@ using the `odfpy <https://pypi.org/project/odfpy/>`__ module. The semantics and
OpenDocument spreadsheets match what can be done for `Excel files`_ using
``engine='odf'``.
-.. versionadded:: 0.25
-
The :func:`~pandas.read_excel` method can read OpenDocument spreadsheets
.. code-block:: python
@@ -6134,8 +6130,6 @@ No official documentation is available for the SAS7BDAT format.
SPSS formats
------------
-.. versionadded:: 0.25.0
-
The top-level function :func:`read_spss` can read (but not write) SPSS
SAV (.sav) and ZSAV (.zsav) format files.
diff --git a/doc/source/user_guide/reshaping.rst b/doc/source/user_guide/reshaping.rst
index adca9de6c130a..1b8fdc75d5e88 100644
--- a/doc/source/user_guide/reshaping.rst
+++ b/doc/source/user_guide/reshaping.rst
@@ -890,8 +890,6 @@ Note to subdivide over multiple columns we can pass in a list to the
Exploding a list-like column
----------------------------
-.. versionadded:: 0.25.0
-
Sometimes the values in a column are list-like.
.. ipython:: python
diff --git a/doc/source/user_guide/sparse.rst b/doc/source/user_guide/sparse.rst
index bc4eec1c23a35..c86208e70af3c 100644
--- a/doc/source/user_guide/sparse.rst
+++ b/doc/source/user_guide/sparse.rst
@@ -127,9 +127,6 @@ attributes and methods that are specific to sparse data.
This accessor is available only on data with ``SparseDtype``, and on the :class:`Series`
class itself for creating a Series with sparse data from a scipy COO matrix with.
-
-.. versionadded:: 0.25.0
-
A ``.sparse`` accessor has been added for :class:`DataFrame` as well.
See :ref:`api.frame.sparse` for more.
@@ -271,8 +268,6 @@ Interaction with *scipy.sparse*
Use :meth:`DataFrame.sparse.from_spmatrix` to create a :class:`DataFrame` with sparse values from a sparse matrix.
-.. versionadded:: 0.25.0
-
.. ipython:: python
from scipy.sparse import csr_matrix
diff --git a/doc/source/user_guide/timeseries.rst b/doc/source/user_guide/timeseries.rst
index 474068e43a4d4..78134872fa4b8 100644
--- a/doc/source/user_guide/timeseries.rst
+++ b/doc/source/user_guide/timeseries.rst
@@ -647,8 +647,6 @@ We are stopping on the included end-point as it is part of the index:
dft2 = dft2.swaplevel(0, 1).sort_index()
dft2.loc[idx[:, "2013-01-05"], :]
-.. versionadded:: 0.25.0
-
Slicing with string indexing also honors UTC offset.
.. ipython:: python
@@ -2348,8 +2346,6 @@ To return ``dateutil`` time zone objects, append ``dateutil/`` before the string
)
rng_utc.tz
-.. versionadded:: 0.25.0
-
.. ipython:: python
# datetime.timezone
diff --git a/pandas/_libs/interval.pyx b/pandas/_libs/interval.pyx
index 5b2cb880195ec..fc2c486173b9d 100644
--- a/pandas/_libs/interval.pyx
+++ b/pandas/_libs/interval.pyx
@@ -126,8 +126,6 @@ cdef class IntervalMixin:
"""
Indicates if an interval is empty, meaning it contains no points.
- .. versionadded:: 0.25.0
-
Returns
-------
bool or ndarray
diff --git a/pandas/_libs/tslibs/nattype.pyx b/pandas/_libs/tslibs/nattype.pyx
index 1f18f8cae4ae8..aa75c886a4491 100644
--- a/pandas/_libs/tslibs/nattype.pyx
+++ b/pandas/_libs/tslibs/nattype.pyx
@@ -265,8 +265,6 @@ cdef class _NaT(datetime):
"""
Convert the Timestamp to a NumPy datetime64 or timedelta64.
- .. versionadded:: 0.25.0
-
With the default 'dtype', this is an alias method for `NaT.to_datetime64()`.
The copy parameter is available here only for compatibility. Its value
diff --git a/pandas/_libs/tslibs/timedeltas.pyx b/pandas/_libs/tslibs/timedeltas.pyx
index fc276d5d024cd..618e0208154fa 100644
--- a/pandas/_libs/tslibs/timedeltas.pyx
+++ b/pandas/_libs/tslibs/timedeltas.pyx
@@ -1243,8 +1243,6 @@ cdef class _Timedelta(timedelta):
"""
Convert the Timedelta to a NumPy timedelta64.
- .. versionadded:: 0.25.0
-
This is an alias method for `Timedelta.to_timedelta64()`. The dtype and
copy parameters are available here only for compatibility. Their values
will not affect the return value.
diff --git a/pandas/_libs/tslibs/timestamps.pyx b/pandas/_libs/tslibs/timestamps.pyx
index f987a2feb2717..108c884bba170 100644
--- a/pandas/_libs/tslibs/timestamps.pyx
+++ b/pandas/_libs/tslibs/timestamps.pyx
@@ -1090,8 +1090,6 @@ cdef class _Timestamp(ABCTimestamp):
"""
Convert the Timestamp to a NumPy datetime64.
- .. versionadded:: 0.25.0
-
This is an alias method for `Timestamp.to_datetime64()`. The dtype and
copy parameters are available here only for compatibility. Their values
will not affect the return value.
diff --git a/pandas/core/algorithms.py b/pandas/core/algorithms.py
index cd719a5256ea3..de9e3ace4f0ca 100644
--- a/pandas/core/algorithms.py
+++ b/pandas/core/algorithms.py
@@ -1499,8 +1499,6 @@ def searchsorted(
"""
Find indices where elements should be inserted to maintain order.
- .. versionadded:: 0.25.0
-
Find the indices into a sorted array `arr` (a) such that, if the
corresponding elements in `value` were inserted before the indices,
the order of `arr` would be preserved.
@@ -1727,8 +1725,6 @@ def safe_sort(
codes equal to ``-1``. If ``verify=False``, it is assumed there
are no out of bound codes. Ignored when ``codes`` is None.
- .. versionadded:: 0.25.0
-
Returns
-------
ordered : AnyArrayLike
diff --git a/pandas/core/arrays/categorical.py b/pandas/core/arrays/categorical.py
index a9af210e08741..030b1344d5348 100644
--- a/pandas/core/arrays/categorical.py
+++ b/pandas/core/arrays/categorical.py
@@ -1568,9 +1568,7 @@ def argsort(
"""
Return the indices that would sort the Categorical.
- .. versionchanged:: 0.25.0
-
- Changed to sort missing values at the end.
+ Missing values are sorted at the end.
Parameters
----------
diff --git a/pandas/core/arrays/datetimelike.py b/pandas/core/arrays/datetimelike.py
index be20d825b0c80..c0a65a97d31f8 100644
--- a/pandas/core/arrays/datetimelike.py
+++ b/pandas/core/arrays/datetimelike.py
@@ -1594,8 +1594,6 @@ def mean(self, *, skipna: bool = True, axis: AxisInt | None = 0):
"""
Return the mean value of the Array.
- .. versionadded:: 0.25.0
-
Parameters
----------
skipna : bool, default True
diff --git a/pandas/core/arrays/interval.py b/pandas/core/arrays/interval.py
index 5ebd882cd3a13..3028e276361d3 100644
--- a/pandas/core/arrays/interval.py
+++ b/pandas/core/arrays/interval.py
@@ -1596,8 +1596,6 @@ def repeat(
Return a boolean mask whether the value is contained in the Intervals
of the %(klass)s.
- .. versionadded:: 0.25.0
-
Parameters
----------
other : scalar
diff --git a/pandas/core/arrays/sparse/accessor.py b/pandas/core/arrays/sparse/accessor.py
index 5149dc0973b79..1d5fd09034ae0 100644
--- a/pandas/core/arrays/sparse/accessor.py
+++ b/pandas/core/arrays/sparse/accessor.py
@@ -193,8 +193,6 @@ def to_dense(self) -> Series:
"""
Convert a Series from sparse values to dense.
- .. versionadded:: 0.25.0
-
Returns
-------
Series:
@@ -227,8 +225,6 @@ def to_dense(self) -> Series:
class SparseFrameAccessor(BaseAccessor, PandasDelegate):
"""
DataFrame accessor for sparse data.
-
- .. versionadded:: 0.25.0
"""
def _validate(self, data):
@@ -241,8 +237,6 @@ def from_spmatrix(cls, data, index=None, columns=None) -> DataFrame:
"""
Create a new DataFrame from a scipy sparse matrix.
- .. versionadded:: 0.25.0
-
Parameters
----------
data : scipy.sparse.spmatrix
@@ -297,8 +291,6 @@ def to_dense(self) -> DataFrame:
"""
Convert a DataFrame with sparse values to dense.
- .. versionadded:: 0.25.0
-
Returns
-------
DataFrame
@@ -322,8 +314,6 @@ def to_coo(self):
"""
Return the contents of the frame as a sparse SciPy COO matrix.
- .. versionadded:: 0.25.0
-
Returns
-------
coo_matrix : scipy.sparse.spmatrix
diff --git a/pandas/core/arrays/sparse/array.py b/pandas/core/arrays/sparse/array.py
index 7dce02f362b47..033917fe9eb2d 100644
--- a/pandas/core/arrays/sparse/array.py
+++ b/pandas/core/arrays/sparse/array.py
@@ -530,8 +530,6 @@ def from_spmatrix(cls: type[SparseArrayT], data: spmatrix) -> SparseArrayT:
"""
Create a SparseArray from a scipy.sparse matrix.
- .. versionadded:: 0.25.0
-
Parameters
----------
data : scipy.sparse.sp_matrix
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index eb3365d4f8410..acdfaba3d8d86 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -489,8 +489,7 @@ class DataFrame(NDFrame, OpsMixin):
occurs if data is a Series or a DataFrame itself. Alignment is done on
Series/DataFrame inputs.
- .. versionchanged:: 0.25.0
- If data is a list of dicts, column order follows insertion-order.
+ If data is a list of dicts, column order follows insertion-order.
index : Index or array-like
Index to use for resulting frame. Will default to RangeIndex if
@@ -3151,9 +3150,7 @@ def to_html(
header="Whether to print column labels, default True",
col_space_type="str or int, list or dict of int or str",
col_space="The minimum width of each column in CSS length "
- "units. An int is assumed to be px units.\n\n"
- " .. versionadded:: 0.25.0\n"
- " Ability to use str",
+ "units. An int is assumed to be px units.",
)
@Substitution(shared_params=fmt.common_docstring, returns=fmt.return_docstring)
def to_html(
@@ -4349,9 +4346,6 @@ def query(self, expr: str, inplace: bool = False, **kwargs) -> DataFrame | None:
For example, if one of your columns is called ``a a`` and you want
to sum it with ``b``, your query should be ```a a` + b``.
- .. versionadded:: 0.25.0
- Backtick quoting introduced.
-
.. versionadded:: 1.0.0
Expanding functionality of backtick quoting for more than only spaces.
@@ -8493,8 +8487,6 @@ def pivot(
If True: only show observed values for categorical groupers.
If False: show all values for categorical groupers.
- .. versionchanged:: 0.25.0
-
sort : bool, default True
Specifies if the result should be sorted.
@@ -8808,8 +8800,6 @@ def explode(
"""
Transform each element of a list-like to a row, replicating index values.
- .. versionadded:: 0.25.0
-
Parameters
----------
column : IndexLabel
diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index 0b55416d2bd7e..b962ca780f1d0 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -2916,8 +2916,6 @@ def union(self, other, sort=None):
If the Index objects are incompatible, both Index objects will be
cast to dtype('object') first.
- .. versionchanged:: 0.25.0
-
Parameters
----------
other : Index or array-like
diff --git a/pandas/core/indexes/range.py b/pandas/core/indexes/range.py
index ae88b85aa06e1..a2281c6fd9540 100644
--- a/pandas/core/indexes/range.py
+++ b/pandas/core/indexes/range.py
@@ -607,8 +607,6 @@ def _union(self, other: Index, sort):
increasing and other is fully contained in self. Otherwise, returns
an unsorted ``Int64Index``
- .. versionadded:: 0.25.0
-
Returns
-------
union : Index
diff --git a/pandas/core/reshape/merge.py b/pandas/core/reshape/merge.py
index b6fa5da857910..7e0052cf896e4 100644
--- a/pandas/core/reshape/merge.py
+++ b/pandas/core/reshape/merge.py
@@ -256,7 +256,6 @@ def merge_ordered(
`right` should be left as-is, with no suffix. At least one of the
values must not be None.
- .. versionchanged:: 0.25.0
how : {'left', 'right', 'outer', 'inner'}, default 'outer'
* left: use only keys from left frame (SQL: left outer join)
* right: use only keys from right frame (SQL: right outer join)
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 48bc07ca022ee..1e5f565934b50 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -4117,8 +4117,6 @@ def explode(self, ignore_index: bool = False) -> Series:
"""
Transform each element of a list-like to a row.
- .. versionadded:: 0.25.0
-
Parameters
----------
ignore_index : bool, default False
diff --git a/pandas/core/shared_docs.py b/pandas/core/shared_docs.py
index 07dc203e556e8..147fa622fdedc 100644
--- a/pandas/core/shared_docs.py
+++ b/pandas/core/shared_docs.py
@@ -551,9 +551,6 @@
The method to use when for replacement, when `to_replace` is a
scalar, list or tuple and `value` is ``None``.
- .. versionchanged:: 0.23.0
- Added to DataFrame.
-
Returns
-------
{klass}
diff --git a/pandas/core/strings/accessor.py b/pandas/core/strings/accessor.py
index 8cd4cb976503d..9ac2e538b6a80 100644
--- a/pandas/core/strings/accessor.py
+++ b/pandas/core/strings/accessor.py
@@ -456,7 +456,6 @@ def cat(
to match the length of the calling Series/Index). To disable
alignment, use `.values` on any Series/Index/DataFrame in `others`.
- .. versionadded:: 0.23.0
.. versionchanged:: 1.0.0
Changed default of `join` from None to `'left'`.
@@ -2978,7 +2977,7 @@ def len(self):
_doc_args["casefold"] = {
"type": "be casefolded",
"method": "casefold",
- "version": "\n .. versionadded:: 0.25.0\n",
+ "version": "",
}
@Appender(_shared_docs["casemethods"] % _doc_args["lower"])
diff --git a/pandas/core/tools/datetimes.py b/pandas/core/tools/datetimes.py
index 430343beb630b..fd0604903000e 100644
--- a/pandas/core/tools/datetimes.py
+++ b/pandas/core/tools/datetimes.py
@@ -824,9 +824,6 @@ def to_datetime(
out-of-bounds values will render the cache unusable and may slow down
parsing.
- .. versionchanged:: 0.25.0
- changed default value from :const:`False` to :const:`True`.
-
Returns
-------
datetime
diff --git a/pandas/io/formats/printing.py b/pandas/io/formats/printing.py
index 0338cb018049b..37351049f0f56 100644
--- a/pandas/io/formats/printing.py
+++ b/pandas/io/formats/printing.py
@@ -312,8 +312,6 @@ def format_object_summary(
If False, only break lines when the a line of values gets wider
than the display width.
- .. versionadded:: 0.25.0
-
Returns
-------
summary string
diff --git a/pandas/io/gbq.py b/pandas/io/gbq.py
index ec2ffbcf9682c..8db93f882fefd 100644
--- a/pandas/io/gbq.py
+++ b/pandas/io/gbq.py
@@ -130,7 +130,6 @@ def read_gbq(
package. It also requires the ``google-cloud-bigquery-storage`` and
``fastavro`` packages.
- .. versionadded:: 0.25.0
max_results : int, optional
If set, limit the maximum number of rows to fetch from the query
results.
diff --git a/pandas/io/json/_json.py b/pandas/io/json/_json.py
index 5f02822b68d6d..f2780d5fa6832 100644
--- a/pandas/io/json/_json.py
+++ b/pandas/io/json/_json.py
@@ -549,19 +549,11 @@ def read_json(
For all ``orient`` values except ``'table'``, default is True.
- .. versionchanged:: 0.25.0
-
- Not applicable for ``orient='table'``.
-
convert_axes : bool, default None
Try to convert the axes to the proper dtypes.
For all ``orient`` values except ``'table'``, default is True.
- .. versionchanged:: 0.25.0
-
- Not applicable for ``orient='table'``.
-
convert_dates : bool or list of str, default True
If True then default datelike columns may be converted (depending on
keep_default_dates).
diff --git a/pandas/io/json/_normalize.py b/pandas/io/json/_normalize.py
index 4b2d0d9beea3f..2e320c6cb9b8f 100644
--- a/pandas/io/json/_normalize.py
+++ b/pandas/io/json/_normalize.py
@@ -66,8 +66,6 @@ def nested_to_record(
max_level: int, optional, default: None
The max depth to normalize.
- .. versionadded:: 0.25.0
-
Returns
-------
d - dict or list of dicts, matching `ds`
@@ -289,8 +287,6 @@ def _json_normalize(
Max number of levels(depth of dict) to normalize.
if None, normalizes all levels.
- .. versionadded:: 0.25.0
-
Returns
-------
frame : DataFrame
diff --git a/pandas/io/parsers/readers.py b/pandas/io/parsers/readers.py
index d9c2403a19d0c..23a335ec0b965 100644
--- a/pandas/io/parsers/readers.py
+++ b/pandas/io/parsers/readers.py
@@ -267,7 +267,6 @@
conversion. May produce significant speed-up when parsing duplicate
date strings, especially ones with timezone offsets.
- .. versionadded:: 0.25.0
iterator : bool, default False
Return TextFileReader object for iteration or getting chunks with
``get_chunk()``.
diff --git a/pandas/io/spss.py b/pandas/io/spss.py
index 32efd6ca1180c..630b78497d32f 100644
--- a/pandas/io/spss.py
+++ b/pandas/io/spss.py
@@ -24,8 +24,6 @@ def read_spss(
"""
Load an SPSS file from the file path, returning a DataFrame.
- .. versionadded:: 0.25.0
-
Parameters
----------
path : str or Path
diff --git a/pandas/plotting/_core.py b/pandas/plotting/_core.py
index aef1eb5d59e68..c8b217319b91d 100644
--- a/pandas/plotting/_core.py
+++ b/pandas/plotting/_core.py
@@ -690,18 +690,12 @@ class PlotAccessor(PandasObject):
logx : bool or 'sym', default False
Use log scaling or symlog scaling on x axis.
- .. versionchanged:: 0.25.0
-
logy : bool or 'sym' default False
Use log scaling or symlog scaling on y axis.
- .. versionchanged:: 0.25.0
-
loglog : bool or 'sym', default False
Use log scaling or symlog scaling on both x and y axes.
- .. versionchanged:: 0.25.0
-
xticks : sequence
Values to use for the xticks.
yticks : sequence
| **Description**
Remove all `versionchanged` / `versionadded` directives for versions < `1.0.0`
**Checks**
- [x] closes https://github.com/noatamir/pydata-global-sprints/issues/10
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit). | https://api.github.com/repos/pandas-dev/pandas/pulls/50057 | 2022-12-04T16:25:18Z | 2022-12-05T09:24:09Z | 2022-12-05T09:24:09Z | 2022-12-05T09:24:21Z |
BUG: read_csv with engine pyarrow parsing multiple date columns | diff --git a/doc/source/whatsnew/v2.1.0.rst b/doc/source/whatsnew/v2.1.0.rst
index e1ac9e3309de7..67f5484044cd5 100644
--- a/doc/source/whatsnew/v2.1.0.rst
+++ b/doc/source/whatsnew/v2.1.0.rst
@@ -386,6 +386,7 @@ I/O
- :meth:`DataFrame.to_orc` now raising ``ValueError`` when non-default :class:`Index` is given (:issue:`51828`)
- :meth:`DataFrame.to_sql` now raising ``ValueError`` when the name param is left empty while using SQLAlchemy to connect (:issue:`52675`)
- Bug in :func:`json_normalize`, fix json_normalize cannot parse metadata fields list type (:issue:`37782`)
+- Bug in :func:`read_csv` where it would error when ``parse_dates`` was set to a list or dictionary with ``engine="pyarrow"`` (:issue:`47961`)
- Bug in :func:`read_hdf` not properly closing store after a ``IndexError`` is raised (:issue:`52781`)
- Bug in :func:`read_html`, style elements were read into DataFrames (:issue:`52197`)
- Bug in :func:`read_html`, tail texts were removed together with elements containing ``display:none`` style (:issue:`51629`)
diff --git a/pandas/io/parsers/arrow_parser_wrapper.py b/pandas/io/parsers/arrow_parser_wrapper.py
index e106db224c3dc..d34b3ae1372fd 100644
--- a/pandas/io/parsers/arrow_parser_wrapper.py
+++ b/pandas/io/parsers/arrow_parser_wrapper.py
@@ -58,6 +58,21 @@ def _get_pyarrow_options(self) -> None:
if pandas_name in self.kwds and self.kwds.get(pandas_name) is not None:
self.kwds[pyarrow_name] = self.kwds.pop(pandas_name)
+ # Date format handling
+ # If we get a string, we need to convert it into a list for pyarrow
+ # If we get a dict, we want to parse those separately
+ date_format = self.date_format
+ if isinstance(date_format, str):
+ date_format = [date_format]
+ else:
+ # In case of dict, we don't want to propagate through, so
+ # just set to pyarrow default of None
+
+ # Ideally, in future we disable pyarrow dtype inference (read in as string)
+ # to prevent misreads.
+ date_format = None
+ self.kwds["timestamp_parsers"] = date_format
+
self.parse_options = {
option_name: option_value
for option_name, option_value in self.kwds.items()
@@ -76,6 +91,7 @@ def _get_pyarrow_options(self) -> None:
"true_values",
"false_values",
"decimal_point",
+ "timestamp_parsers",
)
}
self.convert_options["strings_can_be_null"] = "" in self.kwds["null_values"]
@@ -116,7 +132,7 @@ def _finalize_pandas_output(self, frame: DataFrame) -> DataFrame:
multi_index_named = False
frame.columns = self.names
# we only need the frame not the names
- frame.columns, frame = self._do_date_conversions(frame.columns, frame)
+ _, frame = self._do_date_conversions(frame.columns, frame)
if self.index_col is not None:
index_to_set = self.index_col.copy()
for i, item in enumerate(self.index_col):
diff --git a/pandas/io/parsers/base_parser.py b/pandas/io/parsers/base_parser.py
index 564339cefa3aa..3208286489fbe 100644
--- a/pandas/io/parsers/base_parser.py
+++ b/pandas/io/parsers/base_parser.py
@@ -61,8 +61,10 @@
from pandas import (
ArrowDtype,
+ DataFrame,
DatetimeIndex,
StringDtype,
+ concat,
)
from pandas.core import algorithms
from pandas.core.arrays import (
@@ -92,8 +94,6 @@
Scalar,
)
- from pandas import DataFrame
-
class ParserBase:
class BadLineHandleMethod(Enum):
@@ -1304,7 +1304,10 @@ def _isindex(colspec):
new_cols.append(new_name)
date_cols.update(old_names)
- data_dict.update(new_data)
+ if isinstance(data_dict, DataFrame):
+ data_dict = concat([DataFrame(new_data), data_dict], axis=1, copy=False)
+ else:
+ data_dict.update(new_data)
new_cols.extend(columns)
if not keep_date_col:
diff --git a/pandas/tests/io/parser/test_parse_dates.py b/pandas/tests/io/parser/test_parse_dates.py
index 81de4f13de81d..b354f7d9da94d 100644
--- a/pandas/tests/io/parser/test_parse_dates.py
+++ b/pandas/tests/io/parser/test_parse_dates.py
@@ -139,9 +139,8 @@ def test_separator_date_conflict(all_parsers):
tm.assert_frame_equal(df, expected)
-@xfail_pyarrow
@pytest.mark.parametrize("keep_date_col", [True, False])
-def test_multiple_date_col_custom(all_parsers, keep_date_col):
+def test_multiple_date_col_custom(all_parsers, keep_date_col, request):
data = """\
KORD,19990127, 19:00:00, 18:56:00, 0.8100, 2.8100, 7.2000, 0.0000, 280.0000
KORD,19990127, 20:00:00, 19:56:00, 0.0100, 2.2100, 7.2000, 0.0000, 260.0000
@@ -152,6 +151,14 @@ def test_multiple_date_col_custom(all_parsers, keep_date_col):
"""
parser = all_parsers
+ if keep_date_col and parser.engine == "pyarrow":
+ # For this to pass, we need to disable auto-inference on the date columns
+ # in parse_dates. We have no way of doing this though
+ mark = pytest.mark.xfail(
+ reason="pyarrow doesn't support disabling auto-inference on column numbers."
+ )
+ request.node.add_marker(mark)
+
def date_parser(*date_cols):
"""
Test date parser.
@@ -301,9 +308,8 @@ def test_concat_date_col_fail(container, dim):
parsing.concat_date_cols(date_cols)
-@xfail_pyarrow
@pytest.mark.parametrize("keep_date_col", [True, False])
-def test_multiple_date_col(all_parsers, keep_date_col):
+def test_multiple_date_col(all_parsers, keep_date_col, request):
data = """\
KORD,19990127, 19:00:00, 18:56:00, 0.8100, 2.8100, 7.2000, 0.0000, 280.0000
KORD,19990127, 20:00:00, 19:56:00, 0.0100, 2.2100, 7.2000, 0.0000, 260.0000
@@ -313,6 +319,15 @@ def test_multiple_date_col(all_parsers, keep_date_col):
KORD,19990127, 23:00:00, 22:56:00, -0.5900, 1.7100, 4.6000, 0.0000, 280.0000
"""
parser = all_parsers
+
+ if keep_date_col and parser.engine == "pyarrow":
+ # For this to pass, we need to disable auto-inference on the date columns
+ # in parse_dates. We have no way of doing this though
+ mark = pytest.mark.xfail(
+ reason="pyarrow doesn't support disabling auto-inference on column numbers."
+ )
+ request.node.add_marker(mark)
+
kwds = {
"header": None,
"parse_dates": [[1, 2], [1, 3]],
@@ -469,7 +484,6 @@ def test_date_col_as_index_col(all_parsers):
tm.assert_frame_equal(result, expected)
-@xfail_pyarrow
def test_multiple_date_cols_int_cast(all_parsers):
data = (
"KORD,19990127, 19:00:00, 18:56:00, 0.8100\n"
@@ -530,7 +544,6 @@ def test_multiple_date_cols_int_cast(all_parsers):
tm.assert_frame_equal(result, expected)
-@xfail_pyarrow
def test_multiple_date_col_timestamp_parse(all_parsers):
parser = all_parsers
data = """05/31/2012,15:30:00.029,1306.25,1,E,0,,1306.25
@@ -1168,7 +1181,6 @@ def test_multiple_date_cols_chunked(all_parsers):
tm.assert_frame_equal(chunks[2], expected[4:])
-@xfail_pyarrow
def test_multiple_date_col_named_index_compat(all_parsers):
parser = all_parsers
data = """\
@@ -1192,7 +1204,6 @@ def test_multiple_date_col_named_index_compat(all_parsers):
tm.assert_frame_equal(with_indices, with_names)
-@xfail_pyarrow
def test_multiple_date_col_multiple_index_compat(all_parsers):
parser = all_parsers
data = """\
@@ -1408,7 +1419,6 @@ def test_parse_date_time_multi_level_column_name(all_parsers):
tm.assert_frame_equal(result, expected)
-@xfail_pyarrow
@pytest.mark.parametrize(
"data,kwargs,expected",
[
@@ -1498,9 +1508,6 @@ def test_parse_date_time(all_parsers, data, kwargs, expected):
tm.assert_frame_equal(result, expected)
-@xfail_pyarrow
-# From date_parser fallback behavior
-@pytest.mark.filterwarnings("ignore:elementwise comparison:FutureWarning")
def test_parse_date_fields(all_parsers):
parser = all_parsers
data = "year,month,day,a\n2001,01,10,10.\n2001,02,1,11."
@@ -1510,7 +1517,7 @@ def test_parse_date_fields(all_parsers):
StringIO(data),
header=0,
parse_dates={"ymd": [0, 1, 2]},
- date_parser=pd.to_datetime,
+ date_parser=lambda x: x,
)
expected = DataFrame(
@@ -1520,7 +1527,6 @@ def test_parse_date_fields(all_parsers):
tm.assert_frame_equal(result, expected)
-@xfail_pyarrow
@pytest.mark.parametrize(
("key", "value", "warn"),
[
@@ -1557,7 +1563,6 @@ def test_parse_date_all_fields(all_parsers, key, value, warn):
tm.assert_frame_equal(result, expected)
-@xfail_pyarrow
@pytest.mark.parametrize(
("key", "value", "warn"),
[
@@ -1594,7 +1599,6 @@ def test_datetime_fractional_seconds(all_parsers, key, value, warn):
tm.assert_frame_equal(result, expected)
-@xfail_pyarrow
def test_generic(all_parsers):
parser = all_parsers
data = "year,month,day,a\n2001,01,10,10.\n2001,02,1,11."
| - [ ] closes #47961
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/50056 | 2022-12-04T16:23:37Z | 2023-05-18T21:21:05Z | 2023-05-18T21:21:05Z | 2023-05-18T21:21:17Z |
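The record above fixes `parse_dates` handling for `engine="pyarrow"` in `read_csv`. A minimal sketch of the call pattern involved, using the default engine so it runs without pyarrow installed (the CSV data and column names here are illustrative, not taken from the PR's tests):

```python
import io

import pandas as pd

# Illustrative CSV with one date column and one value column.
data = "date,value\n2022-12-04,1\n2022-12-05,2\n"

# parse_dates as a list asks the parser to convert those columns to datetimes;
# with the fix in the record above, engine="pyarrow" also accepts the
# list/dict forms of parse_dates instead of erroring.
df = pd.read_csv(io.StringIO(data), parse_dates=["date"])
print(df["date"].dtype)  # datetime64[ns]
```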
CI: Pin pyarrow smaller than 10 | diff --git a/.github/actions/setup-conda/action.yml b/.github/actions/setup-conda/action.yml
index 002d0020c2df1..7d1e54052f938 100644
--- a/.github/actions/setup-conda/action.yml
+++ b/.github/actions/setup-conda/action.yml
@@ -18,7 +18,7 @@ runs:
- name: Set Arrow version in ${{ inputs.environment-file }} to ${{ inputs.pyarrow-version }}
run: |
grep -q ' - pyarrow' ${{ inputs.environment-file }}
- sed -i"" -e "s/ - pyarrow/ - pyarrow=${{ inputs.pyarrow-version }}/" ${{ inputs.environment-file }}
+ sed -i"" -e "s/ - pyarrow<10/ - pyarrow=${{ inputs.pyarrow-version }}/" ${{ inputs.environment-file }}
cat ${{ inputs.environment-file }}
shell: bash
if: ${{ inputs.pyarrow-version }}
diff --git a/ci/deps/actions-310.yaml b/ci/deps/actions-310.yaml
index 1f6f73f3c963f..25623d9826030 100644
--- a/ci/deps/actions-310.yaml
+++ b/ci/deps/actions-310.yaml
@@ -42,7 +42,7 @@ dependencies:
- psycopg2
- pymysql
- pytables
- - pyarrow
+ - pyarrow<10
- pyreadstat
- python-snappy
- pyxlsb
diff --git a/ci/deps/actions-38-downstream_compat.yaml b/ci/deps/actions-38-downstream_compat.yaml
index 1c59b9db9b1fa..15ce02204ee99 100644
--- a/ci/deps/actions-38-downstream_compat.yaml
+++ b/ci/deps/actions-38-downstream_compat.yaml
@@ -41,7 +41,7 @@ dependencies:
- odfpy
- pandas-gbq
- psycopg2
- - pyarrow
+ - pyarrow<10
- pymysql
- pyreadstat
- pytables
diff --git a/ci/deps/actions-38.yaml b/ci/deps/actions-38.yaml
index 48b9d18771afb..c93af914c2277 100644
--- a/ci/deps/actions-38.yaml
+++ b/ci/deps/actions-38.yaml
@@ -40,7 +40,7 @@ dependencies:
- odfpy
- pandas-gbq
- psycopg2
- - pyarrow
+ - pyarrow<10
- pymysql
- pyreadstat
- pytables
diff --git a/ci/deps/actions-39.yaml b/ci/deps/actions-39.yaml
index 59191ad107d12..f6240ae7611b9 100644
--- a/ci/deps/actions-39.yaml
+++ b/ci/deps/actions-39.yaml
@@ -41,7 +41,7 @@ dependencies:
- pandas-gbq
- psycopg2
- pymysql
- - pyarrow
+ - pyarrow<10
- pyreadstat
- pytables
- python-snappy
diff --git a/ci/deps/circle-38-arm64.yaml b/ci/deps/circle-38-arm64.yaml
index 63547d3521489..2095850aa4d3a 100644
--- a/ci/deps/circle-38-arm64.yaml
+++ b/ci/deps/circle-38-arm64.yaml
@@ -40,7 +40,7 @@ dependencies:
- odfpy
- pandas-gbq
- psycopg2
- - pyarrow
+ - pyarrow<10
- pymysql
# Not provided on ARM
#- pyreadstat
diff --git a/environment.yml b/environment.yml
index 87c5f5d031fcf..70884f4ca98a3 100644
--- a/environment.yml
+++ b/environment.yml
@@ -42,7 +42,7 @@ dependencies:
- odfpy
- pandas-gbq
- psycopg2
- - pyarrow
+ - pyarrow<10
- pymysql
- pyreadstat
- pytables
diff --git a/requirements-dev.txt b/requirements-dev.txt
index debbdb635901c..caa3dd49add3b 100644
--- a/requirements-dev.txt
+++ b/requirements-dev.txt
@@ -31,7 +31,7 @@ openpyxl
odfpy
pandas-gbq
psycopg2-binary
-pyarrow
+pyarrow<10
pymysql
pyreadstat
tables
Let's see if this fixes CI; matplotlib is a bit weird. | https://api.github.com/repos/pandas-dev/pandas/pulls/50055 | 2022-12-04T16:20:55Z | 2022-12-04T17:27:27Z | 2022-12-04T17:27:27Z | 2022-12-15T13:42:58Z |
DEPR: remove Index._is_backward_compat_public_numeric_index | diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index 9e00278f85cbc..999ef0d7ac68c 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -90,7 +90,6 @@
ensure_platform_int,
is_bool_dtype,
is_categorical_dtype,
- is_complex_dtype,
is_dtype_equal,
is_ea_or_datetimelike_dtype,
is_extension_array_dtype,
@@ -107,7 +106,6 @@
is_scalar,
is_signed_integer_dtype,
is_string_dtype,
- is_unsigned_integer_dtype,
needs_i8_conversion,
pandas_dtype,
validate_all_hashable,
@@ -125,7 +123,6 @@
ABCDatetimeIndex,
ABCMultiIndex,
ABCPeriodIndex,
- ABCRangeIndex,
ABCSeries,
ABCTimedeltaIndex,
)
@@ -392,11 +389,6 @@ def _outer_indexer(
_attributes: list[str] = ["name"]
_can_hold_strings: bool = True
- # Whether this index is a NumericIndex, but not a Int64Index, Float64Index,
- # UInt64Index or RangeIndex. Needed for backwards compat. Remove this attribute and
- # associated code in pandas 2.0.
- _is_backward_compat_public_numeric_index: bool = False
-
@property
def _engine_type(
self,
@@ -446,13 +438,6 @@ def __new__(
elif is_ea_or_datetimelike_dtype(data_dtype):
pass
- # index-like
- elif (
- isinstance(data, Index)
- and data._is_backward_compat_public_numeric_index
- and dtype is None
- ):
- return data._constructor(data, name=name, copy=copy)
elif isinstance(data, (np.ndarray, Index, ABCSeries)):
if isinstance(data, ABCMultiIndex):
@@ -981,34 +966,6 @@ def astype(self, dtype, copy: bool = True):
new_values = astype_array(values, dtype=dtype, copy=copy)
# pass copy=False because any copying will be done in the astype above
- if not self._is_backward_compat_public_numeric_index and not isinstance(
- self, ABCRangeIndex
- ):
- # this block is needed so e.g. Int64Index.astype("int32") returns
- # Int64Index and not a NumericIndex with dtype int32.
- # When Int64Index etc. are removed from the code base, removed this also.
- if (
- isinstance(dtype, np.dtype)
- and is_numeric_dtype(dtype)
- and not is_complex_dtype(dtype)
- ):
- from pandas.core.api import (
- Float64Index,
- Int64Index,
- UInt64Index,
- )
-
- klass: type[Index]
- if is_signed_integer_dtype(dtype):
- klass = Int64Index
- elif is_unsigned_integer_dtype(dtype):
- klass = UInt64Index
- elif is_float_dtype(dtype):
- klass = Float64Index
- else:
- klass = Index
- return klass(new_values, name=self.name, dtype=dtype, copy=False)
-
return Index(new_values, name=self.name, dtype=new_values.dtype, copy=False)
_index_shared_docs[
@@ -5059,10 +5016,6 @@ def _concat(self, to_concat: list[Index], name: Hashable) -> Index:
result = concat_compat(to_concat_vals)
- is_numeric = result.dtype.kind in ["i", "u", "f"]
- if self._is_backward_compat_public_numeric_index and is_numeric:
- return type(self)._simple_new(result, name=name)
-
return Index._with_infer(result, name=name)
def putmask(self, mask, value) -> Index:
@@ -6460,12 +6413,7 @@ def insert(self, loc: int, item) -> Index:
loc = loc if loc >= 0 else loc - 1
new_values[loc] = item
- if self._typ == "numericindex":
- # Use self._constructor instead of Index to retain NumericIndex GH#43921
- # TODO(2.0) can use Index instead of self._constructor
- return self._constructor(new_values, name=self.name)
- else:
- return Index._with_infer(new_values, name=self.name)
+ return Index._with_infer(new_values, name=self.name)
def drop(
self,
diff --git a/pandas/core/indexes/category.py b/pandas/core/indexes/category.py
index 6c172a2034524..ff24f97106c51 100644
--- a/pandas/core/indexes/category.py
+++ b/pandas/core/indexes/category.py
@@ -21,7 +21,6 @@
from pandas.core.dtypes.common import (
is_categorical_dtype,
is_scalar,
- pandas_dtype,
)
from pandas.core.dtypes.missing import (
is_valid_na_for_dtype,
@@ -274,30 +273,6 @@ def _is_dtype_compat(self, other) -> Categorical:
return other
- @doc(Index.astype)
- def astype(self, dtype: Dtype, copy: bool = True) -> Index:
- from pandas.core.api import NumericIndex
-
- dtype = pandas_dtype(dtype)
-
- categories = self.categories
- # the super method always returns Int64Index, UInt64Index and Float64Index
- # but if the categories are a NumericIndex with dtype float32, we want to
- # return an index with the same dtype as self.categories.
- if categories._is_backward_compat_public_numeric_index:
- assert isinstance(categories, NumericIndex) # mypy complaint fix
- try:
- categories._validate_dtype(dtype)
- except ValueError:
- pass
- else:
- new_values = self._data.astype(dtype, copy=copy)
- # pass copy=False because any copying has been done in the
- # _data.astype call above
- return categories._constructor(new_values, name=self.name, copy=False)
-
- return super().astype(dtype, copy=copy)
-
def equals(self, other: object) -> bool:
"""
Determine if two CategoricalIndex objects contain the same elements.
diff --git a/pandas/core/indexes/numeric.py b/pandas/core/indexes/numeric.py
index af3ff54bb9e2b..3ea7b30f7e9f1 100644
--- a/pandas/core/indexes/numeric.py
+++ b/pandas/core/indexes/numeric.py
@@ -87,7 +87,6 @@ class NumericIndex(Index):
"numeric type",
)
_can_hold_strings = False
- _is_backward_compat_public_numeric_index: bool = True
_engine_types: dict[np.dtype, type[libindex.IndexEngine]] = {
np.dtype(np.int8): libindex.Int8Engine,
@@ -214,12 +213,7 @@ def _ensure_dtype(cls, dtype: Dtype | None) -> np.dtype | None:
# float16 not supported (no indexing engine)
raise NotImplementedError("float16 indexes are not supported")
- if cls._is_backward_compat_public_numeric_index:
- # dtype for NumericIndex
- return dtype
- else:
- # dtype for Int64Index, UInt64Index etc. Needed for backwards compat.
- return cls._default_dtype
+ return dtype
# ----------------------------------------------------------------
# Indexing Methods
@@ -415,7 +409,6 @@ class Float64Index(NumericIndex):
_typ = "float64index"
_default_dtype = np.dtype(np.float64)
_dtype_validation_metadata = (is_float_dtype, "float")
- _is_backward_compat_public_numeric_index: bool = False
@property
def _engine_type(self) -> type[libindex.Float64Engine]:
diff --git a/pandas/core/indexes/range.py b/pandas/core/indexes/range.py
index e17a0d070be6a..73e4a51ca3e7c 100644
--- a/pandas/core/indexes/range.py
+++ b/pandas/core/indexes/range.py
@@ -103,7 +103,6 @@ class RangeIndex(NumericIndex):
_typ = "rangeindex"
_dtype_validation_metadata = (is_signed_integer_dtype, "signed integer")
_range: range
- _is_backward_compat_public_numeric_index: bool = False
@property
def _engine_type(self) -> type[libindex.Int64Engine]:
diff --git a/pandas/tests/base/test_unique.py b/pandas/tests/base/test_unique.py
index b1b0479f397b1..624d3d68d37fd 100644
--- a/pandas/tests/base/test_unique.py
+++ b/pandas/tests/base/test_unique.py
@@ -5,7 +5,6 @@
import pandas as pd
import pandas._testing as tm
-from pandas.core.api import NumericIndex
from pandas.tests.base.common import allow_na_ops
@@ -20,9 +19,6 @@ def test_unique(index_or_series_obj):
expected = pd.MultiIndex.from_tuples(unique_values)
expected.names = obj.names
tm.assert_index_equal(result, expected, exact=True)
- elif isinstance(obj, pd.Index) and obj._is_backward_compat_public_numeric_index:
- expected = NumericIndex(unique_values, dtype=obj.dtype)
- tm.assert_index_equal(result, expected, exact=True)
elif isinstance(obj, pd.Index):
expected = pd.Index(unique_values, dtype=obj.dtype)
if is_datetime64tz_dtype(obj.dtype):
@@ -58,10 +54,7 @@ def test_unique_null(null_obj, index_or_series_obj):
unique_values_not_null = [val for val in unique_values_raw if not pd.isnull(val)]
unique_values = [null_obj] + unique_values_not_null
- if isinstance(obj, pd.Index) and obj._is_backward_compat_public_numeric_index:
- expected = NumericIndex(unique_values, dtype=obj.dtype)
- tm.assert_index_equal(result, expected, exact=True)
- elif isinstance(obj, pd.Index):
+ if isinstance(obj, pd.Index):
expected = pd.Index(unique_values, dtype=obj.dtype)
if is_datetime64tz_dtype(obj.dtype):
result = result.normalize()
diff --git a/pandas/tests/indexes/common.py b/pandas/tests/indexes/common.py
index 6b0046dbe619c..825c703b7972e 100644
--- a/pandas/tests/indexes/common.py
+++ b/pandas/tests/indexes/common.py
@@ -624,14 +624,10 @@ def test_map_dictlike(self, mapper, simple_index):
# empty mappable
dtype = None
- if idx._is_backward_compat_public_numeric_index:
- new_index_cls = NumericIndex
- if idx.dtype.kind == "f":
- dtype = idx.dtype
- else:
- new_index_cls = Float64Index
+ if idx.dtype.kind == "f":
+ dtype = idx.dtype
- expected = new_index_cls([np.nan] * len(idx), dtype=dtype)
+ expected = Index([np.nan] * len(idx), dtype=dtype)
result = idx.map(mapper(expected, idx))
tm.assert_index_equal(result, expected)
@@ -880,13 +876,9 @@ def test_insert_na(self, nulls_fixture, simple_index):
expected = Index([index[0], pd.NaT] + list(index[1:]), dtype=object)
else:
expected = Index([index[0], np.nan] + list(index[1:]))
-
- if index._is_backward_compat_public_numeric_index:
- # GH#43921 we preserve NumericIndex
- if index.dtype.kind == "f":
- expected = NumericIndex(expected, dtype=index.dtype)
- else:
- expected = NumericIndex(expected)
+ # GH#43921 we preserve float dtype
+ if index.dtype.kind == "f":
+ expected = Index(expected, dtype=index.dtype)
result = index.insert(1, na_val)
tm.assert_index_equal(result, expected, exact=True)
diff --git a/pandas/tests/indexes/test_base.py b/pandas/tests/indexes/test_base.py
index af15cbc2f7929..1012734bab234 100644
--- a/pandas/tests/indexes/test_base.py
+++ b/pandas/tests/indexes/test_base.py
@@ -18,6 +18,8 @@
)
from pandas.util._test_decorators import async_mark
+from pandas.core.dtypes.common import is_numeric_dtype
+
import pandas as pd
from pandas import (
CategoricalIndex,
@@ -592,13 +594,11 @@ def test_map_dictlike(self, index, mapper, request):
if index.empty:
# to match proper result coercion for uints
expected = Index([])
- elif index._is_backward_compat_public_numeric_index:
+ elif is_numeric_dtype(index.dtype):
expected = index._constructor(rng, dtype=index.dtype)
elif type(index) is Index and index.dtype != object:
# i.e. EA-backed, for now just Nullable
expected = Index(rng, dtype=index.dtype)
- elif index.dtype.kind == "u":
- expected = Index(rng, dtype=index.dtype)
else:
expected = Index(rng)
| Removes the attribute `Index._is_backward_compat_public_numeric_index` and related stuff. This is the last PR before actually removing Int64/UInt64/Float64Index from the code base.
This PR builds on top of #49560 (which is not yet merged), so please exclude the first commit from review of this PR.
I'd also truly appreciate it if someone could review #49560, as it's quite a heavy PR with lots of dtype changes. | https://api.github.com/repos/pandas-dev/pandas/pulls/50052 | 2022-12-04T11:43:44Z | 2023-01-16T17:44:37Z | 2023-01-16T17:44:37Z | 2023-01-16T17:55:45Z |
DOC: Add tips on where to locate test | diff --git a/doc/source/development/contributing_codebase.rst b/doc/source/development/contributing_codebase.rst
index 91f3d51460f99..b05f026bbbb44 100644
--- a/doc/source/development/contributing_codebase.rst
+++ b/doc/source/development/contributing_codebase.rst
@@ -338,7 +338,22 @@ Writing tests
All tests should go into the ``tests`` subdirectory of the specific package.
This folder contains many current examples of tests, and we suggest looking to these for
-inspiration. Ideally, there should be one, and only one, obvious place for a test to reside.
+inspiration.
+
+As a general tip, you can use the search functionality in your integrated development
+environment (IDE) or the git grep command in a terminal to find test files in which the method
+is called. If you are unsure of the best location to put your test, take your best guess,
+but note that reviewers may request that you move the test to a different location.
+
+To use git grep, you can run the following command in a terminal:
+
+``git grep "function_name("``
+
+This will search through all files in your repository for the text ``function_name(``.
+This can be a useful way to quickly locate the function in the
+codebase and determine the best location to add a test for it.
+
+Ideally, there should be one, and only one, obvious place for a test to reside.
Until we reach that ideal, these are some rules of thumb for where a test should
be located.
| Closes #50014 | https://api.github.com/repos/pandas-dev/pandas/pulls/50050 | 2022-12-04T11:28:45Z | 2022-12-06T21:07:41Z | 2022-12-06T21:07:41Z | 2022-12-06T21:07:50Z |
DOC: Improve groupby().ngroup() explanation for missing groups | diff --git a/pandas/core/groupby/groupby.py b/pandas/core/groupby/groupby.py
index 659ca228bdcb0..9a813e866e8d0 100644
--- a/pandas/core/groupby/groupby.py
+++ b/pandas/core/groupby/groupby.py
@@ -3212,6 +3212,9 @@ def ngroup(self, ascending: bool = True):
would be seen when iterating over the groupby object, not the
order they are first observed.
+ Groups with missing keys (where `pd.isna()` is True) will be labeled with `NaN`
+ and will be skipped from the count.
+
Parameters
----------
ascending : bool, default True
@@ -3228,38 +3231,38 @@ def ngroup(self, ascending: bool = True):
Examples
--------
- >>> df = pd.DataFrame({"A": list("aaabba")})
+ >>> df = pd.DataFrame({"color": ["red", None, "red", "blue", "blue", "red"]})
>>> df
- A
- 0 a
- 1 a
- 2 a
- 3 b
- 4 b
- 5 a
- >>> df.groupby('A').ngroup()
- 0 0
- 1 0
- 2 0
- 3 1
- 4 1
- 5 0
- dtype: int64
- >>> df.groupby('A').ngroup(ascending=False)
+ color
+ 0 red
+ 1 None
+ 2 red
+ 3 blue
+ 4 blue
+ 5 red
+ >>> df.groupby("color").ngroup()
+ 0 1.0
+ 1 NaN
+ 2 1.0
+ 3 0.0
+ 4 0.0
+ 5 1.0
+ dtype: float64
+ >>> df.groupby("color", dropna=False).ngroup()
0 1
- 1 1
+ 1 2
2 1
3 0
4 0
5 1
dtype: int64
- >>> df.groupby(["A", [1,1,2,3,2,1]]).ngroup()
- 0 0
+ >>> df.groupby("color", dropna=False).ngroup(ascending=False)
+ 0 1
1 0
2 1
- 3 3
+ 3 2
4 2
- 5 0
+ 5 1
dtype: int64
"""
with self._group_selection_context():
| The exact prose and examples are open to improvements if you have ideas.
I figure this didn't require a separate issue; please let me know if I'm missing some step in the process. | https://api.github.com/repos/pandas-dev/pandas/pulls/50049 | 2022-12-04T04:43:20Z | 2023-01-03T21:14:37Z | 2023-01-03T21:14:37Z | 2023-01-04T02:45:52Z |
ENH: Add nullable keyword to read_sql | diff --git a/doc/source/whatsnew/v2.0.0.rst b/doc/source/whatsnew/v2.0.0.rst
index d6e0bb2ae0830..2db5f977721d8 100644
--- a/doc/source/whatsnew/v2.0.0.rst
+++ b/doc/source/whatsnew/v2.0.0.rst
@@ -37,6 +37,7 @@ The ``use_nullable_dtypes`` keyword argument has been expanded to the following
* :func:`read_csv`
* :func:`read_excel`
+* :func:`read_sql`
Additionally a new global configuration, ``io.nullable_backend`` can now be used in conjunction with the parameter ``use_nullable_dtypes=True`` in the following functions
to select the nullable dtypes implementation.
diff --git a/pandas/core/internals/construction.py b/pandas/core/internals/construction.py
index 07fab0080a747..9bdfd7991689b 100644
--- a/pandas/core/internals/construction.py
+++ b/pandas/core/internals/construction.py
@@ -31,9 +31,11 @@
)
from pandas.core.dtypes.common import (
is_1d_only_ea_dtype,
+ is_bool_dtype,
is_datetime_or_timedelta_dtype,
is_dtype_equal,
is_extension_array_dtype,
+ is_float_dtype,
is_integer_dtype,
is_list_like,
is_named_tuple,
@@ -49,7 +51,13 @@
algorithms,
common as com,
)
-from pandas.core.arrays import ExtensionArray
+from pandas.core.arrays import (
+ BooleanArray,
+ ExtensionArray,
+ FloatingArray,
+ IntegerArray,
+)
+from pandas.core.arrays.string_ import StringDtype
from pandas.core.construction import (
ensure_wrapped_if_datetimelike,
extract_array,
@@ -900,7 +908,7 @@ def _finalize_columns_and_data(
raise ValueError(err) from err
if len(contents) and contents[0].dtype == np.object_:
- contents = _convert_object_array(contents, dtype=dtype)
+ contents = convert_object_array(contents, dtype=dtype)
return contents, columns
@@ -963,8 +971,11 @@ def _validate_or_indexify_columns(
return columns
-def _convert_object_array(
- content: list[npt.NDArray[np.object_]], dtype: DtypeObj | None
+def convert_object_array(
+ content: list[npt.NDArray[np.object_]],
+ dtype: DtypeObj | None,
+ use_nullable_dtypes: bool = False,
+ coerce_float: bool = False,
) -> list[ArrayLike]:
"""
Internal function to convert object array.
@@ -973,20 +984,37 @@ def _convert_object_array(
----------
content: List[np.ndarray]
dtype: np.dtype or ExtensionDtype
+ use_nullable_dtypes: Controls if nullable dtypes are returned.
+ coerce_float: Cast floats that are integers to int.
Returns
-------
List[ArrayLike]
"""
# provide soft conversion of object dtypes
+
def convert(arr):
if dtype != np.dtype("O"):
- arr = lib.maybe_convert_objects(arr)
+ arr = lib.maybe_convert_objects(
+ arr,
+ try_float=coerce_float,
+ convert_to_nullable_dtype=use_nullable_dtypes,
+ )
if dtype is None:
if arr.dtype == np.dtype("O"):
# i.e. maybe_convert_objects didn't convert
arr = maybe_infer_to_datetimelike(arr)
+ if use_nullable_dtypes and arr.dtype == np.dtype("O"):
+ arr = StringDtype().construct_array_type()._from_sequence(arr)
+ elif use_nullable_dtypes and isinstance(arr, np.ndarray):
+ if is_integer_dtype(arr.dtype):
+ arr = IntegerArray(arr, np.zeros(arr.shape, dtype=np.bool_))
+ elif is_bool_dtype(arr.dtype):
+ arr = BooleanArray(arr, np.zeros(arr.shape, dtype=np.bool_))
+ elif is_float_dtype(arr.dtype):
+ arr = FloatingArray(arr, np.isnan(arr))
+
elif isinstance(dtype, ExtensionDtype):
# TODO: test(s) that get here
# TODO: try to de-duplicate this convert function with
diff --git a/pandas/io/sql.py b/pandas/io/sql.py
index e3510f71bd0cd..4c1dca180c6e9 100644
--- a/pandas/io/sql.py
+++ b/pandas/io/sql.py
@@ -58,6 +58,7 @@
)
from pandas.core.base import PandasObject
import pandas.core.common as com
+from pandas.core.internals.construction import convert_object_array
from pandas.core.tools.datetimes import to_datetime
if TYPE_CHECKING:
@@ -139,6 +140,25 @@ def _parse_date_columns(data_frame, parse_dates):
return data_frame
+def _convert_arrays_to_dataframe(
+ data,
+ columns,
+ coerce_float: bool = True,
+ use_nullable_dtypes: bool = False,
+) -> DataFrame:
+ content = lib.to_object_array_tuples(data)
+ arrays = convert_object_array(
+ list(content.T),
+ dtype=None,
+ coerce_float=coerce_float,
+ use_nullable_dtypes=use_nullable_dtypes,
+ )
+ if arrays:
+ return DataFrame(dict(zip(columns, arrays)))
+ else:
+ return DataFrame(columns=columns)
+
+
def _wrap_result(
data,
columns,
@@ -146,9 +166,12 @@ def _wrap_result(
coerce_float: bool = True,
parse_dates=None,
dtype: DtypeArg | None = None,
+ use_nullable_dtypes: bool = False,
):
"""Wrap result set of query in a DataFrame."""
- frame = DataFrame.from_records(data, columns=columns, coerce_float=coerce_float)
+ frame = _convert_arrays_to_dataframe(
+ data, columns, coerce_float, use_nullable_dtypes
+ )
if dtype:
frame = frame.astype(dtype)
@@ -156,7 +179,7 @@ def _wrap_result(
frame = _parse_date_columns(frame, parse_dates)
if index_col is not None:
- frame.set_index(index_col, inplace=True)
+ frame = frame.set_index(index_col)
return frame
@@ -418,6 +441,7 @@ def read_sql(
parse_dates=...,
columns: list[str] = ...,
chunksize: None = ...,
+ use_nullable_dtypes: bool = ...,
) -> DataFrame:
...
@@ -432,6 +456,7 @@ def read_sql(
parse_dates=...,
columns: list[str] = ...,
chunksize: int = ...,
+ use_nullable_dtypes: bool = ...,
) -> Iterator[DataFrame]:
...
@@ -445,6 +470,7 @@ def read_sql(
parse_dates=None,
columns: list[str] | None = None,
chunksize: int | None = None,
+ use_nullable_dtypes: bool = False,
) -> DataFrame | Iterator[DataFrame]:
"""
Read SQL query or database table into a DataFrame.
@@ -492,6 +518,12 @@ def read_sql(
chunksize : int, default None
If specified, return an iterator where `chunksize` is the
number of rows to include in each chunk.
+    use_nullable_dtypes : bool, default False
+        Whether to use nullable dtypes by default when reading data. If
+        set to True, nullable dtypes are used for all dtypes that have a
+        nullable implementation, even if no nulls are present.
+
+ .. versionadded:: 2.0
Returns
-------
@@ -571,6 +603,7 @@ def read_sql(
coerce_float=coerce_float,
parse_dates=parse_dates,
chunksize=chunksize,
+ use_nullable_dtypes=use_nullable_dtypes,
)
try:
@@ -587,6 +620,7 @@ def read_sql(
parse_dates=parse_dates,
columns=columns,
chunksize=chunksize,
+ use_nullable_dtypes=use_nullable_dtypes,
)
else:
return pandas_sql.read_query(
@@ -596,6 +630,7 @@ def read_sql(
coerce_float=coerce_float,
parse_dates=parse_dates,
chunksize=chunksize,
+ use_nullable_dtypes=use_nullable_dtypes,
)
@@ -983,6 +1018,7 @@ def _query_iterator(
columns,
coerce_float: bool = True,
parse_dates=None,
+ use_nullable_dtypes: bool = False,
):
"""Return generator through chunked result set."""
has_read_data = False
@@ -996,11 +1032,13 @@ def _query_iterator(
break
has_read_data = True
- self.frame = DataFrame.from_records(
- data, columns=columns, coerce_float=coerce_float
+ self.frame = _convert_arrays_to_dataframe(
+ data, columns, coerce_float, use_nullable_dtypes
)
- self._harmonize_columns(parse_dates=parse_dates)
+ self._harmonize_columns(
+ parse_dates=parse_dates, use_nullable_dtypes=use_nullable_dtypes
+ )
if self.index is not None:
self.frame.set_index(self.index, inplace=True)
@@ -1013,6 +1051,7 @@ def read(
parse_dates=None,
columns=None,
chunksize=None,
+ use_nullable_dtypes: bool = False,
) -> DataFrame | Iterator[DataFrame]:
from sqlalchemy import select
@@ -1034,14 +1073,17 @@ def read(
column_names,
coerce_float=coerce_float,
parse_dates=parse_dates,
+ use_nullable_dtypes=use_nullable_dtypes,
)
else:
data = result.fetchall()
- self.frame = DataFrame.from_records(
- data, columns=column_names, coerce_float=coerce_float
+ self.frame = _convert_arrays_to_dataframe(
+ data, column_names, coerce_float, use_nullable_dtypes
)
- self._harmonize_columns(parse_dates=parse_dates)
+ self._harmonize_columns(
+ parse_dates=parse_dates, use_nullable_dtypes=use_nullable_dtypes
+ )
if self.index is not None:
self.frame.set_index(self.index, inplace=True)
@@ -1124,7 +1166,9 @@ def _create_table_setup(self):
meta = MetaData()
return Table(self.name, meta, *columns, schema=schema)
- def _harmonize_columns(self, parse_dates=None) -> None:
+ def _harmonize_columns(
+ self, parse_dates=None, use_nullable_dtypes: bool = False
+ ) -> None:
"""
Make the DataFrame's column types align with the SQL table
column types.
@@ -1164,11 +1208,11 @@ def _harmonize_columns(self, parse_dates=None) -> None:
# Convert tz-aware Datetime SQL columns to UTC
utc = col_type is DatetimeTZDtype
self.frame[col_name] = _handle_date_column(df_col, utc=utc)
- elif col_type is float:
+ elif not use_nullable_dtypes and col_type is float:
# floats support NA, can always convert!
self.frame[col_name] = df_col.astype(col_type, copy=False)
- elif len(df_col) == df_col.count():
+ elif not use_nullable_dtypes and len(df_col) == df_col.count():
# No NA values, can convert ints and bools
if col_type is np.dtype("int64") or col_type is bool:
self.frame[col_name] = df_col.astype(col_type, copy=False)
@@ -1290,6 +1334,7 @@ def read_table(
columns=None,
schema: str | None = None,
chunksize: int | None = None,
+ use_nullable_dtypes: bool = False,
) -> DataFrame | Iterator[DataFrame]:
raise NotImplementedError
@@ -1303,6 +1348,7 @@ def read_query(
params=None,
chunksize: int | None = None,
dtype: DtypeArg | None = None,
+ use_nullable_dtypes: bool = False,
) -> DataFrame | Iterator[DataFrame]:
pass
@@ -1466,6 +1512,7 @@ def read_table(
columns=None,
schema: str | None = None,
chunksize: int | None = None,
+ use_nullable_dtypes: bool = False,
) -> DataFrame | Iterator[DataFrame]:
"""
Read SQL database table into a DataFrame.
@@ -1498,6 +1545,12 @@ def read_table(
chunksize : int, default None
If specified, return an iterator where `chunksize` is the number
of rows to include in each chunk.
+    use_nullable_dtypes : bool, default False
+        Whether to use nullable dtypes by default when reading data. If
+        set to True, nullable dtypes are used for all dtypes that have a
+        nullable implementation, even if no nulls are present.
+
+ .. versionadded:: 2.0
Returns
-------
@@ -1516,6 +1569,7 @@ def read_table(
parse_dates=parse_dates,
columns=columns,
chunksize=chunksize,
+ use_nullable_dtypes=use_nullable_dtypes,
)
@staticmethod
@@ -1527,6 +1581,7 @@ def _query_iterator(
coerce_float: bool = True,
parse_dates=None,
dtype: DtypeArg | None = None,
+ use_nullable_dtypes: bool = False,
):
"""Return generator through chunked result set"""
has_read_data = False
@@ -1540,6 +1595,7 @@ def _query_iterator(
index_col=index_col,
coerce_float=coerce_float,
parse_dates=parse_dates,
+ use_nullable_dtypes=use_nullable_dtypes,
)
break
@@ -1551,6 +1607,7 @@ def _query_iterator(
coerce_float=coerce_float,
parse_dates=parse_dates,
dtype=dtype,
+ use_nullable_dtypes=use_nullable_dtypes,
)
def read_query(
@@ -1562,6 +1619,7 @@ def read_query(
params=None,
chunksize: int | None = None,
dtype: DtypeArg | None = None,
+ use_nullable_dtypes: bool = False,
) -> DataFrame | Iterator[DataFrame]:
"""
Read SQL query into a DataFrame.
@@ -1623,6 +1681,7 @@ def read_query(
coerce_float=coerce_float,
parse_dates=parse_dates,
dtype=dtype,
+ use_nullable_dtypes=use_nullable_dtypes,
)
else:
data = result.fetchall()
@@ -1633,6 +1692,7 @@ def read_query(
coerce_float=coerce_float,
parse_dates=parse_dates,
dtype=dtype,
+ use_nullable_dtypes=use_nullable_dtypes,
)
return frame
@@ -2089,6 +2149,7 @@ def _query_iterator(
coerce_float: bool = True,
parse_dates=None,
dtype: DtypeArg | None = None,
+ use_nullable_dtypes: bool = False,
):
"""Return generator through chunked result set"""
has_read_data = False
@@ -2112,6 +2173,7 @@ def _query_iterator(
coerce_float=coerce_float,
parse_dates=parse_dates,
dtype=dtype,
+ use_nullable_dtypes=use_nullable_dtypes,
)
def read_query(
@@ -2123,6 +2185,7 @@ def read_query(
params=None,
chunksize: int | None = None,
dtype: DtypeArg | None = None,
+ use_nullable_dtypes: bool = False,
) -> DataFrame | Iterator[DataFrame]:
args = _convert_params(sql, params)
@@ -2138,6 +2201,7 @@ def read_query(
coerce_float=coerce_float,
parse_dates=parse_dates,
dtype=dtype,
+ use_nullable_dtypes=use_nullable_dtypes,
)
else:
data = self._fetchall_as_list(cursor)
@@ -2150,6 +2214,7 @@ def read_query(
coerce_float=coerce_float,
parse_dates=parse_dates,
dtype=dtype,
+ use_nullable_dtypes=use_nullable_dtypes,
)
return frame
diff --git a/pandas/tests/io/test_sql.py b/pandas/tests/io/test_sql.py
index 394fceb69b788..db37b1785af5c 100644
--- a/pandas/tests/io/test_sql.py
+++ b/pandas/tests/io/test_sql.py
@@ -53,6 +53,10 @@
to_timedelta,
)
import pandas._testing as tm
+from pandas.core.arrays import (
+ ArrowStringArray,
+ StringArray,
+)
from pandas.io import sql
from pandas.io.sql import (
@@ -2266,6 +2270,94 @@ def test_get_engine_auto_error_message(self):
pass
# TODO(GH#36893) fill this in when we add more engines
+ @pytest.mark.parametrize("storage", ["pyarrow", "python"])
+ def test_read_sql_nullable_dtypes(self, storage):
+ # GH#50048
+ table = "test"
+ df = self.nullable_data()
+ df.to_sql(table, self.conn, index=False, if_exists="replace")
+
+ with pd.option_context("mode.string_storage", storage):
+ result = pd.read_sql(
+ f"Select * from {table}", self.conn, use_nullable_dtypes=True
+ )
+ expected = self.nullable_expected(storage)
+ tm.assert_frame_equal(result, expected)
+
+ with pd.option_context("mode.string_storage", storage):
+ iterator = pd.read_sql(
+ f"Select * from {table}",
+ self.conn,
+ use_nullable_dtypes=True,
+ chunksize=3,
+ )
+ expected = self.nullable_expected(storage)
+ for result in iterator:
+ tm.assert_frame_equal(result, expected)
+
+ @pytest.mark.parametrize("storage", ["pyarrow", "python"])
+ def test_read_sql_nullable_dtypes_table(self, storage):
+ # GH#50048
+ table = "test"
+ df = self.nullable_data()
+ df.to_sql(table, self.conn, index=False, if_exists="replace")
+
+ with pd.option_context("mode.string_storage", storage):
+ result = pd.read_sql(table, self.conn, use_nullable_dtypes=True)
+ expected = self.nullable_expected(storage)
+ tm.assert_frame_equal(result, expected)
+
+ with pd.option_context("mode.string_storage", storage):
+ iterator = pd.read_sql(
+ f"Select * from {table}",
+ self.conn,
+ use_nullable_dtypes=True,
+ chunksize=3,
+ )
+ expected = self.nullable_expected(storage)
+ for result in iterator:
+ tm.assert_frame_equal(result, expected)
+
+ def nullable_data(self) -> DataFrame:
+ return DataFrame(
+ {
+ "a": Series([1, np.nan, 3], dtype="Int64"),
+ "b": Series([1, 2, 3], dtype="Int64"),
+ "c": Series([1.5, np.nan, 2.5], dtype="Float64"),
+ "d": Series([1.5, 2.0, 2.5], dtype="Float64"),
+ "e": [True, False, None],
+ "f": [True, False, True],
+ "g": ["a", "b", "c"],
+ "h": ["a", "b", None],
+ }
+ )
+
+ def nullable_expected(self, storage) -> DataFrame:
+
+ string_array: StringArray | ArrowStringArray
+ string_array_na: StringArray | ArrowStringArray
+ if storage == "python":
+ string_array = StringArray(np.array(["a", "b", "c"], dtype=np.object_))
+ string_array_na = StringArray(np.array(["a", "b", pd.NA], dtype=np.object_))
+
+ else:
+ pa = pytest.importorskip("pyarrow")
+ string_array = ArrowStringArray(pa.array(["a", "b", "c"]))
+ string_array_na = ArrowStringArray(pa.array(["a", "b", None]))
+
+ return DataFrame(
+ {
+ "a": Series([1, np.nan, 3], dtype="Int64"),
+ "b": Series([1, 2, 3], dtype="Int64"),
+ "c": Series([1.5, np.nan, 2.5], dtype="Float64"),
+ "d": Series([1.5, 2.0, 2.5], dtype="Float64"),
+ "e": Series([True, False, pd.NA], dtype="boolean"),
+ "f": Series([True, False, True], dtype="boolean"),
+ "g": string_array,
+ "h": string_array_na,
+ }
+ )
+
class TestSQLiteAlchemy(_TestSQLAlchemy):
"""
@@ -2349,6 +2441,14 @@ class Test(BaseModel):
assert list(df.columns) == ["id", "string_column"]
+ def nullable_expected(self, storage) -> DataFrame:
+ return super().nullable_expected(storage).astype({"e": "Int64", "f": "Int64"})
+
+ @pytest.mark.parametrize("storage", ["pyarrow", "python"])
+ def test_read_sql_nullable_dtypes_table(self, storage):
+ # GH#50048 Not supported for sqlite
+ pass
+
@pytest.mark.db
class TestMySQLAlchemy(_TestSQLAlchemy):
@@ -2376,6 +2476,9 @@ def setup_driver(cls):
def test_default_type_conversion(self):
pass
+ def nullable_expected(self, storage) -> DataFrame:
+ return super().nullable_expected(storage).astype({"e": "Int64", "f": "Int64"})
+
@pytest.mark.db
class TestPostgreSQLAlchemy(_TestSQLAlchemy):
| Sits on top of #50047.
Functionality-wise, this should work now.
More broadly, I am not sure that this is the best approach we could take here. Since ``convert_to_nullable_dtype`` in ``lib.maybe_convert_objects`` is currently used only here, we could also make this strict and return the appropriate Array from the Cython code, not only when nulls are present. That would avoid the re-cast in the non-Cython code path. | https://api.github.com/repos/pandas-dev/pandas/pulls/50048 | 2022-12-03T22:01:39Z | 2022-12-13T02:15:44Z | 2022-12-13T02:15:44Z | 2023-05-03T22:47:17Z |
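The `_convert_arrays_to_dataframe` helper added in this PR transposes the driver's row tuples into per-column object arrays (via `lib.to_object_array_tuples` and `content.T`) and then runs dtype inference on each column. A rough pure-Python sketch of that row-to-column step and a toy stand-in for the nullable-dtype inference — the names `rows_to_columns` and `infer_nullable` are illustrative only, not pandas API:

```python
def rows_to_columns(rows):
    """Transpose driver row tuples into per-column lists,
    loosely mirroring lib.to_object_array_tuples(data) + list(content.T)."""
    return [list(col) for col in zip(*rows)]


def infer_nullable(col):
    """Toy stand-in for convert_object_array(use_nullable_dtypes=True):
    pick a nullable dtype name and build a missing-value mask."""
    non_null = [v for v in col if v is not None]
    if non_null and all(isinstance(v, bool) for v in non_null):
        kind = "boolean"
    elif non_null and all(isinstance(v, int) for v in non_null):
        kind = "Int64"  # bool checked first, since bool subclasses int
    elif non_null and all(isinstance(v, float) for v in non_null):
        kind = "Float64"
    else:
        kind = "object"  # e.g. strings fall through here
    mask = [v is None for v in col]
    return kind, mask
```

The key point the sketch illustrates: with `use_nullable_dtypes=True` the missing values go into a separate mask instead of forcing an upcast to float or object, which is why the real helper can return `IntegerArray`/`BooleanArray`/`FloatingArray` even when nulls are present.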
ENH: maybe_convert_objects add boolean support with NA | diff --git a/pandas/_libs/lib.pyi b/pandas/_libs/lib.pyi
index 3cbc04fb2f5cd..9bc02e90ebb9e 100644
--- a/pandas/_libs/lib.pyi
+++ b/pandas/_libs/lib.pyi
@@ -75,7 +75,7 @@ def maybe_convert_objects(
convert_timedelta: Literal[False] = ...,
convert_period: Literal[False] = ...,
convert_interval: Literal[False] = ...,
- convert_to_nullable_integer: Literal[False] = ...,
+ convert_to_nullable_dtype: Literal[False] = ...,
dtype_if_all_nat: DtypeObj | None = ...,
) -> npt.NDArray[np.object_ | np.number]: ...
@overload # both convert_datetime and convert_to_nullable_integer False -> np.ndarray
@@ -88,7 +88,7 @@ def maybe_convert_objects(
convert_timedelta: bool = ...,
convert_period: Literal[False] = ...,
convert_interval: Literal[False] = ...,
- convert_to_nullable_integer: Literal[False] = ...,
+ convert_to_nullable_dtype: Literal[False] = ...,
dtype_if_all_nat: DtypeObj | None = ...,
) -> np.ndarray: ...
@overload
@@ -101,7 +101,7 @@ def maybe_convert_objects(
convert_timedelta: bool = ...,
convert_period: bool = ...,
convert_interval: bool = ...,
- convert_to_nullable_integer: Literal[True] = ...,
+ convert_to_nullable_dtype: Literal[True] = ...,
dtype_if_all_nat: DtypeObj | None = ...,
) -> ArrayLike: ...
@overload
@@ -114,7 +114,7 @@ def maybe_convert_objects(
convert_timedelta: bool = ...,
convert_period: bool = ...,
convert_interval: bool = ...,
- convert_to_nullable_integer: bool = ...,
+ convert_to_nullable_dtype: bool = ...,
dtype_if_all_nat: DtypeObj | None = ...,
) -> ArrayLike: ...
@overload
@@ -127,7 +127,7 @@ def maybe_convert_objects(
convert_timedelta: bool = ...,
convert_period: Literal[True] = ...,
convert_interval: bool = ...,
- convert_to_nullable_integer: bool = ...,
+ convert_to_nullable_dtype: bool = ...,
dtype_if_all_nat: DtypeObj | None = ...,
) -> ArrayLike: ...
@overload
@@ -140,7 +140,7 @@ def maybe_convert_objects(
convert_timedelta: bool = ...,
convert_period: bool = ...,
convert_interval: bool = ...,
- convert_to_nullable_integer: bool = ...,
+ convert_to_nullable_dtype: bool = ...,
dtype_if_all_nat: DtypeObj | None = ...,
) -> ArrayLike: ...
@overload
diff --git a/pandas/_libs/lib.pyx b/pandas/_libs/lib.pyx
index 81e0f3de748ff..462537af3383a 100644
--- a/pandas/_libs/lib.pyx
+++ b/pandas/_libs/lib.pyx
@@ -1309,10 +1309,14 @@ cdef class Seen:
@property
def is_bool(self):
# i.e. not (anything but bool)
- return not (
- self.datetime_ or self.datetimetz_ or self.timedelta_ or self.nat_
- or self.period_ or self.interval_
- or self.numeric_ or self.nan_ or self.null_ or self.object_
+ return self.is_bool_or_na and not (self.nan_ or self.null_)
+
+ @property
+ def is_bool_or_na(self):
+ # i.e. not (anything but bool or missing values)
+ return self.bool_ and not (
+ self.datetime_ or self.datetimetz_ or self.nat_ or self.timedelta_
+ or self.period_ or self.interval_ or self.numeric_ or self.object_
)
@@ -2335,7 +2339,7 @@ def maybe_convert_objects(ndarray[object] objects,
bint convert_timedelta=False,
bint convert_period=False,
bint convert_interval=False,
- bint convert_to_nullable_integer=False,
+ bint convert_to_nullable_dtype=False,
object dtype_if_all_nat=None) -> "ArrayLike":
"""
Type inference function-- convert object array to proper dtype
@@ -2362,9 +2366,9 @@ def maybe_convert_objects(ndarray[object] objects,
convert_interval : bool, default False
If an array-like object contains only Interval objects (with matching
dtypes and closedness) or NaN, whether to convert to IntervalArray.
- convert_to_nullable_integer : bool, default False
- If an array-like object contains only integer values (and NaN) is
- encountered, whether to convert and return an IntegerArray.
+ convert_to_nullable_dtype : bool, default False
+        If an array-like object contains only integer or boolean values (and NaN),
+        whether to convert and return a BooleanArray/IntegerArray.
dtype_if_all_nat : np.dtype, ExtensionDtype, or None, default None
Dtype to cast to if we have all-NaT.
@@ -2446,7 +2450,7 @@ def maybe_convert_objects(ndarray[object] objects,
seen.int_ = True
floats[i] = <float64_t>val
complexes[i] = <double complex>val
- if not seen.null_ or convert_to_nullable_integer:
+ if not seen.null_ or convert_to_nullable_dtype:
seen.saw_int(val)
if ((seen.uint_ and seen.sint_) or
@@ -2606,6 +2610,9 @@ def maybe_convert_objects(ndarray[object] objects,
if seen.is_bool:
# is_bool property rules out everything else
return bools.view(np.bool_)
+ elif convert_to_nullable_dtype and seen.is_bool_or_na:
+ from pandas.core.arrays import BooleanArray
+ return BooleanArray(bools.view(np.bool_), mask)
seen.object_ = True
if not seen.object_:
@@ -2617,7 +2624,7 @@ def maybe_convert_objects(ndarray[object] objects,
elif seen.float_:
result = floats
elif seen.int_ or seen.uint_:
- if convert_to_nullable_integer:
+ if convert_to_nullable_dtype:
from pandas.core.arrays import IntegerArray
if seen.uint_:
result = IntegerArray(uints, mask)
diff --git a/pandas/tests/dtypes/test_inference.py b/pandas/tests/dtypes/test_inference.py
index 015c121ca684a..c9b61afb5eb25 100644
--- a/pandas/tests/dtypes/test_inference.py
+++ b/pandas/tests/dtypes/test_inference.py
@@ -859,7 +859,7 @@ def test_maybe_convert_objects_timedelta64_nat(self):
def test_maybe_convert_objects_nullable_integer(self, exp):
# GH27335
arr = np.array([2, np.NaN], dtype=object)
- result = lib.maybe_convert_objects(arr, convert_to_nullable_integer=True)
+ result = lib.maybe_convert_objects(arr, convert_to_nullable_dtype=True)
tm.assert_extension_array_equal(result, exp)
@@ -869,7 +869,7 @@ def test_maybe_convert_objects_nullable_integer(self, exp):
def test_maybe_convert_objects_nullable_none(self, dtype, val):
# GH#50043
arr = np.array([val, None, 3], dtype="object")
- result = lib.maybe_convert_objects(arr, convert_to_nullable_integer=True)
+ result = lib.maybe_convert_objects(arr, convert_to_nullable_dtype=True)
expected = IntegerArray(
np.array([val, 0, 3], dtype=dtype), np.array([False, True, False])
)
@@ -930,6 +930,28 @@ def test_maybe_convert_objects_bool_nan(self):
out = lib.maybe_convert_objects(ind.values, safe=1)
tm.assert_numpy_array_equal(out, exp)
+ def test_maybe_convert_objects_nullable_boolean(self):
+ # GH50047
+ arr = np.array([True, False], dtype=object)
+ exp = np.array([True, False])
+ out = lib.maybe_convert_objects(arr, convert_to_nullable_dtype=True)
+ tm.assert_numpy_array_equal(out, exp)
+
+ arr = np.array([True, False, pd.NaT], dtype=object)
+ exp = np.array([True, False, pd.NaT], dtype=object)
+ out = lib.maybe_convert_objects(arr, convert_to_nullable_dtype=True)
+ tm.assert_numpy_array_equal(out, exp)
+
+ @pytest.mark.parametrize("val", [None, np.nan])
+ def test_maybe_convert_objects_nullable_boolean_na(self, val):
+ # GH50047
+ arr = np.array([True, False, val], dtype=object)
+ exp = BooleanArray(
+ np.array([True, False, False]), np.array([False, False, True])
+ )
+ out = lib.maybe_convert_objects(arr, convert_to_nullable_dtype=True)
+ tm.assert_extension_array_equal(out, exp)
+
@pytest.mark.parametrize(
"data0",
[
| This is necessary for nullable dtype support in ``read_sql``.
I could not find any usage of ``convert_to_nullable_integer``, so we should be able to repurpose the keyword.
| https://api.github.com/repos/pandas-dev/pandas/pulls/50047 | 2022-12-03T20:52:09Z | 2022-12-09T00:20:58Z | 2022-12-09T00:20:58Z | 2022-12-09T09:24:41Z |
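The `is_bool_or_na` property added above lets `maybe_convert_objects` accept a mix of booleans and missing values and return a `BooleanArray` when `convert_to_nullable_dtype=True`, while anything else (including `NaT`, per the new tests) still falls back to object dtype. A toy pure-Python re-implementation of that scan — `scan_bools` is a hypothetical name, and the real code tracks this state in a Cython `Seen` struct:

```python
import math


def scan_bools(objects):
    """Return (values, mask) if the input holds only bools and
    missing values (None/NaN); return None otherwise, the analogue
    of falling back to object dtype."""
    values, mask = [], []
    seen_bool = seen_other = False
    for val in objects:
        if isinstance(val, bool):
            seen_bool = True
            values.append(val)
            mask.append(False)
        elif val is None or (isinstance(val, float) and math.isnan(val)):
            values.append(False)  # placeholder hidden under the mask
            mask.append(True)
        else:
            # e.g. pd.NaT, ints, strings: not bool-or-NA
            seen_other = True
            break
    if seen_bool and not seen_other:
        return values, mask
    return None
```

In the real Cython path, a bool-only input with no missing values short-circuits earlier (`seen.is_bool`) and returns a plain `np.bool_` ndarray; the masked pair only becomes a `BooleanArray` when NaN/None was actually seen or the nullable flag forces it.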
BUG: Change FutureWarning to DeprecationWarning for inplace setitem with DataFrame.(i)loc | diff --git a/doc/source/whatsnew/v1.5.3.rst b/doc/source/whatsnew/v1.5.3.rst
index 489a6fda9ffab..0b670219f830c 100644
--- a/doc/source/whatsnew/v1.5.3.rst
+++ b/doc/source/whatsnew/v1.5.3.rst
@@ -48,6 +48,7 @@ Other
as pandas works toward compatibility with SQLAlchemy 2.0.
- Reverted deprecation (:issue:`45324`) of behavior of :meth:`Series.__getitem__` and :meth:`Series.__setitem__` slicing with an integer :class:`Index`; this will remain positional (:issue:`49612`)
+- A ``FutureWarning`` raised when attempting to set values inplace with :meth:`DataFrame.loc` or :meth:`DataFrame.iloc` has been changed to a ``DeprecationWarning`` (:issue:`48673`)
-
.. ---------------------------------------------------------------------------
diff --git a/pandas/core/indexing.py b/pandas/core/indexing.py
index 198903f5fceff..dd06d9bee4428 100644
--- a/pandas/core/indexing.py
+++ b/pandas/core/indexing.py
@@ -2026,7 +2026,7 @@ def _setitem_single_column(self, loc: int, value, plane_indexer):
"array. To retain the old behavior, use either "
"`df[df.columns[i]] = newvals` or, if columns are non-unique, "
"`df.isetitem(i, newvals)`",
- FutureWarning,
+ DeprecationWarning,
stacklevel=find_stack_level(),
)
# TODO: how to get future behavior?
diff --git a/pandas/tests/extension/base/setitem.py b/pandas/tests/extension/base/setitem.py
index 8dbf7d47374a6..83b1679b0da7e 100644
--- a/pandas/tests/extension/base/setitem.py
+++ b/pandas/tests/extension/base/setitem.py
@@ -400,7 +400,7 @@ def test_setitem_frame_2d_values(self, data):
warn = None
if has_can_hold_element and not isinstance(data.dtype, PandasDtype):
# PandasDtype excluded because it isn't *really* supported.
- warn = FutureWarning
+ warn = DeprecationWarning
with tm.assert_produces_warning(warn, match=msg):
df.iloc[:] = df
diff --git a/pandas/tests/frame/indexing/test_indexing.py b/pandas/tests/frame/indexing/test_indexing.py
index acd742c54b908..e2a99348f45aa 100644
--- a/pandas/tests/frame/indexing/test_indexing.py
+++ b/pandas/tests/frame/indexing/test_indexing.py
@@ -785,7 +785,7 @@ def test_getitem_setitem_float_labels(self, using_array_manager):
assert len(result) == 5
cp = df.copy()
- warn = FutureWarning if using_array_manager else None
+ warn = DeprecationWarning if using_array_manager else None
msg = "will attempt to set the values inplace"
with tm.assert_produces_warning(warn, match=msg):
cp.loc[1.0:5.0] = 0
diff --git a/pandas/tests/frame/indexing/test_setitem.py b/pandas/tests/frame/indexing/test_setitem.py
index cf0ff4e3603f3..e33c6d6a805cf 100644
--- a/pandas/tests/frame/indexing/test_setitem.py
+++ b/pandas/tests/frame/indexing/test_setitem.py
@@ -408,7 +408,7 @@ def test_setitem_frame_length_0_str_key(self, indexer):
def test_setitem_frame_duplicate_columns(self, using_array_manager):
# GH#15695
- warn = FutureWarning if using_array_manager else None
+ warn = DeprecationWarning if using_array_manager else None
msg = "will attempt to set the values inplace"
cols = ["A", "B", "C"] * 2
diff --git a/pandas/tests/frame/indexing/test_where.py b/pandas/tests/frame/indexing/test_where.py
index fba8978d2128c..c7e0a10c0d7d0 100644
--- a/pandas/tests/frame/indexing/test_where.py
+++ b/pandas/tests/frame/indexing/test_where.py
@@ -384,7 +384,7 @@ def test_where_datetime(self, using_array_manager):
expected = df.copy()
expected.loc[[0, 1], "A"] = np.nan
- warn = FutureWarning if using_array_manager else None
+ warn = DeprecationWarning if using_array_manager else None
msg = "will attempt to set the values inplace"
with tm.assert_produces_warning(warn, match=msg):
expected.loc[:, "C"] = np.nan
@@ -571,7 +571,7 @@ def test_where_axis_multiple_dtypes(self, using_array_manager):
d2 = df.copy().drop(1, axis=1)
expected = df.copy()
- warn = FutureWarning if using_array_manager else None
+ warn = DeprecationWarning if using_array_manager else None
msg = "will attempt to set the values inplace"
with tm.assert_produces_warning(warn, match=msg):
expected.loc[:, 1] = np.nan
diff --git a/pandas/tests/frame/methods/test_dropna.py b/pandas/tests/frame/methods/test_dropna.py
index 62351aa89c914..53d9f75494d92 100644
--- a/pandas/tests/frame/methods/test_dropna.py
+++ b/pandas/tests/frame/methods/test_dropna.py
@@ -221,7 +221,7 @@ def test_dropna_with_duplicate_columns(self):
df.iloc[0, 0] = np.nan
df.iloc[1, 1] = np.nan
msg = "will attempt to set the values inplace instead"
- with tm.assert_produces_warning(FutureWarning, match=msg):
+ with tm.assert_produces_warning(DeprecationWarning, match=msg):
df.iloc[:, 3] = np.nan
expected = df.dropna(subset=["A", "B", "C"], how="all")
expected.columns = ["A", "A", "B", "C"]
diff --git a/pandas/tests/frame/methods/test_rename.py b/pandas/tests/frame/methods/test_rename.py
index f4443953a0d52..405518c372b2c 100644
--- a/pandas/tests/frame/methods/test_rename.py
+++ b/pandas/tests/frame/methods/test_rename.py
@@ -178,7 +178,9 @@ def test_rename_nocopy(self, float_frame, using_copy_on_write):
# TODO(CoW) this also shouldn't warn in case of CoW, but the heuristic
# checking if the array shares memory doesn't work if CoW happened
- with tm.assert_produces_warning(FutureWarning if using_copy_on_write else None):
+ with tm.assert_produces_warning(
+ DeprecationWarning if using_copy_on_write else None
+ ):
# This loc setitem already happens inplace, so no warning
# that this will change in the future
renamed.loc[:, "foo"] = 1.0
diff --git a/pandas/tests/frame/methods/test_shift.py b/pandas/tests/frame/methods/test_shift.py
index bfc3c8e0a25eb..9b4dcf58590e3 100644
--- a/pandas/tests/frame/methods/test_shift.py
+++ b/pandas/tests/frame/methods/test_shift.py
@@ -372,7 +372,7 @@ def test_shift_duplicate_columns(self, using_array_manager):
warn = None
if using_array_manager:
- warn = FutureWarning
+ warn = DeprecationWarning
shifted = []
for columns in column_lists:
diff --git a/pandas/tests/frame/test_constructors.py b/pandas/tests/frame/test_constructors.py
index b4f027f3a832a..16021facb3986 100644
--- a/pandas/tests/frame/test_constructors.py
+++ b/pandas/tests/frame/test_constructors.py
@@ -2604,7 +2604,9 @@ def check_views(c_only: bool = False):
# FIXME(GH#35417): until GH#35417, iloc.setitem into EA values does not preserve
# view, so we have to check in the other direction
- with tm.assert_produces_warning(FutureWarning, match="will attempt to set"):
+ with tm.assert_produces_warning(
+ DeprecationWarning, match="will attempt to set"
+ ):
df.iloc[:, 2] = pd.array([45, 46], dtype=c.dtype)
assert df.dtypes.iloc[2] == c.dtype
if not copy and not using_copy_on_write:
diff --git a/pandas/tests/frame/test_nonunique_indexes.py b/pandas/tests/frame/test_nonunique_indexes.py
index 2c28800fb181f..38861a2b04409 100644
--- a/pandas/tests/frame/test_nonunique_indexes.py
+++ b/pandas/tests/frame/test_nonunique_indexes.py
@@ -323,7 +323,9 @@ def test_dup_columns_across_dtype(self):
def test_set_value_by_index(self, using_array_manager):
# See gh-12344
warn = (
- FutureWarning if using_array_manager and not is_platform_windows() else None
+ DeprecationWarning
+ if using_array_manager and not is_platform_windows()
+ else None
)
msg = "will attempt to set the values inplace"
diff --git a/pandas/tests/frame/test_stack_unstack.py b/pandas/tests/frame/test_stack_unstack.py
index 69e5d5e3d5447..e22559802cbec 100644
--- a/pandas/tests/frame/test_stack_unstack.py
+++ b/pandas/tests/frame/test_stack_unstack.py
@@ -23,7 +23,7 @@
class TestDataFrameReshape:
def test_stack_unstack(self, float_frame, using_array_manager):
- warn = FutureWarning if using_array_manager else None
+ warn = DeprecationWarning if using_array_manager else None
msg = "will attempt to set the values inplace"
df = float_frame.copy()
diff --git a/pandas/tests/indexing/multiindex/test_loc.py b/pandas/tests/indexing/multiindex/test_loc.py
index d4354766a203b..5cf044280b391 100644
--- a/pandas/tests/indexing/multiindex/test_loc.py
+++ b/pandas/tests/indexing/multiindex/test_loc.py
@@ -541,9 +541,9 @@ def test_loc_setitem_single_column_slice():
)
expected = df.copy()
msg = "will attempt to set the values inplace instead"
- with tm.assert_produces_warning(FutureWarning, match=msg):
+ with tm.assert_produces_warning(DeprecationWarning, match=msg):
df.loc[:, "B"] = np.arange(4)
- with tm.assert_produces_warning(FutureWarning, match=msg):
+ with tm.assert_produces_warning(DeprecationWarning, match=msg):
expected.iloc[:, 2] = np.arange(4)
tm.assert_frame_equal(df, expected)
diff --git a/pandas/tests/indexing/test_iloc.py b/pandas/tests/indexing/test_iloc.py
index 8cc6b6e73aaea..dcc95d9e41a5a 100644
--- a/pandas/tests/indexing/test_iloc.py
+++ b/pandas/tests/indexing/test_iloc.py
@@ -84,7 +84,7 @@ def test_iloc_setitem_fullcol_categorical(self, indexer, key, using_array_manage
overwrite = isinstance(key, slice) and key == slice(None)
warn = None
if overwrite:
- warn = FutureWarning
+ warn = DeprecationWarning
msg = "will attempt to set the values inplace instead"
with tm.assert_produces_warning(warn, match=msg):
indexer(df)[key, 0] = cat
@@ -108,7 +108,7 @@ def test_iloc_setitem_fullcol_categorical(self, indexer, key, using_array_manage
frame = DataFrame({0: np.array([0, 1, 2], dtype=object), 1: range(3)})
df = frame.copy()
orig_vals = df.values
- with tm.assert_produces_warning(FutureWarning, match=msg):
+ with tm.assert_produces_warning(DeprecationWarning, match=msg):
indexer(df)[key, 0] = cat
expected = DataFrame({0: cat, 1: range(3)})
tm.assert_frame_equal(df, expected)
@@ -904,7 +904,7 @@ def test_iloc_setitem_categorical_updates_inplace(self, using_copy_on_write):
# This should modify our original values in-place
msg = "will attempt to set the values inplace instead"
- with tm.assert_produces_warning(FutureWarning, match=msg):
+ with tm.assert_produces_warning(DeprecationWarning, match=msg):
df.iloc[:, 0] = cat[::-1]
if not using_copy_on_write:
@@ -1314,7 +1314,7 @@ def test_iloc_setitem_dtypes_duplicate_columns(
# GH#22035
df = DataFrame([[init_value, "str", "str2"]], columns=["a", "b", "b"])
msg = "will attempt to set the values inplace instead"
- with tm.assert_produces_warning(FutureWarning, match=msg):
+ with tm.assert_produces_warning(DeprecationWarning, match=msg):
df.iloc[:, 0] = df.iloc[:, 0].astype(dtypes)
expected_df = DataFrame(
diff --git a/pandas/tests/indexing/test_indexing.py b/pandas/tests/indexing/test_indexing.py
index 069e5a62895af..210c75b075011 100644
--- a/pandas/tests/indexing/test_indexing.py
+++ b/pandas/tests/indexing/test_indexing.py
@@ -550,7 +550,7 @@ def test_astype_assignment(self):
df = df_orig.copy()
msg = "will attempt to set the values inplace instead"
- with tm.assert_produces_warning(FutureWarning, match=msg):
+ with tm.assert_produces_warning(DeprecationWarning, match=msg):
df.iloc[:, 0:2] = df.iloc[:, 0:2].astype(np.int64)
expected = DataFrame(
[[1, 2, "3", ".4", 5, 6.0, "foo"]], columns=list("ABCDEFG")
@@ -558,7 +558,7 @@ def test_astype_assignment(self):
tm.assert_frame_equal(df, expected)
df = df_orig.copy()
- with tm.assert_produces_warning(FutureWarning, match=msg):
+ with tm.assert_produces_warning(DeprecationWarning, match=msg):
df.iloc[:, 0:2] = df.iloc[:, 0:2]._convert(datetime=True, numeric=True)
expected = DataFrame(
[[1, 2, "3", ".4", 5, 6.0, "foo"]], columns=list("ABCDEFG")
@@ -567,7 +567,7 @@ def test_astype_assignment(self):
# GH5702 (loc)
df = df_orig.copy()
- with tm.assert_produces_warning(FutureWarning, match=msg):
+ with tm.assert_produces_warning(DeprecationWarning, match=msg):
df.loc[:, "A"] = df.loc[:, "A"].astype(np.int64)
expected = DataFrame(
[[1, "2", "3", ".4", 5, 6.0, "foo"]], columns=list("ABCDEFG")
@@ -575,7 +575,7 @@ def test_astype_assignment(self):
tm.assert_frame_equal(df, expected)
df = df_orig.copy()
- with tm.assert_produces_warning(FutureWarning, match=msg):
+ with tm.assert_produces_warning(DeprecationWarning, match=msg):
df.loc[:, ["B", "C"]] = df.loc[:, ["B", "C"]].astype(np.int64)
expected = DataFrame(
[["1", 2, 3, ".4", 5, 6.0, "foo"]], columns=list("ABCDEFG")
@@ -586,13 +586,13 @@ def test_astype_assignment_full_replacements(self):
# full replacements / no nans
df = DataFrame({"A": [1.0, 2.0, 3.0, 4.0]})
msg = "will attempt to set the values inplace instead"
- with tm.assert_produces_warning(FutureWarning, match=msg):
+ with tm.assert_produces_warning(DeprecationWarning, match=msg):
df.iloc[:, 0] = df["A"].astype(np.int64)
expected = DataFrame({"A": [1, 2, 3, 4]})
tm.assert_frame_equal(df, expected)
df = DataFrame({"A": [1.0, 2.0, 3.0, 4.0]})
- with tm.assert_produces_warning(FutureWarning, match=msg):
+ with tm.assert_produces_warning(DeprecationWarning, match=msg):
df.loc[:, "A"] = df["A"].astype(np.int64)
expected = DataFrame({"A": [1, 2, 3, 4]})
tm.assert_frame_equal(df, expected)
diff --git a/pandas/tests/indexing/test_loc.py b/pandas/tests/indexing/test_loc.py
index e62fb98b0782d..235ad3d213a62 100644
--- a/pandas/tests/indexing/test_loc.py
+++ b/pandas/tests/indexing/test_loc.py
@@ -368,7 +368,7 @@ def test_loc_setitem_dtype(self):
df = DataFrame({"id": ["A"], "a": [1.2], "b": [0.0], "c": [-2.5]})
cols = ["a", "b", "c"]
msg = "will attempt to set the values inplace instead"
- with tm.assert_produces_warning(FutureWarning, match=msg):
+ with tm.assert_produces_warning(DeprecationWarning, match=msg):
df.loc[:, cols] = df.loc[:, cols].astype("float32")
expected = DataFrame(
@@ -633,11 +633,11 @@ def test_loc_setitem_consistency_slice_column_len(self):
df = DataFrame(values, index=mi, columns=cols)
msg = "will attempt to set the values inplace instead"
- with tm.assert_produces_warning(FutureWarning, match=msg):
+ with tm.assert_produces_warning(DeprecationWarning, match=msg):
df.loc[:, ("Respondent", "StartDate")] = to_datetime(
df.loc[:, ("Respondent", "StartDate")]
)
- with tm.assert_produces_warning(FutureWarning, match=msg):
+ with tm.assert_produces_warning(DeprecationWarning, match=msg):
df.loc[:, ("Respondent", "EndDate")] = to_datetime(
df.loc[:, ("Respondent", "EndDate")]
)
@@ -720,7 +720,7 @@ def test_loc_setitem_frame_with_reindex_mixed(self):
df = DataFrame(index=[3, 5, 4], columns=["A", "B"], dtype=float)
df["B"] = "string"
msg = "will attempt to set the values inplace instead"
- with tm.assert_produces_warning(FutureWarning, match=msg):
+ with tm.assert_produces_warning(DeprecationWarning, match=msg):
df.loc[[4, 3, 5], "A"] = np.array([1, 2, 3], dtype="int64")
ser = Series([2, 3, 1], index=[3, 5, 4], dtype="int64")
expected = DataFrame({"A": ser})
@@ -732,7 +732,7 @@ def test_loc_setitem_frame_with_inverted_slice(self):
df = DataFrame(index=[1, 2, 3], columns=["A", "B"], dtype=float)
df["B"] = "string"
msg = "will attempt to set the values inplace instead"
- with tm.assert_produces_warning(FutureWarning, match=msg):
+ with tm.assert_produces_warning(DeprecationWarning, match=msg):
df.loc[slice(3, 0, -1), "A"] = np.array([1, 2, 3], dtype="int64")
expected = DataFrame({"A": [3, 2, 1], "B": "string"}, index=[1, 2, 3])
tm.assert_frame_equal(df, expected)
@@ -909,7 +909,7 @@ def test_loc_setitem_missing_columns(self, index, box, expected):
warn = None
if isinstance(index[0], slice) and index[0] == slice(None):
- warn = FutureWarning
+ warn = DeprecationWarning
msg = "will attempt to set the values inplace instead"
with tm.assert_produces_warning(warn, match=msg):
@@ -1425,7 +1425,7 @@ def test_loc_setitem_single_row_categorical(self):
categories = Categorical(df["Alpha"], categories=["a", "b", "c"])
msg = "will attempt to set the values inplace instead"
- with tm.assert_produces_warning(FutureWarning, match=msg):
+ with tm.assert_produces_warning(DeprecationWarning, match=msg):
df.loc[:, "Alpha"] = categories
result = df["Alpha"]
@@ -3211,3 +3211,11 @@ def test_getitem_loc_str_periodindex(self):
index = pd.period_range(start="2000", periods=20, freq="B")
series = Series(range(20), index=index)
assert series.loc["2000-01-14"] == 9
+
+ def test_deprecation_warnings_raised_loc(self):
+ # GH#48673
+ with tm.assert_produces_warning(DeprecationWarning):
+ values = np.arange(4).reshape(2, 2)
+ df = DataFrame(values, columns=["a", "b"])
+ new = np.array([10, 11]).astype(np.int16)
+ df.loc[:, "a"] = new
diff --git a/pandas/tests/indexing/test_partial.py b/pandas/tests/indexing/test_partial.py
index 938056902e745..f973bdf7ea6f6 100644
--- a/pandas/tests/indexing/test_partial.py
+++ b/pandas/tests/indexing/test_partial.py
@@ -312,7 +312,7 @@ def test_partial_setting_frame(self, using_array_manager):
df = df_orig.copy()
df["B"] = df["B"].astype(np.float64)
msg = "will attempt to set the values inplace instead"
- with tm.assert_produces_warning(FutureWarning, match=msg):
+ with tm.assert_produces_warning(DeprecationWarning, match=msg):
df.loc[:, "B"] = df.loc[:, "A"]
tm.assert_frame_equal(df, expected)
| - [X] closes #48673
- [X] [Tests added and passed]
- [ ] Added [type annotations]
- [ ] All [code checks passed].
- [X] Added an entry in the latest `doc/source/whatsnew/v1.5.3.rst` file.
| https://api.github.com/repos/pandas-dev/pandas/pulls/50044 | 2022-12-03T18:43:53Z | 2023-01-18T04:42:41Z | 2023-01-18T04:42:41Z | 2023-01-18T14:38:30Z |
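The tests above all hinge on `tm.assert_produces_warning` checking the exact warning class, which is why every call site had to switch from `FutureWarning` to `DeprecationWarning`. A minimal sketch of that checking behavior (using a hand-raised warning, not pandas' actual inplace-set path):

```python
import warnings

import pandas._testing as tm

# tm.assert_produces_warning fails the test if the expected warning class
# is not emitted inside the block (or if an unexpected one is), so the
# class in the assertion must match the class pandas actually raises.
with tm.assert_produces_warning(DeprecationWarning, match="inplace"):
    warnings.warn(
        "will attempt to set the values inplace instead", DeprecationWarning
    )
```

Note that `DeprecationWarning` and `FutureWarning` are sibling subclasses of `Warning`, so one never satisfies a check for the other.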
BUG: Fix bug in maybe_convert_objects with None and nullable | diff --git a/pandas/_libs/lib.pyx b/pandas/_libs/lib.pyx
index e35cf2fb13768..81e0f3de748ff 100644
--- a/pandas/_libs/lib.pyx
+++ b/pandas/_libs/lib.pyx
@@ -2446,7 +2446,7 @@ def maybe_convert_objects(ndarray[object] objects,
seen.int_ = True
floats[i] = <float64_t>val
complexes[i] = <double complex>val
- if not seen.null_:
+ if not seen.null_ or convert_to_nullable_integer:
seen.saw_int(val)
if ((seen.uint_ and seen.sint_) or
@@ -2616,10 +2616,13 @@ def maybe_convert_objects(ndarray[object] objects,
result = complexes
elif seen.float_:
result = floats
- elif seen.int_:
+ elif seen.int_ or seen.uint_:
if convert_to_nullable_integer:
from pandas.core.arrays import IntegerArray
- result = IntegerArray(ints, mask)
+ if seen.uint_:
+ result = IntegerArray(uints, mask)
+ else:
+ result = IntegerArray(ints, mask)
else:
result = floats
elif seen.nan_:
diff --git a/pandas/tests/dtypes/test_inference.py b/pandas/tests/dtypes/test_inference.py
index df2afad51abf8..015c121ca684a 100644
--- a/pandas/tests/dtypes/test_inference.py
+++ b/pandas/tests/dtypes/test_inference.py
@@ -863,6 +863,18 @@ def test_maybe_convert_objects_nullable_integer(self, exp):
tm.assert_extension_array_equal(result, exp)
+ @pytest.mark.parametrize(
+ "dtype, val", [("int64", 1), ("uint64", np.iinfo(np.int64).max + 1)]
+ )
+ def test_maybe_convert_objects_nullable_none(self, dtype, val):
+ # GH#50043
+ arr = np.array([val, None, 3], dtype="object")
+ result = lib.maybe_convert_objects(arr, convert_to_nullable_integer=True)
+ expected = IntegerArray(
+ np.array([val, 0, 3], dtype=dtype), np.array([False, True, False])
+ )
+ tm.assert_extension_array_equal(result, expected)
+
@pytest.mark.parametrize(
"convert_to_masked_nullable, exp",
[
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
I don't think that this is user visible right now. Stumbled on this when working on nullables for read_sql | https://api.github.com/repos/pandas-dev/pandas/pulls/50043 | 2022-12-03T18:40:50Z | 2022-12-03T20:50:16Z | 2022-12-03T20:50:16Z | 2022-12-03T20:50:19Z |
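The target of the fix is object input containing `None` being convertible to the masked nullable integer types, including the uint64 case where the value does not fit in int64. A public-API sketch of the intended result (via `pd.array`, not the internal `maybe_convert_objects` path itself):

```python
import numpy as np
import pandas as pd

# "Int64"/"UInt64" are the masked nullable extension dtypes the conversion
# should land on; None becomes a masked (NA) slot rather than forcing float.
int_arr = pd.array([1, None, 3], dtype="Int64")

big = np.iinfo(np.int64).max + 1  # only representable as uint64
uint_arr = pd.array([big, None, 3], dtype="UInt64")
```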
ENH skip 'now' and 'today' when inferring format for array | diff --git a/pandas/_libs/tslib.pyx b/pandas/_libs/tslib.pyx
index b78174483be51..35a4131d11d50 100644
--- a/pandas/_libs/tslib.pyx
+++ b/pandas/_libs/tslib.pyx
@@ -431,7 +431,11 @@ def first_non_null(values: ndarray) -> int:
val = values[i]
if checknull_with_nat_and_na(val):
continue
- if isinstance(val, str) and (len(val) == 0 or val in nat_strings):
+ if (
+ isinstance(val, str)
+ and
+ (len(val) == 0 or val in ("now", "today", *nat_strings))
+ ):
continue
return i
else:
diff --git a/pandas/tests/tools/test_to_datetime.py b/pandas/tests/tools/test_to_datetime.py
index 7df45975475dd..2921a01918808 100644
--- a/pandas/tests/tools/test_to_datetime.py
+++ b/pandas/tests/tools/test_to_datetime.py
@@ -2318,6 +2318,8 @@ class TestGuessDatetimeFormat:
["", "2011-12-30 00:00:00.000000"],
["NaT", "2011-12-30 00:00:00.000000"],
["2011-12-30 00:00:00.000000", "random_string"],
+ ["now", "2011-12-30 00:00:00.000000"],
+ ["today", "2011-12-30 00:00:00.000000"],
],
)
def test_guess_datetime_format_for_array(self, test_list):
| breaking this off from https://github.com/pandas-dev/pandas/pull/49024/files
haven't added a whatsnew note as it's not user facing (yet! but it will make a difference after PDEP4) | https://api.github.com/repos/pandas-dev/pandas/pulls/50039 | 2022-12-03T14:21:48Z | 2022-12-04T16:33:17Z | 2022-12-04T16:33:17Z | 2022-12-05T21:02:48Z |
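The reason `"now"` and `"today"` are skipped is that, while they are valid `to_datetime` inputs, they resolve to the current time and so carry no layout information a format-guesser could learn from. A small sketch of the two kinds of input:

```python
import pandas as pd

# "now" parses to the current timestamp -- useless for inferring a format.
stamp = pd.to_datetime("now")

# A concrete datetime string is what format inference actually needs.
fixed = pd.to_datetime("2011-12-30 00:00:00.000000")
```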
BUG: to_datetime fails with np.datetime64 and non-ISO format | diff --git a/doc/source/whatsnew/v2.0.0.rst b/doc/source/whatsnew/v2.0.0.rst
index d8609737b8c7a..b70dcb0ae99fa 100644
--- a/doc/source/whatsnew/v2.0.0.rst
+++ b/doc/source/whatsnew/v2.0.0.rst
@@ -658,7 +658,7 @@ Datetimelike
- Bug in subtracting a ``datetime`` scalar from :class:`DatetimeIndex` failing to retain the original ``freq`` attribute (:issue:`48818`)
- Bug in ``pandas.tseries.holiday.Holiday`` where a half-open date interval causes inconsistent return types from :meth:`USFederalHolidayCalendar.holidays` (:issue:`49075`)
- Bug in rendering :class:`DatetimeIndex` and :class:`Series` and :class:`DataFrame` with timezone-aware dtypes with ``dateutil`` or ``zoneinfo`` timezones near daylight-savings transitions (:issue:`49684`)
-- Bug in :func:`to_datetime` was raising ``ValueError`` when parsing :class:`Timestamp` or ``datetime`` objects with non-ISO8601 ``format`` (:issue:`49298`)
+- Bug in :func:`to_datetime` was raising ``ValueError`` when parsing :class:`Timestamp`, ``datetime``, or ``np.datetime64`` objects with non-ISO8601 ``format`` (:issue:`49298`, :issue:`50036`)
-
Timedelta
diff --git a/pandas/_libs/tslibs/strptime.pyx b/pandas/_libs/tslibs/strptime.pyx
index c56b4891da428..9a315106b75cd 100644
--- a/pandas/_libs/tslibs/strptime.pyx
+++ b/pandas/_libs/tslibs/strptime.pyx
@@ -14,13 +14,17 @@ from _thread import allocate_lock as _thread_allocate_lock
import numpy as np
import pytz
+cimport numpy as cnp
from numpy cimport (
int64_t,
ndarray,
)
from pandas._libs.missing cimport checknull_with_nat_and_na
-from pandas._libs.tslibs.conversion cimport convert_timezone
+from pandas._libs.tslibs.conversion cimport (
+ convert_timezone,
+ get_datetime64_nanos,
+)
from pandas._libs.tslibs.nattype cimport (
NPY_NAT,
c_nat_strings as nat_strings,
@@ -33,6 +37,9 @@ from pandas._libs.tslibs.np_datetime cimport (
pydatetime_to_dt64,
)
from pandas._libs.tslibs.timestamps cimport _Timestamp
+from pandas._libs.util cimport is_datetime64_object
+
+cnp.import_array()
cdef dict _parse_code_table = {"y": 0,
@@ -166,6 +173,9 @@ def array_strptime(
check_dts_bounds(&dts)
result_timezone[i] = val.tzinfo
continue
+ elif is_datetime64_object(val):
+ iresult[i] = get_datetime64_nanos(val, NPY_FR_ns)
+ continue
else:
val = str(val)
diff --git a/pandas/tests/tools/test_to_datetime.py b/pandas/tests/tools/test_to_datetime.py
index 7df45975475dd..59be88e245fd0 100644
--- a/pandas/tests/tools/test_to_datetime.py
+++ b/pandas/tests/tools/test_to_datetime.py
@@ -758,6 +758,19 @@ def test_to_datetime_today_now_unicode_bytes(self, arg):
def test_to_datetime_dt64s(self, cache, dt):
assert to_datetime(dt, cache=cache) == Timestamp(dt)
+ @pytest.mark.parametrize(
+ "arg, format",
+ [
+ ("2001-01-01", "%Y-%m-%d"),
+ ("01-01-2001", "%d-%m-%Y"),
+ ],
+ )
+ def test_to_datetime_dt64s_and_str(self, arg, format):
+ # https://github.com/pandas-dev/pandas/issues/50036
+ result = to_datetime([arg, np.datetime64("2020-01-01")], format=format)
+ expected = DatetimeIndex(["2001-01-01", "2020-01-01"])
+ tm.assert_index_equal(result, expected)
+
@pytest.mark.parametrize(
"dt", [np.datetime64("1000-01-01"), np.datetime64("5000-01-02")]
)
| - [x] closes #50036 (Replace xxxx with the GitHub issue number)
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/50038 | 2022-12-03T12:41:37Z | 2022-12-04T15:07:03Z | 2022-12-04T15:07:03Z | 2022-12-04T15:07:03Z |
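The core of the fix is that `np.datetime64` values carry their own resolution and should convert directly (via `get_datetime64_nanos`) instead of being pushed through the string-format parser, where a non-ISO `format` made them fail. A version-safe sketch of the direct conversion (the `format=` variant from the new test requires the fixed pandas):

```python
import numpy as np
import pandas as pd

# A datetime64 scalar converts without any string parsing.
ts = pd.to_datetime(np.datetime64("2020-01-01"))

# Mixing strings and datetime64 in one call is also supported.
mixed = pd.to_datetime(["2001-01-01", np.datetime64("2020-01-01")])
```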
test: Add test for loc assignment changes datetime dtype | diff --git a/pandas/tests/frame/indexing/test_indexing.py b/pandas/tests/frame/indexing/test_indexing.py
index be67ce50a0634..81a5e3d9947be 100644
--- a/pandas/tests/frame/indexing/test_indexing.py
+++ b/pandas/tests/frame/indexing/test_indexing.py
@@ -29,6 +29,7 @@
date_range,
isna,
notna,
+ to_datetime,
)
import pandas._testing as tm
@@ -1454,6 +1455,26 @@ def test_loc_bool_multiindex(self, dtype, indexer):
)
tm.assert_frame_equal(result, expected)
+ @pytest.mark.parametrize("utc", [False, True])
+ @pytest.mark.parametrize("indexer", ["date", ["date"]])
+ def test_loc_datetime_assignment_dtype_does_not_change(self, utc, indexer):
+ # GH#49837
+ df = DataFrame(
+ {
+ "date": to_datetime(
+ [datetime(2022, 1, 20), datetime(2022, 1, 22)], utc=utc
+ ),
+ "update": [True, False],
+ }
+ )
+ expected = df.copy(deep=True)
+
+ update_df = df[df["update"]]
+
+ df.loc[df["update"], indexer] = update_df["date"]
+
+ tm.assert_frame_equal(df, expected)
+
class TestDataFrameIndexingUInt64:
def test_setitem(self, uint64_frame):
| - [x] closes #49837
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
| https://api.github.com/repos/pandas-dev/pandas/pulls/50037 | 2022-12-03T12:00:02Z | 2022-12-05T09:25:21Z | 2022-12-05T09:25:21Z | 2022-12-05T09:25:32Z |
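The regression test above asserts that a masked `.loc` assignment of datetime values leaves the column dtype untouched. A condensed sketch of the same pattern (assuming a pandas build that contains the GH#49837 fix):

```python
from datetime import datetime

import pandas as pd

df = pd.DataFrame(
    {
        "date": pd.to_datetime(
            [datetime(2022, 1, 20), datetime(2022, 1, 22)], utc=True
        ),
        "update": [True, False],
    }
)
before = df["date"].dtype  # tz-aware datetime64[ns, UTC]

# Assigning datetime values through a boolean mask must not coerce
# the column to object dtype.
df.loc[df["update"], "date"] = df.loc[df["update"], "date"]
after = df["date"].dtype
```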
ENH: Index.infer_objects | diff --git a/doc/source/whatsnew/v2.0.0.rst b/doc/source/whatsnew/v2.0.0.rst
index 0ecdde89d9013..e4fef0049d119 100644
--- a/doc/source/whatsnew/v2.0.0.rst
+++ b/doc/source/whatsnew/v2.0.0.rst
@@ -83,6 +83,7 @@ Other enhancements
- :func:`timedelta_range` now supports a ``unit`` keyword ("s", "ms", "us", or "ns") to specify the desired resolution of the output index (:issue:`49824`)
- :meth:`DataFrame.to_json` now supports a ``mode`` keyword with supported inputs 'w' and 'a'. Defaulting to 'w', 'a' can be used when lines=True and orient='records' to append record oriented json lines to an existing json file. (:issue:`35849`)
- Added ``name`` parameter to :meth:`IntervalIndex.from_breaks`, :meth:`IntervalIndex.from_arrays` and :meth:`IntervalIndex.from_tuples` (:issue:`48911`)
+- Added :meth:`Index.infer_objects` analogous to :meth:`Series.infer_objects` (:issue:`50034`)
- :meth:`DataFrame.plot.hist` now recognizes ``xlabel`` and ``ylabel`` arguments (:issue:`49793`)
-
diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index ce06b6bc01581..dc0359426f07c 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -6521,6 +6521,36 @@ def drop(
indexer = indexer[~mask]
return self.delete(indexer)
+ def infer_objects(self, copy: bool = True) -> Index:
+ """
+ If we have an object dtype, try to infer a non-object dtype.
+
+ Parameters
+ ----------
+ copy : bool, default True
+ Whether to make a copy in cases where no inference occurs.
+ """
+ if self._is_multi:
+ raise NotImplementedError(
+ "infer_objects is not implemented for MultiIndex. "
+ "Use index.to_frame().infer_objects() instead."
+ )
+ if self.dtype != object:
+ return self.copy() if copy else self
+
+ values = self._values
+ values = cast("npt.NDArray[np.object_]", values)
+ res_values = lib.maybe_convert_objects(
+ values,
+ convert_datetime=True,
+ convert_timedelta=True,
+ convert_period=True,
+ convert_interval=True,
+ )
+ if copy and res_values is values:
+ return self.copy()
+ return Index(res_values, name=self.name)
+
# --------------------------------------------------------------------
# Generated Arithmetic, Comparison, and Unary Methods
diff --git a/pandas/tests/indexes/multi/test_analytics.py b/pandas/tests/indexes/multi/test_analytics.py
index 8803862615858..fb6f56b0fcba7 100644
--- a/pandas/tests/indexes/multi/test_analytics.py
+++ b/pandas/tests/indexes/multi/test_analytics.py
@@ -12,6 +12,11 @@
from pandas.core.api import UInt64Index
+def test_infer_objects(idx):
+ with pytest.raises(NotImplementedError, match="to_frame"):
+ idx.infer_objects()
+
+
def test_shift(idx):
# GH8083 test the base class for shift
diff --git a/pandas/tests/series/methods/test_infer_objects.py b/pandas/tests/series/methods/test_infer_objects.py
index bb83f62f5ebb5..4710aaf54de31 100644
--- a/pandas/tests/series/methods/test_infer_objects.py
+++ b/pandas/tests/series/methods/test_infer_objects.py
@@ -1,23 +1,24 @@
import numpy as np
-from pandas import Series
import pandas._testing as tm
class TestInferObjects:
- def test_infer_objects_series(self):
+ def test_infer_objects_series(self, index_or_series):
# GH#11221
- actual = Series(np.array([1, 2, 3], dtype="O")).infer_objects()
- expected = Series([1, 2, 3])
- tm.assert_series_equal(actual, expected)
+ actual = index_or_series(np.array([1, 2, 3], dtype="O")).infer_objects()
+ expected = index_or_series([1, 2, 3])
+ tm.assert_equal(actual, expected)
- actual = Series(np.array([1, 2, 3, None], dtype="O")).infer_objects()
- expected = Series([1.0, 2.0, 3.0, np.nan])
- tm.assert_series_equal(actual, expected)
+ actual = index_or_series(np.array([1, 2, 3, None], dtype="O")).infer_objects()
+ expected = index_or_series([1.0, 2.0, 3.0, np.nan])
+ tm.assert_equal(actual, expected)
# only soft conversions, unconvertable pass thru unchanged
- actual = Series(np.array([1, 2, 3, None, "a"], dtype="O")).infer_objects()
- expected = Series([1, 2, 3, None, "a"])
+
+ obj = index_or_series(np.array([1, 2, 3, None, "a"], dtype="O"))
+ actual = obj.infer_objects()
+ expected = index_or_series([1, 2, 3, None, "a"], dtype=object)
assert actual.dtype == "object"
- tm.assert_series_equal(actual, expected)
+ tm.assert_equal(actual, expected)
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/50034 | 2022-12-03T02:54:45Z | 2022-12-06T17:43:39Z | 2022-12-06T17:43:39Z | 2022-12-06T18:31:59Z |
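The new `Index.infer_objects` mirrors the long-standing `Series` method: soft conversion only, so unconvertible mixed data passes through as object. A sketch using the pre-existing `Series` analog (the `Index` variant lands in 2.0):

```python
import numpy as np
import pandas as pd

# Pure integers stored as object infer down to int64.
inferred = pd.Series(np.array([1, 2, 3], dtype="object")).infer_objects()

# Only soft conversions: a stray string keeps the whole thing object.
mixed = pd.Series(np.array([1, 2, "a"], dtype="object")).infer_objects()
```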
TST: adding new test for groupby cumsum with named aggregate | diff --git a/pandas/tests/extension/base/groupby.py b/pandas/tests/extension/base/groupby.py
index 339c6560d6212..3a9dbe9dfb384 100644
--- a/pandas/tests/extension/base/groupby.py
+++ b/pandas/tests/extension/base/groupby.py
@@ -56,6 +56,30 @@ def test_groupby_agg_extension(self, data_for_grouping):
result = df.groupby("A").first()
self.assert_frame_equal(result, expected)
+ def test_groupby_agg_extension_timedelta_cumsum_with_named_aggregation(self):
+ # GH#41720
+ expected = pd.DataFrame(
+ {
+ "td": {
+ 0: pd.Timedelta("0 days 01:00:00"),
+ 1: pd.Timedelta("0 days 01:15:00"),
+ 2: pd.Timedelta("0 days 01:15:00"),
+ }
+ }
+ )
+ df = pd.DataFrame(
+ {
+ "td": pd.Series(
+ ["0 days 01:00:00", "0 days 00:15:00", "0 days 01:15:00"],
+ dtype="timedelta64[ns]",
+ ),
+ "grps": ["a", "a", "b"],
+ }
+ )
+ gb = df.groupby("grps")
+ result = gb.agg(td=("td", "cumsum"))
+ self.assert_frame_equal(result, expected)
+
def test_groupby_extension_no_sort(self, data_for_grouping):
df = pd.DataFrame({"A": [1, 1, 2, 2, 3, 3, 1, 4], "B": data_for_grouping})
result = df.groupby("B", sort=False).A.mean()
| - [ ] closes #41720
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
| https://api.github.com/repos/pandas-dev/pandas/pulls/50033 | 2022-12-03T01:33:17Z | 2022-12-08T03:40:45Z | 2022-12-08T03:40:45Z | 2022-12-08T03:40:55Z |
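The behavior under test is that `cumsum` accumulates within each group for timedelta data, so the second `"a"` row adds 15 minutes onto the first hour while the lone `"b"` row stands alone. A direct sketch of that group-wise accumulation:

```python
import pandas as pd

df = pd.DataFrame(
    {
        "td": pd.Series(
            ["0 days 01:00:00", "0 days 00:15:00", "0 days 01:15:00"],
            dtype="timedelta64[ns]",
        ),
        "grps": ["a", "a", "b"],
    }
)

# Cumulative sum restarts for each group key.
result = df.groupby("grps")["td"].cumsum()
```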
REF: remove soft_convert_objects | diff --git a/pandas/core/dtypes/cast.py b/pandas/core/dtypes/cast.py
index 36c713cab7123..f227eb46273a5 100644
--- a/pandas/core/dtypes/cast.py
+++ b/pandas/core/dtypes/cast.py
@@ -41,7 +41,6 @@
IntCastingNaNError,
LossySetitemError,
)
-from pandas.util._validators import validate_bool_kwarg
from pandas.core.dtypes.common import (
DT64NS_DTYPE,
@@ -952,54 +951,6 @@ def coerce_indexer_dtype(indexer, categories) -> np.ndarray:
return ensure_int64(indexer)
-def soft_convert_objects(
- values: np.ndarray,
- *,
- datetime: bool = True,
- timedelta: bool = True,
- period: bool = True,
- copy: bool = True,
-) -> ArrayLike:
- """
- Try to coerce datetime, timedelta, and numeric object-dtype columns
- to inferred dtype.
-
- Parameters
- ----------
- values : np.ndarray[object]
- datetime : bool, default True
- timedelta : bool, default True
- period : bool, default True
- copy : bool, default True
-
- Returns
- -------
- np.ndarray or ExtensionArray
- """
- validate_bool_kwarg(datetime, "datetime")
- validate_bool_kwarg(timedelta, "timedelta")
- validate_bool_kwarg(copy, "copy")
-
- conversion_count = sum((datetime, timedelta))
- if conversion_count == 0:
- raise ValueError("At least one of datetime or timedelta must be True.")
-
- # Soft conversions
- if datetime or timedelta or period:
- # GH 20380, when datetime is beyond year 2262, hence outside
- # bound of nanosecond-resolution 64-bit integers.
- converted = lib.maybe_convert_objects(
- values,
- convert_datetime=datetime,
- convert_timedelta=timedelta,
- convert_period=period,
- )
- if converted is not values:
- return converted
-
- return values
-
-
def convert_dtypes(
input_array: ArrayLike,
convert_string: bool = True,
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index d1e48a3d10a1e..37b7af13fc7c4 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -6354,9 +6354,8 @@ def infer_objects(self: NDFrameT) -> NDFrameT:
A int64
dtype: object
"""
- return self._constructor(
- self._mgr.convert(datetime=True, timedelta=True, copy=True)
- ).__finalize__(self, method="infer_objects")
+ new_mgr = self._mgr.convert()
+ return self._constructor(new_mgr).__finalize__(self, method="infer_objects")
@final
def convert_dtypes(
diff --git a/pandas/core/internals/array_manager.py b/pandas/core/internals/array_manager.py
index 8ddab458e35a9..918c70ff91da5 100644
--- a/pandas/core/internals/array_manager.py
+++ b/pandas/core/internals/array_manager.py
@@ -32,7 +32,6 @@
from pandas.core.dtypes.cast import (
ensure_dtype_can_hold_na,
infer_dtype_from_scalar,
- soft_convert_objects,
)
from pandas.core.dtypes.common import (
ensure_platform_int,
@@ -375,25 +374,19 @@ def fillna(self: T, value, limit, inplace: bool, downcast) -> T:
def astype(self: T, dtype, copy: bool = False, errors: str = "raise") -> T:
return self.apply(astype_array_safe, dtype=dtype, copy=copy, errors=errors)
- def convert(
- self: T,
- *,
- copy: bool = True,
- datetime: bool = True,
- timedelta: bool = True,
- ) -> T:
+ def convert(self: T) -> T:
def _convert(arr):
if is_object_dtype(arr.dtype):
# extract PandasArray for tests that patch PandasArray._typ
arr = np.asarray(arr)
- return soft_convert_objects(
+ return lib.maybe_convert_objects(
arr,
- datetime=datetime,
- timedelta=timedelta,
- copy=copy,
+ convert_datetime=True,
+ convert_timedelta=True,
+ convert_period=True,
)
else:
- return arr.copy() if copy else arr
+ return arr.copy()
return self.apply(_convert)
diff --git a/pandas/core/internals/blocks.py b/pandas/core/internals/blocks.py
index 57a0fc81515c5..95300c888eede 100644
--- a/pandas/core/internals/blocks.py
+++ b/pandas/core/internals/blocks.py
@@ -44,7 +44,6 @@
find_result_type,
maybe_downcast_to_dtype,
np_can_hold_element,
- soft_convert_objects,
)
from pandas.core.dtypes.common import (
ensure_platform_int,
@@ -429,7 +428,7 @@ def _maybe_downcast(self, blocks: list[Block], downcast=None) -> list[Block]:
# but ATM it breaks too much existing code.
# split and convert the blocks
- return extend_blocks([blk.convert(datetime=True) for blk in blocks])
+ return extend_blocks([blk.convert() for blk in blocks])
if downcast is None:
return blocks
@@ -451,8 +450,6 @@ def convert(
self,
*,
copy: bool = True,
- datetime: bool = True,
- timedelta: bool = True,
) -> list[Block]:
"""
attempt to coerce any object types to better types return a copy
@@ -1967,8 +1964,6 @@ def convert(
self,
*,
copy: bool = True,
- datetime: bool = True,
- timedelta: bool = True,
) -> list[Block]:
"""
attempt to cast any object types to better types return a copy of
@@ -1980,11 +1975,11 @@ def convert(
# avoid doing .ravel as that might make a copy
values = values[0]
- res_values = soft_convert_objects(
+ res_values = lib.maybe_convert_objects(
values,
- datetime=datetime,
- timedelta=timedelta,
- copy=copy,
+ convert_datetime=True,
+ convert_timedelta=True,
+ convert_period=True,
)
res_values = ensure_block_shape(res_values, self.ndim)
return [self.make_block(res_values)]
diff --git a/pandas/core/internals/managers.py b/pandas/core/internals/managers.py
index 306fea06963ec..d1eee23f1908c 100644
--- a/pandas/core/internals/managers.py
+++ b/pandas/core/internals/managers.py
@@ -441,18 +441,10 @@ def fillna(self: T, value, limit, inplace: bool, downcast) -> T:
def astype(self: T, dtype, copy: bool = False, errors: str = "raise") -> T:
return self.apply("astype", dtype=dtype, copy=copy, errors=errors)
- def convert(
- self: T,
- *,
- copy: bool = True,
- datetime: bool = True,
- timedelta: bool = True,
- ) -> T:
+ def convert(self: T) -> T:
return self.apply(
"convert",
- copy=copy,
- datetime=datetime,
- timedelta=timedelta,
+ copy=True,
)
def replace(self: T, to_replace, value, inplace: bool) -> T:
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/50031 | 2022-12-03T00:22:26Z | 2022-12-03T19:55:56Z | 2022-12-03T19:55:56Z | 2022-12-03T21:42:30Z |
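After this refactor, `infer_objects` routes straight through `lib.maybe_convert_objects` rather than the removed `soft_convert_objects` wrapper; the user-visible contract is unchanged. A sketch of that contract from the public side:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(
    {
        "A": np.array([1, 2, 3], dtype="object"),
        "B": np.array(["x", "y", "z"], dtype="object"),
    }
)

# Column A infers to int64; column B has no better dtype and stays object.
converted = df.infer_objects()
```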
DEPR: Remove option use_inf_as null | diff --git a/doc/source/whatsnew/v2.0.0.rst b/doc/source/whatsnew/v2.0.0.rst
index bde97f3714219..d8609737b8c7a 100644
--- a/doc/source/whatsnew/v2.0.0.rst
+++ b/doc/source/whatsnew/v2.0.0.rst
@@ -386,6 +386,7 @@ Removal of prior version deprecations/changes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
- Removed deprecated :attr:`Timestamp.freq`, :attr:`Timestamp.freqstr` and argument ``freq`` from the :class:`Timestamp` constructor and :meth:`Timestamp.fromordinal` (:issue:`14146`)
- Removed deprecated :class:`CategoricalBlock`, :meth:`Block.is_categorical`, require datetime64 and timedelta64 values to be wrapped in :class:`DatetimeArray` or :class:`TimedeltaArray` before passing to :meth:`Block.make_block_same_class`, require ``DatetimeTZBlock.values`` to have the correct ndim when passing to the :class:`BlockManager` constructor, and removed the "fastpath" keyword from the :class:`SingleBlockManager` constructor (:issue:`40226`, :issue:`40571`)
+- Removed deprecated global option ``use_inf_as_null`` in favor of ``use_inf_as_na`` (:issue:`17126`)
- Removed deprecated module ``pandas.core.index`` (:issue:`30193`)
- Removed deprecated alias ``pandas.core.tools.datetimes.to_time``, import the function directly from ``pandas.core.tools.times`` instead (:issue:`34145`)
- Removed deprecated :meth:`Categorical.to_dense`, use ``np.asarray(cat)`` instead (:issue:`32639`)
diff --git a/pandas/core/config_init.py b/pandas/core/config_init.py
index b101b25a10a80..d1a52798360bd 100644
--- a/pandas/core/config_init.py
+++ b/pandas/core/config_init.py
@@ -458,12 +458,6 @@ def is_terminal() -> bool:
with cf.config_prefix("mode"):
cf.register_option("sim_interactive", False, tc_sim_interactive_doc)
-use_inf_as_null_doc = """
-: boolean
- use_inf_as_null had been deprecated and will be removed in a future
- version. Use `use_inf_as_na` instead.
-"""
-
use_inf_as_na_doc = """
: boolean
True means treat None, NaN, INF, -INF as NA (old way),
@@ -483,14 +477,6 @@ def use_inf_as_na_cb(key) -> None:
with cf.config_prefix("mode"):
cf.register_option("use_inf_as_na", False, use_inf_as_na_doc, cb=use_inf_as_na_cb)
- cf.register_option(
- "use_inf_as_null", False, use_inf_as_null_doc, cb=use_inf_as_na_cb
- )
-
-
-cf.deprecate_option(
- "mode.use_inf_as_null", msg=use_inf_as_null_doc, rkey="mode.use_inf_as_na"
-)
data_manager_doc = """
diff --git a/pandas/tests/series/test_missing.py b/pandas/tests/series/test_missing.py
index d956b2c3fcd42..3c0f962b90086 100644
--- a/pandas/tests/series/test_missing.py
+++ b/pandas/tests/series/test_missing.py
@@ -36,20 +36,6 @@ def test_isna_for_inf(self):
tm.assert_series_equal(r, e)
tm.assert_series_equal(dr, de)
- @pytest.mark.parametrize(
- "method, expected",
- [
- ["isna", Series([False, True, True, False])],
- ["dropna", Series(["a", 1.0], index=[0, 3])],
- ],
- )
- def test_isnull_for_inf_deprecated(self, method, expected):
- # gh-17115
- s = Series(["a", np.inf, np.nan, 1.0])
- with pd.option_context("mode.use_inf_as_null", True):
- result = getattr(s, method)()
- tm.assert_series_equal(result, expected)
-
def test_timedelta64_nan(self):
td = Series([timedelta(days=i) for i in range(10)])
| Introduced in https://github.com/pandas-dev/pandas/pull/17126
| https://api.github.com/repos/pandas-dev/pandas/pulls/50030 | 2022-12-02T23:48:18Z | 2022-12-03T02:48:20Z | 2022-12-03T02:48:20Z | 2022-12-05T19:42:39Z |
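With the deprecated `use_inf_as_null` alias gone (and `use_inf_as_na` itself off by default), only actual missing values count as NA; infinities do not. A sketch of the default semantics the removed option used to flip:

```python
import numpy as np
import pandas as pd

s = pd.Series(["a", np.inf, np.nan, 1.0])

# By default only the NaN slot is missing; np.inf is a regular value.
na_mask = s.isna()
```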
DOC: Groupby transform should mention that parameter can be a string | diff --git a/pandas/core/groupby/generic.py b/pandas/core/groupby/generic.py
index e791e956473c1..02e8236524cb7 100644
--- a/pandas/core/groupby/generic.py
+++ b/pandas/core/groupby/generic.py
@@ -427,7 +427,51 @@ def _aggregate_named(self, func, *args, **kwargs):
return result
- @Substitution(klass="Series")
+ __examples_series_doc = dedent(
+ """
+ >>> ser = pd.Series(
+ ... [390.0, 350.0, 30.0, 20.0],
+ ... index=["Falcon", "Falcon", "Parrot", "Parrot"],
+ ... name="Max Speed")
+ >>> grouped = ser.groupby([1, 1, 2, 2])
+ >>> grouped.transform(lambda x: (x - x.mean()) / x.std())
+ Falcon 0.707107
+ Falcon -0.707107
+ Parrot 0.707107
+ Parrot -0.707107
+ Name: Max Speed, dtype: float64
+
+ Broadcast result of the transformation
+
+ >>> grouped.transform(lambda x: x.max() - x.min())
+ Falcon 40.0
+ Falcon 40.0
+ Parrot 10.0
+ Parrot 10.0
+ Name: Max Speed, dtype: float64
+
+ >>> grouped.transform("mean")
+ Falcon 370.0
+ Falcon 370.0
+ Parrot 25.0
+ Parrot 25.0
+ Name: Max Speed, dtype: float64
+
+ .. versionchanged:: 1.3.0
+
+ The resulting dtype will reflect the return value of the passed ``func``,
+ for example:
+
+ >>> grouped.transform(lambda x: x.astype(int).max())
+ Falcon 390
+ Falcon 390
+ Parrot 30
+ Parrot 30
+ Name: Max Speed, dtype: int64
+ """
+ )
+
+ @Substitution(klass="Series", example=__examples_series_doc)
@Appender(_transform_template)
def transform(self, func, *args, engine=None, engine_kwargs=None, **kwargs):
return self._transform(
@@ -1407,7 +1451,61 @@ def _transform_general(self, func, *args, **kwargs):
concatenated = concatenated.reindex(concat_index, axis=other_axis, copy=False)
return self._set_result_index_ordered(concatenated)
- @Substitution(klass="DataFrame")
+ __examples_dataframe_doc = dedent(
+ """
+ >>> df = pd.DataFrame({'A' : ['foo', 'bar', 'foo', 'bar',
+ ... 'foo', 'bar'],
+ ... 'B' : ['one', 'one', 'two', 'three',
+ ... 'two', 'two'],
+ ... 'C' : [1, 5, 5, 2, 5, 5],
+ ... 'D' : [2.0, 5., 8., 1., 2., 9.]})
+ >>> grouped = df.groupby('A')[['C', 'D']]
+ >>> grouped.transform(lambda x: (x - x.mean()) / x.std())
+ C D
+ 0 -1.154701 -0.577350
+ 1 0.577350 0.000000
+ 2 0.577350 1.154701
+ 3 -1.154701 -1.000000
+ 4 0.577350 -0.577350
+ 5 0.577350 1.000000
+
+ Broadcast result of the transformation
+
+ >>> grouped.transform(lambda x: x.max() - x.min())
+ C D
+ 0 4.0 6.0
+ 1 3.0 8.0
+ 2 4.0 6.0
+ 3 3.0 8.0
+ 4 4.0 6.0
+ 5 3.0 8.0
+
+ >>> grouped.transform("mean")
+ C D
+ 0 3.666667 4.0
+ 1 4.000000 5.0
+ 2 3.666667 4.0
+ 3 4.000000 5.0
+ 4 3.666667 4.0
+ 5 4.000000 5.0
+
+ .. versionchanged:: 1.3.0
+
+ The resulting dtype will reflect the return value of the passed ``func``,
+ for example:
+
+ >>> grouped.transform(lambda x: x.astype(int).max())
+ C D
+ 0 5 8
+ 1 5 9
+ 2 5 8
+ 3 5 9
+ 4 5 8
+ 5 5 9
+ """
+ )
+
+ @Substitution(klass="DataFrame", example=__examples_dataframe_doc)
@Appender(_transform_template)
def transform(self, func, *args, engine=None, engine_kwargs=None, **kwargs):
return self._transform(
diff --git a/pandas/core/groupby/groupby.py b/pandas/core/groupby/groupby.py
index 52d18e8ffe540..ab030aaa66d13 100644
--- a/pandas/core/groupby/groupby.py
+++ b/pandas/core/groupby/groupby.py
@@ -402,15 +402,22 @@ class providing the base-class of operations.
f : function, str
Function to apply to each group. See the Notes section below for requirements.
- Can also accept a Numba JIT function with
- ``engine='numba'`` specified.
+ Accepted inputs are:
+
+ - String
+ - Python function
+ - Numba JIT function with ``engine='numba'`` specified.
+ Only passing a single function is supported with this engine.
If the ``'numba'`` engine is chosen, the function must be
a user defined function with ``values`` and ``index`` as the
first and second arguments respectively in the function signature.
Each group's index will be passed to the user defined function
and optionally available for use.
+ If a string is chosen, then it needs to be the name
+ of the groupby method you want to use.
+
.. versionchanged:: 1.1.0
*args
Positional arguments to pass to func.
@@ -480,48 +487,7 @@ class providing the base-class of operations.
Examples
--------
-
->>> df = pd.DataFrame({'A' : ['foo', 'bar', 'foo', 'bar',
-... 'foo', 'bar'],
-... 'B' : ['one', 'one', 'two', 'three',
-... 'two', 'two'],
-... 'C' : [1, 5, 5, 2, 5, 5],
-... 'D' : [2.0, 5., 8., 1., 2., 9.]})
->>> grouped = df.groupby('A')[['C', 'D']]
->>> grouped.transform(lambda x: (x - x.mean()) / x.std())
- C D
-0 -1.154701 -0.577350
-1 0.577350 0.000000
-2 0.577350 1.154701
-3 -1.154701 -1.000000
-4 0.577350 -0.577350
-5 0.577350 1.000000
-
-Broadcast result of the transformation
-
->>> grouped.transform(lambda x: x.max() - x.min())
- C D
-0 4.0 6.0
-1 3.0 8.0
-2 4.0 6.0
-3 3.0 8.0
-4 4.0 6.0
-5 3.0 8.0
-
-.. versionchanged:: 1.3.0
-
- The resulting dtype will reflect the return value of the passed ``func``,
- for example:
-
->>> grouped.transform(lambda x: x.astype(int).max())
- C D
-0 5 8
-1 5 9
-2 5 8
-3 5 9
-4 5 8
-5 5 9
-"""
+%(example)s"""
_agg_template = """
Aggregate using one or more operations over the specified axis.
| - [ ] closes https://github.com/pandas-dev/pandas/issues/49961
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `pandas/core/groupby/groupby.py`
| https://api.github.com/repos/pandas-dev/pandas/pulls/50029 | 2022-12-02T23:31:09Z | 2022-12-13T03:06:42Z | 2022-12-13T03:06:42Z | 2022-12-13T04:55:06Z |
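The docstring examples moved into `__examples_series_doc` above run as-is; in particular the string form the PR documents — passing the name of a groupby method behaves like the corresponding reduction broadcast back to each group:

```python
import pandas as pd

ser = pd.Series(
    [390.0, 350.0, 30.0, 20.0],
    index=["Falcon", "Falcon", "Parrot", "Parrot"],
    name="Max Speed",
)
grouped = ser.groupby([1, 1, 2, 2])

# A string naming a groupby method is accepted in place of a callable.
by_name = grouped.transform("mean")
by_func = grouped.transform(lambda x: x.mean())
print(by_name.tolist())  # [370.0, 370.0, 25.0, 25.0]
```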
API: make FloatingArray.astype consistent with IntegerArray/BooleanArray | diff --git a/pandas/core/arrays/masked.py b/pandas/core/arrays/masked.py
index 1fd6482f650da..3aa6a12160b73 100644
--- a/pandas/core/arrays/masked.py
+++ b/pandas/core/arrays/masked.py
@@ -38,7 +38,6 @@
from pandas.util._decorators import doc
from pandas.util._validators import validate_fillna_kwargs
-from pandas.core.dtypes.astype import astype_nansafe
from pandas.core.dtypes.base import ExtensionDtype
from pandas.core.dtypes.common import (
is_bool,
@@ -492,10 +491,6 @@ def astype(self, dtype: AstypeArg, copy: bool = True) -> ArrayLike:
raise ValueError("cannot convert float NaN to bool")
data = self.to_numpy(dtype=dtype, na_value=na_value, copy=copy)
- if self.dtype.kind == "f":
- # TODO: make this consistent between IntegerArray/FloatingArray,
- # see test_astype_str
- return astype_nansafe(data, dtype, copy=False)
return data
__array_priority__ = 1000 # higher than ndarray so ops dispatch to us
diff --git a/pandas/tests/arrays/floating/test_astype.py b/pandas/tests/arrays/floating/test_astype.py
index 5a6e0988a0897..ade3dbd2c99da 100644
--- a/pandas/tests/arrays/floating/test_astype.py
+++ b/pandas/tests/arrays/floating/test_astype.py
@@ -65,7 +65,7 @@ def test_astype_to_integer_array():
def test_astype_str():
a = pd.array([0.1, 0.2, None], dtype="Float64")
- expected = np.array(["0.1", "0.2", "<NA>"], dtype=object)
+ expected = np.array(["0.1", "0.2", "<NA>"], dtype="U32")
tm.assert_numpy_array_equal(a.astype(str), expected)
tm.assert_numpy_array_equal(a.astype("str"), expected)
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/50028 | 2022-12-02T21:55:37Z | 2022-12-05T09:27:23Z | 2022-12-05T09:27:23Z | 2022-12-05T16:47:43Z |
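The changed expectation in `test_astype_str` can be reproduced directly. A hedged sketch (the exact return dtype of `astype(str)` may vary across pandas versions, so only the rendered values are compared here):

```python
import pandas as pd

a = pd.array([0.1, 0.2, None], dtype="Float64")
out = a.astype(str)

# Missing values are rendered as the masked sentinel "<NA>",
# consistent with IntegerArray/BooleanArray after this change.
print([str(x) for x in out])  # ['0.1', '0.2', '<NA>']
```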
DOC: add examples BusinessMonthEnd(0) and SemiMonthEnd(0) | diff --git a/pandas/_libs/tslibs/offsets.pyx b/pandas/_libs/tslibs/offsets.pyx
index 4c6493652b216..70d437e0eea11 100644
--- a/pandas/_libs/tslibs/offsets.pyx
+++ b/pandas/_libs/tslibs/offsets.pyx
@@ -2460,16 +2460,25 @@ cdef class BusinessMonthEnd(MonthOffset):
"""
DateOffset increments between the last business day of the month.
+ BusinessMonthEnd goes to the next date which is the last business day of the month.
+ To get the last business day of the current month pass the parameter n equals 0.
+
Examples
--------
- >>> from pandas.tseries.offsets import BMonthEnd
- >>> ts = pd.Timestamp('2020-05-24 05:01:15')
- >>> ts + BMonthEnd()
- Timestamp('2020-05-29 05:01:15')
- >>> ts + BMonthEnd(2)
- Timestamp('2020-06-30 05:01:15')
- >>> ts + BMonthEnd(-2)
- Timestamp('2020-03-31 05:01:15')
+ >>> ts = pd.Timestamp(2022, 11, 29)
+ >>> ts + pd.offsets.BMonthEnd()
+ Timestamp('2022-11-30 00:00:00')
+
+ >>> ts = pd.Timestamp(2022, 11, 30)
+ >>> ts + pd.offsets.BMonthEnd()
+ Timestamp('2022-12-30 00:00:00')
+
+ If you want to get the end of the current business month
+ pass the parameter n equals 0:
+
+ >>> ts = pd.Timestamp(2022, 11, 30)
+ >>> ts + pd.offsets.BMonthEnd(0)
+ Timestamp('2022-11-30 00:00:00')
"""
_prefix = "BM"
_day_opt = "business_end"
@@ -2642,11 +2651,24 @@ cdef class SemiMonthEnd(SemiMonthOffset):
Examples
--------
- >>> ts = pd.Timestamp(2022, 1, 1)
+ >>> ts = pd.Timestamp(2022, 1, 14)
>>> ts + pd.offsets.SemiMonthEnd()
Timestamp('2022-01-15 00:00:00')
- """
+ >>> ts = pd.Timestamp(2022, 1, 15)
+ >>> ts + pd.offsets.SemiMonthEnd()
+ Timestamp('2022-01-31 00:00:00')
+
+ >>> ts = pd.Timestamp(2022, 1, 31)
+ >>> ts + pd.offsets.SemiMonthEnd()
+ Timestamp('2022-02-15 00:00:00')
+
+ If you want to get the result for the current month pass the parameter n equals 0:
+
+ >>> ts = pd.Timestamp(2022, 1, 15)
+ >>> ts + pd.offsets.SemiMonthEnd(0)
+ Timestamp('2022-01-15 00:00:00')
+ """
_prefix = "SM"
_min_day_of_month = 1
| This PR is related to PR #49958.
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
Updated docs for `BusinessMonthEnd` and `SemiMonthEnd`. Added more examples to highlight “the last day of the month” behavior, and added examples that use the parameter ``n=0``. | https://api.github.com/repos/pandas-dev/pandas/pulls/50027 | 2022-12-02T21:35:45Z | 2022-12-03T12:47:11Z | 2022-12-03T12:47:11Z | 2022-12-03T12:47:11Z |
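The distinction the new docstring examples draw can be checked directly (dates taken from the examples added in the diff above):

```python
import pandas as pd

ts = pd.Timestamp(2022, 11, 30)  # already the last business day of November

# n=1 (the default) always rolls to the *next* business month end;
# n=0 leaves a date that is already a business month end unchanged.
print(ts + pd.offsets.BMonthEnd())   # 2022-12-30 (Dec 31 is a Saturday)
print(ts + pd.offsets.BMonthEnd(0))  # 2022-11-30
```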
REF: remove NDFrame._convert | diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 038c889e4d5f7..d1e48a3d10a1e 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -6315,37 +6315,6 @@ def __deepcopy__(self: NDFrameT, memo=None) -> NDFrameT:
"""
return self.copy(deep=True)
- @final
- def _convert(
- self: NDFrameT,
- *,
- datetime: bool_t = False,
- timedelta: bool_t = False,
- ) -> NDFrameT:
- """
- Attempt to infer better dtype for object columns.
-
- Parameters
- ----------
- datetime : bool, default False
- If True, convert to date where possible.
- timedelta : bool, default False
- If True, convert to timedelta where possible.
-
- Returns
- -------
- converted : same as input object
- """
- validate_bool_kwarg(datetime, "datetime")
- validate_bool_kwarg(timedelta, "timedelta")
- return self._constructor(
- self._mgr.convert(
- datetime=datetime,
- timedelta=timedelta,
- copy=True,
- )
- ).__finalize__(self)
-
@final
def infer_objects(self: NDFrameT) -> NDFrameT:
"""
diff --git a/pandas/core/groupby/generic.py b/pandas/core/groupby/generic.py
index a80892a145a70..d3e37a40614b3 100644
--- a/pandas/core/groupby/generic.py
+++ b/pandas/core/groupby/generic.py
@@ -1681,9 +1681,8 @@ def _wrap_agged_manager(self, mgr: Manager2D) -> DataFrame:
if self.axis == 1:
result = result.T
- # Note: we only need to pass datetime=True in order to get numeric
- # values converted
- return self._reindex_output(result)._convert(datetime=True)
+ # Note: we really only care about inferring numeric dtypes here
+ return self._reindex_output(result).infer_objects()
def _iterate_column_groupbys(self, obj: DataFrame | Series):
for i, colname in enumerate(obj.columns):
diff --git a/pandas/plotting/_matplotlib/core.py b/pandas/plotting/_matplotlib/core.py
index 324076cd38917..3a634a60e784e 100644
--- a/pandas/plotting/_matplotlib/core.py
+++ b/pandas/plotting/_matplotlib/core.py
@@ -600,9 +600,9 @@ def _compute_plot_data(self):
self.subplots = True
data = reconstruct_data_with_by(self.data, by=self.by, cols=self.columns)
- # GH16953, _convert is needed as fallback, for ``Series``
+ # GH16953, infer_objects is needed as fallback, for ``Series``
# with ``dtype == object``
- data = data._convert(datetime=True, timedelta=True)
+ data = data.infer_objects()
include_type = [np.number, "datetime", "datetimetz", "timedelta"]
# GH23719, allow plotting boolean
diff --git a/pandas/plotting/_matplotlib/hist.py b/pandas/plotting/_matplotlib/hist.py
index 27c69a31f31a2..337628aa3bc2e 100644
--- a/pandas/plotting/_matplotlib/hist.py
+++ b/pandas/plotting/_matplotlib/hist.py
@@ -78,7 +78,7 @@ def _args_adjust(self) -> None:
def _calculate_bins(self, data: DataFrame) -> np.ndarray:
"""Calculate bins given data"""
- nd_values = data._convert(datetime=True)._get_numeric_data()
+ nd_values = data.infer_objects()._get_numeric_data()
values = np.ravel(nd_values)
values = values[~isna(values)]
diff --git a/pandas/tests/apply/test_frame_apply.py b/pandas/tests/apply/test_frame_apply.py
index 7bf1621d0acea..e7c2618d388c2 100644
--- a/pandas/tests/apply/test_frame_apply.py
+++ b/pandas/tests/apply/test_frame_apply.py
@@ -461,7 +461,7 @@ def test_apply_convert_objects():
}
)
- result = expected.apply(lambda x: x, axis=1)._convert(datetime=True)
+ result = expected.apply(lambda x: x, axis=1)
tm.assert_frame_equal(result, expected)
diff --git a/pandas/tests/frame/methods/test_convert.py b/pandas/tests/frame/methods/test_convert.py
deleted file mode 100644
index c6c70210d1cc4..0000000000000
--- a/pandas/tests/frame/methods/test_convert.py
+++ /dev/null
@@ -1,42 +0,0 @@
-import numpy as np
-import pytest
-
-from pandas import DataFrame
-import pandas._testing as tm
-
-
-class TestConvert:
- def test_convert_objects(self, float_string_frame):
-
- oops = float_string_frame.T.T
- converted = oops._convert(datetime=True)
- tm.assert_frame_equal(converted, float_string_frame)
- assert converted["A"].dtype == np.float64
-
- # force numeric conversion
- float_string_frame["H"] = "1."
- float_string_frame["I"] = "1"
-
- # add in some items that will be nan
- float_string_frame["J"] = "1."
- float_string_frame["K"] = "1"
- float_string_frame.loc[float_string_frame.index[0:5], ["J", "K"]] = "garbled"
- converted = float_string_frame._convert(datetime=True)
- tm.assert_frame_equal(converted, float_string_frame)
-
- # via astype
- converted = float_string_frame.copy()
- converted["H"] = converted["H"].astype("float64")
- converted["I"] = converted["I"].astype("int64")
- assert converted["H"].dtype == "float64"
- assert converted["I"].dtype == "int64"
-
- # via astype, but errors
- converted = float_string_frame.copy()
- with pytest.raises(ValueError, match="invalid literal"):
- converted["H"].astype("int32")
-
- def test_convert_objects_no_conversion(self):
- mixed1 = DataFrame({"a": [1, 2, 3], "b": [4.0, 5, 6], "c": ["x", "y", "z"]})
- mixed2 = mixed1._convert(datetime=True)
- tm.assert_frame_equal(mixed1, mixed2)
diff --git a/pandas/tests/io/pytables/test_append.py b/pandas/tests/io/pytables/test_append.py
index 4ae31f300cb6f..5633b9f8a71c7 100644
--- a/pandas/tests/io/pytables/test_append.py
+++ b/pandas/tests/io/pytables/test_append.py
@@ -592,7 +592,6 @@ def check_col(key, name, size):
df_dc.loc[df_dc.index[7:9], "string"] = "bar"
df_dc["string2"] = "cool"
df_dc["datetime"] = Timestamp("20010102")
- df_dc = df_dc._convert(datetime=True)
df_dc.loc[df_dc.index[3:5], ["A", "B", "datetime"]] = np.nan
_maybe_remove(store, "df_dc")
diff --git a/pandas/tests/io/pytables/test_errors.py b/pandas/tests/io/pytables/test_errors.py
index 7e590df95f952..7629e8ca7dfc2 100644
--- a/pandas/tests/io/pytables/test_errors.py
+++ b/pandas/tests/io/pytables/test_errors.py
@@ -75,7 +75,7 @@ def test_unimplemented_dtypes_table_columns(setup_path):
df["obj1"] = "foo"
df["obj2"] = "bar"
df["datetime1"] = datetime.date(2001, 1, 2)
- df = df._consolidate()._convert(datetime=True)
+ df = df._consolidate()
with ensure_clean_store(setup_path) as store:
# this fails because we have a date in the object block......
diff --git a/pandas/tests/io/pytables/test_put.py b/pandas/tests/io/pytables/test_put.py
index 2699d33950412..349fe74cb8e71 100644
--- a/pandas/tests/io/pytables/test_put.py
+++ b/pandas/tests/io/pytables/test_put.py
@@ -197,7 +197,7 @@ def test_put_mixed_type(setup_path):
df["datetime1"] = datetime.datetime(2001, 1, 2, 0, 0)
df["datetime2"] = datetime.datetime(2001, 1, 3, 0, 0)
df.loc[df.index[3:6], ["obj1"]] = np.nan
- df = df._consolidate()._convert(datetime=True)
+ df = df._consolidate()
with ensure_clean_store(setup_path) as store:
_maybe_remove(store, "df")
diff --git a/pandas/tests/io/pytables/test_store.py b/pandas/tests/io/pytables/test_store.py
index 22873b0096817..1263d61b55cd5 100644
--- a/pandas/tests/io/pytables/test_store.py
+++ b/pandas/tests/io/pytables/test_store.py
@@ -129,7 +129,7 @@ def test_repr(setup_path):
df["datetime1"] = datetime.datetime(2001, 1, 2, 0, 0)
df["datetime2"] = datetime.datetime(2001, 1, 3, 0, 0)
df.loc[df.index[3:6], ["obj1"]] = np.nan
- df = df._consolidate()._convert(datetime=True)
+ df = df._consolidate()
with catch_warnings(record=True):
simplefilter("ignore", pd.errors.PerformanceWarning)
@@ -444,7 +444,7 @@ def test_table_mixed_dtypes(setup_path):
df["datetime1"] = datetime.datetime(2001, 1, 2, 0, 0)
df["datetime2"] = datetime.datetime(2001, 1, 3, 0, 0)
df.loc[df.index[3:6], ["obj1"]] = np.nan
- df = df._consolidate()._convert(datetime=True)
+ df = df._consolidate()
with ensure_clean_store(setup_path) as store:
store.append("df1_mixed", df)
diff --git a/pandas/tests/io/test_html.py b/pandas/tests/io/test_html.py
index b1fcdd8df01ad..ffc5afcc70bb9 100644
--- a/pandas/tests/io/test_html.py
+++ b/pandas/tests/io/test_html.py
@@ -627,7 +627,7 @@ def try_remove_ws(x):
]
dfnew = df.applymap(try_remove_ws).replace(old, new)
gtnew = ground_truth.applymap(try_remove_ws)
- converted = dfnew._convert(datetime=True)
+ converted = dfnew
date_cols = ["Closing Date", "Updated Date"]
converted[date_cols] = converted[date_cols].apply(to_datetime)
tm.assert_frame_equal(converted, gtnew)
diff --git a/pandas/tests/series/methods/test_convert.py b/pandas/tests/series/methods/test_convert.py
deleted file mode 100644
index f979a28154d4e..0000000000000
--- a/pandas/tests/series/methods/test_convert.py
+++ /dev/null
@@ -1,94 +0,0 @@
-from datetime import datetime
-
-import pytest
-
-from pandas import (
- Series,
- Timestamp,
-)
-import pandas._testing as tm
-
-
-class TestConvert:
- def test_convert(self):
- # GH#10265
- dt = datetime(2001, 1, 1, 0, 0)
- td = dt - datetime(2000, 1, 1, 0, 0)
-
- # Test coercion with mixed types
- ser = Series(["a", "3.1415", dt, td])
-
- # Test standard conversion returns original
- results = ser._convert(datetime=True)
- tm.assert_series_equal(results, ser)
-
- results = ser._convert(timedelta=True)
- tm.assert_series_equal(results, ser)
-
- def test_convert_numeric_strings_with_other_true_args(self):
- # test pass-through and non-conversion when other types selected
- ser = Series(["1.0", "2.0", "3.0"])
- results = ser._convert(datetime=True, timedelta=True)
- tm.assert_series_equal(results, ser)
-
- def test_convert_datetime_objects(self):
- ser = Series(
- [datetime(2001, 1, 1, 0, 0), datetime(2001, 1, 1, 0, 0)], dtype="O"
- )
- results = ser._convert(datetime=True, timedelta=True)
- expected = Series([datetime(2001, 1, 1, 0, 0), datetime(2001, 1, 1, 0, 0)])
- tm.assert_series_equal(results, expected)
- results = ser._convert(datetime=False, timedelta=True)
- tm.assert_series_equal(results, ser)
-
- def test_convert_datetime64(self):
- # no-op if already dt64 dtype
- ser = Series(
- [
- datetime(2001, 1, 1, 0, 0),
- datetime(2001, 1, 2, 0, 0),
- datetime(2001, 1, 3, 0, 0),
- ]
- )
-
- result = ser._convert(datetime=True)
- expected = Series(
- [Timestamp("20010101"), Timestamp("20010102"), Timestamp("20010103")],
- dtype="M8[ns]",
- )
- tm.assert_series_equal(result, expected)
-
- result = ser._convert(datetime=True)
- tm.assert_series_equal(result, expected)
-
- def test_convert_timedeltas(self):
- td = datetime(2001, 1, 1, 0, 0) - datetime(2000, 1, 1, 0, 0)
- ser = Series([td, td], dtype="O")
- results = ser._convert(datetime=True, timedelta=True)
- expected = Series([td, td])
- tm.assert_series_equal(results, expected)
- results = ser._convert(datetime=True, timedelta=False)
- tm.assert_series_equal(results, ser)
-
- def test_convert_preserve_non_object(self):
- # preserve if non-object
- ser = Series([1], dtype="float32")
- result = ser._convert(datetime=True)
- tm.assert_series_equal(result, ser)
-
- def test_convert_no_arg_error(self):
- ser = Series(["1.0", "2"])
- msg = r"At least one of datetime or timedelta must be True\."
- with pytest.raises(ValueError, match=msg):
- ser._convert()
-
- def test_convert_preserve_bool(self):
- ser = Series([1, True, 3, 5], dtype=object)
- res = ser._convert(datetime=True)
- tm.assert_series_equal(res, ser)
-
- def test_convert_preserve_all_bool(self):
- ser = Series([False, True, False, False], dtype=object)
- res = ser._convert(datetime=True)
- expected = Series([False, True, False, False], dtype=bool)
- tm.assert_series_equal(res, expected)
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/50026 | 2022-12-02T20:15:38Z | 2022-12-02T23:03:23Z | 2022-12-02T23:03:23Z | 2022-12-02T23:09:46Z |
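`infer_objects`, which replaces the internal `_convert(datetime=True)` calls in the diff above, can be illustrated with a minimal sketch (an illustration of the public method, not of the pandas internals):

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2, 3], "b": ["x", "y", "z"]}, dtype=object)

# Homogeneous object columns are narrowed to a concrete dtype;
# string data keeps its values unchanged.
out = df.infer_objects()
print(out.dtypes["a"])  # int64
print(out["b"].tolist())  # ['x', 'y', 'z']
```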
BUG: when flooring, ambiguous parameter unnecessarily used (and raising Error) | diff --git a/doc/source/whatsnew/v1.5.3.rst b/doc/source/whatsnew/v1.5.3.rst
index b6c1c857717c7..763f9f87194d5 100644
--- a/doc/source/whatsnew/v1.5.3.rst
+++ b/doc/source/whatsnew/v1.5.3.rst
@@ -33,7 +33,6 @@ Bug fixes
Other
~~~~~
-
--
.. ---------------------------------------------------------------------------
.. _whatsnew_153.contributors:
diff --git a/doc/source/whatsnew/v2.0.0.rst b/doc/source/whatsnew/v2.0.0.rst
index 62b0ea5307e41..d0eed405c944c 100644
--- a/doc/source/whatsnew/v2.0.0.rst
+++ b/doc/source/whatsnew/v2.0.0.rst
@@ -671,7 +671,7 @@ Timezones
^^^^^^^^^
- Bug in :meth:`Series.astype` and :meth:`DataFrame.astype` with object-dtype containing multiple timezone-aware ``datetime`` objects with heterogeneous timezones to a :class:`DatetimeTZDtype` incorrectly raising (:issue:`32581`)
- Bug in :func:`to_datetime` was failing to parse date strings with timezone name when ``format`` was specified with ``%Z`` (:issue:`49748`)
--
+- Better error message when passing invalid values to ``ambiguous`` parameter in :meth:`Timestamp.tz_localize` (:issue:`49565`)
Numeric
^^^^^^^
diff --git a/pandas/_libs/tslibs/timestamps.pyx b/pandas/_libs/tslibs/timestamps.pyx
index f987a2feb2717..f25114c273bcf 100644
--- a/pandas/_libs/tslibs/timestamps.pyx
+++ b/pandas/_libs/tslibs/timestamps.pyx
@@ -6,8 +6,8 @@ construction requirements, we need to do object instantiation in python
(see Timestamp class below). This will serve as a C extension type that
shadows the python class, where we do any heavy lifting.
"""
-import warnings
+import warnings
cimport cython
import numpy as np
@@ -1946,8 +1946,11 @@ default 'raise'
>>> pd.NaT.tz_localize()
NaT
"""
- if ambiguous == "infer":
- raise ValueError("Cannot infer offset with only one time.")
+ if not isinstance(ambiguous, bool) and ambiguous not in {"NaT", "raise"}:
+ raise ValueError(
+ "'ambiguous' parameter must be one of: "
+ "True, False, 'NaT', 'raise' (default)"
+ )
nonexistent_options = ("raise", "NaT", "shift_forward", "shift_backward")
if nonexistent not in nonexistent_options and not PyDelta_Check(nonexistent):
diff --git a/pandas/tests/scalar/timestamp/test_timezones.py b/pandas/tests/scalar/timestamp/test_timezones.py
index 3ebffaad23910..d7db99333cd03 100644
--- a/pandas/tests/scalar/timestamp/test_timezones.py
+++ b/pandas/tests/scalar/timestamp/test_timezones.py
@@ -6,6 +6,7 @@
datetime,
timedelta,
)
+import re
import dateutil
from dateutil.tz import (
@@ -102,7 +103,10 @@ def test_tz_localize_ambiguous(self):
ts_no_dst = ts.tz_localize("US/Eastern", ambiguous=False)
assert (ts_no_dst.value - ts_dst.value) / 1e9 == 3600
- msg = "Cannot infer offset with only one time"
+ msg = re.escape(
+ "'ambiguous' parameter must be one of: "
+ "True, False, 'NaT', 'raise' (default)"
+ )
with pytest.raises(ValueError, match=msg):
ts.tz_localize("US/Eastern", ambiguous="infer")
@@ -182,8 +186,8 @@ def test_tz_localize_ambiguous_compat(self):
pytz_zone = "Europe/London"
dateutil_zone = "dateutil/Europe/London"
- result_pytz = naive.tz_localize(pytz_zone, ambiguous=0)
- result_dateutil = naive.tz_localize(dateutil_zone, ambiguous=0)
+ result_pytz = naive.tz_localize(pytz_zone, ambiguous=False)
+ result_dateutil = naive.tz_localize(dateutil_zone, ambiguous=False)
assert result_pytz.value == result_dateutil.value
assert result_pytz.value == 1382835600000000000
@@ -194,8 +198,8 @@ def test_tz_localize_ambiguous_compat(self):
assert str(result_pytz) == str(result_dateutil)
# 1 hour difference
- result_pytz = naive.tz_localize(pytz_zone, ambiguous=1)
- result_dateutil = naive.tz_localize(dateutil_zone, ambiguous=1)
+ result_pytz = naive.tz_localize(pytz_zone, ambiguous=True)
+ result_dateutil = naive.tz_localize(dateutil_zone, ambiguous=True)
assert result_pytz.value == result_dateutil.value
assert result_pytz.value == 1382832000000000000
@@ -357,7 +361,6 @@ def test_astimezone(self, tzstr):
@td.skip_if_windows
def test_tz_convert_utc_with_system_utc(self):
-
# from system utc to real utc
ts = Timestamp("2001-01-05 11:56", tz=timezones.maybe_get_tz("dateutil/UTC"))
# check that the time hasn't changed.
| - [x] closes #49565
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/50024 | 2022-12-02T19:00:38Z | 2022-12-12T18:28:15Z | 2022-12-12T18:28:15Z | 2022-12-12T18:28:22Z |
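The `ambiguous` flag this PR validates picks between the two possible readings of a wall-clock time that occurs twice at a DST fall-back — e.g. Europe/London on 2013-10-27, as in the compat test above:

```python
import pandas as pd

ts = pd.Timestamp("2013-10-27 01:30:00")  # occurs twice: once in BST, once in GMT

dst = ts.tz_localize("Europe/London", ambiguous=True)      # BST reading, UTC+1
no_dst = ts.tz_localize("Europe/London", ambiguous=False)  # GMT reading, UTC+0

# The two readings are exactly one hour apart in absolute time.
print(no_dst - dst)  # 0 days 01:00:00
```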
BUG: pd.DateOffset handle milliseconds | diff --git a/doc/source/whatsnew/v2.0.0.rst b/doc/source/whatsnew/v2.0.0.rst
index 11185e0370e30..99ed779487c34 100644
--- a/doc/source/whatsnew/v2.0.0.rst
+++ b/doc/source/whatsnew/v2.0.0.rst
@@ -799,6 +799,7 @@ Datetimelike
- Bug in :class:`Timestamp` was showing ``UserWarning``, which was not actionable by users, when parsing non-ISO8601 delimited date strings (:issue:`50232`)
- Bug in :func:`to_datetime` was showing misleading ``ValueError`` when parsing dates with format containing ISO week directive and ISO weekday directive (:issue:`50308`)
- Bug in :func:`to_datetime` was not raising ``ValueError`` when invalid format was passed and ``errors`` was ``'ignore'`` or ``'coerce'`` (:issue:`50266`)
+- Bug in :class:`DateOffset` was throwing ``TypeError`` when constructing with milliseconds and another super-daily argument (:issue:`49897`)
-
Timedelta
diff --git a/pandas/_libs/tslibs/offsets.pyx b/pandas/_libs/tslibs/offsets.pyx
index f9905f297be10..470d1e89e5b88 100644
--- a/pandas/_libs/tslibs/offsets.pyx
+++ b/pandas/_libs/tslibs/offsets.pyx
@@ -298,43 +298,54 @@ _relativedelta_kwds = {"years", "months", "weeks", "days", "year", "month",
cdef _determine_offset(kwds):
- # timedelta is used for sub-daily plural offsets and all singular
- # offsets, relativedelta is used for plural offsets of daily length or
- # more, nanosecond(s) are handled by apply_wraps
- kwds_no_nanos = dict(
- (k, v) for k, v in kwds.items()
- if k not in ("nanosecond", "nanoseconds")
- )
- # TODO: Are nanosecond and nanoseconds allowed somewhere?
-
- _kwds_use_relativedelta = ("years", "months", "weeks", "days",
- "year", "month", "week", "day", "weekday",
- "hour", "minute", "second", "microsecond",
- "millisecond")
-
- use_relativedelta = False
- if len(kwds_no_nanos) > 0:
- if any(k in _kwds_use_relativedelta for k in kwds_no_nanos):
- if "millisecond" in kwds_no_nanos:
- raise NotImplementedError(
- "Using DateOffset to replace `millisecond` component in "
- "datetime object is not supported. Use "
- "`microsecond=timestamp.microsecond % 1000 + ms * 1000` "
- "instead."
- )
- offset = relativedelta(**kwds_no_nanos)
- use_relativedelta = True
- else:
- # sub-daily offset - use timedelta (tz-aware)
- offset = timedelta(**kwds_no_nanos)
- elif any(nano in kwds for nano in ("nanosecond", "nanoseconds")):
- offset = timedelta(days=0)
- else:
- # GH 45643/45890: (historically) defaults to 1 day for non-nano
- # since datetime.timedelta doesn't handle nanoseconds
- offset = timedelta(days=1)
- return offset, use_relativedelta
+ if not kwds:
+ # GH 45643/45890: (historically) defaults to 1 day
+ return timedelta(days=1), False
+
+ if "millisecond" in kwds:
+ raise NotImplementedError(
+ "Using DateOffset to replace `millisecond` component in "
+ "datetime object is not supported. Use "
+ "`microsecond=timestamp.microsecond % 1000 + ms * 1000` "
+ "instead."
+ )
+
+ nanos = {"nanosecond", "nanoseconds"}
+
+ # nanos are handled by apply_wraps
+ if all(k in nanos for k in kwds):
+ return timedelta(days=0), False
+ kwds_no_nanos = {k: v for k, v in kwds.items() if k not in nanos}
+
+ kwds_use_relativedelta = {
+ "year", "month", "day", "hour", "minute",
+ "second", "microsecond", "weekday", "years", "months", "weeks", "days",
+ "hours", "minutes", "seconds", "microseconds"
+ }
+
+ # "weeks" and "days" are left out despite being valid args for timedelta,
+ # because (historically) timedelta is used only for sub-daily.
+ kwds_use_timedelta = {
+ "seconds", "microseconds", "milliseconds", "minutes", "hours",
+ }
+
+ if all(k in kwds_use_timedelta for k in kwds_no_nanos):
+ # Sub-daily offset - use timedelta (tz-aware)
+ # This also handles "milliseconds" (plur): see GH 49897
+ return timedelta(**kwds_no_nanos), False
+
+ # convert milliseconds to microseconds, so relativedelta can parse it
+ if "milliseconds" in kwds_no_nanos:
+ micro = kwds_no_nanos.pop("milliseconds") * 1000
+ kwds_no_nanos["microseconds"] = kwds_no_nanos.get("microseconds", 0) + micro
+
+ if all(k in kwds_use_relativedelta for k in kwds_no_nanos):
+ return relativedelta(**kwds_no_nanos), True
+
+ raise ValueError(
+ f"Invalid argument/s or bad combination of arguments: {list(kwds.keys())}"
+ )
# ---------------------------------------------------------------------
# Mixins & Singletons
@@ -1163,7 +1174,6 @@ cdef class RelativeDeltaOffset(BaseOffset):
def __init__(self, n=1, normalize=False, **kwds):
BaseOffset.__init__(self, n, normalize)
-
off, use_rd = _determine_offset(kwds)
object.__setattr__(self, "_offset", off)
object.__setattr__(self, "_use_relativedelta", use_rd)
diff --git a/pandas/tests/tseries/offsets/test_offsets.py b/pandas/tests/tseries/offsets/test_offsets.py
index 63594c2b2c48a..135227d66d541 100644
--- a/pandas/tests/tseries/offsets/test_offsets.py
+++ b/pandas/tests/tseries/offsets/test_offsets.py
@@ -739,6 +739,33 @@ def test_eq(self):
assert DateOffset(milliseconds=3) != DateOffset(milliseconds=7)
+ @pytest.mark.parametrize(
+ "offset_kwargs, expected_arg",
+ [
+ ({"microseconds": 1, "milliseconds": 1}, "2022-01-01 00:00:00.001001"),
+ ({"seconds": 1, "milliseconds": 1}, "2022-01-01 00:00:01.001"),
+ ({"minutes": 1, "milliseconds": 1}, "2022-01-01 00:01:00.001"),
+ ({"hours": 1, "milliseconds": 1}, "2022-01-01 01:00:00.001"),
+ ({"days": 1, "milliseconds": 1}, "2022-01-02 00:00:00.001"),
+ ({"weeks": 1, "milliseconds": 1}, "2022-01-08 00:00:00.001"),
+ ({"months": 1, "milliseconds": 1}, "2022-02-01 00:00:00.001"),
+ ({"years": 1, "milliseconds": 1}, "2023-01-01 00:00:00.001"),
+ ],
+ )
+ def test_milliseconds_combination(self, offset_kwargs, expected_arg):
+ # GH 49897
+ offset = DateOffset(**offset_kwargs)
+ ts = Timestamp("2022-01-01")
+ result = ts + offset
+ expected = Timestamp(expected_arg)
+
+ assert result == expected
+
+ def test_offset_invalid_arguments(self):
+ msg = "^Invalid argument/s or bad combination of arguments"
+ with pytest.raises(ValueError, match=msg):
+ DateOffset(picoseconds=1)
+
class TestOffsetNames:
def test_get_offset_name(self):
| - [X] closes #49897
- [X] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [X] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
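The dispatch added in `_determine_offset` above can be sketched with stdlib types only (the function and constant names here are illustrative, not the pandas internals):

```python
from datetime import timedelta

# Sub-daily kwargs that datetime.timedelta can represent directly
# ("weeks" and "days" are left out, mirroring the diff above).
SUB_DAILY = {"seconds", "microseconds", "milliseconds", "minutes", "hours"}

def determine_offset(**kwds):
    """Sketch of the PR's dispatch: sub-daily -> timedelta, else relativedelta-style."""
    if all(k in SUB_DAILY for k in kwds):
        # timedelta accepts "milliseconds" natively, so no conversion is needed
        return timedelta(**kwds), False
    # Mixed with calendar units: fold milliseconds into microseconds so a
    # relativedelta-style consumer (which has no milliseconds kwarg) can parse it.
    if "milliseconds" in kwds:
        micro = kwds.pop("milliseconds") * 1000
        kwds["microseconds"] = kwds.get("microseconds", 0) + micro
    return kwds, True

print(determine_offset(hours=1, milliseconds=1))   # (timedelta sub-daily path)
print(determine_offset(months=1, milliseconds=2))  # milliseconds folded into microseconds
```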
| https://api.github.com/repos/pandas-dev/pandas/pulls/50020 | 2022-12-02T18:06:15Z | 2022-12-22T22:49:14Z | 2022-12-22T22:49:14Z | 2023-01-22T19:58:41Z |
TST: Add test for isin for filtering with mixed types | diff --git a/pandas/tests/series/methods/test_isin.py b/pandas/tests/series/methods/test_isin.py
index 449724508fcaa..92ebee9ffa7a5 100644
--- a/pandas/tests/series/methods/test_isin.py
+++ b/pandas/tests/series/methods/test_isin.py
@@ -220,3 +220,17 @@ def test_isin_complex_numbers(array, expected):
# GH 17927
result = Series(array).isin([1j, 1 + 1j, 1 + 2j])
tm.assert_series_equal(result, expected)
+
+
+@pytest.mark.parametrize(
+ "data,is_in",
+ [([1, [2]], [1]), (["simple str", [{"values": 3}]], ["simple str"])],
+)
+def test_isin_filtering_with_mixed_object_types(data, is_in):
+ # GH 20883
+
+ ser = Series(data)
+ result = ser.isin(is_in)
+ expected = Series([True, False])
+
+ tm.assert_series_equal(result, expected)
| - [x] closes #20883
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
Added tests to check that `.isin` works as expected with different types in the series.
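The behavior being pinned down can be shown in a couple of lines (assumes a pandas release in which GH 20883 is resolved):

```python
import pandas as pd

# A Series holding a mix of hashable and unhashable (list) objects;
# isin matches the hashable entries and returns False for the rest
# instead of raising on the unhashable element.
ser = pd.Series([1, [2]])
result = ser.isin([1])
print(result.tolist())  # [True, False]
```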
| https://api.github.com/repos/pandas-dev/pandas/pulls/50019 | 2022-12-02T17:12:45Z | 2022-12-02T19:58:42Z | 2022-12-02T19:58:42Z | 2022-12-05T09:10:44Z |
DEV: enable liveserver for docs on gitpod | diff --git a/.gitpod.yml b/.gitpod.yml
index 6bba39823791e..877c16eefb5d6 100644
--- a/.gitpod.yml
+++ b/.gitpod.yml
@@ -32,6 +32,7 @@ vscode:
- yzhang.markdown-all-in-one
- eamodio.gitlens
- lextudio.restructuredtext
+ - ritwickdey.liveserver
# add or remove what you think is generally useful to most contributors
# avoid adding too many. they each open a pop-up window
| Another follow-up for https://github.com/pandas-dev/pandas/issues/47790 | https://api.github.com/repos/pandas-dev/pandas/pulls/50018 | 2022-12-02T17:06:38Z | 2022-12-02T20:29:54Z | 2022-12-02T20:29:54Z | 2022-12-02T20:29:58Z |
DOC: align description of sort argument with implementation | diff --git a/pandas/core/reshape/concat.py b/pandas/core/reshape/concat.py
index 5ce69d2c2ab4c..aced5a73a1f02 100644
--- a/pandas/core/reshape/concat.py
+++ b/pandas/core/reshape/concat.py
@@ -195,10 +195,7 @@ def concat(
Check whether the new concatenated axis contains duplicates. This can
be very expensive relative to the actual data concatenation.
sort : bool, default False
- Sort non-concatenation axis if it is not already aligned when `join`
- is 'outer'.
- This has no effect when ``join='inner'``, which already preserves
- the order of the non-concatenation axis.
+ Sort non-concatenation axis if it is not already aligned.
.. versionchanged:: 1.0.0
| If sort is provided to concat, the non-concatenation axis is always sorted, so remove the conditional.
- [x] closes #49646
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
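A minimal illustration of the documented behavior, using two frames whose columns are not aligned:

```python
import pandas as pd

# With sort=False the first frame's column order is kept and new columns are
# appended; with sort=True the non-concatenation axis is sorted.
df1 = pd.DataFrame([[1, 2]], columns=["b", "a"])
df2 = pd.DataFrame([[3, 4]], columns=["a", "c"])

unsorted_cols = pd.concat([df1, df2], sort=False).columns.tolist()
sorted_cols = pd.concat([df1, df2], sort=True).columns.tolist()
print(unsorted_cols)  # ['b', 'a', 'c']
print(sorted_cols)    # ['a', 'b', 'c']
```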
| https://api.github.com/repos/pandas-dev/pandas/pulls/50017 | 2022-12-02T17:04:24Z | 2022-12-02T18:33:35Z | 2022-12-02T18:33:35Z | 2023-07-04T14:01:06Z |
ENH: add copy on write for df reorder_levels GH49473 | diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index c7f0a7ced7576..eb3365d4f8410 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -7407,7 +7407,7 @@ def swaplevel(self, i: Axis = -2, j: Axis = -1, axis: Axis = 0) -> DataFrame:
result.columns = result.columns.swaplevel(i, j)
return result
- def reorder_levels(self, order: Sequence[Axis], axis: Axis = 0) -> DataFrame:
+ def reorder_levels(self, order: Sequence[int | str], axis: Axis = 0) -> DataFrame:
"""
Rearrange index levels using input order. May not drop or duplicate levels.
@@ -7452,7 +7452,7 @@ class diet
if not isinstance(self._get_axis(axis), MultiIndex): # pragma: no cover
raise TypeError("Can only reorder levels on a hierarchical axis.")
- result = self.copy()
+ result = self.copy(deep=None)
if axis == 0:
assert isinstance(result.index, MultiIndex)
diff --git a/pandas/tests/copy_view/test_methods.py b/pandas/tests/copy_view/test_methods.py
index 6707f1411cbc7..8015eb93988c9 100644
--- a/pandas/tests/copy_view/test_methods.py
+++ b/pandas/tests/copy_view/test_methods.py
@@ -3,6 +3,7 @@
from pandas import (
DataFrame,
+ MultiIndex,
Series,
)
import pandas._testing as tm
@@ -293,7 +294,25 @@ def test_assign(using_copy_on_write):
else:
assert not np.shares_memory(get_array(df2, "a"), get_array(df, "a"))
- # modify df2 to trigger CoW for that block
+ df2.iloc[0, 0] = 0
+ if using_copy_on_write:
+ assert not np.shares_memory(get_array(df2, "a"), get_array(df, "a"))
+ tm.assert_frame_equal(df, df_orig)
+
+
+def test_reorder_levels(using_copy_on_write):
+ index = MultiIndex.from_tuples(
+ [(1, 1), (1, 2), (2, 1), (2, 2)], names=["one", "two"]
+ )
+ df = DataFrame({"a": [1, 2, 3, 4]}, index=index)
+ df_orig = df.copy()
+ df2 = df.reorder_levels(order=["two", "one"])
+
+ if using_copy_on_write:
+ assert np.shares_memory(get_array(df2, "a"), get_array(df, "a"))
+ else:
+ assert not np.shares_memory(get_array(df2, "a"), get_array(df, "a"))
+
df2.iloc[0, 0] = 0
if using_copy_on_write:
assert not np.shares_memory(get_array(df2, "a"), get_array(df, "a"))
| - [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
Added copy-on-write to `df.reorder_levels()`.
Progress towards #49473 via [PyData pandas sprint](https://github.com/noatamir/pydata-global-sprints/issues/11).
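Independent of the copy-on-write internals, `reorder_levels` only permutes the `MultiIndex` levels; a small sketch of the user-facing behavior:

```python
import pandas as pd

index = pd.MultiIndex.from_tuples([(1, 1), (1, 2)], names=["one", "two"])
df = pd.DataFrame({"a": [10, 20]}, index=index)

# Swap the level order; the data itself is unchanged (and, with CoW enabled,
# is not copied until one of the frames is mutated).
df2 = df.reorder_levels(["two", "one"])
print(list(df2.index.names))  # ['two', 'one']
```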
Changed type hint because following the existing one caused an error. | https://api.github.com/repos/pandas-dev/pandas/pulls/50016 | 2022-12-02T16:59:06Z | 2022-12-02T21:06:43Z | 2022-12-02T21:06:43Z | 2022-12-02T21:07:00Z |
BUG: frame[object].astype(M8[unsupported]) not raising | diff --git a/pandas/core/dtypes/astype.py b/pandas/core/dtypes/astype.py
index 7b5c77af7864b..53c2cfd345e32 100644
--- a/pandas/core/dtypes/astype.py
+++ b/pandas/core/dtypes/astype.py
@@ -135,16 +135,15 @@ def astype_nansafe(
elif is_object_dtype(arr.dtype):
# if we have a datetime/timedelta array of objects
- # then coerce to a proper dtype and recall astype_nansafe
+ # then coerce to datetime64[ns] and use DatetimeArray.astype
if is_datetime64_dtype(dtype):
from pandas import to_datetime
- return astype_nansafe(
- to_datetime(arr.ravel()).values.reshape(arr.shape),
- dtype,
- copy=copy,
- )
+ dti = to_datetime(arr.ravel())
+ dta = dti._data.reshape(arr.shape)
+ return dta.astype(dtype, copy=False)._ndarray
+
elif is_timedelta64_dtype(dtype):
# bc we know arr.dtype == object, this is equivalent to
# `np.asarray(to_timedelta(arr))`, but using a lower-level API that
diff --git a/pandas/tests/dtypes/test_common.py b/pandas/tests/dtypes/test_common.py
index c8a3c992248ad..472ae80dc1838 100644
--- a/pandas/tests/dtypes/test_common.py
+++ b/pandas/tests/dtypes/test_common.py
@@ -757,7 +757,13 @@ def test_astype_datetime64_bad_dtype_raises(from_type, to_type):
to_type = np.dtype(to_type)
- with pytest.raises(TypeError, match="cannot astype"):
+ msg = "|".join(
+ [
+ "cannot astype a timedelta",
+ "cannot astype a datetimelike",
+ ]
+ )
+ with pytest.raises(TypeError, match=msg):
astype_nansafe(arr, dtype=to_type)
diff --git a/pandas/tests/frame/methods/test_astype.py b/pandas/tests/frame/methods/test_astype.py
index 96ef49acdcb21..9d56dba9b480d 100644
--- a/pandas/tests/frame/methods/test_astype.py
+++ b/pandas/tests/frame/methods/test_astype.py
@@ -377,6 +377,16 @@ def test_astype_column_metadata(self, dtype):
df = df.astype(dtype)
tm.assert_index_equal(df.columns, columns)
+ @pytest.mark.parametrize("unit", ["Y", "M", "W", "D", "h", "m"])
+ def test_astype_from_object_to_datetime_unit(self, unit):
+ vals = [
+ ["2015-01-01", "2015-01-02", "2015-01-03"],
+ ["2017-01-01", "2017-01-02", "2017-02-03"],
+ ]
+ df = DataFrame(vals, dtype=object)
+ with pytest.raises(TypeError, match="Cannot cast"):
+ df.astype(f"M8[{unit}]")
+
@pytest.mark.parametrize("dtype", ["M8", "m8"])
@pytest.mark.parametrize("unit", ["ns", "us", "ms", "s", "h", "m", "D"])
def test_astype_from_datetimelike_to_object(self, dtype, unit):
diff --git a/pandas/tests/frame/test_constructors.py b/pandas/tests/frame/test_constructors.py
index 32a4dc06d08e2..ef80cc847a5b8 100644
--- a/pandas/tests/frame/test_constructors.py
+++ b/pandas/tests/frame/test_constructors.py
@@ -1955,19 +1955,11 @@ def test_constructor_datetimes_with_nulls(self, arr):
@pytest.mark.parametrize("order", ["K", "A", "C", "F"])
@pytest.mark.parametrize(
- "dtype",
- [
- "datetime64[M]",
- "datetime64[D]",
- "datetime64[h]",
- "datetime64[m]",
- "datetime64[s]",
- "datetime64[ms]",
- "datetime64[us]",
- "datetime64[ns]",
- ],
+ "unit",
+ ["M", "D", "h", "m", "s", "ms", "us", "ns"],
)
- def test_constructor_datetimes_non_ns(self, order, dtype):
+ def test_constructor_datetimes_non_ns(self, order, unit):
+ dtype = f"datetime64[{unit}]"
na = np.array(
[
["2015-01-01", "2015-01-02", "2015-01-03"],
@@ -1977,13 +1969,16 @@ def test_constructor_datetimes_non_ns(self, order, dtype):
order=order,
)
df = DataFrame(na)
- expected = DataFrame(
- [
- ["2015-01-01", "2015-01-02", "2015-01-03"],
- ["2017-01-01", "2017-01-02", "2017-02-03"],
- ]
- )
- expected = expected.astype(dtype=dtype)
+ expected = DataFrame(na.astype("M8[ns]"))
+ if unit in ["M", "D", "h", "m"]:
+ with pytest.raises(TypeError, match="Cannot cast"):
+ expected.astype(dtype)
+
+ # instead the constructor casts to the closest supported reso, i.e. "s"
+ expected = expected.astype("datetime64[s]")
+ else:
+ expected = expected.astype(dtype=dtype)
+
tm.assert_frame_equal(df, expected)
@pytest.mark.parametrize("order", ["K", "A", "C", "F"])
diff --git a/pandas/tests/io/xml/test_xml_dtypes.py b/pandas/tests/io/xml/test_xml_dtypes.py
index 5629830767c3c..412c8a8dde175 100644
--- a/pandas/tests/io/xml/test_xml_dtypes.py
+++ b/pandas/tests/io/xml/test_xml_dtypes.py
@@ -128,14 +128,14 @@ def test_dtypes_with_names(parser):
df_result = read_xml(
xml_dates,
names=["Col1", "Col2", "Col3", "Col4"],
- dtype={"Col2": "string", "Col3": "Int64", "Col4": "datetime64"},
+ dtype={"Col2": "string", "Col3": "Int64", "Col4": "datetime64[ns]"},
parser=parser,
)
df_iter = read_xml_iterparse(
xml_dates,
parser=parser,
names=["Col1", "Col2", "Col3", "Col4"],
- dtype={"Col2": "string", "Col3": "Int64", "Col4": "datetime64"},
+ dtype={"Col2": "string", "Col3": "Int64", "Col4": "datetime64[ns]"},
iterparse={"row": ["shape", "degrees", "sides", "date"]},
)
diff --git a/pandas/tests/reshape/merge/test_merge.py b/pandas/tests/reshape/merge/test_merge.py
index 946e7e48148b4..ab589dc26a3ac 100644
--- a/pandas/tests/reshape/merge/test_merge.py
+++ b/pandas/tests/reshape/merge/test_merge.py
@@ -730,13 +730,13 @@ def test_other_datetime_unit(self, unit):
ser = Series([None, None], index=[101, 102], name="days")
dtype = f"datetime64[{unit}]"
- df2 = ser.astype(dtype).to_frame("days")
if unit in ["D", "h", "m"]:
# not supported so we cast to the nearest supported unit, seconds
exp_dtype = "datetime64[s]"
else:
exp_dtype = dtype
+ df2 = ser.astype(exp_dtype).to_frame("days")
assert df2["days"].dtype == exp_dtype
result = df1.merge(df2, left_on="entity_id", right_index=True)
| Already has a whatsnew entry: `_whatsnew_200.api_breaking.astype_to_unsupported_datetimelike`. | https://api.github.com/repos/pandas-dev/pandas/pulls/50015 | 2022-12-02T16:40:27Z | 2022-12-02T18:56:51Z | 2022-12-02T18:56:51Z | 2022-12-02T19:17:10Z |
STYLE double-quote cython strings #21 | diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml
index 0779f9c95f7b4..18f3644a0e0ae 100644
--- a/.pre-commit-config.yaml
+++ b/.pre-commit-config.yaml
@@ -27,6 +27,7 @@ repos:
rev: v0.9.1
hooks:
- id: cython-lint
+ - id: double-quote-cython-strings
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v4.4.0
hooks:
diff --git a/pandas/_libs/algos.pyx b/pandas/_libs/algos.pyx
index 7b9fe6422544c..fcd30ab1faec8 100644
--- a/pandas/_libs/algos.pyx
+++ b/pandas/_libs/algos.pyx
@@ -180,7 +180,7 @@ def is_lexsorted(list_of_arrays: list) -> bint:
cdef int64_t **vecs = <int64_t**>malloc(nlevels * sizeof(int64_t*))
for i in range(nlevels):
arr = list_of_arrays[i]
- assert arr.dtype.name == 'int64'
+ assert arr.dtype.name == "int64"
vecs[i] = <int64_t*>cnp.PyArray_DATA(arr)
# Assume uniqueness??
@@ -514,9 +514,9 @@ def validate_limit(nobs: int | None, limit=None) -> int:
lim = nobs
else:
if not util.is_integer_object(limit):
- raise ValueError('Limit must be an integer')
+ raise ValueError("Limit must be an integer")
if limit < 1:
- raise ValueError('Limit must be greater than 0')
+ raise ValueError("Limit must be greater than 0")
lim = limit
return lim
@@ -958,7 +958,7 @@ def rank_1d(
if not ascending:
tiebreak = TIEBREAK_FIRST_DESCENDING
- keep_na = na_option == 'keep'
+ keep_na = na_option == "keep"
N = len(values)
if labels is not None:
@@ -984,7 +984,7 @@ def rank_1d(
# with mask, without obfuscating location of missing data
# in values array
if numeric_object_t is object and values.dtype != np.object_:
- masked_vals = values.astype('O')
+ masked_vals = values.astype("O")
else:
masked_vals = values.copy()
@@ -1005,7 +1005,7 @@ def rank_1d(
# If descending, fill with highest value since descending
# will flip the ordering to still end up with lowest rank.
# Symmetric logic applies to `na_option == 'bottom'`
- nans_rank_highest = ascending ^ (na_option == 'top')
+ nans_rank_highest = ascending ^ (na_option == "top")
nan_fill_val = get_rank_nan_fill_val(nans_rank_highest, <numeric_object_t>0)
if nans_rank_highest:
order = [masked_vals, mask]
@@ -1345,7 +1345,7 @@ def rank_2d(
if not ascending:
tiebreak = TIEBREAK_FIRST_DESCENDING
- keep_na = na_option == 'keep'
+ keep_na = na_option == "keep"
# For cases where a mask is not possible, we can avoid mask checks
check_mask = (
@@ -1362,9 +1362,9 @@ def rank_2d(
if numeric_object_t is object:
if values.dtype != np.object_:
- values = values.astype('O')
+ values = values.astype("O")
- nans_rank_highest = ascending ^ (na_option == 'top')
+ nans_rank_highest = ascending ^ (na_option == "top")
if check_mask:
nan_fill_val = get_rank_nan_fill_val(nans_rank_highest, <numeric_object_t>0)
@@ -1385,7 +1385,7 @@ def rank_2d(
order = (values, ~np.asarray(mask))
n, k = (<object>values).shape
- out = np.empty((n, k), dtype='f8', order='F')
+ out = np.empty((n, k), dtype="f8", order="F")
grp_sizes = np.ones(n, dtype=np.int64)
# lexsort is slower, so only use if we need to worry about the mask
diff --git a/pandas/_libs/groupby.pyx b/pandas/_libs/groupby.pyx
index a351ad6e461f3..a5b9bf02dcbe2 100644
--- a/pandas/_libs/groupby.pyx
+++ b/pandas/_libs/groupby.pyx
@@ -604,12 +604,12 @@ def group_any_all(
intp_t lab
int8_t flag_val, val
- if val_test == 'all':
+ if val_test == "all":
# Because the 'all' value of an empty iterable in Python is True we can
# start with an array full of ones and set to zero when a False value
# is encountered
flag_val = 0
- elif val_test == 'any':
+ elif val_test == "any":
# Because the 'any' value of an empty iterable in Python is False we
# can start with an array full of zeros and set to one only if any
# value encountered is True
@@ -1061,7 +1061,7 @@ def group_ohlc(
N, K = (<object>values).shape
if out.shape[1] != 4:
- raise ValueError('Output array must have 4 columns')
+ raise ValueError("Output array must have 4 columns")
if K > 1:
raise NotImplementedError("Argument 'values' must have only one dimension")
@@ -1157,11 +1157,11 @@ def group_quantile(
)
inter_methods = {
- 'linear': INTERPOLATION_LINEAR,
- 'lower': INTERPOLATION_LOWER,
- 'higher': INTERPOLATION_HIGHER,
- 'nearest': INTERPOLATION_NEAREST,
- 'midpoint': INTERPOLATION_MIDPOINT,
+ "linear": INTERPOLATION_LINEAR,
+ "lower": INTERPOLATION_LOWER,
+ "higher": INTERPOLATION_HIGHER,
+ "nearest": INTERPOLATION_NEAREST,
+ "midpoint": INTERPOLATION_MIDPOINT,
}
interp = inter_methods[interpolation]
diff --git a/pandas/_libs/index.pyx b/pandas/_libs/index.pyx
index 27edc83c6f329..eb4e957f644ac 100644
--- a/pandas/_libs/index.pyx
+++ b/pandas/_libs/index.pyx
@@ -184,8 +184,8 @@ cdef class IndexEngine:
if self.is_monotonic_increasing:
values = self.values
try:
- left = values.searchsorted(val, side='left')
- right = values.searchsorted(val, side='right')
+ left = values.searchsorted(val, side="left")
+ right = values.searchsorted(val, side="right")
except TypeError:
# e.g. GH#29189 get_loc(None) with a Float64Index
# 2021-09-29 Now only reached for object-dtype
@@ -353,8 +353,8 @@ cdef class IndexEngine:
remaining_stargets = set()
for starget in stargets:
try:
- start = values.searchsorted(starget, side='left')
- end = values.searchsorted(starget, side='right')
+ start = values.searchsorted(starget, side="left")
+ end = values.searchsorted(starget, side="right")
except TypeError: # e.g. if we tried to search for string in int array
remaining_stargets.add(starget)
else:
@@ -551,7 +551,7 @@ cdef class DatetimeEngine(Int64Engine):
return self._get_loc_duplicates(conv)
values = self.values
- loc = values.searchsorted(conv, side='left')
+ loc = values.searchsorted(conv, side="left")
if loc == len(values) or values[loc] != conv:
raise KeyError(val)
@@ -655,8 +655,8 @@ cdef class BaseMultiIndexCodesEngine:
# with positive integers (-1 for NaN becomes 1). This enables us to
# differentiate between values that are missing in other and matching
# NaNs. We will set values that are not found to 0 later:
- labels_arr = np.array(labels, dtype='int64').T + multiindex_nulls_shift
- codes = labels_arr.astype('uint64', copy=False)
+ labels_arr = np.array(labels, dtype="int64").T + multiindex_nulls_shift
+ codes = labels_arr.astype("uint64", copy=False)
self.level_has_nans = [-1 in lab for lab in labels]
# Map each codes combination in the index to an integer unambiguously
@@ -693,7 +693,7 @@ cdef class BaseMultiIndexCodesEngine:
if self.level_has_nans[i] and codes.hasnans:
result[codes.isna()] += 1
level_codes.append(result)
- return self._codes_to_ints(np.array(level_codes, dtype='uint64').T)
+ return self._codes_to_ints(np.array(level_codes, dtype="uint64").T)
def get_indexer(self, target: np.ndarray) -> np.ndarray:
"""
@@ -754,12 +754,12 @@ cdef class BaseMultiIndexCodesEngine:
ndarray[int64_t, ndim=1] new_codes, new_target_codes
ndarray[intp_t, ndim=1] sorted_indexer
- target_order = np.argsort(target).astype('int64')
+ target_order = np.argsort(target).astype("int64")
target_values = target[target_order]
num_values, num_target_values = len(values), len(target_values)
new_codes, new_target_codes = (
- np.empty((num_values,)).astype('int64'),
- np.empty((num_target_values,)).astype('int64'),
+ np.empty((num_values,)).astype("int64"),
+ np.empty((num_target_values,)).astype("int64"),
)
# `values` and `target_values` are both sorted, so we walk through them
@@ -809,7 +809,7 @@ cdef class BaseMultiIndexCodesEngine:
raise KeyError(key)
# Transform indices into single integer:
- lab_int = self._codes_to_ints(np.array(indices, dtype='uint64'))
+ lab_int = self._codes_to_ints(np.array(indices, dtype="uint64"))
return self._base.get_loc(self, lab_int)
@@ -940,8 +940,8 @@ cdef class SharedEngine:
if self.is_monotonic_increasing:
values = self.values
try:
- left = values.searchsorted(val, side='left')
- right = values.searchsorted(val, side='right')
+ left = values.searchsorted(val, side="left")
+ right = values.searchsorted(val, side="right")
except TypeError:
# e.g. GH#29189 get_loc(None) with a Float64Index
raise KeyError(val)
diff --git a/pandas/_libs/internals.pyx b/pandas/_libs/internals.pyx
index 43e33ef3e7d7e..ee51a4fd402fb 100644
--- a/pandas/_libs/internals.pyx
+++ b/pandas/_libs/internals.pyx
@@ -69,7 +69,7 @@ cdef class BlockPlacement:
or not cnp.PyArray_ISWRITEABLE(val)
or (<ndarray>val).descr.type_num != cnp.NPY_INTP
):
- arr = np.require(val, dtype=np.intp, requirements='W')
+ arr = np.require(val, dtype=np.intp, requirements="W")
else:
arr = val
# Caller is responsible for ensuring arr.ndim == 1
diff --git a/pandas/_libs/interval.pyx b/pandas/_libs/interval.pyx
index 7ed635718e674..5b2cb880195ec 100644
--- a/pandas/_libs/interval.pyx
+++ b/pandas/_libs/interval.pyx
@@ -42,7 +42,7 @@ from pandas._libs.tslibs.util cimport (
is_timedelta64_object,
)
-VALID_CLOSED = frozenset(['left', 'right', 'both', 'neither'])
+VALID_CLOSED = frozenset(["left", "right", "both", "neither"])
cdef class IntervalMixin:
@@ -59,7 +59,7 @@ cdef class IntervalMixin:
bool
True if the Interval is closed on the left-side.
"""
- return self.closed in ('left', 'both')
+ return self.closed in ("left", "both")
@property
def closed_right(self):
@@ -73,7 +73,7 @@ cdef class IntervalMixin:
bool
True if the Interval is closed on the left-side.
"""
- return self.closed in ('right', 'both')
+ return self.closed in ("right", "both")
@property
def open_left(self):
@@ -172,9 +172,9 @@ cdef class IntervalMixin:
>>> pd.IntervalIndex(ivs).is_empty
array([ True, False])
"""
- return (self.right == self.left) & (self.closed != 'both')
+ return (self.right == self.left) & (self.closed != "both")
- def _check_closed_matches(self, other, name='other'):
+ def _check_closed_matches(self, other, name="other"):
"""
Check if the closed attribute of `other` matches.
@@ -197,9 +197,9 @@ cdef class IntervalMixin:
cdef bint _interval_like(other):
- return (hasattr(other, 'left')
- and hasattr(other, 'right')
- and hasattr(other, 'closed'))
+ return (hasattr(other, "left")
+ and hasattr(other, "right")
+ and hasattr(other, "closed"))
cdef class Interval(IntervalMixin):
@@ -311,7 +311,7 @@ cdef class Interval(IntervalMixin):
Either ``left``, ``right``, ``both`` or ``neither``.
"""
- def __init__(self, left, right, str closed='right'):
+ def __init__(self, left, right, str closed="right"):
# note: it is faster to just do these checks than to use a special
# constructor (__cinit__/__new__) to avoid them
@@ -343,8 +343,8 @@ cdef class Interval(IntervalMixin):
def __contains__(self, key) -> bool:
if _interval_like(key):
- key_closed_left = key.closed in ('left', 'both')
- key_closed_right = key.closed in ('right', 'both')
+ key_closed_left = key.closed in ("left", "both")
+ key_closed_right = key.closed in ("right", "both")
if self.open_left and key_closed_left:
left_contained = self.left < key.left
else:
@@ -389,15 +389,15 @@ cdef class Interval(IntervalMixin):
left, right = self._repr_base()
name = type(self).__name__
- repr_str = f'{name}({repr(left)}, {repr(right)}, closed={repr(self.closed)})'
+ repr_str = f"{name}({repr(left)}, {repr(right)}, closed={repr(self.closed)})"
return repr_str
def __str__(self) -> str:
left, right = self._repr_base()
- start_symbol = '[' if self.closed_left else '('
- end_symbol = ']' if self.closed_right else ')'
- return f'{start_symbol}{left}, {right}{end_symbol}'
+ start_symbol = "[" if self.closed_left else "("
+ end_symbol = "]" if self.closed_right else ")"
+ return f"{start_symbol}{left}, {right}{end_symbol}"
def __add__(self, y):
if (
diff --git a/pandas/_libs/lib.pyx b/pandas/_libs/lib.pyx
index 4890f82c5fdda..e35cf2fb13768 100644
--- a/pandas/_libs/lib.pyx
+++ b/pandas/_libs/lib.pyx
@@ -394,7 +394,7 @@ def dicts_to_array(dicts: list, columns: list):
k = len(columns)
n = len(dicts)
- result = np.empty((n, k), dtype='O')
+ result = np.empty((n, k), dtype="O")
for i in range(n):
row = dicts[i]
@@ -768,7 +768,7 @@ def is_all_arraylike(obj: list) -> bool:
for i in range(n):
val = obj[i]
if not (isinstance(val, list) or
- util.is_array(val) or hasattr(val, '_data')):
+ util.is_array(val) or hasattr(val, "_data")):
# TODO: EA?
# exclude tuples, frozensets as they may be contained in an Index
all_arrays = False
@@ -786,7 +786,7 @@ def is_all_arraylike(obj: list) -> bool:
@cython.boundscheck(False)
@cython.wraparound(False)
def generate_bins_dt64(ndarray[int64_t, ndim=1] values, const int64_t[:] binner,
- object closed='left', bint hasnans=False):
+ object closed="left", bint hasnans=False):
"""
Int64 (datetime64) version of generic python version in ``groupby.py``.
"""
@@ -794,7 +794,7 @@ def generate_bins_dt64(ndarray[int64_t, ndim=1] values, const int64_t[:] binner,
Py_ssize_t lenidx, lenbin, i, j, bc
ndarray[int64_t, ndim=1] bins
int64_t r_bin, nat_count
- bint right_closed = closed == 'right'
+ bint right_closed = closed == "right"
nat_count = 0
if hasnans:
@@ -873,7 +873,7 @@ def get_level_sorter(
for i in range(len(starts) - 1):
l, r = starts[i], starts[i + 1]
- out[l:r] = l + codes[l:r].argsort(kind='mergesort')
+ out[l:r] = l + codes[l:r].argsort(kind="mergesort")
return out
@@ -892,7 +892,7 @@ def count_level_2d(ndarray[uint8_t, ndim=2, cast=True] mask,
n, k = (<object>mask).shape
if axis == 0:
- counts = np.zeros((max_bin, k), dtype='i8')
+ counts = np.zeros((max_bin, k), dtype="i8")
with nogil:
for i in range(n):
for j in range(k):
@@ -900,7 +900,7 @@ def count_level_2d(ndarray[uint8_t, ndim=2, cast=True] mask,
counts[labels[i], j] += 1
else: # axis == 1
- counts = np.zeros((n, max_bin), dtype='i8')
+ counts = np.zeros((n, max_bin), dtype="i8")
with nogil:
for i in range(n):
for j in range(k):
@@ -1051,7 +1051,7 @@ cpdef bint is_decimal(object obj):
cpdef bint is_interval(object obj):
- return getattr(obj, '_typ', '_typ') == 'interval'
+ return getattr(obj, "_typ", "_typ") == "interval"
def is_period(val: object) -> bool:
@@ -1163,17 +1163,17 @@ _TYPE_MAP = {
# types only exist on certain platform
try:
np.float128
- _TYPE_MAP['float128'] = 'floating'
+ _TYPE_MAP["float128"] = "floating"
except AttributeError:
pass
try:
np.complex256
- _TYPE_MAP['complex256'] = 'complex'
+ _TYPE_MAP["complex256"] = "complex"
except AttributeError:
pass
try:
np.float16
- _TYPE_MAP['float16'] = 'floating'
+ _TYPE_MAP["float16"] = "floating"
except AttributeError:
pass
@@ -1921,7 +1921,7 @@ def is_datetime_with_singletz_array(values: ndarray) -> bool:
for i in range(n):
base_val = values[i]
if base_val is not NaT and base_val is not None and not util.is_nan(base_val):
- base_tz = getattr(base_val, 'tzinfo', None)
+ base_tz = getattr(base_val, "tzinfo", None)
break
for j in range(i, n):
@@ -1929,7 +1929,7 @@ def is_datetime_with_singletz_array(values: ndarray) -> bool:
# NaT can coexist with tz-aware datetimes, so skip if encountered
val = values[j]
if val is not NaT and val is not None and not util.is_nan(val):
- tz = getattr(val, 'tzinfo', None)
+ tz = getattr(val, "tzinfo", None)
if not tz_compare(base_tz, tz):
return False
@@ -2133,7 +2133,7 @@ def maybe_convert_numeric(
returns a boolean mask for the converted values, otherwise returns None.
"""
if len(values) == 0:
- return (np.array([], dtype='i8'), None)
+ return (np.array([], dtype="i8"), None)
# fastpath for ints - try to convert all based on first value
cdef:
@@ -2141,7 +2141,7 @@ def maybe_convert_numeric(
if util.is_integer_object(val):
try:
- maybe_ints = values.astype('i8')
+ maybe_ints = values.astype("i8")
if (maybe_ints == values).all():
return (maybe_ints, None)
except (ValueError, OverflowError, TypeError):
@@ -2231,7 +2231,7 @@ def maybe_convert_numeric(
mask[i] = 1
seen.saw_null()
floats[i] = complexes[i] = NaN
- elif hasattr(val, '__len__') and len(val) == 0:
+ elif hasattr(val, "__len__") and len(val) == 0:
if convert_empty or seen.coerce_numeric:
seen.saw_null()
floats[i] = complexes[i] = NaN
@@ -2469,7 +2469,7 @@ def maybe_convert_objects(ndarray[object] objects,
# if we have an tz's attached then return the objects
if convert_datetime:
- if getattr(val, 'tzinfo', None) is not None:
+ if getattr(val, "tzinfo", None) is not None:
seen.datetimetz_ = True
break
else:
@@ -2900,11 +2900,11 @@ def fast_multiget(dict mapping, ndarray keys, default=np.nan) -> np.ndarray:
cdef:
Py_ssize_t i, n = len(keys)
object val
- ndarray[object] output = np.empty(n, dtype='O')
+ ndarray[object] output = np.empty(n, dtype="O")
if n == 0:
# kludge, for Series
- return np.empty(0, dtype='f8')
+ return np.empty(0, dtype="f8")
for i in range(n):
val = keys[i]
diff --git a/pandas/_libs/ops.pyx b/pandas/_libs/ops.pyx
index 308756e378dde..478e7eaee90c1 100644
--- a/pandas/_libs/ops.pyx
+++ b/pandas/_libs/ops.pyx
@@ -66,7 +66,7 @@ def scalar_compare(object[:] values, object val, object op) -> ndarray:
elif op is operator.ne:
flag = Py_NE
else:
- raise ValueError('Unrecognized operator')
+ raise ValueError("Unrecognized operator")
result = np.empty(n, dtype=bool).view(np.uint8)
isnull_val = checknull(val)
@@ -134,7 +134,7 @@ def vec_compare(ndarray[object] left, ndarray[object] right, object op) -> ndarr
int flag
if n != <Py_ssize_t>len(right):
- raise ValueError(f'Arrays were different lengths: {n} vs {len(right)}')
+ raise ValueError(f"Arrays were different lengths: {n} vs {len(right)}")
if op is operator.lt:
flag = Py_LT
@@ -149,7 +149,7 @@ def vec_compare(ndarray[object] left, ndarray[object] right, object op) -> ndarr
elif op is operator.ne:
flag = Py_NE
else:
- raise ValueError('Unrecognized operator')
+ raise ValueError("Unrecognized operator")
result = np.empty(n, dtype=bool).view(np.uint8)
@@ -234,7 +234,7 @@ def vec_binop(object[:] left, object[:] right, object op) -> ndarray:
object[::1] result
if n != <Py_ssize_t>len(right):
- raise ValueError(f'Arrays were different lengths: {n} vs {len(right)}')
+ raise ValueError(f"Arrays were different lengths: {n} vs {len(right)}")
result = np.empty(n, dtype=object)
@@ -271,8 +271,8 @@ def maybe_convert_bool(ndarray[object] arr,
result = np.empty(n, dtype=np.uint8)
mask = np.zeros(n, dtype=np.uint8)
# the defaults
- true_vals = {'True', 'TRUE', 'true'}
- false_vals = {'False', 'FALSE', 'false'}
+ true_vals = {"True", "TRUE", "true"}
+ false_vals = {"False", "FALSE", "false"}
if true_values is not None:
true_vals = true_vals | set(true_values)
diff --git a/pandas/_libs/parsers.pyx b/pandas/_libs/parsers.pyx
index 85d74e201d5bb..73005c7b5cfa0 100644
--- a/pandas/_libs/parsers.pyx
+++ b/pandas/_libs/parsers.pyx
@@ -342,7 +342,7 @@ cdef class TextReader:
set unnamed_cols # set[str]
def __cinit__(self, source,
- delimiter=b',', # bytes | str
+ delimiter=b",", # bytes | str
header=0,
int64_t header_start=0,
uint64_t header_end=0,
@@ -358,7 +358,7 @@ cdef class TextReader:
quoting=0, # int
lineterminator=None, # bytes | str
comment=None,
- decimal=b'.', # bytes | str
+ decimal=b".", # bytes | str
thousands=None, # bytes | str
dtype=None,
usecols=None,
@@ -403,7 +403,7 @@ cdef class TextReader:
self.parser.delim_whitespace = delim_whitespace
else:
if len(delimiter) > 1:
- raise ValueError('only length-1 separators excluded right now')
+ raise ValueError("only length-1 separators excluded right now")
self.parser.delimiter = <char>ord(delimiter)
# ----------------------------------------
@@ -415,26 +415,26 @@ cdef class TextReader:
if lineterminator is not None:
if len(lineterminator) != 1:
- raise ValueError('Only length-1 line terminators supported')
+ raise ValueError("Only length-1 line terminators supported")
self.parser.lineterminator = <char>ord(lineterminator)
if len(decimal) != 1:
- raise ValueError('Only length-1 decimal markers supported')
+ raise ValueError("Only length-1 decimal markers supported")
self.parser.decimal = <char>ord(decimal)
if thousands is not None:
if len(thousands) != 1:
- raise ValueError('Only length-1 thousands markers supported')
+ raise ValueError("Only length-1 thousands markers supported")
self.parser.thousands = <char>ord(thousands)
if escapechar is not None:
if len(escapechar) != 1:
- raise ValueError('Only length-1 escapes supported')
+ raise ValueError("Only length-1 escapes supported")
self.parser.escapechar = <char>ord(escapechar)
self._set_quoting(quotechar, quoting)
- dtype_order = ['int64', 'float64', 'bool', 'object']
+ dtype_order = ["int64", "float64", "bool", "object"]
if quoting == QUOTE_NONNUMERIC:
# consistent with csv module semantics, cast all to float
dtype_order = dtype_order[1:]
@@ -442,7 +442,7 @@ cdef class TextReader:
if comment is not None:
if len(comment) > 1:
- raise ValueError('Only length-1 comment characters supported')
+ raise ValueError("Only length-1 comment characters supported")
self.parser.commentchar = <char>ord(comment)
self.parser.on_bad_lines = on_bad_lines
@@ -491,8 +491,8 @@ cdef class TextReader:
elif float_precision == "high" or float_precision is None:
self.parser.double_converter = precise_xstrtod
else:
- raise ValueError(f'Unrecognized float_precision option: '
- f'{float_precision}')
+ raise ValueError(f"Unrecognized float_precision option: "
+ f"{float_precision}")
# Caller is responsible for ensuring we have one of
# - None
@@ -582,7 +582,7 @@ cdef class TextReader:
dtype = type(quote_char).__name__
raise TypeError(f'"quotechar" must be string, not {dtype}')
- if quote_char is None or quote_char == '':
+ if quote_char is None or quote_char == "":
if quoting != QUOTE_NONE:
raise TypeError("quotechar must be set if quoting enabled")
self.parser.quoting = quoting
@@ -647,11 +647,11 @@ cdef class TextReader:
self.parser.lines < hr):
msg = self.orig_header
if isinstance(msg, list):
- joined = ','.join(str(m) for m in msg)
+ joined = ",".join(str(m) for m in msg)
msg = f"[{joined}], len of {len(msg)},"
raise ParserError(
- f'Passed header={msg} but only '
- f'{self.parser.lines} lines in file')
+ f"Passed header={msg} but only "
+ f"{self.parser.lines} lines in file")
else:
field_count = self.parser.line_fields[hr]
@@ -666,11 +666,11 @@ cdef class TextReader:
name = PyUnicode_DecodeUTF8(word, strlen(word),
self.encoding_errors)
- if name == '':
+ if name == "":
if self.has_mi_columns:
- name = f'Unnamed: {i}_level_{level}'
+ name = f"Unnamed: {i}_level_{level}"
else:
- name = f'Unnamed: {i}'
+ name = f"Unnamed: {i}"
unnamed_count += 1
unnamed_col_indices.append(i)
@@ -693,7 +693,7 @@ cdef class TextReader:
if cur_count > 0:
while cur_count > 0:
counts[old_col] = cur_count + 1
- col = f'{old_col}.{cur_count}'
+ col = f"{old_col}.{cur_count}"
if col in this_header:
cur_count += 1
else:
@@ -779,8 +779,8 @@ cdef class TextReader:
elif self.names is None and nuse < passed_count:
self.leading_cols = field_count - passed_count
elif passed_count != field_count:
- raise ValueError('Number of passed names did not match number of '
- 'header fields in the file')
+ raise ValueError("Number of passed names did not match number of "
+ "header fields in the file")
# oh boy, #2442, #2981
elif self.allow_leading_cols and passed_count < field_count:
self.leading_cols = field_count - passed_count
@@ -854,7 +854,7 @@ cdef class TextReader:
self.parser.warn_msg = NULL
if status < 0:
- raise_parser_error('Error tokenizing data', self.parser)
+ raise_parser_error("Error tokenizing data", self.parser)
# -> dict[int, "ArrayLike"]
cdef _read_rows(self, rows, bint trim):
@@ -871,8 +871,8 @@ cdef class TextReader:
self._tokenize_rows(irows - buffered_lines)
if self.skipfooter > 0:
- raise ValueError('skipfooter can only be used to read '
- 'the whole file')
+ raise ValueError("skipfooter can only be used to read "
+ "the whole file")
else:
with nogil:
status = tokenize_all_rows(self.parser, self.encoding_errors)
@@ -885,15 +885,15 @@ cdef class TextReader:
self.parser.warn_msg = NULL
if status < 0:
- raise_parser_error('Error tokenizing data', self.parser)
+ raise_parser_error("Error tokenizing data", self.parser)
if self.parser_start >= self.parser.lines:
raise StopIteration
- self._end_clock('Tokenization')
+ self._end_clock("Tokenization")
self._start_clock()
columns = self._convert_column_data(rows)
- self._end_clock('Type conversion')
+ self._end_clock("Type conversion")
self._start_clock()
if len(columns) > 0:
rows_read = len(list(columns.values())[0])
@@ -903,7 +903,7 @@ cdef class TextReader:
parser_trim_buffers(self.parser)
self.parser_start -= rows_read
- self._end_clock('Parser memory cleanup')
+ self._end_clock("Parser memory cleanup")
return columns
@@ -913,7 +913,7 @@ cdef class TextReader:
cdef _end_clock(self, str what):
if self.verbose:
elapsed = time.time() - self.clocks.pop(-1)
- print(f'{what} took: {elapsed * 1000:.2f} ms')
+ print(f"{what} took: {elapsed * 1000:.2f} ms")
def set_noconvert(self, i: int) -> None:
self.noconvert.add(i)
@@ -1060,7 +1060,7 @@ cdef class TextReader:
)
if col_res is None:
- raise ParserError(f'Unable to parse column {i}')
+ raise ParserError(f"Unable to parse column {i}")
results[i] = col_res
@@ -1098,11 +1098,11 @@ cdef class TextReader:
# dtype successfully. As a result, we leave the data
# column AS IS with object dtype.
col_res, na_count = self._convert_with_dtype(
- np.dtype('object'), i, start, end, 0,
+ np.dtype("object"), i, start, end, 0,
0, na_hashset, na_flist)
except OverflowError:
col_res, na_count = self._convert_with_dtype(
- np.dtype('object'), i, start, end, na_filter,
+ np.dtype("object"), i, start, end, na_filter,
0, na_hashset, na_flist)
if col_res is not None:
@@ -1131,7 +1131,7 @@ cdef class TextReader:
# only allow safe casts, eg. with a nan you cannot safely cast to int
try:
- col_res = col_res.astype(col_dtype, casting='safe')
+ col_res = col_res.astype(col_dtype, casting="safe")
except TypeError:
# float -> int conversions can fail the above
@@ -1200,7 +1200,7 @@ cdef class TextReader:
na_filter, na_hashset)
na_count = 0
- if result is not None and dtype != 'int64':
+ if result is not None and dtype != "int64":
result = result.astype(dtype)
return result, na_count
@@ -1209,7 +1209,7 @@ cdef class TextReader:
result, na_count = _try_double(self.parser, i, start, end,
na_filter, na_hashset, na_flist)
- if result is not None and dtype != 'float64':
+ if result is not None and dtype != "float64":
result = result.astype(dtype)
return result, na_count
elif is_bool_dtype(dtype):
@@ -1221,7 +1221,7 @@ cdef class TextReader:
raise ValueError(f"Bool column has NA values in column {i}")
return result, na_count
- elif dtype.kind == 'S':
+ elif dtype.kind == "S":
# TODO: na handling
width = dtype.itemsize
if width > 0:
@@ -1231,7 +1231,7 @@ cdef class TextReader:
# treat as a regular string parsing
return self._string_convert(i, start, end, na_filter,
na_hashset)
- elif dtype.kind == 'U':
+ elif dtype.kind == "U":
width = dtype.itemsize
if width > 0:
raise TypeError(f"the dtype {dtype} is not supported for parsing")
@@ -1345,8 +1345,8 @@ cdef _close(TextReader reader):
cdef:
- object _true_values = [b'True', b'TRUE', b'true']
- object _false_values = [b'False', b'FALSE', b'false']
+ object _true_values = [b"True", b"TRUE", b"true"]
+ object _false_values = [b"False", b"FALSE", b"false"]
def _ensure_encoded(list lst):
@@ -1356,7 +1356,7 @@ def _ensure_encoded(list lst):
if isinstance(x, str):
x = PyUnicode_AsUTF8String(x)
elif not isinstance(x, bytes):
- x = str(x).encode('utf-8')
+ x = str(x).encode("utf-8")
result.append(x)
return result
@@ -1565,7 +1565,7 @@ cdef _to_fw_string(parser_t *parser, int64_t col, int64_t line_start,
char *data
ndarray result
- result = np.empty(line_end - line_start, dtype=f'|S{width}')
+ result = np.empty(line_end - line_start, dtype=f"|S{width}")
data = <char*>result.data
with nogil:
@@ -1591,13 +1591,13 @@ cdef inline void _to_fw_string_nogil(parser_t *parser, int64_t col,
cdef:
- char* cinf = b'inf'
- char* cposinf = b'+inf'
- char* cneginf = b'-inf'
+ char* cinf = b"inf"
+ char* cposinf = b"+inf"
+ char* cneginf = b"-inf"
- char* cinfty = b'Infinity'
- char* cposinfty = b'+Infinity'
- char* cneginfty = b'-Infinity'
+ char* cinfty = b"Infinity"
+ char* cposinfty = b"+Infinity"
+ char* cneginfty = b"-Infinity"
# -> tuple[ndarray[float64_t], int] | tuple[None, None]
@@ -1726,14 +1726,14 @@ cdef _try_uint64(parser_t *parser, int64_t col,
if error != 0:
if error == ERROR_OVERFLOW:
# Can't get the word variable
- raise OverflowError('Overflow')
+ raise OverflowError("Overflow")
return None
if uint64_conflict(&state):
- raise ValueError('Cannot convert to numerical dtype')
+ raise ValueError("Cannot convert to numerical dtype")
if state.seen_sint:
- raise OverflowError('Overflow')
+ raise OverflowError("Overflow")
return result
@@ -1796,7 +1796,7 @@ cdef _try_int64(parser_t *parser, int64_t col,
if error != 0:
if error == ERROR_OVERFLOW:
# Can't get the word variable
- raise OverflowError('Overflow')
+ raise OverflowError("Overflow")
return None, None
return result, na_count
@@ -1944,7 +1944,7 @@ cdef kh_str_starts_t* kset_from_list(list values) except NULL:
# None creeps in sometimes, which isn't possible here
if not isinstance(val, bytes):
kh_destroy_str_starts(table)
- raise ValueError('Must be all encoded bytes')
+ raise ValueError("Must be all encoded bytes")
kh_put_str_starts_item(table, PyBytes_AsString(val), &ret)
@@ -2009,11 +2009,11 @@ cdef raise_parser_error(object base, parser_t *parser):
Py_XDECREF(type)
raise old_exc
- message = f'{base}. C error: '
+ message = f"{base}. C error: "
if parser.error_msg != NULL:
- message += parser.error_msg.decode('utf-8')
+ message += parser.error_msg.decode("utf-8")
else:
- message += 'no error message set'
+ message += "no error message set"
raise ParserError(message)
@@ -2078,7 +2078,7 @@ cdef _apply_converter(object f, parser_t *parser, int64_t col,
cdef list _maybe_encode(list values):
if values is None:
return []
- return [x.encode('utf-8') if isinstance(x, str) else x for x in values]
+ return [x.encode("utf-8") if isinstance(x, str) else x for x in values]
def sanitize_objects(ndarray[object] values, set na_values) -> int:
diff --git a/pandas/_libs/properties.pyx b/pandas/_libs/properties.pyx
index 3354290a5f535..33cd2ef27a995 100644
--- a/pandas/_libs/properties.pyx
+++ b/pandas/_libs/properties.pyx
@@ -14,7 +14,7 @@ cdef class CachedProperty:
def __init__(self, fget):
self.fget = fget
self.name = fget.__name__
- self.__doc__ = getattr(fget, '__doc__', None)
+ self.__doc__ = getattr(fget, "__doc__", None)
def __get__(self, obj, typ):
if obj is None:
@@ -22,7 +22,7 @@ cdef class CachedProperty:
return self
# Get the cache or set a default one if needed
- cache = getattr(obj, '_cache', None)
+ cache = getattr(obj, "_cache", None)
if cache is None:
try:
cache = obj._cache = {}
diff --git a/pandas/_libs/reshape.pyx b/pandas/_libs/reshape.pyx
index a012bd92cd573..946ba5ddaa248 100644
--- a/pandas/_libs/reshape.pyx
+++ b/pandas/_libs/reshape.pyx
@@ -103,7 +103,7 @@ def explode(ndarray[object] values):
# find the resulting len
n = len(values)
- counts = np.zeros(n, dtype='int64')
+ counts = np.zeros(n, dtype="int64")
for i in range(n):
v = values[i]
@@ -116,7 +116,7 @@ def explode(ndarray[object] values):
else:
counts[i] += 1
- result = np.empty(counts.sum(), dtype='object')
+ result = np.empty(counts.sum(), dtype="object")
count = 0
for i in range(n):
v = values[i]
diff --git a/pandas/_libs/sparse.pyx b/pandas/_libs/sparse.pyx
index 031417fa50be0..45ddade7b4eb5 100644
--- a/pandas/_libs/sparse.pyx
+++ b/pandas/_libs/sparse.pyx
@@ -62,8 +62,8 @@ cdef class IntIndex(SparseIndex):
return IntIndex, args
def __repr__(self) -> str:
- output = 'IntIndex\n'
- output += f'Indices: {repr(self.indices)}\n'
+ output = "IntIndex\n"
+ output += f"Indices: {repr(self.indices)}\n"
return output
@property
@@ -134,7 +134,7 @@ cdef class IntIndex(SparseIndex):
y = y_.to_int_index()
if self.length != y.length:
- raise Exception('Indices must reference same underlying length')
+ raise Exception("Indices must reference same underlying length")
xindices = self.indices
yindices = y.indices
@@ -168,7 +168,7 @@ cdef class IntIndex(SparseIndex):
y = y_.to_int_index()
if self.length != y.length:
- raise ValueError('Indices must reference same underlying length')
+ raise ValueError("Indices must reference same underlying length")
new_indices = np.union1d(self.indices, y.indices)
return IntIndex(self.length, new_indices)
@@ -311,9 +311,9 @@ cdef class BlockIndex(SparseIndex):
return BlockIndex, args
def __repr__(self) -> str:
- output = 'BlockIndex\n'
- output += f'Block locations: {repr(self.blocs)}\n'
- output += f'Block lengths: {repr(self.blengths)}'
+ output = "BlockIndex\n"
+ output += f"Block locations: {repr(self.blocs)}\n"
+ output += f"Block lengths: {repr(self.blengths)}"
return output
@@ -340,23 +340,23 @@ cdef class BlockIndex(SparseIndex):
blengths = self.blengths
if len(blocs) != len(blengths):
- raise ValueError('block bound arrays must be same length')
+ raise ValueError("block bound arrays must be same length")
for i in range(self.nblocks):
if i > 0:
if blocs[i] <= blocs[i - 1]:
- raise ValueError('Locations not in ascending order')
+ raise ValueError("Locations not in ascending order")
if i < self.nblocks - 1:
if blocs[i] + blengths[i] > blocs[i + 1]:
- raise ValueError(f'Block {i} overlaps')
+ raise ValueError(f"Block {i} overlaps")
else:
if blocs[i] + blengths[i] > self.length:
- raise ValueError(f'Block {i} extends beyond end')
+ raise ValueError(f"Block {i} extends beyond end")
# no zero-length blocks
if blengths[i] == 0:
- raise ValueError(f'Zero-length block {i}')
+ raise ValueError(f"Zero-length block {i}")
def equals(self, other: object) -> bool:
if not isinstance(other, BlockIndex):
@@ -411,7 +411,7 @@ cdef class BlockIndex(SparseIndex):
y = other.to_block_index()
if self.length != y.length:
- raise Exception('Indices must reference same underlying length')
+ raise Exception("Indices must reference same underlying length")
xloc = self.blocs
xlen = self.blengths
@@ -565,7 +565,7 @@ cdef class BlockMerge:
self.y = y
if x.length != y.length:
- raise Exception('Indices must reference same underlying length')
+ raise Exception("Indices must reference same underlying length")
self.xstart = self.x.blocs
self.ystart = self.y.blocs
@@ -660,7 +660,7 @@ cdef class BlockUnion(BlockMerge):
int32_t xi, yi, ynblocks, nend
if mode != 0 and mode != 1:
- raise Exception('Mode must be 0 or 1')
+ raise Exception("Mode must be 0 or 1")
# so symmetric code will work
if mode == 0:
diff --git a/pandas/_libs/testing.pyx b/pandas/_libs/testing.pyx
index b7457f94f3447..733879154b9d6 100644
--- a/pandas/_libs/testing.pyx
+++ b/pandas/_libs/testing.pyx
@@ -21,15 +21,15 @@ from pandas.core.dtypes.missing import (
cdef bint isiterable(obj):
- return hasattr(obj, '__iter__')
+ return hasattr(obj, "__iter__")
cdef bint has_length(obj):
- return hasattr(obj, '__len__')
+ return hasattr(obj, "__len__")
cdef bint is_dictlike(obj):
- return hasattr(obj, 'keys') and hasattr(obj, '__getitem__')
+ return hasattr(obj, "keys") and hasattr(obj, "__getitem__")
cpdef assert_dict_equal(a, b, bint compare_keys=True):
@@ -91,7 +91,7 @@ cpdef assert_almost_equal(a, b,
Py_ssize_t i, na, nb
double fa, fb
bint is_unequal = False, a_is_ndarray, b_is_ndarray
- str first_diff = ''
+ str first_diff = ""
if lobj is None:
lobj = a
@@ -110,9 +110,9 @@ cpdef assert_almost_equal(a, b,
if obj is None:
if a_is_ndarray or b_is_ndarray:
- obj = 'numpy array'
+ obj = "numpy array"
else:
- obj = 'Iterable'
+ obj = "Iterable"
if isiterable(a):
@@ -131,11 +131,11 @@ cpdef assert_almost_equal(a, b,
if a.shape != b.shape:
from pandas._testing import raise_assert_detail
raise_assert_detail(
- obj, f'{obj} shapes are different', a.shape, b.shape)
+ obj, f"{obj} shapes are different", a.shape, b.shape)
if check_dtype and not is_dtype_equal(a.dtype, b.dtype):
from pandas._testing import assert_attr_equal
- assert_attr_equal('dtype', a, b, obj=obj)
+ assert_attr_equal("dtype", a, b, obj=obj)
if array_equivalent(a, b, strict_nan=True):
return True
diff --git a/pandas/_libs/tslib.pyx b/pandas/_libs/tslib.pyx
index 7fee48c0a5d1f..b78174483be51 100644
--- a/pandas/_libs/tslib.pyx
+++ b/pandas/_libs/tslib.pyx
@@ -86,9 +86,9 @@ def _test_parse_iso8601(ts: str):
obj = _TSObject()
- if ts == 'now':
+ if ts == "now":
return Timestamp.utcnow()
- elif ts == 'today':
+ elif ts == "today":
return Timestamp.now().normalize()
string_to_dts(ts, &obj.dts, &out_bestunit, &out_local, &out_tzoffset, True)
@@ -145,7 +145,7 @@ def format_array_from_datetime(
cnp.flatiter it = cnp.PyArray_IterNew(values)
if na_rep is None:
- na_rep = 'NaT'
+ na_rep = "NaT"
if tz is None:
# if we don't have a format nor tz, then choose
@@ -182,21 +182,21 @@ def format_array_from_datetime(
elif basic_format_day:
pandas_datetime_to_datetimestruct(val, reso, &dts)
- res = f'{dts.year}-{dts.month:02d}-{dts.day:02d}'
+ res = f"{dts.year}-{dts.month:02d}-{dts.day:02d}"
elif basic_format:
pandas_datetime_to_datetimestruct(val, reso, &dts)
- res = (f'{dts.year}-{dts.month:02d}-{dts.day:02d} '
- f'{dts.hour:02d}:{dts.min:02d}:{dts.sec:02d}')
+ res = (f"{dts.year}-{dts.month:02d}-{dts.day:02d} "
+ f"{dts.hour:02d}:{dts.min:02d}:{dts.sec:02d}")
if show_ns:
ns = dts.ps // 1000
- res += f'.{ns + dts.us * 1000:09d}'
+ res += f".{ns + dts.us * 1000:09d}"
elif show_us:
- res += f'.{dts.us:06d}'
+ res += f".{dts.us:06d}"
elif show_ms:
- res += f'.{dts.us // 1000:03d}'
+ res += f".{dts.us // 1000:03d}"
else:
@@ -266,9 +266,9 @@ def array_with_unit_to_datetime(
int64_t mult
int prec = 0
ndarray[float64_t] fvalues
- bint is_ignore = errors=='ignore'
- bint is_coerce = errors=='coerce'
- bint is_raise = errors=='raise'
+ bint is_ignore = errors=="ignore"
+ bint is_coerce = errors=="coerce"
+ bint is_raise = errors=="raise"
bint need_to_iterate = True
ndarray[int64_t] iresult
ndarray[object] oresult
@@ -324,8 +324,8 @@ def array_with_unit_to_datetime(
return result, tz
- result = np.empty(n, dtype='M8[ns]')
- iresult = result.view('i8')
+ result = np.empty(n, dtype="M8[ns]")
+ iresult = result.view("i8")
try:
for i in range(n):
@@ -442,7 +442,7 @@ def first_non_null(values: ndarray) -> int:
@cython.boundscheck(False)
cpdef array_to_datetime(
ndarray[object] values,
- str errors='raise',
+ str errors="raise",
bint dayfirst=False,
bint yearfirst=False,
bint utc=False,
@@ -494,9 +494,9 @@ cpdef array_to_datetime(
bint seen_integer = False
bint seen_datetime = False
bint seen_datetime_offset = False
- bint is_raise = errors=='raise'
- bint is_ignore = errors=='ignore'
- bint is_coerce = errors=='coerce'
+ bint is_raise = errors=="raise"
+ bint is_ignore = errors=="ignore"
+ bint is_coerce = errors=="coerce"
bint is_same_offsets
_TSObject _ts
int64_t value
@@ -511,8 +511,8 @@ cpdef array_to_datetime(
# specify error conditions
assert is_raise or is_ignore or is_coerce
- result = np.empty(n, dtype='M8[ns]')
- iresult = result.view('i8')
+ result = np.empty(n, dtype="M8[ns]")
+ iresult = result.view("i8")
try:
for i in range(n):
@@ -571,7 +571,7 @@ cpdef array_to_datetime(
# if we have previously (or in future accept
# datetimes/strings, then we must coerce)
try:
- iresult[i] = cast_from_unit(val, 'ns')
+ iresult[i] = cast_from_unit(val, "ns")
except OverflowError:
iresult[i] = NPY_NAT
@@ -632,7 +632,7 @@ cpdef array_to_datetime(
else:
# Add a marker for naive string, to track if we are
# parsing mixed naive and aware strings
- out_tzoffset_vals.add('naive')
+ out_tzoffset_vals.add("naive")
_ts = convert_datetime_to_tsobject(py_dt, None)
iresult[i] = _ts.value
@@ -653,7 +653,7 @@ cpdef array_to_datetime(
else:
# Add a marker for naive string, to track if we are
# parsing mixed naive and aware strings
- out_tzoffset_vals.add('naive')
+ out_tzoffset_vals.add("naive")
iresult[i] = value
check_dts_bounds(&dts)
@@ -791,9 +791,9 @@ cdef _array_to_datetime_object(
cdef:
Py_ssize_t i, n = len(values)
object val
- bint is_ignore = errors == 'ignore'
- bint is_coerce = errors == 'coerce'
- bint is_raise = errors == 'raise'
+ bint is_ignore = errors == "ignore"
+ bint is_coerce = errors == "coerce"
+ bint is_raise = errors == "raise"
ndarray[object] oresult
npy_datetimestruct dts
@@ -816,7 +816,7 @@ cdef _array_to_datetime_object(
val = str(val)
if len(val) == 0 or val in nat_strings:
- oresult[i] = 'NaT'
+ oresult[i] = "NaT"
continue
try:
oresult[i] = parse_datetime_string(val, dayfirst=dayfirst,
diff --git a/pandas/_libs/tslibs/ccalendar.pyx b/pandas/_libs/tslibs/ccalendar.pyx
index 00ee15b73f551..19c732e2a313b 100644
--- a/pandas/_libs/tslibs/ccalendar.pyx
+++ b/pandas/_libs/tslibs/ccalendar.pyx
@@ -29,21 +29,21 @@ cdef int32_t* month_offset = [
0, 31, 60, 91, 121, 152, 182, 213, 244, 274, 305, 335, 366]
# Canonical location for other modules to find name constants
-MONTHS = ['JAN', 'FEB', 'MAR', 'APR', 'MAY', 'JUN', 'JUL',
- 'AUG', 'SEP', 'OCT', 'NOV', 'DEC']
+MONTHS = ["JAN", "FEB", "MAR", "APR", "MAY", "JUN", "JUL",
+ "AUG", "SEP", "OCT", "NOV", "DEC"]
# The first blank line is consistent with calendar.month_name in the calendar
# standard library
-MONTHS_FULL = ['', 'January', 'February', 'March', 'April', 'May', 'June',
- 'July', 'August', 'September', 'October', 'November',
- 'December']
+MONTHS_FULL = ["", "January", "February", "March", "April", "May", "June",
+ "July", "August", "September", "October", "November",
+ "December"]
MONTH_NUMBERS = {name: num for num, name in enumerate(MONTHS)}
cdef dict c_MONTH_NUMBERS = MONTH_NUMBERS
MONTH_ALIASES = {(num + 1): name for num, name in enumerate(MONTHS)}
MONTH_TO_CAL_NUM = {name: num + 1 for num, name in enumerate(MONTHS)}
-DAYS = ['MON', 'TUE', 'WED', 'THU', 'FRI', 'SAT', 'SUN']
-DAYS_FULL = ['Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday',
- 'Saturday', 'Sunday']
+DAYS = ["MON", "TUE", "WED", "THU", "FRI", "SAT", "SUN"]
+DAYS_FULL = ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday",
+ "Saturday", "Sunday"]
int_to_weekday = {num: name for num, name in enumerate(DAYS)}
weekday_to_int = {int_to_weekday[key]: key for key in int_to_weekday}
diff --git a/pandas/_libs/tslibs/conversion.pyx b/pandas/_libs/tslibs/conversion.pyx
index 17facf9e16f4b..1b6dace6e90b1 100644
--- a/pandas/_libs/tslibs/conversion.pyx
+++ b/pandas/_libs/tslibs/conversion.pyx
@@ -76,8 +76,8 @@ from pandas._libs.tslibs.tzconversion cimport (
# ----------------------------------------------------------------------
# Constants
-DT64NS_DTYPE = np.dtype('M8[ns]')
-TD64NS_DTYPE = np.dtype('m8[ns]')
+DT64NS_DTYPE = np.dtype("M8[ns]")
+TD64NS_DTYPE = np.dtype("m8[ns]")
# ----------------------------------------------------------------------
@@ -315,8 +315,8 @@ cdef _TSObject convert_to_tsobject(object ts, tzinfo tz, str unit,
if isinstance(ts, Period):
raise ValueError("Cannot convert Period to Timestamp "
"unambiguously. Use to_timestamp")
- raise TypeError(f'Cannot convert input [{ts}] of type {type(ts)} to '
- f'Timestamp')
+ raise TypeError(f"Cannot convert input [{ts}] of type {type(ts)} to "
+ f"Timestamp")
maybe_localize_tso(obj, tz, obj.creso)
return obj
@@ -497,11 +497,11 @@ cdef _TSObject _convert_str_to_tsobject(object ts, tzinfo tz, str unit,
obj.value = NPY_NAT
obj.tzinfo = tz
return obj
- elif ts == 'now':
+ elif ts == "now":
# Issue 9000, we short-circuit rather than going
# into np_datetime_strings which returns utc
dt = datetime.now(tz)
- elif ts == 'today':
+ elif ts == "today":
# Issue 9000, we short-circuit rather than going
# into np_datetime_strings which returns a normalized datetime
dt = datetime.now(tz)
@@ -702,19 +702,19 @@ cdef tzinfo convert_timezone(
if utc_convert:
pass
elif found_naive:
- raise ValueError('Tz-aware datetime.datetime '
- 'cannot be converted to '
- 'datetime64 unless utc=True')
+ raise ValueError("Tz-aware datetime.datetime "
+ "cannot be converted to "
+ "datetime64 unless utc=True")
elif tz_out is not None and not tz_compare(tz_out, tz_in):
- raise ValueError('Tz-aware datetime.datetime '
- 'cannot be converted to '
- 'datetime64 unless utc=True')
+ raise ValueError("Tz-aware datetime.datetime "
+ "cannot be converted to "
+ "datetime64 unless utc=True")
else:
tz_out = tz_in
else:
if found_tz and not utc_convert:
- raise ValueError('Cannot mix tz-aware with '
- 'tz-naive values')
+ raise ValueError("Cannot mix tz-aware with "
+ "tz-naive values")
return tz_out
diff --git a/pandas/_libs/tslibs/fields.pyx b/pandas/_libs/tslibs/fields.pyx
index dda26ad3bebc6..7e5d1d13cbda3 100644
--- a/pandas/_libs/tslibs/fields.pyx
+++ b/pandas/_libs/tslibs/fields.pyx
@@ -73,13 +73,13 @@ def build_field_sarray(const int64_t[:] dtindex, NPY_DATETIMEUNIT reso):
out = np.empty(count, dtype=sa_dtype)
- years = out['Y']
- months = out['M']
- days = out['D']
- hours = out['h']
- minutes = out['m']
- seconds = out['s']
- mus = out['u']
+ years = out["Y"]
+ months = out["M"]
+ days = out["D"]
+ hours = out["h"]
+ minutes = out["m"]
+ seconds = out["s"]
+ mus = out["u"]
for i in range(count):
pandas_datetime_to_datetimestruct(dtindex[i], reso, &dts)
@@ -154,11 +154,11 @@ def get_date_name_field(
out = np.empty(count, dtype=object)
- if field == 'day_name':
+ if field == "day_name":
if locale is None:
names = np.array(DAYS_FULL, dtype=np.object_)
else:
- names = np.array(_get_locale_names('f_weekday', locale),
+ names = np.array(_get_locale_names("f_weekday", locale),
dtype=np.object_)
for i in range(count):
if dtindex[i] == NPY_NAT:
@@ -169,11 +169,11 @@ def get_date_name_field(
dow = dayofweek(dts.year, dts.month, dts.day)
out[i] = names[dow].capitalize()
- elif field == 'month_name':
+ elif field == "month_name":
if locale is None:
names = np.array(MONTHS_FULL, dtype=np.object_)
else:
- names = np.array(_get_locale_names('f_month', locale),
+ names = np.array(_get_locale_names("f_month", locale),
dtype=np.object_)
for i in range(count):
if dtindex[i] == NPY_NAT:
@@ -237,20 +237,20 @@ def get_start_end_field(
npy_datetimestruct dts
int compare_month, modby
- out = np.zeros(count, dtype='int8')
+ out = np.zeros(count, dtype="int8")
if freqstr:
- if freqstr == 'C':
+ if freqstr == "C":
raise ValueError(f"Custom business days is not supported by {field}")
- is_business = freqstr[0] == 'B'
+ is_business = freqstr[0] == "B"
# YearBegin(), BYearBegin() use month = starting month of year.
# QuarterBegin(), BQuarterBegin() use startingMonth = starting
# month of year. Other offsets use month, startingMonth as ending
# month of year.
- if (freqstr[0:2] in ['MS', 'QS', 'AS']) or (
- freqstr[1:3] in ['MS', 'QS', 'AS']):
+ if (freqstr[0:2] in ["MS", "QS", "AS"]) or (
+ freqstr[1:3] in ["MS", "QS", "AS"]):
end_month = 12 if month_kw == 1 else month_kw - 1
start_month = month_kw
else:
@@ -339,9 +339,9 @@ def get_date_field(
ndarray[int32_t] out
npy_datetimestruct dts
- out = np.empty(count, dtype='i4')
+ out = np.empty(count, dtype="i4")
- if field == 'Y':
+ if field == "Y":
with nogil:
for i in range(count):
if dtindex[i] == NPY_NAT:
@@ -352,7 +352,7 @@ def get_date_field(
out[i] = dts.year
return out
- elif field == 'M':
+ elif field == "M":
with nogil:
for i in range(count):
if dtindex[i] == NPY_NAT:
@@ -363,7 +363,7 @@ def get_date_field(
out[i] = dts.month
return out
- elif field == 'D':
+ elif field == "D":
with nogil:
for i in range(count):
if dtindex[i] == NPY_NAT:
@@ -374,7 +374,7 @@ def get_date_field(
out[i] = dts.day
return out
- elif field == 'h':
+ elif field == "h":
with nogil:
for i in range(count):
if dtindex[i] == NPY_NAT:
@@ -386,7 +386,7 @@ def get_date_field(
# TODO: can we de-dup with period.pyx <accessor>s?
return out
- elif field == 'm':
+ elif field == "m":
with nogil:
for i in range(count):
if dtindex[i] == NPY_NAT:
@@ -397,7 +397,7 @@ def get_date_field(
out[i] = dts.min
return out
- elif field == 's':
+ elif field == "s":
with nogil:
for i in range(count):
if dtindex[i] == NPY_NAT:
@@ -408,7 +408,7 @@ def get_date_field(
out[i] = dts.sec
return out
- elif field == 'us':
+ elif field == "us":
with nogil:
for i in range(count):
if dtindex[i] == NPY_NAT:
@@ -419,7 +419,7 @@ def get_date_field(
out[i] = dts.us
return out
- elif field == 'ns':
+ elif field == "ns":
with nogil:
for i in range(count):
if dtindex[i] == NPY_NAT:
@@ -429,7 +429,7 @@ def get_date_field(
pandas_datetime_to_datetimestruct(dtindex[i], reso, &dts)
out[i] = dts.ps // 1000
return out
- elif field == 'doy':
+ elif field == "doy":
with nogil:
for i in range(count):
if dtindex[i] == NPY_NAT:
@@ -440,7 +440,7 @@ def get_date_field(
out[i] = get_day_of_year(dts.year, dts.month, dts.day)
return out
- elif field == 'dow':
+ elif field == "dow":
with nogil:
for i in range(count):
if dtindex[i] == NPY_NAT:
@@ -451,7 +451,7 @@ def get_date_field(
out[i] = dayofweek(dts.year, dts.month, dts.day)
return out
- elif field == 'woy':
+ elif field == "woy":
with nogil:
for i in range(count):
if dtindex[i] == NPY_NAT:
@@ -462,7 +462,7 @@ def get_date_field(
out[i] = get_week_of_year(dts.year, dts.month, dts.day)
return out
- elif field == 'q':
+ elif field == "q":
with nogil:
for i in range(count):
if dtindex[i] == NPY_NAT:
@@ -474,7 +474,7 @@ def get_date_field(
out[i] = ((out[i] - 1) // 3) + 1
return out
- elif field == 'dim':
+ elif field == "dim":
with nogil:
for i in range(count):
if dtindex[i] == NPY_NAT:
@@ -484,8 +484,8 @@ def get_date_field(
pandas_datetime_to_datetimestruct(dtindex[i], reso, &dts)
out[i] = get_days_in_month(dts.year, dts.month)
return out
- elif field == 'is_leap_year':
- return isleapyear_arr(get_date_field(dtindex, 'Y', reso=reso))
+ elif field == "is_leap_year":
+ return isleapyear_arr(get_date_field(dtindex, "Y", reso=reso))
raise ValueError(f"Field {field} not supported")
@@ -506,9 +506,9 @@ def get_timedelta_field(
ndarray[int32_t] out
pandas_timedeltastruct tds
- out = np.empty(count, dtype='i4')
+ out = np.empty(count, dtype="i4")
- if field == 'days':
+ if field == "days":
with nogil:
for i in range(count):
if tdindex[i] == NPY_NAT:
@@ -519,7 +519,7 @@ def get_timedelta_field(
out[i] = tds.days
return out
- elif field == 'seconds':
+ elif field == "seconds":
with nogil:
for i in range(count):
if tdindex[i] == NPY_NAT:
@@ -530,7 +530,7 @@ def get_timedelta_field(
out[i] = tds.seconds
return out
- elif field == 'microseconds':
+ elif field == "microseconds":
with nogil:
for i in range(count):
if tdindex[i] == NPY_NAT:
@@ -541,7 +541,7 @@ def get_timedelta_field(
out[i] = tds.microseconds
return out
- elif field == 'nanoseconds':
+ elif field == "nanoseconds":
with nogil:
for i in range(count):
if tdindex[i] == NPY_NAT:
@@ -560,7 +560,7 @@ cpdef isleapyear_arr(ndarray years):
cdef:
ndarray[int8_t] out
- out = np.zeros(len(years), dtype='int8')
+ out = np.zeros(len(years), dtype="int8")
out[np.logical_or(years % 400 == 0,
np.logical_and(years % 4 == 0,
years % 100 > 0))] = 1
diff --git a/pandas/_libs/tslibs/nattype.pyx b/pandas/_libs/tslibs/nattype.pyx
index dcb7358d8e69a..1f18f8cae4ae8 100644
--- a/pandas/_libs/tslibs/nattype.pyx
+++ b/pandas/_libs/tslibs/nattype.pyx
@@ -259,7 +259,7 @@ cdef class _NaT(datetime):
"""
Return a numpy.datetime64 object with 'ns' precision.
"""
- return np.datetime64('NaT', "ns")
+ return np.datetime64("NaT", "ns")
def to_numpy(self, dtype=None, copy=False) -> np.datetime64 | np.timedelta64:
"""
diff --git a/pandas/_libs/tslibs/np_datetime.pyx b/pandas/_libs/tslibs/np_datetime.pyx
index d49c41e54764f..e5f683c56da9b 100644
--- a/pandas/_libs/tslibs/np_datetime.pyx
+++ b/pandas/_libs/tslibs/np_datetime.pyx
@@ -211,10 +211,10 @@ cdef check_dts_bounds(npy_datetimestruct *dts, NPY_DATETIMEUNIT unit=NPY_FR_ns):
error = True
if error:
- fmt = (f'{dts.year}-{dts.month:02d}-{dts.day:02d} '
- f'{dts.hour:02d}:{dts.min:02d}:{dts.sec:02d}')
+ fmt = (f"{dts.year}-{dts.month:02d}-{dts.day:02d} "
+ f"{dts.hour:02d}:{dts.min:02d}:{dts.sec:02d}")
# TODO: "nanosecond" in the message assumes NPY_FR_ns
- raise OutOfBoundsDatetime(f'Out of bounds nanosecond timestamp: {fmt}')
+ raise OutOfBoundsDatetime(f"Out of bounds nanosecond timestamp: {fmt}")
# ----------------------------------------------------------------------
@@ -289,7 +289,7 @@ cdef inline int string_to_dts(
buf = get_c_string_buf_and_size(val, &length)
if format is None:
- format_buf = b''
+ format_buf = b""
format_length = 0
exact = False
else:
diff --git a/pandas/_libs/tslibs/offsets.pyx b/pandas/_libs/tslibs/offsets.pyx
index 4c6493652b216..97554556b0082 100644
--- a/pandas/_libs/tslibs/offsets.pyx
+++ b/pandas/_libs/tslibs/offsets.pyx
@@ -240,9 +240,9 @@ cdef _get_calendar(weekmask, holidays, calendar):
holidays = [_to_dt64D(dt) for dt in holidays]
holidays = tuple(sorted(holidays))
- kwargs = {'weekmask': weekmask}
+ kwargs = {"weekmask": weekmask}
if holidays:
- kwargs['holidays'] = holidays
+ kwargs["holidays"] = holidays
busdaycalendar = np.busdaycalendar(**kwargs)
return busdaycalendar, holidays
@@ -253,7 +253,7 @@ cdef _to_dt64D(dt):
# > np.datetime64(dt.datetime(2013,5,1),dtype='datetime64[D]')
# numpy.datetime64('2013-05-01T02:00:00.000000+0200')
# Thus astype is needed to cast datetime to datetime64[D]
- if getattr(dt, 'tzinfo', None) is not None:
+ if getattr(dt, "tzinfo", None) is not None:
# Get the nanosecond timestamp,
# equiv `Timestamp(dt).value` or `dt.timestamp() * 10**9`
# The `naive` must be the `dt` naive wall time
@@ -274,7 +274,7 @@ cdef _to_dt64D(dt):
cdef _validate_business_time(t_input):
if isinstance(t_input, str):
try:
- t = time.strptime(t_input, '%H:%M')
+ t = time.strptime(t_input, "%H:%M")
return dt_time(hour=t.tm_hour, minute=t.tm_min)
except ValueError:
raise ValueError("time data must match '%H:%M' format")
@@ -303,14 +303,14 @@ cdef _determine_offset(kwds):
# more, nanosecond(s) are handled by apply_wraps
kwds_no_nanos = dict(
(k, v) for k, v in kwds.items()
- if k not in ('nanosecond', 'nanoseconds')
+ if k not in ("nanosecond", "nanoseconds")
)
# TODO: Are nanosecond and nanoseconds allowed somewhere?
- _kwds_use_relativedelta = ('years', 'months', 'weeks', 'days',
- 'year', 'month', 'week', 'day', 'weekday',
- 'hour', 'minute', 'second', 'microsecond',
- 'millisecond')
+ _kwds_use_relativedelta = ("years", "months", "weeks", "days",
+ "year", "month", "week", "day", "weekday",
+ "hour", "minute", "second", "microsecond",
+ "millisecond")
use_relativedelta = False
if len(kwds_no_nanos) > 0:
@@ -327,7 +327,7 @@ cdef _determine_offset(kwds):
else:
# sub-daily offset - use timedelta (tz-aware)
offset = timedelta(**kwds_no_nanos)
- elif any(nano in kwds for nano in ('nanosecond', 'nanoseconds')):
+ elif any(nano in kwds for nano in ("nanosecond", "nanoseconds")):
offset = timedelta(days=0)
else:
# GH 45643/45890: (historically) defaults to 1 day for non-nano
@@ -424,11 +424,11 @@ cdef class BaseOffset:
# cython attributes are not in __dict__
all_paras[attr] = getattr(self, attr)
- if 'holidays' in all_paras and not all_paras['holidays']:
- all_paras.pop('holidays')
- exclude = ['kwds', 'name', 'calendar']
+ if "holidays" in all_paras and not all_paras["holidays"]:
+ all_paras.pop("holidays")
+ exclude = ["kwds", "name", "calendar"]
attrs = [(k, v) for k, v in all_paras.items()
- if (k not in exclude) and (k[0] != '_')]
+ if (k not in exclude) and (k[0] != "_")]
attrs = sorted(set(attrs))
params = tuple([str(type(self))] + attrs)
return params
@@ -481,7 +481,7 @@ cdef class BaseOffset:
def __sub__(self, other):
if PyDateTime_Check(other):
- raise TypeError('Cannot subtract datetime from offset.')
+ raise TypeError("Cannot subtract datetime from offset.")
elif type(other) == type(self):
return type(self)(self.n - other.n, normalize=self.normalize,
**self.kwds)
@@ -736,13 +736,13 @@ cdef class BaseOffset:
ValueError if n != int(n)
"""
if util.is_timedelta64_object(n):
- raise TypeError(f'`n` argument must be an integer, got {type(n)}')
+ raise TypeError(f"`n` argument must be an integer, got {type(n)}")
try:
nint = int(n)
except (ValueError, TypeError):
- raise TypeError(f'`n` argument must be an integer, got {type(n)}')
+ raise TypeError(f"`n` argument must be an integer, got {type(n)}")
if n != nint:
- raise ValueError(f'`n` argument must be an integer, got {n}')
+ raise ValueError(f"`n` argument must be an integer, got {n}")
return nint
def __setstate__(self, state):
@@ -1700,7 +1700,7 @@ cdef class BusinessHour(BusinessMixin):
out = super()._repr_attrs()
# Use python string formatting to be faster than strftime
hours = ",".join(
- f'{st.hour:02d}:{st.minute:02d}-{en.hour:02d}:{en.minute:02d}'
+ f"{st.hour:02d}:{st.minute:02d}-{en.hour:02d}:{en.minute:02d}"
for st, en in zip(self.start, self.end)
)
attrs = [f"{self._prefix}={hours}"]
@@ -3675,7 +3675,7 @@ cdef class _CustomBusinessMonth(BusinessMixin):
Define default roll function to be called in apply method.
"""
cbday_kwds = self.kwds.copy()
- cbday_kwds['offset'] = timedelta(0)
+ cbday_kwds["offset"] = timedelta(0)
cbday = CustomBusinessDay(n=1, normalize=False, **cbday_kwds)
diff --git a/pandas/_libs/tslibs/parsing.pyx b/pandas/_libs/tslibs/parsing.pyx
index 232169f3844b3..25a2722c48bd6 100644
--- a/pandas/_libs/tslibs/parsing.pyx
+++ b/pandas/_libs/tslibs/parsing.pyx
@@ -94,7 +94,7 @@ PARSING_WARNING_MSG = (
)
cdef:
- set _not_datelike_strings = {'a', 'A', 'm', 'M', 'p', 'P', 't', 'T'}
+ set _not_datelike_strings = {"a", "A", "m", "M", "p", "P", "t", "T"}
# ----------------------------------------------------------------------
cdef:
@@ -165,38 +165,38 @@ cdef inline object _parse_delimited_date(str date_string, bint dayfirst):
month = _parse_2digit(buf)
day = _parse_2digit(buf + 3)
year = _parse_4digit(buf + 6)
- reso = 'day'
+ reso = "day"
can_swap = 1
elif length == 9 and _is_delimiter(buf[1]) and _is_delimiter(buf[4]):
# parsing M?DD?YYYY and D?MM?YYYY dates
month = _parse_1digit(buf)
day = _parse_2digit(buf + 2)
year = _parse_4digit(buf + 5)
- reso = 'day'
+ reso = "day"
can_swap = 1
elif length == 9 and _is_delimiter(buf[2]) and _is_delimiter(buf[4]):
# parsing MM?D?YYYY and DD?M?YYYY dates
month = _parse_2digit(buf)
day = _parse_1digit(buf + 3)
year = _parse_4digit(buf + 5)
- reso = 'day'
+ reso = "day"
can_swap = 1
elif length == 8 and _is_delimiter(buf[1]) and _is_delimiter(buf[3]):
# parsing M?D?YYYY and D?M?YYYY dates
month = _parse_1digit(buf)
day = _parse_1digit(buf + 2)
year = _parse_4digit(buf + 4)
- reso = 'day'
+ reso = "day"
can_swap = 1
elif length == 7 and _is_delimiter(buf[2]):
# parsing MM?YYYY dates
- if buf[2] == b'.':
+ if buf[2] == b".":
# we cannot reliably tell whether e.g. 10.2010 is a float
# or a date, thus we refuse to parse it here
return None, None
month = _parse_2digit(buf)
year = _parse_4digit(buf + 3)
- reso = 'month'
+ reso = "month"
else:
return None, None
@@ -214,16 +214,16 @@ cdef inline object _parse_delimited_date(str date_string, bint dayfirst):
if dayfirst and not swapped_day_and_month:
warnings.warn(
PARSING_WARNING_MSG.format(
- format='MM/DD/YYYY',
- dayfirst='True',
+ format="MM/DD/YYYY",
+ dayfirst="True",
),
stacklevel=find_stack_level(),
)
elif not dayfirst and swapped_day_and_month:
warnings.warn(
PARSING_WARNING_MSG.format(
- format='DD/MM/YYYY',
- dayfirst='False (the default)',
+ format="DD/MM/YYYY",
+ dayfirst="False (the default)",
),
stacklevel=find_stack_level(),
)
@@ -255,11 +255,11 @@ cdef inline bint does_string_look_like_time(str parse_string):
buf = get_c_string_buf_and_size(parse_string, &length)
if length >= 4:
- if buf[1] == b':':
+ if buf[1] == b":":
# h:MM format
hour = getdigit_ascii(buf[0], -1)
minute = _parse_2digit(buf + 2)
- elif buf[2] == b':':
+ elif buf[2] == b":":
# HH:MM format
hour = _parse_2digit(buf)
minute = _parse_2digit(buf + 3)
@@ -289,7 +289,7 @@ def parse_datetime_string(
datetime dt
if not _does_string_look_like_datetime(date_string):
- raise ValueError(f'Given date string {date_string} not likely a datetime')
+ raise ValueError(f"Given date string {date_string} not likely a datetime")
if does_string_look_like_time(date_string):
# use current datetime as default, not pass _DEFAULT_DATETIME
@@ -323,7 +323,7 @@ def parse_datetime_string(
except TypeError:
# following may be raised from dateutil
# TypeError: 'NoneType' object is not iterable
- raise ValueError(f'Given date string {date_string} not likely a datetime')
+ raise ValueError(f"Given date string {date_string} not likely a datetime")
return dt
@@ -399,7 +399,7 @@ cdef parse_datetime_string_with_reso(
int out_tzoffset
if not _does_string_look_like_datetime(date_string):
- raise ValueError(f'Given date string {date_string} not likely a datetime')
+ raise ValueError(f"Given date string {date_string} not likely a datetime")
parsed, reso = _parse_delimited_date(date_string, dayfirst)
if parsed is not None:
@@ -478,7 +478,7 @@ cpdef bint _does_string_look_like_datetime(str py_string):
buf = get_c_string_buf_and_size(py_string, &length)
if length >= 1:
first = buf[0]
- if first == b'0':
+ if first == b"0":
# Strings starting with 0 are more consistent with a
# date-like string than a number
return True
@@ -492,7 +492,7 @@ cpdef bint _does_string_look_like_datetime(str py_string):
# a float number can be used, b'\0' - not to use a thousand
# separator, 1 - skip extra spaces before and after,
converted_date = xstrtod(buf, &endptr,
- b'.', b'e', b'\0', 1, &error, NULL)
+ b".", b"e", b"\0", 1, &error, NULL)
# if there were no errors and the whole line was parsed, then ...
if error == 0 and endptr == buf + length:
return converted_date >= 1000
@@ -512,7 +512,7 @@ cdef inline object _parse_dateabbr_string(object date_string, datetime default,
assert isinstance(date_string, str)
if date_string in nat_strings:
- return NaT, ''
+ return NaT, ""
date_string = date_string.upper()
date_len = len(date_string)
@@ -521,21 +521,21 @@ cdef inline object _parse_dateabbr_string(object date_string, datetime default,
# parse year only like 2000
try:
ret = default.replace(year=int(date_string))
- return ret, 'year'
+ return ret, "year"
except ValueError:
pass
try:
if 4 <= date_len <= 7:
- i = date_string.index('Q', 1, 6)
+ i = date_string.index("Q", 1, 6)
if i == 1:
quarter = int(date_string[0])
if date_len == 4 or (date_len == 5
- and date_string[i + 1] == '-'):
+ and date_string[i + 1] == "-"):
# r'(\d)Q-?(\d\d)')
year = 2000 + int(date_string[-2:])
elif date_len == 6 or (date_len == 7
- and date_string[i + 1] == '-'):
+ and date_string[i + 1] == "-"):
# r'(\d)Q-?(\d\d\d\d)')
year = int(date_string[-4:])
else:
@@ -543,14 +543,14 @@ cdef inline object _parse_dateabbr_string(object date_string, datetime default,
elif i == 2 or i == 3:
# r'(\d\d)-?Q(\d)'
if date_len == 4 or (date_len == 5
- and date_string[i - 1] == '-'):
+ and date_string[i - 1] == "-"):
quarter = int(date_string[-1])
year = 2000 + int(date_string[:2])
else:
raise ValueError
elif i == 4 or i == 5:
if date_len == 6 or (date_len == 7
- and date_string[i - 1] == '-'):
+ and date_string[i - 1] == "-"):
# r'(\d\d\d\d)-?Q(\d)'
quarter = int(date_string[-1])
year = int(date_string[:4])
@@ -558,9 +558,9 @@ cdef inline object _parse_dateabbr_string(object date_string, datetime default,
raise ValueError
if not (1 <= quarter <= 4):
- raise DateParseError(f'Incorrect quarterly string is given, '
- f'quarter must be '
- f'between 1 and 4: {date_string}')
+ raise DateParseError(f"Incorrect quarterly string is given, "
+ f"quarter must be "
+ f"between 1 and 4: {date_string}")
try:
# GH#1228
@@ -571,30 +571,30 @@ cdef inline object _parse_dateabbr_string(object date_string, datetime default,
f"freq: {freq}")
ret = default.replace(year=year, month=month)
- return ret, 'quarter'
+ return ret, "quarter"
except DateParseError:
raise
except ValueError:
pass
- if date_len == 6 and freq == 'M':
+ if date_len == 6 and freq == "M":
year = int(date_string[:4])
month = int(date_string[4:6])
try:
ret = default.replace(year=year, month=month)
- return ret, 'month'
+ return ret, "month"
except ValueError:
pass
- for pat in ['%Y-%m', '%b %Y', '%b-%Y']:
+ for pat in ["%Y-%m", "%b %Y", "%b-%Y"]:
try:
ret = datetime.strptime(date_string, pat)
- return ret, 'month'
+ return ret, "month"
except ValueError:
pass
- raise ValueError(f'Unable to parse {date_string}')
+ raise ValueError(f"Unable to parse {date_string}")
cpdef quarter_to_myear(int year, int quarter, str freq):
@@ -664,11 +664,11 @@ cdef dateutil_parse(
if reso is None:
raise ValueError(f"Unable to parse datetime string: {timestr}")
- if reso == 'microsecond':
- if repl['microsecond'] == 0:
- reso = 'second'
- elif repl['microsecond'] % 1000 == 0:
- reso = 'millisecond'
+ if reso == "microsecond":
+ if repl["microsecond"] == 0:
+ reso = "second"
+ elif repl["microsecond"] % 1000 == 0:
+ reso = "millisecond"
ret = default.replace(**repl)
if res.weekday is not None and not res.day:
@@ -712,7 +712,7 @@ def try_parse_dates(
object[::1] result
n = len(values)
- result = np.empty(n, dtype='O')
+ result = np.empty(n, dtype="O")
if parser is None:
if default is None: # GH2618
@@ -725,7 +725,7 @@ def try_parse_dates(
# EAFP here
try:
for i in range(n):
- if values[i] == '':
+ if values[i] == "":
result[i] = np.nan
else:
result[i] = parse_date(values[i])
@@ -736,7 +736,7 @@ def try_parse_dates(
parse_date = parser
for i in range(n):
- if values[i] == '':
+ if values[i] == "":
result[i] = np.nan
else:
result[i] = parse_date(values[i])
@@ -754,8 +754,8 @@ def try_parse_year_month_day(
n = len(years)
# TODO(cython3): Use len instead of `shape[0]`
if months.shape[0] != n or days.shape[0] != n:
- raise ValueError('Length of years/months/days must all be equal')
- result = np.empty(n, dtype='O')
+ raise ValueError("Length of years/months/days must all be equal")
+ result = np.empty(n, dtype="O")
for i in range(n):
result[i] = datetime(int(years[i]), int(months[i]), int(days[i]))
@@ -786,8 +786,8 @@ def try_parse_datetime_components(object[:] years,
or minutes.shape[0] != n
or seconds.shape[0] != n
):
- raise ValueError('Length of all datetime components must be equal')
- result = np.empty(n, dtype='O')
+ raise ValueError("Length of all datetime components must be equal")
+ result = np.empty(n, dtype="O")
for i in range(n):
float_secs = float(seconds[i])
@@ -818,15 +818,15 @@ def try_parse_datetime_components(object[:] years,
# Copyright (c) 2017 - dateutil contributors
class _timelex:
def __init__(self, instream):
- if getattr(instream, 'decode', None) is not None:
+ if getattr(instream, "decode", None) is not None:
instream = instream.decode()
if isinstance(instream, str):
self.stream = instream
- elif getattr(instream, 'read', None) is None:
+ elif getattr(instream, "read", None) is None:
raise TypeError(
- 'Parser must be a string or character stream, not '
- f'{type(instream).__name__}')
+ "Parser must be a string or character stream, not "
+ f"{type(instream).__name__}")
else:
self.stream = instream.read()
@@ -846,7 +846,7 @@ class _timelex:
cdef:
Py_ssize_t n
- stream = self.stream.replace('\x00', '')
+ stream = self.stream.replace("\x00", "")
# TODO: Change \s --> \s+ (this doesn't match existing behavior)
# TODO: change the punctuation block to punc+ (does not match existing)
@@ -865,10 +865,10 @@ class _timelex:
# Kludge to match ,-decimal behavior; it'd be better to do this
# later in the process and have a simpler tokenization
if (token is not None and token.isdigit() and
- tokens[n + 1] == ',' and tokens[n + 2].isdigit()):
+ tokens[n + 1] == "," and tokens[n + 2].isdigit()):
# Have to check None b/c it might be replaced during the loop
# TODO: I _really_ don't like faking the value here
- tokens[n] = token + '.' + tokens[n + 2]
+ tokens[n] = token + "." + tokens[n + 2]
tokens[n + 1] = None
tokens[n + 2] = None
@@ -889,12 +889,12 @@ def format_is_iso(f: str) -> bint:
Generally of form YYYY-MM-DDTHH:MM:SS - date separator can be different
but must be consistent. Leading 0s in dates and times are optional.
"""
- iso_template = '%Y{date_sep}%m{date_sep}%d{time_sep}%H:%M:%S{micro_or_tz}'.format
- excluded_formats = ['%Y%m%d', '%Y%m', '%Y']
+ iso_template = "%Y{date_sep}%m{date_sep}%d{time_sep}%H:%M:%S{micro_or_tz}".format
+ excluded_formats = ["%Y%m%d", "%Y%m", "%Y"]
- for date_sep in [' ', '/', '\\', '-', '.', '']:
- for time_sep in [' ', 'T']:
- for micro_or_tz in ['', '%z', '.%f', '.%f%z']:
+ for date_sep in [" ", "/", "\\", "-", ".", ""]:
+ for time_sep in [" ", "T"]:
+ for micro_or_tz in ["", "%z", ".%f", ".%f%z"]:
if (iso_template(date_sep=date_sep,
time_sep=time_sep,
micro_or_tz=micro_or_tz,
@@ -922,25 +922,25 @@ def guess_datetime_format(dt_str: str, bint dayfirst=False) -> str | None:
datetime format string (for `strftime` or `strptime`),
or None if it can't be guessed.
"""
- day_attribute_and_format = (('day',), '%d', 2)
+ day_attribute_and_format = (("day",), "%d", 2)
# attr name, format, padding (if any)
datetime_attrs_to_format = [
- (('year', 'month', 'day'), '%Y%m%d', 0),
- (('year',), '%Y', 0),
- (('month',), '%B', 0),
- (('month',), '%b', 0),
- (('month',), '%m', 2),
+ (("year", "month", "day"), "%Y%m%d", 0),
+ (("year",), "%Y", 0),
+ (("month",), "%B", 0),
+ (("month",), "%b", 0),
+ (("month",), "%m", 2),
day_attribute_and_format,
- (('hour',), '%H', 2),
- (('minute',), '%M', 2),
- (('second',), '%S', 2),
- (('second', 'microsecond'), '%S.%f', 0),
- (('tzinfo',), '%z', 0),
- (('tzinfo',), '%Z', 0),
- (('day_of_week',), '%a', 0),
- (('day_of_week',), '%A', 0),
- (('meridiem',), '%p', 0),
+ (("hour",), "%H", 2),
+ (("minute",), "%M", 2),
+ (("second",), "%S", 2),
+ (("second", "microsecond"), "%S.%f", 0),
+ (("tzinfo",), "%z", 0),
+ (("tzinfo",), "%Z", 0),
+ (("day_of_week",), "%a", 0),
+ (("day_of_week",), "%A", 0),
+ (("meridiem",), "%p", 0),
]
if dayfirst:
@@ -967,13 +967,13 @@ def guess_datetime_format(dt_str: str, bint dayfirst=False) -> str | None:
# instead of ‘+09:00’.
if parsed_datetime.tzinfo is not None:
offset_index = None
- if len(tokens) > 0 and tokens[-1] == 'Z':
+ if len(tokens) > 0 and tokens[-1] == "Z":
# the last 'Z' means zero offset
offset_index = -1
- elif len(tokens) > 1 and tokens[-2] in ('+', '-'):
+ elif len(tokens) > 1 and tokens[-2] in ("+", "-"):
# ex. [..., '+', '0900']
offset_index = -2
- elif len(tokens) > 3 and tokens[-4] in ('+', '-'):
+ elif len(tokens) > 3 and tokens[-4] in ("+", "-"):
# ex. [..., '+', '09', ':', '00']
offset_index = -4
@@ -1017,10 +1017,10 @@ def guess_datetime_format(dt_str: str, bint dayfirst=False) -> str | None:
# We make exceptions for %Y and %Y-%m (only with the `-` separator)
# as they conform with ISO8601.
if (
- len({'year', 'month', 'day'} & found_attrs) != 3
- and format_guess != ['%Y']
+ len({"year", "month", "day"} & found_attrs) != 3
+ and format_guess != ["%Y"]
and not (
- format_guess == ['%Y', None, '%m'] and tokens[1] == '-'
+ format_guess == ["%Y", None, "%m"] and tokens[1] == "-"
)
):
return None
@@ -1042,7 +1042,7 @@ def guess_datetime_format(dt_str: str, bint dayfirst=False) -> str | None:
output_format.append(tokens[i])
- guessed_format = ''.join(output_format)
+ guessed_format = "".join(output_format)
try:
array_strptime(np.asarray([dt_str], dtype=object), guessed_format)
@@ -1050,7 +1050,7 @@ def guess_datetime_format(dt_str: str, bint dayfirst=False) -> str | None:
# Doesn't parse, so this can't be the correct format.
return None
# rebuild string, capturing any inferred padding
- dt_str = ''.join(tokens)
+ dt_str = "".join(tokens)
if parsed_datetime.strftime(guessed_format) == dt_str:
return guessed_format
else:
@@ -1059,16 +1059,16 @@ def guess_datetime_format(dt_str: str, bint dayfirst=False) -> str | None:
cdef str _fill_token(token: str, padding: int):
cdef str token_filled
- if '.' not in token:
+ if "." not in token:
token_filled = token.zfill(padding)
else:
- seconds, nanoseconds = token.split('.')
- seconds = f'{int(seconds):02d}'
+ seconds, nanoseconds = token.split(".")
+ seconds = f"{int(seconds):02d}"
# right-pad so we get nanoseconds, then only take
# first 6 digits (microseconds) as stdlib datetime
# doesn't support nanoseconds
- nanoseconds = nanoseconds.ljust(9, '0')[:6]
- token_filled = f'{seconds}.{nanoseconds}'
+ nanoseconds = nanoseconds.ljust(9, "0")[:6]
+ token_filled = f"{seconds}.{nanoseconds}"
return token_filled
diff --git a/pandas/_libs/tslibs/period.pyx b/pandas/_libs/tslibs/period.pyx
index 0e7cfa4dd9670..cc9c2d631bcd9 100644
--- a/pandas/_libs/tslibs/period.pyx
+++ b/pandas/_libs/tslibs/period.pyx
@@ -1163,29 +1163,29 @@ cdef str period_format(int64_t value, int freq, object fmt=None):
if fmt is None:
freq_group = get_freq_group(freq)
if freq_group == FR_ANN:
- fmt = b'%Y'
+ fmt = b"%Y"
elif freq_group == FR_QTR:
- fmt = b'%FQ%q'
+ fmt = b"%FQ%q"
elif freq_group == FR_MTH:
- fmt = b'%Y-%m'
+ fmt = b"%Y-%m"
elif freq_group == FR_WK:
left = period_asfreq(value, freq, FR_DAY, 0)
right = period_asfreq(value, freq, FR_DAY, 1)
return f"{period_format(left, FR_DAY)}/{period_format(right, FR_DAY)}"
elif freq_group == FR_BUS or freq_group == FR_DAY:
- fmt = b'%Y-%m-%d'
+ fmt = b"%Y-%m-%d"
elif freq_group == FR_HR:
- fmt = b'%Y-%m-%d %H:00'
+ fmt = b"%Y-%m-%d %H:00"
elif freq_group == FR_MIN:
- fmt = b'%Y-%m-%d %H:%M'
+ fmt = b"%Y-%m-%d %H:%M"
elif freq_group == FR_SEC:
- fmt = b'%Y-%m-%d %H:%M:%S'
+ fmt = b"%Y-%m-%d %H:%M:%S"
elif freq_group == FR_MS:
- fmt = b'%Y-%m-%d %H:%M:%S.%l'
+ fmt = b"%Y-%m-%d %H:%M:%S.%l"
elif freq_group == FR_US:
- fmt = b'%Y-%m-%d %H:%M:%S.%u'
+ fmt = b"%Y-%m-%d %H:%M:%S.%u"
elif freq_group == FR_NS:
- fmt = b'%Y-%m-%d %H:%M:%S.%n'
+ fmt = b"%Y-%m-%d %H:%M:%S.%n"
else:
raise ValueError(f"Unknown freq: {freq}")
@@ -1513,7 +1513,7 @@ def extract_freq(ndarray[object] values) -> BaseOffset:
if is_period_object(value):
return value.freq
- raise ValueError('freq not specified and cannot be inferred')
+ raise ValueError("freq not specified and cannot be inferred")
# -----------------------------------------------------------------------
# period helpers
@@ -1774,7 +1774,7 @@ cdef class _Period(PeriodMixin):
return NaT
return NotImplemented
- def asfreq(self, freq, how='E') -> "Period":
+ def asfreq(self, freq, how="E") -> "Period":
"""
Convert Period to desired frequency, at the start or end of the interval.
@@ -1795,7 +1795,7 @@ cdef class _Period(PeriodMixin):
base2 = freq_to_dtype_code(freq)
# self.n can't be negative or 0
- end = how == 'E'
+ end = how == "E"
if end:
ordinal = self.ordinal + self.freq.n - 1
else:
@@ -1826,13 +1826,13 @@ cdef class _Period(PeriodMixin):
"""
how = validate_end_alias(how)
- end = how == 'E'
+ end = how == "E"
if end:
if freq == "B" or self.freq == "B":
# roll forward to ensure we land on B date
adjust = np.timedelta64(1, "D") - np.timedelta64(1, "ns")
return self.to_timestamp(how="start") + adjust
- endpoint = (self + self.freq).to_timestamp(how='start')
+ endpoint = (self + self.freq).to_timestamp(how="start")
return endpoint - np.timedelta64(1, "ns")
if freq is None:
@@ -2530,7 +2530,7 @@ class Period(_Period):
if not util.is_integer_object(ordinal):
raise ValueError("Ordinal must be an integer")
if freq is None:
- raise ValueError('Must supply freq for ordinal value')
+ raise ValueError("Must supply freq for ordinal value")
elif value is None:
if (year is None and month is None and
@@ -2581,7 +2581,7 @@ class Period(_Period):
else:
nanosecond = ts.nanosecond
if nanosecond != 0:
- reso = 'nanosecond'
+ reso = "nanosecond"
if dt is NaT:
ordinal = NPY_NAT
@@ -2596,18 +2596,18 @@ class Period(_Period):
elif PyDateTime_Check(value):
dt = value
if freq is None:
- raise ValueError('Must supply freq for datetime value')
+ raise ValueError("Must supply freq for datetime value")
if isinstance(dt, Timestamp):
nanosecond = dt.nanosecond
elif util.is_datetime64_object(value):
dt = Timestamp(value)
if freq is None:
- raise ValueError('Must supply freq for datetime value')
+ raise ValueError("Must supply freq for datetime value")
nanosecond = dt.nanosecond
elif PyDate_Check(value):
dt = datetime(year=value.year, month=value.month, day=value.day)
if freq is None:
- raise ValueError('Must supply freq for datetime value')
+ raise ValueError("Must supply freq for datetime value")
else:
msg = "Value must be Period, string, integer, or datetime"
raise ValueError(msg)
@@ -2644,10 +2644,10 @@ cdef int64_t _ordinal_from_fields(int year, int month, quarter, int day,
def validate_end_alias(how: str) -> str: # Literal["E", "S"]
- how_dict = {'S': 'S', 'E': 'E',
- 'START': 'S', 'FINISH': 'E',
- 'BEGIN': 'S', 'END': 'E'}
+ how_dict = {"S": "S", "E": "E",
+ "START": "S", "FINISH": "E",
+ "BEGIN": "S", "END": "E"}
how = how_dict.get(str(how).upper())
- if how not in {'S', 'E'}:
- raise ValueError('How must be one of S or E')
+ if how not in {"S", "E"}:
+ raise ValueError("How must be one of S or E")
return how
diff --git a/pandas/_libs/tslibs/strptime.pyx b/pandas/_libs/tslibs/strptime.pyx
index 79944bc86a8cf..c56b4891da428 100644
--- a/pandas/_libs/tslibs/strptime.pyx
+++ b/pandas/_libs/tslibs/strptime.pyx
@@ -35,36 +35,36 @@ from pandas._libs.tslibs.np_datetime cimport (
from pandas._libs.tslibs.timestamps cimport _Timestamp
-cdef dict _parse_code_table = {'y': 0,
- 'Y': 1,
- 'm': 2,
- 'B': 3,
- 'b': 4,
- 'd': 5,
- 'H': 6,
- 'I': 7,
- 'M': 8,
- 'S': 9,
- 'f': 10,
- 'A': 11,
- 'a': 12,
- 'w': 13,
- 'j': 14,
- 'U': 15,
- 'W': 16,
- 'Z': 17,
- 'p': 18, # an additional key, only with I
- 'z': 19,
- 'G': 20,
- 'V': 21,
- 'u': 22}
+cdef dict _parse_code_table = {"y": 0,
+ "Y": 1,
+ "m": 2,
+ "B": 3,
+ "b": 4,
+ "d": 5,
+ "H": 6,
+ "I": 7,
+ "M": 8,
+ "S": 9,
+ "f": 10,
+ "A": 11,
+ "a": 12,
+ "w": 13,
+ "j": 14,
+ "U": 15,
+ "W": 16,
+ "Z": 17,
+ "p": 18, # an additional key, only with I
+ "z": 19,
+ "G": 20,
+ "V": 21,
+ "u": 22}
def array_strptime(
ndarray[object] values,
str fmt,
bint exact=True,
- errors='raise',
+ errors="raise",
bint utc=False,
):
"""
@@ -88,9 +88,9 @@ def array_strptime(
int iso_week, iso_year
int64_t us, ns
object val, group_key, ampm, found, timezone
- bint is_raise = errors=='raise'
- bint is_ignore = errors=='ignore'
- bint is_coerce = errors=='coerce'
+ bint is_raise = errors=="raise"
+ bint is_ignore = errors=="ignore"
+ bint is_coerce = errors=="coerce"
bint found_naive = False
bint found_tz = False
tzinfo tz_out = None
@@ -98,12 +98,12 @@ def array_strptime(
assert is_raise or is_ignore or is_coerce
if fmt is not None:
- if '%W' in fmt or '%U' in fmt:
- if '%Y' not in fmt and '%y' not in fmt:
+ if "%W" in fmt or "%U" in fmt:
+ if "%Y" not in fmt and "%y" not in fmt:
raise ValueError("Cannot use '%W' or '%U' without day and year")
- if '%A' not in fmt and '%a' not in fmt and '%w' not in fmt:
+ if "%A" not in fmt and "%a" not in fmt and "%w" not in fmt:
raise ValueError("Cannot use '%W' or '%U' without day and year")
- elif '%Z' in fmt and '%z' in fmt:
+ elif "%Z" in fmt and "%z" in fmt:
raise ValueError("Cannot parse both %Z and %z")
global _TimeRE_cache, _regex_cache
@@ -132,9 +132,9 @@ def array_strptime(
raise ValueError(f"stray % in format '{fmt}'")
_regex_cache[fmt] = format_regex
- result = np.empty(n, dtype='M8[ns]')
- iresult = result.view('i8')
- result_timezone = np.empty(n, dtype='object')
+ result = np.empty(n, dtype="M8[ns]")
+ iresult = result.view("i8")
+ result_timezone = np.empty(n, dtype="object")
dts.us = dts.ps = dts.as = 0
@@ -216,7 +216,7 @@ def array_strptime(
parse_code = _parse_code_table[group_key]
if parse_code == 0:
- year = int(found_dict['y'])
+ year = int(found_dict["y"])
# Open Group specification for strptime() states that a %y
# value in the range of [00, 68] is in the century 2000, while
# [69,99] is in the century 1900
@@ -225,26 +225,26 @@ def array_strptime(
else:
year += 1900
elif parse_code == 1:
- year = int(found_dict['Y'])
+ year = int(found_dict["Y"])
elif parse_code == 2:
- month = int(found_dict['m'])
+ month = int(found_dict["m"])
# elif group_key == 'B':
elif parse_code == 3:
- month = locale_time.f_month.index(found_dict['B'].lower())
+ month = locale_time.f_month.index(found_dict["B"].lower())
# elif group_key == 'b':
elif parse_code == 4:
- month = locale_time.a_month.index(found_dict['b'].lower())
+ month = locale_time.a_month.index(found_dict["b"].lower())
# elif group_key == 'd':
elif parse_code == 5:
- day = int(found_dict['d'])
+ day = int(found_dict["d"])
# elif group_key == 'H':
elif parse_code == 6:
- hour = int(found_dict['H'])
+ hour = int(found_dict["H"])
elif parse_code == 7:
- hour = int(found_dict['I'])
- ampm = found_dict.get('p', '').lower()
+ hour = int(found_dict["I"])
+ ampm = found_dict.get("p", "").lower()
# If there was no AM/PM indicator, we'll treat this like AM
- if ampm in ('', locale_time.am_pm[0]):
+ if ampm in ("", locale_time.am_pm[0]):
# We're in AM so the hour is correct unless we're
# looking at 12 midnight.
# 12 midnight == 12 AM == hour 0
@@ -257,46 +257,46 @@ def array_strptime(
if hour != 12:
hour += 12
elif parse_code == 8:
- minute = int(found_dict['M'])
+ minute = int(found_dict["M"])
elif parse_code == 9:
- second = int(found_dict['S'])
+ second = int(found_dict["S"])
elif parse_code == 10:
- s = found_dict['f']
+ s = found_dict["f"]
# Pad to always return nanoseconds
s += "0" * (9 - len(s))
us = long(s)
ns = us % 1000
us = us // 1000
elif parse_code == 11:
- weekday = locale_time.f_weekday.index(found_dict['A'].lower())
+ weekday = locale_time.f_weekday.index(found_dict["A"].lower())
elif parse_code == 12:
- weekday = locale_time.a_weekday.index(found_dict['a'].lower())
+ weekday = locale_time.a_weekday.index(found_dict["a"].lower())
elif parse_code == 13:
- weekday = int(found_dict['w'])
+ weekday = int(found_dict["w"])
if weekday == 0:
weekday = 6
else:
weekday -= 1
elif parse_code == 14:
- julian = int(found_dict['j'])
+ julian = int(found_dict["j"])
elif parse_code == 15 or parse_code == 16:
week_of_year = int(found_dict[group_key])
- if group_key == 'U':
+ if group_key == "U":
# U starts week on Sunday.
week_of_year_start = 6
else:
# W starts week on Monday.
week_of_year_start = 0
elif parse_code == 17:
- timezone = pytz.timezone(found_dict['Z'])
+ timezone = pytz.timezone(found_dict["Z"])
elif parse_code == 19:
- timezone = parse_timezone_directive(found_dict['z'])
+ timezone = parse_timezone_directive(found_dict["z"])
elif parse_code == 20:
- iso_year = int(found_dict['G'])
+ iso_year = int(found_dict["G"])
elif parse_code == 21:
- iso_week = int(found_dict['V'])
+ iso_week = int(found_dict["V"])
elif parse_code == 22:
- weekday = int(found_dict['u'])
+ weekday = int(found_dict["u"])
weekday -= 1
# don't assume default values for ISO week/year
@@ -424,7 +424,7 @@ class TimeRE(_TimeRE):
if key == "Z":
# lazy computation
if self._Z is None:
- self._Z = self.__seqToRE(pytz.all_timezones, 'Z')
+ self._Z = self.__seqToRE(pytz.all_timezones, "Z")
# Note: handling Z is the key difference vs using the stdlib
# _strptime.TimeRE. test_to_datetime_parse_tzname_or_tzoffset with
# fmt='%Y-%m-%d %H:%M:%S %Z' fails with the stdlib version.
@@ -543,12 +543,12 @@ cdef tzinfo parse_timezone_directive(str z):
int total_minutes
object gmtoff_remainder, gmtoff_remainder_padding
- if z == 'Z':
+ if z == "Z":
return pytz.FixedOffset(0)
- if z[3] == ':':
+ if z[3] == ":":
z = z[:3] + z[4:]
if len(z) > 5:
- if z[5] != ':':
+ if z[5] != ":":
raise ValueError(f"Inconsistent use of : in {z}")
z = z[:5] + z[6:]
hours = int(z[1:3])
diff --git a/pandas/_libs/tslibs/timedeltas.pyx b/pandas/_libs/tslibs/timedeltas.pyx
index 5cc97a722b7a6..fc276d5d024cd 100644
--- a/pandas/_libs/tslibs/timedeltas.pyx
+++ b/pandas/_libs/tslibs/timedeltas.pyx
@@ -410,7 +410,7 @@ def array_to_timedelta64(
# raise here otherwise we segfault below
raise TypeError("array_to_timedelta64 'values' must have object dtype")
- if errors not in {'ignore', 'raise', 'coerce'}:
+ if errors not in {"ignore", "raise", "coerce"}:
raise ValueError("errors must be one of {'ignore', 'raise', or 'coerce'}")
if unit is not None and errors != "coerce":
@@ -442,7 +442,7 @@ def array_to_timedelta64(
except (TypeError, ValueError):
cnp.PyArray_MultiIter_RESET(mi)
- parsed_unit = parse_timedelta_unit(unit or 'ns')
+ parsed_unit = parse_timedelta_unit(unit or "ns")
for i in range(n):
item = <object>(<PyObject**>cnp.PyArray_MultiIter_DATA(mi, 1))[0]
@@ -513,15 +513,15 @@ cdef inline int64_t parse_timedelta_string(str ts) except? -1:
for c in ts:
# skip whitespace / commas
- if c == ' ' or c == ',':
+ if c == " " or c == ",":
pass
# positive signs are ignored
- elif c == '+':
+ elif c == "+":
pass
# neg
- elif c == '-':
+ elif c == "-":
if neg or have_value or have_hhmmss:
raise ValueError("only leading negative signs are allowed")
@@ -550,7 +550,7 @@ cdef inline int64_t parse_timedelta_string(str ts) except? -1:
result += timedelta_as_neg(r, neg)
# hh:mm:ss.
- elif c == ':':
+ elif c == ":":
# we flip this off if we have a leading value
if have_value:
@@ -559,15 +559,15 @@ cdef inline int64_t parse_timedelta_string(str ts) except? -1:
# we are in the pattern hh:mm:ss pattern
if len(number):
if current_unit is None:
- current_unit = 'h'
+ current_unit = "h"
m = 1000000000 * 3600
- elif current_unit == 'h':
- current_unit = 'm'
+ elif current_unit == "h":
+ current_unit = "m"
m = 1000000000 * 60
- elif current_unit == 'm':
- current_unit = 's'
+ elif current_unit == "m":
+ current_unit = "s"
m = 1000000000
- r = <int64_t>int(''.join(number)) * m
+ r = <int64_t>int("".join(number)) * m
result += timedelta_as_neg(r, neg)
have_hhmmss = 1
else:
@@ -576,17 +576,17 @@ cdef inline int64_t parse_timedelta_string(str ts) except? -1:
unit, number = [], []
# after the decimal point
- elif c == '.':
+ elif c == ".":
if len(number) and current_unit is not None:
# by definition we had something like
# so we need to evaluate the final field from a
# hh:mm:ss (so current_unit is 'm')
- if current_unit != 'm':
+ if current_unit != "m":
raise ValueError("expected hh:mm:ss format before .")
m = 1000000000
- r = <int64_t>int(''.join(number)) * m
+ r = <int64_t>int("".join(number)) * m
result += timedelta_as_neg(r, neg)
have_value = 1
unit, number, frac = [], [], []
@@ -622,16 +622,16 @@ cdef inline int64_t parse_timedelta_string(str ts) except? -1:
else:
m = 1
frac = frac[:9]
- r = <int64_t>int(''.join(frac)) * m
+ r = <int64_t>int("".join(frac)) * m
result += timedelta_as_neg(r, neg)
# we have a regular format
# we must have seconds at this point (hence the unit is still 'm')
elif current_unit is not None:
- if current_unit != 'm':
+ if current_unit != "m":
raise ValueError("expected hh:mm:ss format")
m = 1000000000
- r = <int64_t>int(''.join(number)) * m
+ r = <int64_t>int("".join(number)) * m
result += timedelta_as_neg(r, neg)
# we have a last abbreviation
@@ -652,7 +652,7 @@ cdef inline int64_t parse_timedelta_string(str ts) except? -1:
if have_value:
raise ValueError("have leftover units")
if len(number):
- r = timedelta_from_spec(number, frac, 'ns')
+ r = timedelta_from_spec(number, frac, "ns")
result += timedelta_as_neg(r, neg)
return result
@@ -683,20 +683,20 @@ cdef inline timedelta_from_spec(object number, object frac, object unit):
cdef:
str n
- unit = ''.join(unit)
+ unit = "".join(unit)
if unit in ["M", "Y", "y"]:
raise ValueError(
"Units 'M', 'Y' and 'y' do not represent unambiguous timedelta "
"values and are not supported."
)
- if unit == 'M':
+ if unit == "M":
# To parse ISO 8601 string, 'M' should be treated as minute,
# not month
- unit = 'm'
+ unit = "m"
unit = parse_timedelta_unit(unit)
- n = ''.join(number) + '.' + ''.join(frac)
+ n = "".join(number) + "." + "".join(frac)
return cast_from_unit(float(n), unit)
@@ -770,9 +770,9 @@ def _binary_op_method_timedeltalike(op, name):
item = cnp.PyArray_ToScalar(cnp.PyArray_DATA(other), other)
return f(self, item)
- elif other.dtype.kind in ['m', 'M']:
+ elif other.dtype.kind in ["m", "M"]:
return op(self.to_timedelta64(), other)
- elif other.dtype.kind == 'O':
+ elif other.dtype.kind == "O":
return np.array([op(self, x) for x in other])
else:
return NotImplemented
@@ -838,7 +838,7 @@ cdef inline int64_t parse_iso_format_string(str ts) except? -1:
unicode c
int64_t result = 0, r
int p = 0, sign = 1
- object dec_unit = 'ms', err_msg
+ object dec_unit = "ms", err_msg
bint have_dot = 0, have_value = 0, neg = 0
list number = [], unit = []
@@ -854,65 +854,65 @@ cdef inline int64_t parse_iso_format_string(str ts) except? -1:
have_value = 1
if have_dot:
- if p == 3 and dec_unit != 'ns':
+ if p == 3 and dec_unit != "ns":
unit.append(dec_unit)
- if dec_unit == 'ms':
- dec_unit = 'us'
- elif dec_unit == 'us':
- dec_unit = 'ns'
+ if dec_unit == "ms":
+ dec_unit = "us"
+ elif dec_unit == "us":
+ dec_unit = "ns"
p = 0
p += 1
if not len(unit):
number.append(c)
else:
- r = timedelta_from_spec(number, '0', unit)
+ r = timedelta_from_spec(number, "0", unit)
result += timedelta_as_neg(r, neg)
neg = 0
unit, number = [], [c]
else:
- if c == 'P' or c == 'T':
+ if c == "P" or c == "T":
pass # ignore marking characters P and T
- elif c == '-':
+ elif c == "-":
if neg or have_value:
raise ValueError(err_msg)
else:
neg = 1
elif c == "+":
pass
- elif c in ['W', 'D', 'H', 'M']:
- if c in ['H', 'M'] and len(number) > 2:
+ elif c in ["W", "D", "H", "M"]:
+ if c in ["H", "M"] and len(number) > 2:
raise ValueError(err_msg)
- if c == 'M':
- c = 'min'
+ if c == "M":
+ c = "min"
unit.append(c)
- r = timedelta_from_spec(number, '0', unit)
+ r = timedelta_from_spec(number, "0", unit)
result += timedelta_as_neg(r, neg)
neg = 0
unit, number = [], []
- elif c == '.':
+ elif c == ".":
# append any seconds
if len(number):
- r = timedelta_from_spec(number, '0', 'S')
+ r = timedelta_from_spec(number, "0", "S")
result += timedelta_as_neg(r, neg)
unit, number = [], []
have_dot = 1
- elif c == 'S':
+ elif c == "S":
if have_dot: # ms, us, or ns
if not len(number) or p > 3:
raise ValueError(err_msg)
# pad to 3 digits as required
pad = 3 - p
while pad > 0:
- number.append('0')
+ number.append("0")
pad -= 1
- r = timedelta_from_spec(number, '0', dec_unit)
+ r = timedelta_from_spec(number, "0", dec_unit)
result += timedelta_as_neg(r, neg)
else: # seconds
- r = timedelta_from_spec(number, '0', 'S')
+ r = timedelta_from_spec(number, "0", "S")
result += timedelta_as_neg(r, neg)
else:
raise ValueError(err_msg)
@@ -1435,7 +1435,7 @@ cdef class _Timedelta(timedelta):
else:
sign = " "
- if format == 'all':
+ if format == "all":
fmt = ("{days} days{sign}{hours:02}:{minutes:02}:{seconds:02}."
"{milliseconds:03}{microseconds:03}{nanoseconds:03}")
else:
@@ -1451,24 +1451,24 @@ cdef class _Timedelta(timedelta):
else:
seconds_fmt = "{seconds:02}"
- if format == 'sub_day' and not self._d:
+ if format == "sub_day" and not self._d:
fmt = "{hours:02}:{minutes:02}:" + seconds_fmt
- elif subs or format == 'long':
+ elif subs or format == "long":
fmt = "{days} days{sign}{hours:02}:{minutes:02}:" + seconds_fmt
else:
fmt = "{days} days"
comp_dict = self.components._asdict()
- comp_dict['sign'] = sign
+ comp_dict["sign"] = sign
return fmt.format(**comp_dict)
def __repr__(self) -> str:
- repr_based = self._repr_base(format='long')
+ repr_based = self._repr_base(format="long")
return f"Timedelta('{repr_based}')"
def __str__(self) -> str:
- return self._repr_base(format='long')
+ return self._repr_base(format="long")
def __bool__(self) -> bool:
return self.value != 0
@@ -1512,14 +1512,14 @@ cdef class _Timedelta(timedelta):
'P500DT12H0M0S'
"""
components = self.components
- seconds = (f'{components.seconds}.'
- f'{components.milliseconds:0>3}'
- f'{components.microseconds:0>3}'
- f'{components.nanoseconds:0>3}')
+ seconds = (f"{components.seconds}."
+ f"{components.milliseconds:0>3}"
+ f"{components.microseconds:0>3}"
+ f"{components.nanoseconds:0>3}")
# Trim unnecessary 0s, 1.000000000 -> 1
- seconds = seconds.rstrip('0').rstrip('.')
- tpl = (f'P{components.days}DT{components.hours}'
- f'H{components.minutes}M{seconds}S')
+ seconds = seconds.rstrip("0").rstrip(".")
+ tpl = (f"P{components.days}DT{components.hours}"
+ f"H{components.minutes}M{seconds}S")
return tpl
# ----------------------------------------------------------------
@@ -1665,22 +1665,22 @@ class Timedelta(_Timedelta):
# are taken into consideration.
seconds = int((
(
- (kwargs.get('days', 0) + kwargs.get('weeks', 0) * 7) * 24
- + kwargs.get('hours', 0)
+ (kwargs.get("days", 0) + kwargs.get("weeks", 0) * 7) * 24
+ + kwargs.get("hours", 0)
) * 3600
- + kwargs.get('minutes', 0) * 60
- + kwargs.get('seconds', 0)
+ + kwargs.get("minutes", 0) * 60
+ + kwargs.get("seconds", 0)
) * 1_000_000_000
)
value = np.timedelta64(
- int(kwargs.get('nanoseconds', 0))
- + int(kwargs.get('microseconds', 0) * 1_000)
- + int(kwargs.get('milliseconds', 0) * 1_000_000)
+ int(kwargs.get("nanoseconds", 0))
+ + int(kwargs.get("microseconds", 0) * 1_000)
+ + int(kwargs.get("milliseconds", 0) * 1_000_000)
+ seconds
)
- if unit in {'Y', 'y', 'M'}:
+ if unit in {"Y", "y", "M"}:
raise ValueError(
"Units 'M', 'Y', and 'y' are no longer supported, as they do not "
"represent unambiguous timedelta values durations."
@@ -1702,8 +1702,8 @@ class Timedelta(_Timedelta):
elif isinstance(value, str):
if unit is not None:
raise ValueError("unit must not be specified if the value is a str")
- if (len(value) > 0 and value[0] == 'P') or (
- len(value) > 1 and value[:2] == '-P'
+ if (len(value) > 0 and value[0] == "P") or (
+ len(value) > 1 and value[:2] == "-P"
):
value = parse_iso_format_string(value)
else:
@@ -1757,7 +1757,7 @@ class Timedelta(_Timedelta):
)
if is_timedelta64_object(value):
- value = value.view('i8')
+ value = value.view("i8")
# nat
if value == NPY_NAT:
@@ -1839,14 +1839,14 @@ class Timedelta(_Timedelta):
# Arithmetic Methods
# TODO: Can some of these be defined in the cython class?
- __neg__ = _op_unary_method(lambda x: -x, '__neg__')
- __pos__ = _op_unary_method(lambda x: x, '__pos__')
- __abs__ = _op_unary_method(lambda x: abs(x), '__abs__')
+ __neg__ = _op_unary_method(lambda x: -x, "__neg__")
+ __pos__ = _op_unary_method(lambda x: x, "__pos__")
+ __abs__ = _op_unary_method(lambda x: abs(x), "__abs__")
- __add__ = _binary_op_method_timedeltalike(lambda x, y: x + y, '__add__')
- __radd__ = _binary_op_method_timedeltalike(lambda x, y: x + y, '__radd__')
- __sub__ = _binary_op_method_timedeltalike(lambda x, y: x - y, '__sub__')
- __rsub__ = _binary_op_method_timedeltalike(lambda x, y: y - x, '__rsub__')
+ __add__ = _binary_op_method_timedeltalike(lambda x, y: x + y, "__add__")
+ __radd__ = _binary_op_method_timedeltalike(lambda x, y: x + y, "__radd__")
+ __sub__ = _binary_op_method_timedeltalike(lambda x, y: x - y, "__sub__")
+ __rsub__ = _binary_op_method_timedeltalike(lambda x, y: y - x, "__rsub__")
def __mul__(self, other):
if is_integer_object(other) or is_float_object(other):
@@ -1947,7 +1947,7 @@ class Timedelta(_Timedelta):
item = cnp.PyArray_ToScalar(cnp.PyArray_DATA(other), other)
return self.__floordiv__(item)
- if other.dtype.kind == 'm':
+ if other.dtype.kind == "m":
# also timedelta-like
# TODO: could suppress
# RuntimeWarning: invalid value encountered in floor_divide
@@ -1959,13 +1959,13 @@ class Timedelta(_Timedelta):
result[mask] = np.nan
return result
- elif other.dtype.kind in ['i', 'u', 'f']:
+ elif other.dtype.kind in ["i", "u", "f"]:
if other.ndim == 0:
return self // other.item()
else:
return self.to_timedelta64() // other
- raise TypeError(f'Invalid dtype {other.dtype} for __floordiv__')
+ raise TypeError(f"Invalid dtype {other.dtype} for __floordiv__")
return NotImplemented
@@ -1987,7 +1987,7 @@ class Timedelta(_Timedelta):
item = cnp.PyArray_ToScalar(cnp.PyArray_DATA(other), other)
return self.__rfloordiv__(item)
- if other.dtype.kind == 'm':
+ if other.dtype.kind == "m":
# also timedelta-like
# TODO: could suppress
# RuntimeWarning: invalid value encountered in floor_divide
@@ -2000,7 +2000,7 @@ class Timedelta(_Timedelta):
return result
# Includes integer array // Timedelta, disallowed in GH#19761
- raise TypeError(f'Invalid dtype {other.dtype} for __floordiv__')
+ raise TypeError(f"Invalid dtype {other.dtype} for __floordiv__")
return NotImplemented
diff --git a/pandas/_libs/tslibs/timestamps.pyx b/pandas/_libs/tslibs/timestamps.pyx
index 8e9c8d40398d9..f987a2feb2717 100644
--- a/pandas/_libs/tslibs/timestamps.pyx
+++ b/pandas/_libs/tslibs/timestamps.pyx
@@ -434,7 +434,7 @@ cdef class _Timestamp(ABCTimestamp):
raise integer_op_not_supported(self)
elif is_array(other):
- if other.dtype.kind in ['i', 'u']:
+ if other.dtype.kind in ["i", "u"]:
raise integer_op_not_supported(self)
if other.dtype.kind == "m":
if self.tz is None:
@@ -465,7 +465,7 @@ cdef class _Timestamp(ABCTimestamp):
return self + neg_other
elif is_array(other):
- if other.dtype.kind in ['i', 'u']:
+ if other.dtype.kind in ["i", "u"]:
raise integer_op_not_supported(self)
if other.dtype.kind == "m":
if self.tz is None:
@@ -563,7 +563,7 @@ cdef class _Timestamp(ABCTimestamp):
if freq:
kwds = freq.kwds
- month_kw = kwds.get('startingMonth', kwds.get('month', 12))
+ month_kw = kwds.get("startingMonth", kwds.get("month", 12))
freqstr = freq.freqstr
else:
month_kw = 12
@@ -929,15 +929,15 @@ cdef class _Timestamp(ABCTimestamp):
zone = None
try:
- stamp += self.strftime('%z')
+ stamp += self.strftime("%z")
except ValueError:
year2000 = self.replace(year=2000)
- stamp += year2000.strftime('%z')
+ stamp += year2000.strftime("%z")
if self.tzinfo:
zone = get_timezone(self.tzinfo)
try:
- stamp += zone.strftime(' %%Z')
+ stamp += zone.strftime(" %%Z")
except AttributeError:
# e.g. tzlocal has no `strftime`
pass
@@ -954,16 +954,16 @@ cdef class _Timestamp(ABCTimestamp):
def _date_repr(self) -> str:
# Ideal here would be self.strftime("%Y-%m-%d"), but
# the datetime strftime() methods require year >= 1900 and is slower
- return f'{self.year}-{self.month:02d}-{self.day:02d}'
+ return f"{self.year}-{self.month:02d}-{self.day:02d}"
@property
def _time_repr(self) -> str:
- result = f'{self.hour:02d}:{self.minute:02d}:{self.second:02d}'
+ result = f"{self.hour:02d}:{self.minute:02d}:{self.second:02d}"
if self.nanosecond != 0:
- result += f'.{self.nanosecond + 1000 * self.microsecond:09d}'
+ result += f".{self.nanosecond + 1000 * self.microsecond:09d}"
elif self.microsecond != 0:
- result += f'.{self.microsecond:06d}'
+ result += f".{self.microsecond:06d}"
return result
@@ -1451,7 +1451,7 @@ class Timestamp(_Timestamp):
# GH#17690 tzinfo must be a datetime.tzinfo object, ensured
# by the cython annotation.
if tz is not None:
- raise ValueError('Can provide at most one of tz, tzinfo')
+ raise ValueError("Can provide at most one of tz, tzinfo")
# User passed tzinfo instead of tz; avoid silently ignoring
tz, tzinfo = tzinfo, None
@@ -1465,7 +1465,7 @@ class Timestamp(_Timestamp):
if (ts_input is not _no_input and not (
PyDateTime_Check(ts_input) and
- getattr(ts_input, 'tzinfo', None) is None)):
+ getattr(ts_input, "tzinfo", None) is None)):
raise ValueError(
"Cannot pass fold with possibly unambiguous input: int, "
"float, numpy.datetime64, str, or timezone-aware "
@@ -1479,7 +1479,7 @@ class Timestamp(_Timestamp):
"timezones."
)
- if hasattr(ts_input, 'fold'):
+ if hasattr(ts_input, "fold"):
ts_input = ts_input.replace(fold=fold)
# GH 30543 if pd.Timestamp already passed, return it
@@ -1536,7 +1536,7 @@ class Timestamp(_Timestamp):
# passed positionally see test_constructor_nanosecond
nanosecond = microsecond
- if getattr(ts_input, 'tzinfo', None) is not None and tz is not None:
+ if getattr(ts_input, "tzinfo", None) is not None and tz is not None:
raise ValueError("Cannot pass a datetime or Timestamp with tzinfo with "
"the tz parameter. Use tz_convert instead.")
@@ -1558,7 +1558,7 @@ class Timestamp(_Timestamp):
return create_timestamp_from_ts(ts.value, ts.dts, ts.tzinfo, ts.fold, ts.creso)
- def _round(self, freq, mode, ambiguous='raise', nonexistent='raise'):
+ def _round(self, freq, mode, ambiguous="raise", nonexistent="raise"):
cdef:
int64_t nanos
@@ -1581,7 +1581,7 @@ class Timestamp(_Timestamp):
)
return result
- def round(self, freq, ambiguous='raise', nonexistent='raise'):
+ def round(self, freq, ambiguous="raise", nonexistent="raise"):
"""
Round the Timestamp to the specified resolution.
@@ -1676,7 +1676,7 @@ timedelta}, default 'raise'
freq, RoundTo.NEAREST_HALF_EVEN, ambiguous, nonexistent
)
- def floor(self, freq, ambiguous='raise', nonexistent='raise'):
+ def floor(self, freq, ambiguous="raise", nonexistent="raise"):
"""
Return a new Timestamp floored to this resolution.
@@ -1765,7 +1765,7 @@ timedelta}, default 'raise'
"""
return self._round(freq, RoundTo.MINUS_INFTY, ambiguous, nonexistent)
- def ceil(self, freq, ambiguous='raise', nonexistent='raise'):
+ def ceil(self, freq, ambiguous="raise", nonexistent="raise"):
"""
Return a new Timestamp ceiled to this resolution.
@@ -1875,7 +1875,7 @@ timedelta}, default 'raise'
"Use tz_localize() or tz_convert() as appropriate"
)
- def tz_localize(self, tz, ambiguous='raise', nonexistent='raise'):
+ def tz_localize(self, tz, ambiguous="raise", nonexistent="raise"):
"""
Localize the Timestamp to a timezone.
@@ -1946,10 +1946,10 @@ default 'raise'
>>> pd.NaT.tz_localize()
NaT
"""
- if ambiguous == 'infer':
- raise ValueError('Cannot infer offset with only one time.')
+ if ambiguous == "infer":
+ raise ValueError("Cannot infer offset with only one time.")
- nonexistent_options = ('raise', 'NaT', 'shift_forward', 'shift_backward')
+ nonexistent_options = ("raise", "NaT", "shift_forward", "shift_backward")
if nonexistent not in nonexistent_options and not PyDelta_Check(nonexistent):
raise ValueError(
"The nonexistent argument must be one of 'raise', "
@@ -2122,21 +2122,21 @@ default 'raise'
return v
if year is not None:
- dts.year = validate('year', year)
+ dts.year = validate("year", year)
if month is not None:
- dts.month = validate('month', month)
+ dts.month = validate("month", month)
if day is not None:
- dts.day = validate('day', day)
+ dts.day = validate("day", day)
if hour is not None:
- dts.hour = validate('hour', hour)
+ dts.hour = validate("hour", hour)
if minute is not None:
- dts.min = validate('minute', minute)
+ dts.min = validate("minute", minute)
if second is not None:
- dts.sec = validate('second', second)
+ dts.sec = validate("second", second)
if microsecond is not None:
- dts.us = validate('microsecond', microsecond)
+ dts.us = validate("microsecond", microsecond)
if nanosecond is not None:
- dts.ps = validate('nanosecond', nanosecond) * 1000
+ dts.ps = validate("nanosecond", nanosecond) * 1000
if tzinfo is not object:
tzobj = tzinfo
@@ -2150,10 +2150,10 @@ default 'raise'
is_dst=not bool(fold))
tzobj = ts_input.tzinfo
else:
- kwargs = {'year': dts.year, 'month': dts.month, 'day': dts.day,
- 'hour': dts.hour, 'minute': dts.min, 'second': dts.sec,
- 'microsecond': dts.us, 'tzinfo': tzobj,
- 'fold': fold}
+ kwargs = {"year": dts.year, "month": dts.month, "day": dts.day,
+ "hour": dts.hour, "minute": dts.min, "second": dts.sec,
+ "microsecond": dts.us, "tzinfo": tzobj,
+ "fold": fold}
ts_input = datetime(**kwargs)
ts = convert_datetime_to_tsobject(
diff --git a/pandas/_libs/tslibs/timezones.pyx b/pandas/_libs/tslibs/timezones.pyx
index abf8bbc5ca5b9..8d7bebe5d46c2 100644
--- a/pandas/_libs/tslibs/timezones.pyx
+++ b/pandas/_libs/tslibs/timezones.pyx
@@ -97,12 +97,12 @@ cdef inline bint is_tzlocal(tzinfo tz):
cdef inline bint treat_tz_as_pytz(tzinfo tz):
- return (hasattr(tz, '_utc_transition_times') and
- hasattr(tz, '_transition_info'))
+ return (hasattr(tz, "_utc_transition_times") and
+ hasattr(tz, "_transition_info"))
cdef inline bint treat_tz_as_dateutil(tzinfo tz):
- return hasattr(tz, '_trans_list') and hasattr(tz, '_trans_idx')
+ return hasattr(tz, "_trans_list") and hasattr(tz, "_trans_idx")
# Returns str or tzinfo object
@@ -125,16 +125,16 @@ cpdef inline object get_timezone(tzinfo tz):
return tz
else:
if treat_tz_as_dateutil(tz):
- if '.tar.gz' in tz._filename:
+ if ".tar.gz" in tz._filename:
raise ValueError(
- 'Bad tz filename. Dateutil on python 3 on windows has a '
- 'bug which causes tzfile._filename to be the same for all '
- 'timezone files. Please construct dateutil timezones '
+ "Bad tz filename. Dateutil on python 3 on windows has a "
+ "bug which causes tzfile._filename to be the same for all "
+ "timezone files. Please construct dateutil timezones "
'implicitly by passing a string like "dateutil/Europe'
'/London" when you construct your pandas objects instead '
- 'of passing a timezone object. See '
- 'https://github.com/pandas-dev/pandas/pull/7362')
- return 'dateutil/' + tz._filename
+ "of passing a timezone object. See "
+ "https://github.com/pandas-dev/pandas/pull/7362")
+ return "dateutil/" + tz._filename
else:
# tz is a pytz timezone or unknown.
try:
@@ -152,19 +152,19 @@ cpdef inline tzinfo maybe_get_tz(object tz):
it to construct a timezone object. Otherwise, just return tz.
"""
if isinstance(tz, str):
- if tz == 'tzlocal()':
+ if tz == "tzlocal()":
tz = _dateutil_tzlocal()
- elif tz.startswith('dateutil/'):
+ elif tz.startswith("dateutil/"):
zone = tz[9:]
tz = dateutil_gettz(zone)
# On Python 3 on Windows, the filename is not always set correctly.
- if isinstance(tz, _dateutil_tzfile) and '.tar.gz' in tz._filename:
+ if isinstance(tz, _dateutil_tzfile) and ".tar.gz" in tz._filename:
tz._filename = zone
- elif tz[0] in {'-', '+'}:
+ elif tz[0] in {"-", "+"}:
hours = int(tz[0:3])
minutes = int(tz[0] + tz[4:6])
tz = timezone(timedelta(hours=hours, minutes=minutes))
- elif tz[0:4] in {'UTC-', 'UTC+'}:
+ elif tz[0:4] in {"UTC-", "UTC+"}:
hours = int(tz[3:6])
minutes = int(tz[3] + tz[7:9])
tz = timezone(timedelta(hours=hours, minutes=minutes))
@@ -211,16 +211,16 @@ cdef inline object tz_cache_key(tzinfo tz):
if isinstance(tz, _pytz_BaseTzInfo):
return tz.zone
elif isinstance(tz, _dateutil_tzfile):
- if '.tar.gz' in tz._filename:
- raise ValueError('Bad tz filename. Dateutil on python 3 on '
- 'windows has a bug which causes tzfile._filename '
- 'to be the same for all timezone files. Please '
- 'construct dateutil timezones implicitly by '
+ if ".tar.gz" in tz._filename:
+ raise ValueError("Bad tz filename. Dateutil on python 3 on "
+ "windows has a bug which causes tzfile._filename "
+ "to be the same for all timezone files. Please "
+ "construct dateutil timezones implicitly by "
'passing a string like "dateutil/Europe/London" '
- 'when you construct your pandas objects instead '
- 'of passing a timezone object. See '
- 'https://github.com/pandas-dev/pandas/pull/7362')
- return 'dateutil' + tz._filename
+ "when you construct your pandas objects instead "
+ "of passing a timezone object. See "
+ "https://github.com/pandas-dev/pandas/pull/7362")
+ return "dateutil" + tz._filename
else:
return None
@@ -276,7 +276,7 @@ cdef int64_t[::1] unbox_utcoffsets(object transinfo):
int64_t[::1] arr
sz = len(transinfo)
- arr = np.empty(sz, dtype='i8')
+ arr = np.empty(sz, dtype="i8")
for i in range(sz):
arr[i] = int(transinfo[i][0].total_seconds()) * 1_000_000_000
@@ -312,35 +312,35 @@ cdef object get_dst_info(tzinfo tz):
if cache_key not in dst_cache:
if treat_tz_as_pytz(tz):
- trans = np.array(tz._utc_transition_times, dtype='M8[ns]')
- trans = trans.view('i8')
+ trans = np.array(tz._utc_transition_times, dtype="M8[ns]")
+ trans = trans.view("i8")
if tz._utc_transition_times[0].year == 1:
trans[0] = NPY_NAT + 1
deltas = unbox_utcoffsets(tz._transition_info)
- typ = 'pytz'
+ typ = "pytz"
elif treat_tz_as_dateutil(tz):
if len(tz._trans_list):
# get utc trans times
trans_list = _get_utc_trans_times_from_dateutil_tz(tz)
trans = np.hstack([
- np.array([0], dtype='M8[s]'), # place holder for 1st item
- np.array(trans_list, dtype='M8[s]')]).astype(
- 'M8[ns]') # all trans listed
- trans = trans.view('i8')
+ np.array([0], dtype="M8[s]"), # place holder for 1st item
+ np.array(trans_list, dtype="M8[s]")]).astype(
+ "M8[ns]") # all trans listed
+ trans = trans.view("i8")
trans[0] = NPY_NAT + 1
# deltas
deltas = np.array([v.offset for v in (
- tz._ttinfo_before,) + tz._trans_idx], dtype='i8')
+ tz._ttinfo_before,) + tz._trans_idx], dtype="i8")
deltas *= 1_000_000_000
- typ = 'dateutil'
+ typ = "dateutil"
elif is_fixed_offset(tz):
trans = np.array([NPY_NAT + 1], dtype=np.int64)
deltas = np.array([tz._ttinfo_std.offset],
- dtype='i8') * 1_000_000_000
- typ = 'fixed'
+ dtype="i8") * 1_000_000_000
+ typ = "fixed"
else:
# 2018-07-12 this is not reached in the tests, and this case
# is not handled in any of the functions that call
@@ -367,8 +367,8 @@ def infer_tzinfo(datetime start, datetime end):
if start is not None and end is not None:
tz = start.tzinfo
if not tz_compare(tz, end.tzinfo):
- raise AssertionError(f'Inputs must both have the same timezone, '
- f'{tz} != {end.tzinfo}')
+ raise AssertionError(f"Inputs must both have the same timezone, "
+ f"{tz} != {end.tzinfo}")
elif start is not None:
tz = start.tzinfo
elif end is not None:
diff --git a/pandas/_libs/tslibs/tzconversion.pyx b/pandas/_libs/tslibs/tzconversion.pyx
index afdf6d3d9b001..f74c72dc4e35c 100644
--- a/pandas/_libs/tslibs/tzconversion.pyx
+++ b/pandas/_libs/tslibs/tzconversion.pyx
@@ -248,9 +248,9 @@ timedelta-like}
# silence false-positive compiler warning
ambiguous_array = np.empty(0, dtype=bool)
if isinstance(ambiguous, str):
- if ambiguous == 'infer':
+ if ambiguous == "infer":
infer_dst = True
- elif ambiguous == 'NaT':
+ elif ambiguous == "NaT":
fill = True
elif isinstance(ambiguous, bool):
is_dst = True
@@ -258,23 +258,23 @@ timedelta-like}
ambiguous_array = np.ones(len(vals), dtype=bool)
else:
ambiguous_array = np.zeros(len(vals), dtype=bool)
- elif hasattr(ambiguous, '__iter__'):
+ elif hasattr(ambiguous, "__iter__"):
is_dst = True
if len(ambiguous) != len(vals):
raise ValueError("Length of ambiguous bool-array must be "
"the same size as vals")
ambiguous_array = np.asarray(ambiguous, dtype=bool)
- if nonexistent == 'NaT':
+ if nonexistent == "NaT":
fill_nonexist = True
- elif nonexistent == 'shift_forward':
+ elif nonexistent == "shift_forward":
shift_forward = True
- elif nonexistent == 'shift_backward':
+ elif nonexistent == "shift_backward":
shift_backward = True
elif PyDelta_Check(nonexistent):
from .timedeltas import delta_to_nanoseconds
shift_delta = delta_to_nanoseconds(nonexistent, reso=creso)
- elif nonexistent not in ('raise', None):
+ elif nonexistent not in ("raise", None):
msg = ("nonexistent must be one of {'NaT', 'raise', 'shift_forward', "
"shift_backwards} or a timedelta object")
raise ValueError(msg)
diff --git a/pandas/_libs/window/aggregations.pyx b/pandas/_libs/window/aggregations.pyx
index 702706f00455b..57ef3601b7461 100644
--- a/pandas/_libs/window/aggregations.pyx
+++ b/pandas/_libs/window/aggregations.pyx
@@ -1158,11 +1158,11 @@ cdef enum InterpolationType:
interpolation_types = {
- 'linear': LINEAR,
- 'lower': LOWER,
- 'higher': HIGHER,
- 'nearest': NEAREST,
- 'midpoint': MIDPOINT,
+ "linear": LINEAR,
+ "lower": LOWER,
+ "higher": HIGHER,
+ "nearest": NEAREST,
+ "midpoint": MIDPOINT,
}
@@ -1419,7 +1419,7 @@ def roll_apply(object obj,
# ndarray input
if raw and not arr.flags.c_contiguous:
- arr = arr.copy('C')
+ arr = arr.copy("C")
counts = roll_sum(np.isfinite(arr).astype(float), start, end, minp)
diff --git a/pandas/_libs/window/indexers.pyx b/pandas/_libs/window/indexers.pyx
index 465865dec23c4..02934346130a5 100644
--- a/pandas/_libs/window/indexers.pyx
+++ b/pandas/_libs/window/indexers.pyx
@@ -53,16 +53,16 @@ def calculate_variable_window_bounds(
Py_ssize_t i, j
if num_values <= 0:
- return np.empty(0, dtype='int64'), np.empty(0, dtype='int64')
+ return np.empty(0, dtype="int64"), np.empty(0, dtype="int64")
# default is 'right'
if closed is None:
- closed = 'right'
+ closed = "right"
- if closed in ['right', 'both']:
+ if closed in ["right", "both"]:
right_closed = True
- if closed in ['left', 'both']:
+ if closed in ["left", "both"]:
left_closed = True
# GH 43997:
@@ -76,9 +76,9 @@ def calculate_variable_window_bounds(
if index[num_values - 1] < index[0]:
index_growth_sign = -1
- start = np.empty(num_values, dtype='int64')
+ start = np.empty(num_values, dtype="int64")
start.fill(-1)
- end = np.empty(num_values, dtype='int64')
+ end = np.empty(num_values, dtype="int64")
end.fill(-1)
start[0] = 0
diff --git a/pandas/_libs/writers.pyx b/pandas/_libs/writers.pyx
index cd42b08a03474..fbd08687d7c82 100644
--- a/pandas/_libs/writers.pyx
+++ b/pandas/_libs/writers.pyx
@@ -89,14 +89,14 @@ def convert_json_to_lines(arr: str) -> str:
unsigned char val, newline, comma, left_bracket, right_bracket, quote
unsigned char backslash
- newline = ord('\n')
- comma = ord(',')
- left_bracket = ord('{')
- right_bracket = ord('}')
+ newline = ord("\n")
+ comma = ord(",")
+ left_bracket = ord("{")
+ right_bracket = ord("}")
quote = ord('"')
- backslash = ord('\\')
+ backslash = ord("\\")
- narr = np.frombuffer(arr.encode('utf-8'), dtype='u1').copy()
+ narr = np.frombuffer(arr.encode("utf-8"), dtype="u1").copy()
length = narr.shape[0]
for i in range(length):
val = narr[i]
@@ -114,7 +114,7 @@ def convert_json_to_lines(arr: str) -> str:
if not in_quotes:
num_open_brackets_seen -= 1
- return narr.tobytes().decode('utf-8') + '\n' # GH:36888
+ return narr.tobytes().decode("utf-8") + "\n" # GH:36888
# stata, pytables
diff --git a/pandas/io/sas/sas.pyx b/pandas/io/sas/sas.pyx
index 8c13566c656b7..7d0f549a2f976 100644
--- a/pandas/io/sas/sas.pyx
+++ b/pandas/io/sas/sas.pyx
@@ -343,7 +343,7 @@ cdef class Parser:
self.bit_offset = self.parser._page_bit_offset
self.subheader_pointer_length = self.parser._subheader_pointer_length
self.is_little_endian = parser.byte_order == "<"
- self.column_types = np.empty(self.column_count, dtype='int64')
+ self.column_types = np.empty(self.column_count, dtype="int64")
# page indicators
self.update_next_page()
@@ -352,9 +352,9 @@ cdef class Parser:
# map column types
for j in range(self.column_count):
- if column_types[j] == b'd':
+ if column_types[j] == b"d":
self.column_types[j] = column_type_decimal
- elif column_types[j] == b's':
+ elif column_types[j] == b"s":
self.column_types[j] = column_type_string
else:
raise ValueError(f"unknown column type: {self.parser.columns[j].ctype}")
| The task here is:
1. add `id: double-quote-cython-strings` to
https://github.com/pandas-dev/pandas/blob/3d0d197cec32ed5ce30d28b922f329510c03f153/.pre-commit-config.yaml#L29
2. run `pre-commit run double-quote-cython-strings --all-files`
3. `git add -u`, `git commit -m 'double quote cython strings'`, `git push origin HEAD`
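The effect of the hook can be approximated in plain Python. This is only an illustrative sketch (the actual pre-commit hook pandas uses handles string prefixes, escapes, and Cython-specific cases far more thoroughly); it rewrites simple single-quoted literals to double quotes, which is exactly the shape of the changes in the diff above:

```python
import io
import tokenize


def double_quote_strings(src: str) -> str:
    """Rewrite simple single-quoted string literals to double quotes.

    Only handles one-line literals with no embedded quotes or escapes.
    Because '...' and "..." have the same length, token positions stay
    valid and untokenize() preserves the original layout exactly.
    """
    tokens = []
    for tok in tokenize.generate_tokens(io.StringIO(src).readline):
        if (
            tok.type == tokenize.STRING
            and tok.string.startswith("'")
            and tok.string.endswith("'")
            and '"' not in tok.string
            and "\\" not in tok.string
        ):
            tok = tok._replace(string='"' + tok.string[1:-1] + '"')
        tokens.append(tok)
    return tokenize.untokenize(tokens)
```

Running it over a line like `unit = 'ns'` yields `unit = "ns"`, mirroring the mechanical substitutions in the diff.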
Motivation for this comes from https://github.com/pandas-dev/pandas/pull/49866#discussion_r1030853463
PyData 2022 sprint
https://github.com/noatamir/pydata-global-sprints/issues/21
@jorisvandenbossche, @MarcoGorelli, @WillAyd, @rhshadrach, @phofl @noatamir
| https://api.github.com/repos/pandas-dev/pandas/pulls/50013 | 2022-12-02T16:07:02Z | 2022-12-02T18:11:14Z | 2022-12-02T18:11:14Z | 2023-03-30T16:14:22Z |
DOC: Add missing :ref: to a link in a docstring | diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 0144aefedaa5f..aa61399f94330 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -5168,7 +5168,7 @@ def drop(
Remove rows or columns by specifying label names and corresponding
axis, or by specifying directly index or column names. When using a
multi-index, labels on different levels can be removed by specifying
- the level. See the `user guide <advanced.shown_levels>`
+ the level. See the :ref:`user guide <advanced.shown_levels>`
for more information about the now unused levels.
Parameters
| - [x] closes https://github.com/noatamir/pydata-global-sprints/issues/18
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
Docstring fix.
Validation outcome:
```
>>> python scripts/validate_docstrings.py pandas.DataFrame.drop
Matplotlib created a temporary config/cache directory at /tmp/matplotlib-e6oti6qf because the default path (/home/gitpod/.cache/matplotlib) is not a writable directory; it is highly recommended to set the MPLCONFIGDIR environment variable to a writable directory, in particular to speed up the import of Matplotlib and to better support multiprocessing.
################################################################################
###################### Docstring (pandas.DataFrame.drop) ######################
################################################################################
Drop specified labels from rows or columns.
Remove rows or columns by specifying label names and corresponding
axis, or by specifying directly index or column names. When using a
multi-index, labels on different levels can be removed by specifying
the level. See the :ref:`user guide <advanced.shown_levels>`
for more information about the now unused levels.
Parameters
----------
labels : single label or list-like
Index or column labels to drop. A tuple will be used as a single
label and not treated as a list-like.
axis : {0 or 'index', 1 or 'columns'}, default 0
Whether to drop labels from the index (0 or 'index') or
columns (1 or 'columns').
index : single label or list-like
Alternative to specifying axis (``labels, axis=0``
is equivalent to ``index=labels``).
columns : single label or list-like
Alternative to specifying axis (``labels, axis=1``
is equivalent to ``columns=labels``).
level : int or level name, optional
For MultiIndex, level from which the labels will be removed.
inplace : bool, default False
If False, return a copy. Otherwise, do operation
inplace and return None.
errors : {'ignore', 'raise'}, default 'raise'
If 'ignore', suppress error and only existing labels are
dropped.
Returns
-------
DataFrame or None
DataFrame without the removed index or column labels or
None if ``inplace=True``.
Raises
------
KeyError
If any of the labels is not found in the selected axis.
See Also
--------
DataFrame.loc : Label-location based indexer for selection by label.
DataFrame.dropna : Return DataFrame with labels on given axis omitted
where (all or any) data are missing.
DataFrame.drop_duplicates : Return DataFrame with duplicate rows
removed, optionally only considering certain columns.
Series.drop : Return Series with specified index labels removed.
Examples
--------
>>> df = pd.DataFrame(np.arange(12).reshape(3, 4),
... columns=['A', 'B', 'C', 'D'])
>>> df
A B C D
0 0 1 2 3
1 4 5 6 7
2 8 9 10 11
Drop columns
>>> df.drop(['B', 'C'], axis=1)
A D
0 0 3
1 4 7
2 8 11
>>> df.drop(columns=['B', 'C'])
A D
0 0 3
1 4 7
2 8 11
Drop a row by index
>>> df.drop([0, 1])
A B C D
2 8 9 10 11
Drop columns and/or rows of MultiIndex DataFrame
>>> midx = pd.MultiIndex(levels=[['lama', 'cow', 'falcon'],
... ['speed', 'weight', 'length']],
... codes=[[0, 0, 0, 1, 1, 1, 2, 2, 2],
... [0, 1, 2, 0, 1, 2, 0, 1, 2]])
>>> df = pd.DataFrame(index=midx, columns=['big', 'small'],
... data=[[45, 30], [200, 100], [1.5, 1], [30, 20],
... [250, 150], [1.5, 0.8], [320, 250],
... [1, 0.8], [0.3, 0.2]])
>>> df
big small
lama speed 45.0 30.0
weight 200.0 100.0
length 1.5 1.0
cow speed 30.0 20.0
weight 250.0 150.0
length 1.5 0.8
falcon speed 320.0 250.0
weight 1.0 0.8
length 0.3 0.2
Drop a specific index combination from the MultiIndex
DataFrame, i.e., drop the combination ``'falcon'`` and
``'weight'``, which deletes only the corresponding row
>>> df.drop(index=('falcon', 'weight'))
big small
lama speed 45.0 30.0
weight 200.0 100.0
length 1.5 1.0
cow speed 30.0 20.0
weight 250.0 150.0
length 1.5 0.8
falcon speed 320.0 250.0
length 0.3 0.2
>>> df.drop(index='cow', columns='small')
big
lama speed 45.0
weight 200.0
length 1.5
falcon speed 320.0
weight 1.0
length 0.3
>>> df.drop(index='length', level=1)
big small
lama speed 45.0 30.0
weight 200.0 100.0
cow speed 30.0 20.0
weight 250.0 150.0
falcon speed 320.0 250.0
weight 1.0 0.8
################################################################################
################################## Validation ##################################
################################################################################
Docstring for "pandas.DataFrame.drop" correct. :)
```
| https://api.github.com/repos/pandas-dev/pandas/pulls/50012 | 2022-12-02T15:51:20Z | 2022-12-02T17:36:15Z | 2022-12-02T17:36:15Z | 2022-12-02T17:58:01Z |
REF: remove numeric arg from NDFrame._convert | diff --git a/pandas/core/dtypes/cast.py b/pandas/core/dtypes/cast.py
index 60488a8ef9715..36c713cab7123 100644
--- a/pandas/core/dtypes/cast.py
+++ b/pandas/core/dtypes/cast.py
@@ -954,8 +954,8 @@ def coerce_indexer_dtype(indexer, categories) -> np.ndarray:
def soft_convert_objects(
values: np.ndarray,
+ *,
datetime: bool = True,
- numeric: bool = True,
timedelta: bool = True,
period: bool = True,
copy: bool = True,
@@ -968,7 +968,6 @@ def soft_convert_objects(
----------
values : np.ndarray[object]
datetime : bool, default True
- numeric: bool, default True
timedelta : bool, default True
period : bool, default True
copy : bool, default True
@@ -978,16 +977,15 @@ def soft_convert_objects(
np.ndarray or ExtensionArray
"""
validate_bool_kwarg(datetime, "datetime")
- validate_bool_kwarg(numeric, "numeric")
validate_bool_kwarg(timedelta, "timedelta")
validate_bool_kwarg(copy, "copy")
- conversion_count = sum((datetime, numeric, timedelta))
+ conversion_count = sum((datetime, timedelta))
if conversion_count == 0:
- raise ValueError("At least one of datetime, numeric or timedelta must be True.")
+ raise ValueError("At least one of datetime or timedelta must be True.")
# Soft conversions
- if datetime or timedelta:
+ if datetime or timedelta or period:
# GH 20380, when datetime is beyond year 2262, hence outside
# bound of nanosecond-resolution 64-bit integers.
converted = lib.maybe_convert_objects(
@@ -999,13 +997,6 @@ def soft_convert_objects(
if converted is not values:
return converted
- if numeric and is_object_dtype(values.dtype):
- converted, _ = lib.maybe_convert_numeric(values, set(), coerce_numeric=True)
-
- # If all NaNs, then do not-alter
- values = converted if not isna(converted).all() else values
- values = values.copy() if copy else values
-
return values
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 2de83bb7a4468..038c889e4d5f7 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -6318,8 +6318,8 @@ def __deepcopy__(self: NDFrameT, memo=None) -> NDFrameT:
@final
def _convert(
self: NDFrameT,
+ *,
datetime: bool_t = False,
- numeric: bool_t = False,
timedelta: bool_t = False,
) -> NDFrameT:
"""
@@ -6329,9 +6329,6 @@ def _convert(
----------
datetime : bool, default False
If True, convert to date where possible.
- numeric : bool, default False
- If True, attempt to convert to numbers (including strings), with
- unconvertible values becoming NaN.
timedelta : bool, default False
If True, convert to timedelta where possible.
@@ -6340,12 +6337,10 @@ def _convert(
converted : same as input object
"""
validate_bool_kwarg(datetime, "datetime")
- validate_bool_kwarg(numeric, "numeric")
validate_bool_kwarg(timedelta, "timedelta")
return self._constructor(
self._mgr.convert(
datetime=datetime,
- numeric=numeric,
timedelta=timedelta,
copy=True,
)
@@ -6390,11 +6385,8 @@ def infer_objects(self: NDFrameT) -> NDFrameT:
A int64
dtype: object
"""
- # numeric=False necessary to only soft convert;
- # python objects will still be converted to
- # native numpy numeric types
return self._constructor(
- self._mgr.convert(datetime=True, numeric=False, timedelta=True, copy=True)
+ self._mgr.convert(datetime=True, timedelta=True, copy=True)
).__finalize__(self, method="infer_objects")
@final
diff --git a/pandas/core/internals/array_manager.py b/pandas/core/internals/array_manager.py
index feca755fd43db..8ddab458e35a9 100644
--- a/pandas/core/internals/array_manager.py
+++ b/pandas/core/internals/array_manager.py
@@ -377,9 +377,9 @@ def astype(self: T, dtype, copy: bool = False, errors: str = "raise") -> T:
def convert(
self: T,
+ *,
copy: bool = True,
datetime: bool = True,
- numeric: bool = True,
timedelta: bool = True,
) -> T:
def _convert(arr):
@@ -389,7 +389,6 @@ def _convert(arr):
return soft_convert_objects(
arr,
datetime=datetime,
- numeric=numeric,
timedelta=timedelta,
copy=copy,
)
diff --git a/pandas/core/internals/blocks.py b/pandas/core/internals/blocks.py
index f1856fce83160..57a0fc81515c5 100644
--- a/pandas/core/internals/blocks.py
+++ b/pandas/core/internals/blocks.py
@@ -429,9 +429,7 @@ def _maybe_downcast(self, blocks: list[Block], downcast=None) -> list[Block]:
# but ATM it breaks too much existing code.
# split and convert the blocks
- return extend_blocks(
- [blk.convert(datetime=True, numeric=False) for blk in blocks]
- )
+ return extend_blocks([blk.convert(datetime=True) for blk in blocks])
if downcast is None:
return blocks
@@ -451,9 +449,9 @@ def _downcast_2d(self, dtype) -> list[Block]:
def convert(
self,
+ *,
copy: bool = True,
datetime: bool = True,
- numeric: bool = True,
timedelta: bool = True,
) -> list[Block]:
"""
@@ -570,7 +568,7 @@ def replace(
if not (self.is_object and value is None):
# if the user *explicitly* gave None, we keep None, otherwise
# may downcast to NaN
- blocks = blk.convert(numeric=False, copy=False)
+ blocks = blk.convert(copy=False)
else:
blocks = [blk]
return blocks
@@ -642,7 +640,7 @@ def _replace_regex(
replace_regex(new_values, rx, value, mask)
block = self.make_block(new_values)
- return block.convert(numeric=False, copy=False)
+ return block.convert(copy=False)
@final
def replace_list(
@@ -712,9 +710,7 @@ def replace_list(
)
if convert and blk.is_object and not all(x is None for x in dest_list):
# GH#44498 avoid unwanted cast-back
- result = extend_blocks(
- [b.convert(numeric=False, copy=True) for b in result]
- )
+ result = extend_blocks([b.convert(copy=True) for b in result])
new_rb.extend(result)
rb = new_rb
return rb
@@ -1969,9 +1965,9 @@ def reduce(self, func) -> list[Block]:
@maybe_split
def convert(
self,
+ *,
copy: bool = True,
datetime: bool = True,
- numeric: bool = True,
timedelta: bool = True,
) -> list[Block]:
"""
@@ -1987,7 +1983,6 @@ def convert(
res_values = soft_convert_objects(
values,
datetime=datetime,
- numeric=numeric,
timedelta=timedelta,
copy=copy,
)
diff --git a/pandas/core/internals/managers.py b/pandas/core/internals/managers.py
index 20cc087adab23..306fea06963ec 100644
--- a/pandas/core/internals/managers.py
+++ b/pandas/core/internals/managers.py
@@ -443,16 +443,15 @@ def astype(self: T, dtype, copy: bool = False, errors: str = "raise") -> T:
def convert(
self: T,
+ *,
copy: bool = True,
datetime: bool = True,
- numeric: bool = True,
timedelta: bool = True,
) -> T:
return self.apply(
"convert",
copy=copy,
datetime=datetime,
- numeric=numeric,
timedelta=timedelta,
)
diff --git a/pandas/tests/frame/methods/test_convert.py b/pandas/tests/frame/methods/test_convert.py
index 118af9f532abe..c6c70210d1cc4 100644
--- a/pandas/tests/frame/methods/test_convert.py
+++ b/pandas/tests/frame/methods/test_convert.py
@@ -1,10 +1,7 @@
import numpy as np
import pytest
-from pandas import (
- DataFrame,
- Series,
-)
+from pandas import DataFrame
import pandas._testing as tm
@@ -21,17 +18,11 @@ def test_convert_objects(self, float_string_frame):
float_string_frame["I"] = "1"
# add in some items that will be nan
- length = len(float_string_frame)
float_string_frame["J"] = "1."
float_string_frame["K"] = "1"
float_string_frame.loc[float_string_frame.index[0:5], ["J", "K"]] = "garbled"
- converted = float_string_frame._convert(datetime=True, numeric=True)
- assert converted["H"].dtype == "float64"
- assert converted["I"].dtype == "int64"
- assert converted["J"].dtype == "float64"
- assert converted["K"].dtype == "float64"
- assert len(converted["J"].dropna()) == length - 5
- assert len(converted["K"].dropna()) == length - 5
+ converted = float_string_frame._convert(datetime=True)
+ tm.assert_frame_equal(converted, float_string_frame)
# via astype
converted = float_string_frame.copy()
@@ -45,14 +36,6 @@ def test_convert_objects(self, float_string_frame):
with pytest.raises(ValueError, match="invalid literal"):
converted["H"].astype("int32")
- def test_convert_mixed_single_column(self):
- # GH#4119, not converting a mixed type (e.g.floats and object)
- # mixed in a single column
- df = DataFrame({"s": Series([1, "na", 3, 4])})
- result = df._convert(datetime=True, numeric=True)
- expected = DataFrame({"s": Series([1, np.nan, 3, 4])})
- tm.assert_frame_equal(result, expected)
-
def test_convert_objects_no_conversion(self):
mixed1 = DataFrame({"a": [1, 2, 3], "b": [4.0, 5, 6], "c": ["x", "y", "z"]})
mixed2 = mixed1._convert(datetime=True)
diff --git a/pandas/tests/indexing/test_indexing.py b/pandas/tests/indexing/test_indexing.py
index b3e59da4b0130..4d57b3c0adf0d 100644
--- a/pandas/tests/indexing/test_indexing.py
+++ b/pandas/tests/indexing/test_indexing.py
@@ -557,14 +557,6 @@ def test_astype_assignment(self):
)
tm.assert_frame_equal(df, expected)
- df = df_orig.copy()
- with tm.assert_produces_warning(FutureWarning, match=msg):
- df.iloc[:, 0:2] = df.iloc[:, 0:2]._convert(datetime=True, numeric=True)
- expected = DataFrame(
- [[1, 2, "3", ".4", 5, 6.0, "foo"]], columns=list("ABCDEFG")
- )
- tm.assert_frame_equal(df, expected)
-
# GH5702 (loc)
df = df_orig.copy()
with tm.assert_produces_warning(FutureWarning, match=msg):
diff --git a/pandas/tests/internals/test_internals.py b/pandas/tests/internals/test_internals.py
index ecf247efd74bf..dc7960cde4a61 100644
--- a/pandas/tests/internals/test_internals.py
+++ b/pandas/tests/internals/test_internals.py
@@ -599,9 +599,9 @@ def _compare(old_mgr, new_mgr):
mgr.iset(0, np.array(["1"] * N, dtype=np.object_))
mgr.iset(1, np.array(["2."] * N, dtype=np.object_))
mgr.iset(2, np.array(["foo."] * N, dtype=np.object_))
- new_mgr = mgr.convert(numeric=True)
- assert new_mgr.iget(0).dtype == np.int64
- assert new_mgr.iget(1).dtype == np.float64
+ new_mgr = mgr.convert()
+ assert new_mgr.iget(0).dtype == np.object_
+ assert new_mgr.iget(1).dtype == np.object_
assert new_mgr.iget(2).dtype == np.object_
assert new_mgr.iget(3).dtype == np.int64
assert new_mgr.iget(4).dtype == np.float64
@@ -612,9 +612,9 @@ def _compare(old_mgr, new_mgr):
mgr.iset(0, np.array(["1"] * N, dtype=np.object_))
mgr.iset(1, np.array(["2."] * N, dtype=np.object_))
mgr.iset(2, np.array(["foo."] * N, dtype=np.object_))
- new_mgr = mgr.convert(numeric=True)
- assert new_mgr.iget(0).dtype == np.int64
- assert new_mgr.iget(1).dtype == np.float64
+ new_mgr = mgr.convert()
+ assert new_mgr.iget(0).dtype == np.object_
+ assert new_mgr.iget(1).dtype == np.object_
assert new_mgr.iget(2).dtype == np.object_
assert new_mgr.iget(3).dtype == np.int32
assert new_mgr.iget(4).dtype == np.bool_
diff --git a/pandas/tests/io/test_html.py b/pandas/tests/io/test_html.py
index daa2dffeaa143..b1fcdd8df01ad 100644
--- a/pandas/tests/io/test_html.py
+++ b/pandas/tests/io/test_html.py
@@ -627,7 +627,7 @@ def try_remove_ws(x):
]
dfnew = df.applymap(try_remove_ws).replace(old, new)
gtnew = ground_truth.applymap(try_remove_ws)
- converted = dfnew._convert(datetime=True, numeric=True)
+ converted = dfnew._convert(datetime=True)
date_cols = ["Closing Date", "Updated Date"]
converted[date_cols] = converted[date_cols].apply(to_datetime)
tm.assert_frame_equal(converted, gtnew)
diff --git a/pandas/tests/series/methods/test_convert.py b/pandas/tests/series/methods/test_convert.py
index 4832780e6d0d3..f979a28154d4e 100644
--- a/pandas/tests/series/methods/test_convert.py
+++ b/pandas/tests/series/methods/test_convert.py
@@ -1,6 +1,5 @@
from datetime import datetime
-import numpy as np
import pytest
from pandas import (
@@ -19,36 +18,27 @@ def test_convert(self):
# Test coercion with mixed types
ser = Series(["a", "3.1415", dt, td])
- results = ser._convert(numeric=True)
- expected = Series([np.nan, 3.1415, np.nan, np.nan])
- tm.assert_series_equal(results, expected)
-
# Test standard conversion returns original
results = ser._convert(datetime=True)
tm.assert_series_equal(results, ser)
- results = ser._convert(numeric=True)
- expected = Series([np.nan, 3.1415, np.nan, np.nan])
- tm.assert_series_equal(results, expected)
+
results = ser._convert(timedelta=True)
tm.assert_series_equal(results, ser)
def test_convert_numeric_strings_with_other_true_args(self):
# test pass-through and non-conversion when other types selected
ser = Series(["1.0", "2.0", "3.0"])
- results = ser._convert(datetime=True, numeric=True, timedelta=True)
- expected = Series([1.0, 2.0, 3.0])
- tm.assert_series_equal(results, expected)
- results = ser._convert(True, False, True)
+ results = ser._convert(datetime=True, timedelta=True)
tm.assert_series_equal(results, ser)
def test_convert_datetime_objects(self):
ser = Series(
[datetime(2001, 1, 1, 0, 0), datetime(2001, 1, 1, 0, 0)], dtype="O"
)
- results = ser._convert(datetime=True, numeric=True, timedelta=True)
+ results = ser._convert(datetime=True, timedelta=True)
expected = Series([datetime(2001, 1, 1, 0, 0), datetime(2001, 1, 1, 0, 0)])
tm.assert_series_equal(results, expected)
- results = ser._convert(datetime=False, numeric=True, timedelta=True)
+ results = ser._convert(datetime=False, timedelta=True)
tm.assert_series_equal(results, ser)
def test_convert_datetime64(self):
@@ -74,46 +64,12 @@ def test_convert_datetime64(self):
def test_convert_timedeltas(self):
td = datetime(2001, 1, 1, 0, 0) - datetime(2000, 1, 1, 0, 0)
ser = Series([td, td], dtype="O")
- results = ser._convert(datetime=True, numeric=True, timedelta=True)
+ results = ser._convert(datetime=True, timedelta=True)
expected = Series([td, td])
tm.assert_series_equal(results, expected)
- results = ser._convert(True, True, False)
+ results = ser._convert(datetime=True, timedelta=False)
tm.assert_series_equal(results, ser)
- def test_convert_numeric_strings(self):
- ser = Series([1.0, 2, 3], index=["a", "b", "c"])
- result = ser._convert(numeric=True)
- tm.assert_series_equal(result, ser)
-
- # force numeric conversion
- res = ser.copy().astype("O")
- res["a"] = "1"
- result = res._convert(numeric=True)
- tm.assert_series_equal(result, ser)
-
- res = ser.copy().astype("O")
- res["a"] = "1."
- result = res._convert(numeric=True)
- tm.assert_series_equal(result, ser)
-
- res = ser.copy().astype("O")
- res["a"] = "garbled"
- result = res._convert(numeric=True)
- expected = ser.copy()
- expected["a"] = np.nan
- tm.assert_series_equal(result, expected)
-
- def test_convert_mixed_type_noop(self):
- # GH 4119, not converting a mixed type (e.g.floats and object)
- ser = Series([1, "na", 3, 4])
- result = ser._convert(datetime=True, numeric=True)
- expected = Series([1, np.nan, 3, 4])
- tm.assert_series_equal(result, expected)
-
- ser = Series([1, "", 3, 4])
- result = ser._convert(datetime=True, numeric=True)
- tm.assert_series_equal(result, expected)
-
def test_convert_preserve_non_object(self):
# preserve if non-object
ser = Series([1], dtype="float32")
@@ -122,18 +78,17 @@ def test_convert_preserve_non_object(self):
def test_convert_no_arg_error(self):
ser = Series(["1.0", "2"])
- msg = r"At least one of datetime, numeric or timedelta must be True\."
+ msg = r"At least one of datetime or timedelta must be True\."
with pytest.raises(ValueError, match=msg):
ser._convert()
def test_convert_preserve_bool(self):
ser = Series([1, True, 3, 5], dtype=object)
- res = ser._convert(datetime=True, numeric=True)
- expected = Series([1, 1, 3, 5], dtype="i8")
- tm.assert_series_equal(res, expected)
+ res = ser._convert(datetime=True)
+ tm.assert_series_equal(res, ser)
def test_convert_preserve_all_bool(self):
ser = Series([False, True, False, False], dtype=object)
- res = ser._convert(datetime=True, numeric=True)
+ res = ser._convert(datetime=True)
expected = Series([False, True, False, False], dtype=bool)
tm.assert_series_equal(res, expected)
| Moving towards getting rid of _convert and soft_convert_objects and just using infer_objects and maybe_infer_objects | https://api.github.com/repos/pandas-dev/pandas/pulls/50011 | 2022-12-02T15:23:19Z | 2022-12-02T18:39:19Z | 2022-12-02T18:39:18Z | 2022-12-02T19:17:41Z |
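The "soft" conversion pattern that `soft_convert_objects` implements (try a cast, and silently fall back to the original values on failure) can be sketched in isolation with a hypothetical helper, not the pandas code:

```python
from datetime import datetime


def soft_convert(values, convert_datetime=True):
    """Toy illustration of 'soft' conversion: attempt the cast, and on
    any failure return the input unchanged instead of raising."""
    if not convert_datetime:
        raise ValueError("At least one conversion flag must be True.")
    try:
        return [datetime.fromisoformat(v) for v in values]
    except (TypeError, ValueError):
        return values  # soft: fall back to the original objects


assert soft_convert(["2001-01-01", "2001-01-02"]) == [
    datetime(2001, 1, 1),
    datetime(2001, 1, 2),
]
# A mixed column is left as-is rather than half-converted or erroring:
assert soft_convert(["a", "3.1415"]) == ["a", "3.1415"]
```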
ENH/TST: expand copy-on-write to assign() method | diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 0144aefedaa5f..5358fdb0b4dbd 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -4887,7 +4887,7 @@ def assign(self, **kwargs) -> DataFrame:
Portland 17.0 62.6 290.15
Berkeley 25.0 77.0 298.15
"""
- data = self.copy()
+ data = self.copy(deep=None)
for k, v in kwargs.items():
data[k] = com.apply_if_callable(v, data)
diff --git a/pandas/tests/copy_view/test_methods.py b/pandas/tests/copy_view/test_methods.py
index bf65f153b10dd..6707f1411cbc7 100644
--- a/pandas/tests/copy_view/test_methods.py
+++ b/pandas/tests/copy_view/test_methods.py
@@ -280,3 +280,21 @@ def test_head_tail(method, using_copy_on_write):
# without CoW enabled, head and tail return views. Mutating df2 also mutates df.
df2.iloc[0, 0] = 1
tm.assert_frame_equal(df, df_orig)
+
+
+def test_assign(using_copy_on_write):
+ df = DataFrame({"a": [1, 2, 3]})
+ df_orig = df.copy()
+ df2 = df.assign()
+ df2._mgr._verify_integrity()
+
+ if using_copy_on_write:
+ assert np.shares_memory(get_array(df2, "a"), get_array(df, "a"))
+ else:
+ assert not np.shares_memory(get_array(df2, "a"), get_array(df, "a"))
+
+ # modify df2 to trigger CoW for that block
+ df2.iloc[0, 0] = 0
+ if using_copy_on_write:
+ assert not np.shares_memory(get_array(df2, "a"), get_array(df, "a"))
+ tm.assert_frame_equal(df, df_orig)
| - [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
Added copy-on-write to `df.assign()`.
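The copy-on-write semantics exercised by the new test can be illustrated with a tiny standalone sketch (a toy model with invented names, not pandas internals): `assign` hands back a view of the parent's data, and the first write triggers the actual copy.

```python
class CowFrame:
    """Toy copy-on-write container (a sketch of the idea, not pandas)."""

    def __init__(self, data):
        self._data = data  # list standing in for the shared block of values

    def assign(self):
        # Lazy: the child references the parent's buffer, no copy yet.
        return CowFrame(self._data)

    def __setitem__(self, i, value):
        # First write triggers the real copy, detaching child from parent.
        self._data = list(self._data)
        self._data[i] = value


parent = CowFrame([1, 2, 3])
child = parent.assign()
assert child._data is parent._data       # shares memory until mutated
child[0] = 0
assert child._data is not parent._data   # copy happened on write
assert parent._data == [1, 2, 3]         # parent left untouched
```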
Progress towards #49473 via [PyData pandas sprint](https://github.com/noatamir/pydata-global-sprints/issues/11). | https://api.github.com/repos/pandas-dev/pandas/pulls/50010 | 2022-12-02T15:16:44Z | 2022-12-02T17:36:36Z | 2022-12-02T17:36:36Z | 2022-12-02T17:36:37Z |
TST/DEV: remove geopandas downstream test + remove from environment.yml | diff --git a/environment.yml b/environment.yml
index 1a02b522fab06..87c5f5d031fcf 100644
--- a/environment.yml
+++ b/environment.yml
@@ -64,7 +64,6 @@ dependencies:
- cftime
- dask
- ipython
- - geopandas-base
- seaborn
- scikit-learn
- statsmodels
diff --git a/pandas/tests/test_downstream.py b/pandas/tests/test_downstream.py
index 2f603f3700413..ab001d0b5a881 100644
--- a/pandas/tests/test_downstream.py
+++ b/pandas/tests/test_downstream.py
@@ -18,7 +18,7 @@
)
import pandas._testing as tm
-# geopandas, xarray, fsspec, fastparquet all produce these
+# xarray, fsspec, fastparquet all produce these
pytestmark = pytest.mark.filterwarnings(
"ignore:distutils Version classes are deprecated.*:DeprecationWarning"
)
@@ -223,15 +223,6 @@ def test_pandas_datareader():
pandas_datareader.DataReader("F", "quandl", "2017-01-01", "2017-02-01")
-def test_geopandas():
-
- geopandas = import_module("geopandas")
- gdf = geopandas.GeoDataFrame(
- {"col": [1, 2, 3], "geometry": geopandas.points_from_xy([1, 2, 3], [1, 2, 3])}
- )
- assert gdf[["col", "geometry"]].geometry.x.equals(Series([1.0, 2.0, 3.0]))
-
-
# Cython import warning
@pytest.mark.filterwarnings("ignore:can't resolve:ImportWarning")
def test_pyarrow(df):
diff --git a/requirements-dev.txt b/requirements-dev.txt
index 19ed830eca07e..debbdb635901c 100644
--- a/requirements-dev.txt
+++ b/requirements-dev.txt
@@ -51,7 +51,6 @@ botocore
cftime
dask
ipython
-geopandas
seaborn
scikit-learn
statsmodels
| See https://github.com/pandas-dev/pandas/pull/49994 for context
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit)
| https://api.github.com/repos/pandas-dev/pandas/pulls/50008 | 2022-12-02T13:26:37Z | 2022-12-02T15:30:06Z | 2022-12-02T15:30:06Z | 2022-12-02T15:31:59Z |
DEV: update gitpod docker | diff --git a/gitpod/Dockerfile b/gitpod/Dockerfile
index 299267a11fdd1..7581abe822816 100644
--- a/gitpod/Dockerfile
+++ b/gitpod/Dockerfile
@@ -13,7 +13,7 @@
# are visible in the host and container.
# The docker image is retrieved from the pandas dockerhub repository
#
-# docker run --rm -it -v $(pwd):/home/pandas pandas/pandas-dev:<image-tag>
+# docker run --rm -it -v $(pwd):/home/pandas pythonpandas/pandas-dev:<image-tag>
#
# By default the container will activate the conda environment pandas-dev
# which contains all the dependencies needed for pandas development
@@ -86,9 +86,9 @@ RUN chmod a+rx /usr/local/bin/workspace_config && \
# the container to create a conda environment from it
COPY environment.yml /tmp/environment.yml
-RUN mamba env create -f /tmp/environment.yml
# ---- Create conda environment ----
-RUN conda activate $CONDA_ENV && \
+RUN mamba env create -f /tmp/environment.yml && \
+ conda activate $CONDA_ENV && \
mamba install ccache -y && \
# needed for docs rendering later on
python -m pip install --no-cache-dir sphinx-autobuild && \
| Follow-up on https://github.com/pandas-dev/pandas/pull/48107 | https://api.github.com/repos/pandas-dev/pandas/pulls/50007 | 2022-12-02T11:50:14Z | 2022-12-02T15:24:10Z | 2022-12-02T15:24:10Z | 2022-12-02T15:25:56Z |
DOC: Update Benchmark Link #49871 | diff --git a/doc/source/development/maintaining.rst b/doc/source/development/maintaining.rst
index 9e32e43f30dfc..6e9f622e18eea 100644
--- a/doc/source/development/maintaining.rst
+++ b/doc/source/development/maintaining.rst
@@ -318,7 +318,7 @@ Benchmark machine
-----------------
The team currently owns dedicated hardware for hosting a website for pandas' ASV performance benchmark. The results
-are published to http://pandas.pydata.org/speed/pandas/
+are published to https://asv-runner.github.io/asv-collection/pandas/
Configuration
`````````````
|
- [ ] Updated the broken benchmark-machine link in the `doc/source/development/maintaining.rst` file, fixing bug #49871.
| https://api.github.com/repos/pandas-dev/pandas/pulls/50006 | 2022-12-02T11:45:13Z | 2022-12-02T13:14:08Z | 2022-12-02T13:14:08Z | 2022-12-02T13:14:16Z |
DOC: remove okwarning from scale guide | diff --git a/doc/source/user_guide/scale.rst b/doc/source/user_guide/scale.rst
index 129f43dd36930..a974af4ffe1c5 100644
--- a/doc/source/user_guide/scale.rst
+++ b/doc/source/user_guide/scale.rst
@@ -257,7 +257,6 @@ We'll import ``dask.dataframe`` and notice that the API feels similar to pandas.
We can use Dask's ``read_parquet`` function, but provide a globstring of files to read in.
.. ipython:: python
- :okwarning:
import dask.dataframe as dd
@@ -287,7 +286,6 @@ column names and dtypes. That's because Dask hasn't actually read the data yet.
Rather than executing immediately, doing operations build up a **task graph**.
.. ipython:: python
- :okwarning:
ddf
ddf["name"]
@@ -346,7 +344,6 @@ known automatically. In this case, since we created the parquet files manually,
we need to supply the divisions manually.
.. ipython:: python
- :okwarning:
N = 12
starts = [f"20{i:>02d}-01-01" for i in range(N)]
@@ -359,7 +356,6 @@ we need to supply the divisions manually.
Now we can do things like fast random access with ``.loc``.
.. ipython:: python
- :okwarning:
ddf.loc["2002-01-01 12:01":"2002-01-01 12:05"].compute()
@@ -373,7 +369,6 @@ results will fit in memory, so we can safely call ``compute`` without running
out of memory. At that point it's just a regular pandas object.
.. ipython:: python
- :okwarning:
@savefig dask_resample.png
ddf[["x", "y"]].resample("1D").mean().cumsum().compute().plot()
| - [ ] closes #29960
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Updated an entry in `doc/source/user_guide/scale.rst`
| https://api.github.com/repos/pandas-dev/pandas/pulls/50004 | 2022-12-02T01:06:49Z | 2022-12-03T18:48:23Z | 2022-12-03T18:48:23Z | 2022-12-03T18:48:30Z |
Adding to the function docs that groupby.transform() function parameter can be a string | diff --git a/pandas/core/groupby/groupby.py b/pandas/core/groupby/groupby.py
index 497e0ef724373..c73a2d40a33e1 100644
--- a/pandas/core/groupby/groupby.py
+++ b/pandas/core/groupby/groupby.py
@@ -399,7 +399,7 @@ class providing the base-class of operations.
Parameters
----------
-f : function
+f : function, str
Function to apply to each group. See the Notes section below for requirements.
Can also accept a Numba JIT function with
Docs change: adding to the function docs that the `groupby.transform()` function parameter can be a string
- [ ] closes #49961
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/50002 | 2022-12-01T23:29:10Z | 2022-12-02T03:25:21Z | 2022-12-02T03:25:21Z | 2022-12-02T05:31:52Z |
REF: avoid _with_infer constructor | diff --git a/pandas/_testing/__init__.py b/pandas/_testing/__init__.py
index 02ee13d60427e..43020ae471f10 100644
--- a/pandas/_testing/__init__.py
+++ b/pandas/_testing/__init__.py
@@ -274,7 +274,7 @@ def box_expected(expected, box_cls, transpose: bool = True):
else:
expected = pd.array(expected, copy=False)
elif box_cls is Index:
- expected = Index._with_infer(expected)
+ expected = Index(expected)
elif box_cls is Series:
expected = Series(expected)
elif box_cls is DataFrame:
diff --git a/pandas/core/algorithms.py b/pandas/core/algorithms.py
index c94b1068e5e65..cd719a5256ea3 100644
--- a/pandas/core/algorithms.py
+++ b/pandas/core/algorithms.py
@@ -894,7 +894,7 @@ def value_counts(
# For backwards compatibility, we let Index do its normal type
# inference, _except_ for if if infers from object to bool.
- idx = Index._with_infer(keys)
+ idx = Index(keys)
if idx.dtype == bool and keys.dtype == object:
idx = idx.astype(object)
diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index 01a1ebd459616..0b55416d2bd7e 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -2678,6 +2678,7 @@ def fillna(self, value=None, downcast=None):
if downcast is None:
# no need to care metadata other than name
# because it can't have freq if it has NaTs
+ # _with_infer needed for test_fillna_categorical
return Index._with_infer(result, name=self.name)
raise NotImplementedError(
f"{type(self).__name__}.fillna does not support 'downcast' "
@@ -4230,10 +4231,10 @@ def _reindex_non_unique(
new_indexer = np.arange(len(self.take(indexer)), dtype=np.intp)
new_indexer[~check] = -1
- if isinstance(self, ABCMultiIndex):
- new_index = type(self).from_tuples(new_labels, names=self.names)
+ if not isinstance(self, ABCMultiIndex):
+ new_index = Index(new_labels, name=self.name)
else:
- new_index = Index._with_infer(new_labels, name=self.name)
+ new_index = type(self).from_tuples(new_labels, names=self.names)
return new_index, indexer, new_indexer
# --------------------------------------------------------------------
@@ -6477,7 +6478,7 @@ def insert(self, loc: int, item) -> Index:
if self._typ == "numericindex":
# Use self._constructor instead of Index to retain NumericIndex GH#43921
# TODO(2.0) can use Index instead of self._constructor
- return self._constructor._with_infer(new_values, name=self.name)
+ return self._constructor(new_values, name=self.name)
else:
return Index._with_infer(new_values, name=self.name)
@@ -6850,7 +6851,7 @@ def ensure_index_from_sequences(sequences, names=None) -> Index:
if len(sequences) == 1:
if names is not None:
names = names[0]
- return Index._with_infer(sequences[0], name=names)
+ return Index(sequences[0], name=names)
else:
return MultiIndex.from_arrays(sequences, names=names)
@@ -6893,7 +6894,7 @@ def ensure_index(index_like: Axes, copy: bool = False) -> Index:
if isinstance(index_like, ABCSeries):
name = index_like.name
- return Index._with_infer(index_like, name=name, copy=copy)
+ return Index(index_like, name=name, copy=copy)
if is_iterator(index_like):
index_like = list(index_like)
@@ -6909,9 +6910,9 @@ def ensure_index(index_like: Axes, copy: bool = False) -> Index:
return MultiIndex.from_arrays(index_like)
else:
- return Index._with_infer(index_like, copy=copy, tupleize_cols=False)
+ return Index(index_like, copy=copy, tupleize_cols=False)
else:
- return Index._with_infer(index_like, copy=copy)
+ return Index(index_like, copy=copy)
def ensure_has_len(seq):
diff --git a/pandas/core/indexes/multi.py b/pandas/core/indexes/multi.py
index f0b0ec23dba1a..012a92793acf9 100644
--- a/pandas/core/indexes/multi.py
+++ b/pandas/core/indexes/multi.py
@@ -2112,7 +2112,7 @@ def append(self, other):
# setting names to None automatically
return MultiIndex.from_tuples(new_tuples)
except (TypeError, IndexError):
- return Index._with_infer(new_tuples)
+ return Index(new_tuples)
def argsort(self, *args, **kwargs) -> npt.NDArray[np.intp]:
if len(args) == 0 and len(kwargs) == 0:
diff --git a/pandas/core/strings/accessor.py b/pandas/core/strings/accessor.py
index 71a50c69bfee1..8cd4cb976503d 100644
--- a/pandas/core/strings/accessor.py
+++ b/pandas/core/strings/accessor.py
@@ -319,7 +319,7 @@ def cons_row(x):
out = out.get_level_values(0)
return out
else:
- return Index._with_infer(result, name=name)
+ return Index(result, name=name)
else:
index = self._orig.index
# This is a mess.
diff --git a/pandas/core/util/hashing.py b/pandas/core/util/hashing.py
index 5a5e46e0227aa..e0b18047aa0ec 100644
--- a/pandas/core/util/hashing.py
+++ b/pandas/core/util/hashing.py
@@ -344,9 +344,7 @@ def _hash_ndarray(
)
codes, categories = factorize(vals, sort=False)
- cat = Categorical(
- codes, Index._with_infer(categories), ordered=False, fastpath=True
- )
+ cat = Categorical(codes, Index(categories), ordered=False, fastpath=True)
return _hash_categorical(cat, encoding, hash_key)
try:
diff --git a/pandas/tests/arithmetic/test_numeric.py b/pandas/tests/arithmetic/test_numeric.py
index 529dd6baa70c0..f2af85c2e388d 100644
--- a/pandas/tests/arithmetic/test_numeric.py
+++ b/pandas/tests/arithmetic/test_numeric.py
@@ -1147,6 +1147,9 @@ def test_numarr_with_dtype_add_nan(self, dtype, box_with_array):
ser = tm.box_expected(ser, box)
expected = tm.box_expected(expected, box)
+ if box is Index and dtype is object:
+ # TODO: avoid this; match behavior with Series
+ expected = expected.astype(np.float64)
result = np.nan + ser
tm.assert_equal(result, expected)
@@ -1162,6 +1165,9 @@ def test_numarr_with_dtype_add_int(self, dtype, box_with_array):
ser = tm.box_expected(ser, box)
expected = tm.box_expected(expected, box)
+ if box is Index and dtype is object:
+ # TODO: avoid this; match behavior with Series
+ expected = expected.astype(np.int64)
result = 1 + ser
tm.assert_equal(result, expected)
diff --git a/pandas/tests/arrays/integer/test_dtypes.py b/pandas/tests/arrays/integer/test_dtypes.py
index 1566476c32989..f34953876f5f4 100644
--- a/pandas/tests/arrays/integer/test_dtypes.py
+++ b/pandas/tests/arrays/integer/test_dtypes.py
@@ -89,7 +89,7 @@ def test_astype_index(all_data, dropna):
other = all_data
dtype = all_data.dtype
- idx = pd.Index._with_infer(np.array(other))
+ idx = pd.Index(np.array(other))
assert isinstance(idx, ABCIndex)
result = idx.astype(dtype)
diff --git a/pandas/tests/extension/base/groupby.py b/pandas/tests/extension/base/groupby.py
index 1f46442ee13b0..339c6560d6212 100644
--- a/pandas/tests/extension/base/groupby.py
+++ b/pandas/tests/extension/base/groupby.py
@@ -33,7 +33,7 @@ def test_groupby_extension_agg(self, as_index, data_for_grouping):
_, uniques = pd.factorize(data_for_grouping, sort=True)
if as_index:
- index = pd.Index._with_infer(uniques, name="B")
+ index = pd.Index(uniques, name="B")
expected = pd.Series([3.0, 1.0, 4.0], index=index, name="A")
self.assert_series_equal(result, expected)
else:
@@ -61,7 +61,7 @@ def test_groupby_extension_no_sort(self, data_for_grouping):
result = df.groupby("B", sort=False).A.mean()
_, index = pd.factorize(data_for_grouping, sort=False)
- index = pd.Index._with_infer(index, name="B")
+ index = pd.Index(index, name="B")
expected = pd.Series([1.0, 3.0, 4.0], index=index, name="A")
self.assert_series_equal(result, expected)
diff --git a/pandas/tests/extension/test_string.py b/pandas/tests/extension/test_string.py
index ecc69113882c5..de7967a8578b5 100644
--- a/pandas/tests/extension/test_string.py
+++ b/pandas/tests/extension/test_string.py
@@ -391,7 +391,7 @@ def test_groupby_extension_agg(self, as_index, data_for_grouping):
_, uniques = pd.factorize(data_for_grouping, sort=True)
if as_index:
- index = pd.Index._with_infer(uniques, name="B")
+ index = pd.Index(uniques, name="B")
expected = pd.Series([3.0, 1.0, 4.0], index=index, name="A")
self.assert_series_equal(result, expected)
else:
diff --git a/pandas/tests/io/test_stata.py b/pandas/tests/io/test_stata.py
index 535c2d3e7e0f3..530934df72606 100644
--- a/pandas/tests/io/test_stata.py
+++ b/pandas/tests/io/test_stata.py
@@ -20,7 +20,6 @@
DataFrame,
Series,
)
-from pandas.core.indexes.api import ensure_index
from pandas.tests.io.test_compression import _compression_to_extension
from pandas.io.parsers import read_csv
@@ -1144,7 +1143,7 @@ def _convert_categorical(from_frame: DataFrame) -> DataFrame:
if is_categorical_dtype(ser.dtype):
cat = ser._values.remove_unused_categories()
if cat.categories.dtype == object:
- categories = ensure_index(cat.categories._values)
+ categories = pd.Index._with_infer(cat.categories._values)
cat = cat.set_categories(categories)
from_frame[col] = cat
return from_frame
| Trying to get down to Just One constructor. | https://api.github.com/repos/pandas-dev/pandas/pulls/50001 | 2022-12-01T23:00:31Z | 2022-12-03T05:36:16Z | 2022-12-03T05:36:16Z | 2022-12-03T21:42:09Z |
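For context on the Index._with_infer vs. Index distinction driving this PR: the public constructor already performs dtype inference on raw Python input, which is what lets most of the internal call sites in the diff above switch over. A quick sketch (values are illustrative):

```python
import pandas as pd

# The public Index constructor infers a concrete dtype from list input...
ints = pd.Index([1, 2, 3])
# ...and falls back to object dtype for genuinely mixed values
mixed = pd.Index([1, "a"])

print(ints.dtype, mixed.dtype)  # int64 object
```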
API: dont do inference on object-dtype arithmetic results | diff --git a/doc/source/whatsnew/v2.0.0.rst b/doc/source/whatsnew/v2.0.0.rst
index 7838ef8df4164..a3ba0557bc31c 100644
--- a/doc/source/whatsnew/v2.0.0.rst
+++ b/doc/source/whatsnew/v2.0.0.rst
@@ -367,6 +367,7 @@ Other API changes
- Passing a sequence containing a type that cannot be converted to :class:`Timedelta` to :func:`to_timedelta` or to the :class:`Series` or :class:`DataFrame` constructor with ``dtype="timedelta64[ns]"`` or to :class:`TimedeltaIndex` now raises ``TypeError`` instead of ``ValueError`` (:issue:`49525`)
- Changed behavior of :class:`Index` constructor with sequence containing at least one ``NaT`` and everything else either ``None`` or ``NaN`` to infer ``datetime64[ns]`` dtype instead of ``object``, matching :class:`Series` behavior (:issue:`49340`)
- :func:`read_stata` with parameter ``index_col`` set to ``None`` (the default) will now set the index on the returned :class:`DataFrame` to a :class:`RangeIndex` instead of a :class:`Int64Index` (:issue:`49745`)
+- Changed behavior of :class:`Index`, :class:`Series`, and :class:`DataFrame` arithmetic methods when working with object-dtypes, the results no longer do type inference on the result of the array operations, use ``result.infer_objects()`` to do type inference on the result (:issue:`49999`)
- Changed behavior of :class:`Index` constructor with an object-dtype ``numpy.ndarray`` containing all-``bool`` values or all-complex values, this will now retain object dtype, consistent with the :class:`Series` behavior (:issue:`49594`)
- Changed behavior of :meth:`DataFrame.shift` with ``axis=1``, an integer ``fill_value``, and homogeneous datetime-like dtype, this now fills new columns with integer dtypes instead of casting to datetimelike (:issue:`49842`)
- Files are now closed when encountering an exception in :func:`read_json` (:issue:`49921`)
diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index dc0359426f07c..7ee9d8ff91b6c 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -6615,10 +6615,10 @@ def _logical_method(self, other, op):
def _construct_result(self, result, name):
if isinstance(result, tuple):
return (
- Index._with_infer(result[0], name=name),
- Index._with_infer(result[1], name=name),
+ Index(result[0], name=name, dtype=result[0].dtype),
+ Index(result[1], name=name, dtype=result[1].dtype),
)
- return Index._with_infer(result, name=name)
+ return Index(result, name=name, dtype=result.dtype)
def _arith_method(self, other, op):
if (
diff --git a/pandas/core/ops/__init__.py b/pandas/core/ops/__init__.py
index bfedaca093a8e..e514bdcac5265 100644
--- a/pandas/core/ops/__init__.py
+++ b/pandas/core/ops/__init__.py
@@ -230,18 +230,27 @@ def align_method_FRAME(
def to_series(right):
msg = "Unable to coerce to Series, length must be {req_len}: given {given_len}"
+
+ # pass dtype to avoid doing inference, which would break consistency
+ # with Index/Series ops
+ dtype = None
+ if getattr(right, "dtype", None) == object:
+ # can't pass right.dtype unconditionally as that would break on e.g.
+ # datetime64[h] ndarray
+ dtype = object
+
if axis is not None and left._get_axis_name(axis) == "index":
if len(left.index) != len(right):
raise ValueError(
msg.format(req_len=len(left.index), given_len=len(right))
)
- right = left._constructor_sliced(right, index=left.index)
+ right = left._constructor_sliced(right, index=left.index, dtype=dtype)
else:
if len(left.columns) != len(right):
raise ValueError(
msg.format(req_len=len(left.columns), given_len=len(right))
)
- right = left._constructor_sliced(right, index=left.columns)
+ right = left._constructor_sliced(right, index=left.columns, dtype=dtype)
return right
if isinstance(right, np.ndarray):
@@ -250,13 +259,25 @@ def to_series(right):
right = to_series(right)
elif right.ndim == 2:
+ # We need to pass dtype=right.dtype to retain object dtype
+ # otherwise we lose consistency with Index and array ops
+ dtype = None
+ if getattr(right, "dtype", None) == object:
+ # can't pass right.dtype unconditionally as that would break on e.g.
+ # datetime64[h] ndarray
+ dtype = object
+
if right.shape == left.shape:
- right = left._constructor(right, index=left.index, columns=left.columns)
+ right = left._constructor(
+ right, index=left.index, columns=left.columns, dtype=dtype
+ )
elif right.shape[0] == left.shape[0] and right.shape[1] == 1:
# Broadcast across columns
right = np.broadcast_to(right, left.shape)
- right = left._constructor(right, index=left.index, columns=left.columns)
+ right = left._constructor(
+ right, index=left.index, columns=left.columns, dtype=dtype
+ )
elif right.shape[1] == left.shape[1] and right.shape[0] == 1:
# Broadcast along rows
@@ -406,7 +427,10 @@ def _maybe_align_series_as_frame(frame: DataFrame, series: Series, axis: AxisInt
rvalues = rvalues.reshape(1, -1)
rvalues = np.broadcast_to(rvalues, frame.shape)
- return type(frame)(rvalues, index=frame.index, columns=frame.columns)
+ # pass dtype to avoid doing inference
+ return type(frame)(
+ rvalues, index=frame.index, columns=frame.columns, dtype=rvalues.dtype
+ )
def flex_arith_method_FRAME(op):
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 1e5f565934b50..bf5a530a28b28 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -2995,9 +2995,10 @@ def _construct_result(
assert isinstance(res2, Series)
return (res1, res2)
- # We do not pass dtype to ensure that the Series constructor
- # does inference in the case where `result` has object-dtype.
- out = self._constructor(result, index=self.index)
+ # TODO: result should always be ArrayLike, but this fails for some
+ # JSONArray tests
+ dtype = getattr(result, "dtype", None)
+ out = self._constructor(result, index=self.index, dtype=dtype)
out = out.__finalize__(self)
# Set the result's name after __finalize__ is called because __finalize__
diff --git a/pandas/tests/arithmetic/test_numeric.py b/pandas/tests/arithmetic/test_numeric.py
index f2af85c2e388d..529dd6baa70c0 100644
--- a/pandas/tests/arithmetic/test_numeric.py
+++ b/pandas/tests/arithmetic/test_numeric.py
@@ -1147,9 +1147,6 @@ def test_numarr_with_dtype_add_nan(self, dtype, box_with_array):
ser = tm.box_expected(ser, box)
expected = tm.box_expected(expected, box)
- if box is Index and dtype is object:
- # TODO: avoid this; match behavior with Series
- expected = expected.astype(np.float64)
result = np.nan + ser
tm.assert_equal(result, expected)
@@ -1165,9 +1162,6 @@ def test_numarr_with_dtype_add_int(self, dtype, box_with_array):
ser = tm.box_expected(ser, box)
expected = tm.box_expected(expected, box)
- if box is Index and dtype is object:
- # TODO: avoid this; match behavior with Series
- expected = expected.astype(np.int64)
result = 1 + ser
tm.assert_equal(result, expected)
diff --git a/pandas/tests/arithmetic/test_object.py b/pandas/tests/arithmetic/test_object.py
index e107ff6b65c0f..cba2b9be255fb 100644
--- a/pandas/tests/arithmetic/test_object.py
+++ b/pandas/tests/arithmetic/test_object.py
@@ -187,7 +187,8 @@ def test_series_with_dtype_radd_timedelta(self, dtype):
dtype=dtype,
)
expected = Series(
- [pd.Timedelta("4 days"), pd.Timedelta("5 days"), pd.Timedelta("6 days")]
+ [pd.Timedelta("4 days"), pd.Timedelta("5 days"), pd.Timedelta("6 days")],
+ dtype=dtype,
)
result = pd.Timedelta("3 days") + ser
@@ -227,7 +228,9 @@ def test_mixed_timezone_series_ops_object(self):
name="xxx",
)
assert ser2.dtype == object
- exp = Series([pd.Timedelta("2 days"), pd.Timedelta("4 days")], name="xxx")
+ exp = Series(
+ [pd.Timedelta("2 days"), pd.Timedelta("4 days")], name="xxx", dtype=object
+ )
tm.assert_series_equal(ser2 - ser, exp)
tm.assert_series_equal(ser - ser2, -exp)
@@ -238,7 +241,11 @@ def test_mixed_timezone_series_ops_object(self):
)
assert ser.dtype == object
- exp = Series([pd.Timedelta("01:30:00"), pd.Timedelta("02:30:00")], name="xxx")
+ exp = Series(
+ [pd.Timedelta("01:30:00"), pd.Timedelta("02:30:00")],
+ name="xxx",
+ dtype=object,
+ )
tm.assert_series_equal(ser + pd.Timedelta("00:30:00"), exp)
tm.assert_series_equal(pd.Timedelta("00:30:00") + ser, exp)
diff --git a/pandas/tests/arithmetic/test_timedelta64.py b/pandas/tests/arithmetic/test_timedelta64.py
index 1fb1e96cea94b..f3ea741607692 100644
--- a/pandas/tests/arithmetic/test_timedelta64.py
+++ b/pandas/tests/arithmetic/test_timedelta64.py
@@ -1394,7 +1394,7 @@ def test_td64arr_addsub_anchored_offset_arraylike(self, obox, box_with_array):
# ------------------------------------------------------------------
# Unsorted
- def test_td64arr_add_sub_object_array(self, box_with_array):
+ def test_td64arr_add_sub_object_array(self, box_with_array, using_array_manager):
box = box_with_array
xbox = np.ndarray if box is pd.array else box
@@ -1410,6 +1410,11 @@ def test_td64arr_add_sub_object_array(self, box_with_array):
[Timedelta(days=2), Timedelta(days=4), Timestamp("2000-01-07")]
)
expected = tm.box_expected(expected, xbox)
+ if not using_array_manager:
+ # TODO: avoid mismatched behavior. This occurs bc inference
+ # can happen within TimedeltaArray method, which means results
+ # depend on whether we split blocks.
+ expected = expected.astype(object)
tm.assert_equal(result, expected)
msg = "unsupported operand type|cannot subtract a datelike"
@@ -1422,6 +1427,8 @@ def test_td64arr_add_sub_object_array(self, box_with_array):
expected = pd.Index([Timedelta(0), Timedelta(0), Timestamp("2000-01-01")])
expected = tm.box_expected(expected, xbox)
+ if not using_array_manager:
+ expected = expected.astype(object)
tm.assert_equal(result, expected)
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
Broken off from #49714, which also changes TimedeltaArray inference behavior. This is exclusively Index/Series/DataFrame. | https://api.github.com/repos/pandas-dev/pandas/pulls/49999 | 2022-12-01T21:54:19Z | 2022-12-07T18:15:40Z | 2022-12-07T18:15:40Z | 2022-12-16T23:32:56Z |
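The migration path named in the whatsnew entry above is result.infer_objects(). A minimal sketch of opting back in to inference after an object-dtype operation (what dtype the intermediate result has depends on the pandas version, so only the inferred result is asserted here):

```python
import pandas as pd

ser = pd.Series([1, 2, 3], dtype=object)
result = ser + 1

# Opt back in to type inference on the result of the op
inferred = result.infer_objects()
print(inferred.tolist())  # [2, 3, 4]
```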
Fix styler excel example - take 2 | diff --git a/pandas/io/formats/style_render.py b/pandas/io/formats/style_render.py
index d7cb70b0f5110..c4690730596ff 100644
--- a/pandas/io/formats/style_render.py
+++ b/pandas/io/formats/style_render.py
@@ -1110,8 +1110,7 @@ def format(
>>> df = pd.DataFrame({"A": [1, 0, -1]})
>>> pseudo_css = "number-format: 0§[Red](0)§-§@;"
- >>> df.style.applymap(lambda: pseudo_css).to_excel("formatted_file.xlsx")
- ... # doctest: +SKIP
+ >>> df.style.applymap(lambda v: pseudo_css).to_excel("formatted_file.xlsx")
.. figure:: ../../_static/style/format_excel_css.png
"""
| TypeError: <lambda>() takes 0 positional arguments but 1 was given.
Following https://github.com/pandas-dev/pandas/pull/49971#issuecomment-1334262698. | https://api.github.com/repos/pandas-dev/pandas/pulls/49996 | 2022-12-01T20:11:08Z | 2022-12-02T21:27:27Z | 2022-12-02T21:27:27Z | 2022-12-02T21:27:34Z |
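The one-line fix above works because Styler.applymap calls the given function once per cell value, so the callable must accept one argument; a zero-argument lambda fails exactly as the PR body describes. A pure-Python sketch of the difference:

```python
# applymap-style callables receive the cell value as one argument
pseudo_css = "number-format: 0"
style_func = lambda v: pseudo_css  # the fixed form
zero_arg = lambda: pseudo_css      # the broken form

assert style_func(1.5) == pseudo_css

try:
    zero_arg(1.5)
except TypeError as exc:
    print(exc)  # e.g. "<lambda>() takes 0 positional arguments but 1 was given"
```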
DOC: Add example for read_csv with nullable dtype | diff --git a/doc/source/user_guide/io.rst b/doc/source/user_guide/io.rst
index a073087f6ec8f..f1c212b53a87a 100644
--- a/doc/source/user_guide/io.rst
+++ b/doc/source/user_guide/io.rst
@@ -464,6 +464,17 @@ worth trying.
os.remove("foo.csv")
+Setting ``use_nullable_dtypes=True`` will result in nullable dtypes for every column.
+
+.. ipython:: python
+
+ data = """a,b,c,d,e,f,g,h,i,j
+ 1,2.5,True,a,,,,,12-31-2019,
+ 3,4.5,False,b,6,7.5,True,a,12-31-2019,
+ """
+
+ pd.read_csv(StringIO(data), use_nullable_dtypes=True, parse_dates=["i"])
+
.. _io.categorical:
Specifying categorical dtype
| The first issue in the link: https://github.com/noatamir/pyladies-berlin-sprints/issues/4 | https://api.github.com/repos/pandas-dev/pandas/pulls/49995 | 2022-12-01T19:59:47Z | 2022-12-02T03:27:47Z | 2022-12-02T03:27:47Z | 2022-12-02T03:27:54Z |
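The use_nullable_dtypes keyword shown above is specific to this pandas version; a related sketch of what the nullable extension dtypes look like, using convert_dtypes, which produces the same Int64/Float64/boolean dtypes and pd.NA for missing values:

```python
from io import StringIO

import pandas as pd

data = "a,b,c\n1,2.5,True\n3,,False\n"
df = pd.read_csv(StringIO(data)).convert_dtypes()

print([str(t) for t in df.dtypes])  # ['Int64', 'Float64', 'boolean']
print(df["b"].isna().tolist())      # [False, True]
```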
Add new mamba environment creation command | diff --git a/doc/source/development/contributing_environment.rst b/doc/source/development/contributing_environment.rst
index afa0d0306f1af..3b9075f045e69 100644
--- a/doc/source/development/contributing_environment.rst
+++ b/doc/source/development/contributing_environment.rst
@@ -119,7 +119,7 @@ We'll now kick off a three-step process:
.. code-block:: none
# Create and activate the build environment
- mamba env create
+ mamba env create --file environment.yml
mamba activate pandas-dev
# Build and install pandas
| - [X] closes #49959
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [X] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/49993 | 2022-12-01T19:15:23Z | 2022-12-01T19:26:27Z | 2022-12-01T19:26:27Z | 2022-12-23T21:04:39Z |
TST/CoW: copy-on-write tests for add_prefix and add_suffix | diff --git a/pandas/tests/copy_view/test_methods.py b/pandas/tests/copy_view/test_methods.py
index 8015eb93988c9..f5c7b31e59bc5 100644
--- a/pandas/tests/copy_view/test_methods.py
+++ b/pandas/tests/copy_view/test_methods.py
@@ -253,6 +253,45 @@ def test_set_index(using_copy_on_write):
tm.assert_frame_equal(df, df_orig)
+def test_add_prefix(using_copy_on_write):
+ # GH 49473
+ df = DataFrame({"a": [1, 2, 3], "b": [4, 5, 6], "c": [0.1, 0.2, 0.3]})
+ df_orig = df.copy()
+ df2 = df.add_prefix("CoW_")
+
+ if using_copy_on_write:
+ assert np.shares_memory(get_array(df2, "CoW_a"), get_array(df, "a"))
+ df2.iloc[0, 0] = 0
+
+ assert not np.shares_memory(get_array(df2, "CoW_a"), get_array(df, "a"))
+
+ if using_copy_on_write:
+ assert np.shares_memory(get_array(df2, "CoW_c"), get_array(df, "c"))
+ expected = DataFrame(
+ {"CoW_a": [0, 2, 3], "CoW_b": [4, 5, 6], "CoW_c": [0.1, 0.2, 0.3]}
+ )
+ tm.assert_frame_equal(df2, expected)
+ tm.assert_frame_equal(df, df_orig)
+
+
+def test_add_suffix(using_copy_on_write):
+ # GH 49473
+ df = DataFrame({"a": [1, 2, 3], "b": [4, 5, 6], "c": [0.1, 0.2, 0.3]})
+ df_orig = df.copy()
+ df2 = df.add_suffix("_CoW")
+ if using_copy_on_write:
+ assert np.shares_memory(get_array(df2, "a_CoW"), get_array(df, "a"))
+ df2.iloc[0, 0] = 0
+ assert not np.shares_memory(get_array(df2, "a_CoW"), get_array(df, "a"))
+ if using_copy_on_write:
+ assert np.shares_memory(get_array(df2, "c_CoW"), get_array(df, "c"))
+ expected = DataFrame(
+ {"a_CoW": [0, 2, 3], "b_CoW": [4, 5, 6], "c_CoW": [0.1, 0.2, 0.3]}
+ )
+ tm.assert_frame_equal(df2, expected)
+ tm.assert_frame_equal(df, df_orig)
+
+
@pytest.mark.parametrize(
"method",
[
| [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
This PR is related to https://github.com/pandas-dev/pandas/issues/49473
`add_suffix` and `add_prefix` already had Copy-on-Write implemented; this PR adds test cases that explicitly test the Copy-on-Write feature. | https://api.github.com/repos/pandas-dev/pandas/pulls/49991 | 2022-12-01T17:46:52Z | 2022-12-03T18:49:26Z | 2022-12-03T18:49:26Z | 2022-12-03T18:49:32Z |
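The tests above pin down that add_prefix/add_suffix only relabel the axis and that writing to the result never leaks into the original frame. A sketch of the user-visible contract (this holds with or without Copy-on-Write, since the result is an independent object; the internal np.shares_memory checks are what distinguish the two modes):

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2, 3], "b": [4, 5, 6]})
renamed = df.add_prefix("CoW_")

# Only the labels change...
print(list(renamed.columns))  # ['CoW_a', 'CoW_b']

# ...and writing to the result does not modify the original
renamed.iloc[0, 0] = 0
print(df.iloc[0, 0])  # 1
```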
DOC: fix RT02 errors in pd.io.formats.style #49968 | diff --git a/pandas/io/formats/style.py b/pandas/io/formats/style.py
index 4d685bd8e8858..6c62c4efde6bb 100644
--- a/pandas/io/formats/style.py
+++ b/pandas/io/formats/style.py
@@ -290,7 +290,7 @@ def concat(self, other: Styler) -> Styler:
Returns
-------
- self : Styler
+ Styler
Notes
-----
@@ -415,7 +415,7 @@ def set_tooltips(
Returns
-------
- self : Styler
+ Styler
Notes
-----
@@ -1424,7 +1424,7 @@ def set_td_classes(self, classes: DataFrame) -> Styler:
Returns
-------
- self : Styler
+ Styler
See Also
--------
@@ -1727,7 +1727,7 @@ def apply(
Returns
-------
- self : Styler
+ Styler
See Also
--------
@@ -1844,7 +1844,7 @@ def apply_index(
Returns
-------
- self : Styler
+ Styler
See Also
--------
@@ -1948,7 +1948,7 @@ def applymap(
Returns
-------
- self : Styler
+ Styler
See Also
--------
@@ -2003,7 +2003,7 @@ def set_table_attributes(self, attributes: str) -> Styler:
Returns
-------
- self : Styler
+ Styler
See Also
--------
@@ -2105,7 +2105,7 @@ def use(self, styles: dict[str, Any]) -> Styler:
Returns
-------
- self : Styler
+ Styler
See Also
--------
@@ -2156,7 +2156,7 @@ def set_uuid(self, uuid: str) -> Styler:
Returns
-------
- self : Styler
+ Styler
Notes
-----
@@ -2180,7 +2180,7 @@ def set_caption(self, caption: str | tuple) -> Styler:
Returns
-------
- self : Styler
+ Styler
"""
msg = "`caption` must be either a string or 2-tuple of strings."
if isinstance(caption, tuple):
@@ -2218,7 +2218,7 @@ def set_sticky(
Returns
-------
- self : Styler
+ Styler
Notes
-----
@@ -2379,7 +2379,7 @@ def set_table_styles(
Returns
-------
- self : Styler
+ Styler
See Also
--------
@@ -2504,7 +2504,7 @@ def hide(
Returns
-------
- self : Styler
+ Styler
Notes
-----
@@ -2748,7 +2748,7 @@ def background_gradient(
Returns
-------
- self : Styler
+ Styler
See Also
--------
@@ -2881,7 +2881,7 @@ def set_properties(self, subset: Subset | None = None, **kwargs) -> Styler:
Returns
-------
- self : Styler
+ Styler
Notes
-----
@@ -2978,7 +2978,7 @@ def bar( # pylint: disable=disallowed-name
Returns
-------
- self : Styler
+ Styler
Notes
-----
@@ -3053,7 +3053,7 @@ def highlight_null(
Returns
-------
- self : Styler
+ Styler
See Also
--------
@@ -3099,7 +3099,7 @@ def highlight_max(
Returns
-------
- self : Styler
+ Styler
See Also
--------
@@ -3147,7 +3147,7 @@ def highlight_min(
Returns
-------
- self : Styler
+ Styler
See Also
--------
@@ -3203,7 +3203,7 @@ def highlight_between(
Returns
-------
- self : Styler
+ Styler
See Also
--------
@@ -3315,7 +3315,7 @@ def highlight_quantile(
Returns
-------
- self : Styler
+ Styler
See Also
--------
| Simplified return type for several Styler methods in pd.io.formats.style.
- [x] xref #49968
- [ ] tests added and passed
- [x] passes pre-commit code checks
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/49983 | 2022-12-01T06:24:51Z | 2022-12-05T19:35:05Z | 2022-12-05T19:35:05Z | 2022-12-06T07:59:12Z |
Pip use psycopg2 wheel | diff --git a/requirements-dev.txt b/requirements-dev.txt
index eac825493845c..19ed830eca07e 100644
--- a/requirements-dev.txt
+++ b/requirements-dev.txt
@@ -30,7 +30,7 @@ numexpr>=2.8.0
openpyxl
odfpy
pandas-gbq
-psycopg2
+psycopg2-binary
pyarrow
pymysql
pyreadstat
diff --git a/scripts/generate_pip_deps_from_conda.py b/scripts/generate_pip_deps_from_conda.py
index 2ab45b32dee93..f25ac9a24b98b 100755
--- a/scripts/generate_pip_deps_from_conda.py
+++ b/scripts/generate_pip_deps_from_conda.py
@@ -22,7 +22,11 @@
EXCLUDE = {"python", "c-compiler", "cxx-compiler"}
REMAP_VERSION = {"tzdata": "2022.1"}
-RENAME = {"pytables": "tables", "geopandas-base": "geopandas"}
+RENAME = {
+ "pytables": "tables",
+ "geopandas-base": "geopandas",
+ "psycopg2": "psycopg2-binary",
+}
def conda_package_to_pip(package: str):
For pip users, this makes the installation a bit easier | https://api.github.com/repos/pandas-dev/pandas/pulls/49982 | 2022-12-01T04:00:09Z | 2022-12-01T18:43:58Z | 2022-12-01T18:43:58Z | 2023-04-12T20:17:39Z |
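The RENAME dict in the script change above maps conda package names to their pip equivalents. A simplified stand-in for the script's translation helper (the name conda_name_to_pip is hypothetical; the real conda_package_to_pip also handles version pins and exclusions):

```python
# Conda -> pip package name translations, as in the diff above
RENAME = {
    "pytables": "tables",
    "geopandas-base": "geopandas",
    "psycopg2": "psycopg2-binary",
}

def conda_name_to_pip(package: str) -> str:
    # Names not in the mapping pass through unchanged
    return RENAME.get(package, package)

print(conda_name_to_pip("psycopg2"))  # psycopg2-binary
print(conda_name_to_pip("numpy"))     # numpy
```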
Streamline docker usage | diff --git a/.devcontainer.json b/.devcontainer.json
index 8bea96aea29c1..7c5d009260c64 100644
--- a/.devcontainer.json
+++ b/.devcontainer.json
@@ -9,8 +9,7 @@
// You can edit these settings after create using File > Preferences > Settings > Remote.
"settings": {
"terminal.integrated.shell.linux": "/bin/bash",
- "python.condaPath": "/opt/conda/bin/conda",
- "python.pythonPath": "/opt/conda/bin/python",
+ "python.pythonPath": "/usr/local/bin/python",
"python.formatting.provider": "black",
"python.linting.enabled": true,
"python.linting.flake8Enabled": true,
diff --git a/.github/workflows/code-checks.yml b/.github/workflows/code-checks.yml
index 540e9481befd6..98770854f53dd 100644
--- a/.github/workflows/code-checks.yml
+++ b/.github/workflows/code-checks.yml
@@ -158,7 +158,7 @@ jobs:
run: docker build --pull --no-cache --tag pandas-dev-env .
- name: Show environment
- run: docker run -w /home/pandas pandas-dev-env mamba run -n pandas-dev python -c "import pandas as pd; print(pd.show_versions())"
+ run: docker run --rm pandas-dev-env python -c "import pandas as pd; print(pd.show_versions())"
requirements-dev-text-installable:
name: Test install requirements-dev.txt
diff --git a/Dockerfile b/Dockerfile
index 9de8695b24274..c987461e8cbb8 100644
--- a/Dockerfile
+++ b/Dockerfile
@@ -1,42 +1,13 @@
-FROM quay.io/condaforge/mambaforge
+FROM python:3.10.8
+WORKDIR /home/pandas
-# if you forked pandas, you can pass in your own GitHub username to use your fork
-# i.e. gh_username=myname
-ARG gh_username=pandas-dev
-ARG pandas_home="/home/pandas"
+RUN apt-get update && apt-get -y upgrade
+RUN apt-get install -y build-essential
-# Avoid warnings by switching to noninteractive
-ENV DEBIAN_FRONTEND=noninteractive
+# hdf5 needed for pytables installation
+RUN apt-get install -y libhdf5-dev
-# Configure apt and install packages
-RUN apt-get update \
- && apt-get -y install --no-install-recommends apt-utils git tzdata dialog 2>&1 \
- #
- # Configure timezone (fix for tests which try to read from "/etc/localtime")
- && ln -fs /usr/share/zoneinfo/Etc/UTC /etc/localtime \
- && dpkg-reconfigure -f noninteractive tzdata \
- #
- # cleanup
- && apt-get autoremove -y \
- && apt-get clean -y \
- && rm -rf /var/lib/apt/lists/*
-
-# Switch back to dialog for any ad-hoc use of apt-get
-ENV DEBIAN_FRONTEND=dialog
-
-# Clone pandas repo
-RUN mkdir "$pandas_home" \
- && git clone "https://github.com/$gh_username/pandas.git" "$pandas_home" \
- && cd "$pandas_home" \
- && git remote add upstream "https://github.com/pandas-dev/pandas.git" \
- && git pull upstream main
-
-# Set up environment
-RUN mamba env create -f "$pandas_home/environment.yml"
-
-# Build C extensions and pandas
-SHELL ["mamba", "run", "--no-capture-output", "-n", "pandas-dev", "/bin/bash", "-c"]
-RUN cd "$pandas_home" \
- && export \
- && python setup.py build_ext -j 4 \
- && python -m pip install --no-build-isolation -e .
+RUN python -m pip install --upgrade pip
+RUN python -m pip install --use-deprecated=legacy-resolver \
+ -r https://raw.githubusercontent.com/pandas-dev/pandas/main/requirements-dev.txt
+CMD ["/bin/bash"]
diff --git a/doc/source/development/contributing_environment.rst b/doc/source/development/contributing_environment.rst
index 3b9075f045e69..69f7f054d865d 100644
--- a/doc/source/development/contributing_environment.rst
+++ b/doc/source/development/contributing_environment.rst
@@ -228,34 +228,22 @@ with a full pandas development environment.
Build the Docker image::
- # Build the image pandas-yourname-env
- docker build --tag pandas-yourname-env .
- # Or build the image by passing your GitHub username to use your own fork
- docker build --build-arg gh_username=yourname --tag pandas-yourname-env .
+ # Build the image
+ docker build -t pandas-dev .
Run Container::
# Run a container and bind your local repo to the container
- docker run -it -w /home/pandas --rm -v path-to-local-pandas-repo:/home/pandas pandas-yourname-env
+ # This command assumes you are running from your local repo
+ # but if not alter ${PWD} to match your local repo path
+ docker run -it --rm -v ${PWD}:/home/pandas pandas-dev
-Then a ``pandas-dev`` virtual environment will be available with all the development dependencies.
+When inside the running container, you can build and install pandas the same way as the other methods.
-.. code-block:: shell
-
- root@... :/home/pandas# conda env list
- # conda environments:
- #
- base * /opt/conda
- pandas-dev /opt/conda/envs/pandas-dev
-
-.. note::
- If you bind your local repo for the first time, you have to build the C extensions afterwards.
- Run the following command inside the container::
-
- python setup.py build_ext -j 4
+.. code-block:: bash
- You need to rebuild the C extensions anytime the Cython code in ``pandas/_libs`` changes.
- This most frequently occurs when changing or merging branches.
+ python setup.py build_ext -j 4
+ python -m pip install -e . --no-build-isolation --no-use-pep517
*Even easier, you can integrate Docker with the following IDEs:*
| Took a look at this setup today and found a few things that could be improved to make it easier for new contributors. Some quick comparisons:
- Image size: current 6.1 GB vs new 2.75 GB
- Image build time: current somewhere in the 30-45 minute range vs new at roughly 2.5 minutes
- Current image blurs the lines of responsibility between a Docker image and a container
- Current image adds another layer of virtualization (mamba) that is arguably unnecessary with Docker
The remaining bottleneck with image creation is https://github.com/pandas-dev/pandas/issues/48828 | https://api.github.com/repos/pandas-dev/pandas/pulls/49981 | 2022-12-01T03:46:58Z | 2022-12-02T10:03:18Z | 2022-12-02T10:03:18Z | 2022-12-23T21:19:01Z |
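The simplified workflow this PR documents can be sketched end to end as follows. This is only a consolidation of the commands already shown in the diff above (the `pandas-dev` image tag, the `${PWD}:/home/pandas` bind mount, and the build/install commands all come from the updated docs); it assumes you run it from the root of a local pandas checkout with Docker installed:

```shell
# Build the slim development image; the new Dockerfile just pip-installs
# requirements-dev.txt instead of cloning the repo and creating a mamba env
docker build -t pandas-dev .

# Run a throwaway container, bind-mounting the current checkout at /home/pandas
# so the source on your host is what you edit and build inside the container
docker run -it --rm -v ${PWD}:/home/pandas pandas-dev

# Inside the container: compile the C extensions and install pandas editable.
# The extensions must be rebuilt whenever Cython code in pandas/_libs changes.
python setup.py build_ext -j 4
python -m pip install -e . --no-build-isolation --no-use-pep517
```

Keeping the repo on the host (rather than baked into the image, as before) is what separates the image's responsibility (dependencies) from the container's (your working tree), which is the "lines of responsibility" point above.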
CLN: assorted | diff --git a/pandas/_libs/tslibs/conversion.pyx b/pandas/_libs/tslibs/conversion.pyx
index 17facf9e16f4b..8c27170f65353 100644
--- a/pandas/_libs/tslibs/conversion.pyx
+++ b/pandas/_libs/tslibs/conversion.pyx
@@ -666,20 +666,20 @@ cpdef inline datetime localize_pydatetime(datetime dt, tzinfo tz):
cdef tzinfo convert_timezone(
- tzinfo tz_in,
- tzinfo tz_out,
- bint found_naive,
- bint found_tz,
- bint utc_convert,
+ tzinfo tz_in,
+ tzinfo tz_out,
+ bint found_naive,
+ bint found_tz,
+ bint utc_convert,
):
"""
Validate that ``tz_in`` can be converted/localized to ``tz_out``.
Parameters
----------
- tz_in : tzinfo
+ tz_in : tzinfo or None
Timezone info of element being processed.
- tz_out : tzinfo
+ tz_out : tzinfo or None
Timezone info of output.
found_naive : bool
Whether a timezone-naive element has been found so far.
diff --git a/pandas/_testing/asserters.py b/pandas/_testing/asserters.py
index 0c99ae4b8e03d..5d7daec65c7d1 100644
--- a/pandas/_testing/asserters.py
+++ b/pandas/_testing/asserters.py
@@ -531,7 +531,7 @@ def assert_interval_array_equal(
def assert_period_array_equal(left, right, obj: str = "PeriodArray") -> None:
_check_isinstance(left, right, PeriodArray)
- assert_numpy_array_equal(left._data, right._data, obj=f"{obj}._data")
+ assert_numpy_array_equal(left._ndarray, right._ndarray, obj=f"{obj}._ndarray")
assert_attr_equal("freq", left, right, obj=obj)
@@ -541,7 +541,7 @@ def assert_datetime_array_equal(
__tracebackhide__ = True
_check_isinstance(left, right, DatetimeArray)
- assert_numpy_array_equal(left._data, right._data, obj=f"{obj}._data")
+ assert_numpy_array_equal(left._ndarray, right._ndarray, obj=f"{obj}._ndarray")
if check_freq:
assert_attr_equal("freq", left, right, obj=obj)
assert_attr_equal("tz", left, right, obj=obj)
@@ -552,7 +552,7 @@ def assert_timedelta_array_equal(
) -> None:
__tracebackhide__ = True
_check_isinstance(left, right, TimedeltaArray)
- assert_numpy_array_equal(left._data, right._data, obj=f"{obj}._data")
+ assert_numpy_array_equal(left._ndarray, right._ndarray, obj=f"{obj}._ndarray")
if check_freq:
assert_attr_equal("freq", left, right, obj=obj)
diff --git a/pandas/core/arrays/base.py b/pandas/core/arrays/base.py
index a3c201b402b0f..f11d031b2f622 100644
--- a/pandas/core/arrays/base.py
+++ b/pandas/core/arrays/base.py
@@ -1646,13 +1646,11 @@ def __array_ufunc__(self, ufunc: np.ufunc, method: str, *inputs, **kwargs):
class ExtensionArraySupportsAnyAll(ExtensionArray):
- def any(self, *, skipna: bool = True) -> bool: # type: ignore[empty-body]
- # error: Missing return statement
- pass
+ def any(self, *, skipna: bool = True) -> bool:
+ raise AbstractMethodError(self)
- def all(self, *, skipna: bool = True) -> bool: # type: ignore[empty-body]
- # error: Missing return statement
- pass
+ def all(self, *, skipna: bool = True) -> bool:
+ raise AbstractMethodError(self)
class ExtensionOpsMixin:
diff --git a/pandas/core/arrays/categorical.py b/pandas/core/arrays/categorical.py
index a9af210e08741..bf7e28d5a4b98 100644
--- a/pandas/core/arrays/categorical.py
+++ b/pandas/core/arrays/categorical.py
@@ -11,7 +11,6 @@
Literal,
Sequence,
TypeVar,
- Union,
cast,
overload,
)
@@ -511,7 +510,7 @@ def astype(self, dtype: AstypeArg, copy: bool = True) -> ArrayLike:
result = self.copy() if copy else self
elif is_categorical_dtype(dtype):
- dtype = cast("Union[str, CategoricalDtype]", dtype)
+ dtype = cast(CategoricalDtype, dtype)
# GH 10696/18593/18630
dtype = self.dtype.update_dtype(dtype)
diff --git a/pandas/core/arrays/datetimelike.py b/pandas/core/arrays/datetimelike.py
index be20d825b0c80..4f01c4892db6c 100644
--- a/pandas/core/arrays/datetimelike.py
+++ b/pandas/core/arrays/datetimelike.py
@@ -257,13 +257,6 @@ def _check_compatible_with(self, other: DTScalarOrNaT) -> None:
"""
raise AbstractMethodError(self)
- # ------------------------------------------------------------------
- # NDArrayBackedExtensionArray compat
-
- @cache_readonly
- def _data(self) -> np.ndarray:
- return self._ndarray
-
# ------------------------------------------------------------------
def _box_func(self, x):
diff --git a/pandas/core/dtypes/cast.py b/pandas/core/dtypes/cast.py
index 60488a8ef9715..704897722e938 100644
--- a/pandas/core/dtypes/cast.py
+++ b/pandas/core/dtypes/cast.py
@@ -1195,9 +1195,7 @@ def maybe_cast_to_datetime(
# TODO: _from_sequence would raise ValueError in cases where
# _ensure_nanosecond_dtype raises TypeError
- # Incompatible types in assignment (expression has type "Union[dtype[Any],
- # ExtensionDtype]", variable has type "Optional[dtype[Any]]")
- dtype = _ensure_nanosecond_dtype(dtype) # type: ignore[assignment]
+ _ensure_nanosecond_dtype(dtype)
if is_timedelta64_dtype(dtype):
res = TimedeltaArray._from_sequence(value, dtype=dtype)
@@ -1235,12 +1233,11 @@ def sanitize_to_nanoseconds(values: np.ndarray, copy: bool = False) -> np.ndarra
return values
-def _ensure_nanosecond_dtype(dtype: DtypeObj) -> DtypeObj:
+def _ensure_nanosecond_dtype(dtype: DtypeObj) -> None:
"""
Convert dtypes with granularity less than nanosecond to nanosecond
>>> _ensure_nanosecond_dtype(np.dtype("M8[us]"))
- dtype('<M8[us]')
>>> _ensure_nanosecond_dtype(np.dtype("M8[D]"))
Traceback (most recent call last):
@@ -1277,7 +1274,6 @@ def _ensure_nanosecond_dtype(dtype: DtypeObj) -> DtypeObj:
f"dtype={dtype} is not supported. Supported resolutions are 's', "
"'ms', 'us', and 'ns'"
)
- return dtype
# TODO: other value-dependent functions to standardize here include
diff --git a/pandas/core/dtypes/missing.py b/pandas/core/dtypes/missing.py
index a225d2cd12eac..000b5ebbdd2f7 100644
--- a/pandas/core/dtypes/missing.py
+++ b/pandas/core/dtypes/missing.py
@@ -18,7 +18,6 @@
import pandas._libs.missing as libmissing
from pandas._libs.tslibs import (
NaT,
- Period,
iNaT,
)
@@ -749,10 +748,8 @@ def isna_all(arr: ArrayLike) -> bool:
if dtype.kind == "f" and isinstance(dtype, np.dtype):
checker = nan_checker
- elif (
- (isinstance(dtype, np.dtype) and dtype.kind in ["m", "M"])
- or isinstance(dtype, DatetimeTZDtype)
- or dtype.type is Period
+ elif (isinstance(dtype, np.dtype) and dtype.kind in ["m", "M"]) or isinstance(
+ dtype, (DatetimeTZDtype, PeriodDtype)
):
# error: Incompatible types in assignment (expression has type
# "Callable[[Any], Any]", variable has type "ufunc")
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 0144aefedaa5f..218c0e33af823 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -7475,7 +7475,7 @@ def _cmp_method(self, other, op):
return self._construct_result(new_data)
def _arith_method(self, other, op):
- if ops.should_reindex_frame_op(self, other, op, 1, 1, None, None):
+ if ops.should_reindex_frame_op(self, other, op, 1, None, None):
return ops.frame_arith_method_with_reindex(self, other, op)
axis: Literal[1] = 1 # only relevant for Series other case
diff --git a/pandas/core/groupby/groupby.py b/pandas/core/groupby/groupby.py
index 497e0ef724373..dba36066c7952 100644
--- a/pandas/core/groupby/groupby.py
+++ b/pandas/core/groupby/groupby.py
@@ -1647,8 +1647,6 @@ def array_func(values: ArrayLike) -> ArrayLike:
return result
- # TypeError -> we may have an exception in trying to aggregate
- # continue and exclude the block
new_mgr = data.grouped_reduce(array_func)
res = self._wrap_agged_manager(new_mgr)
diff --git a/pandas/core/indexes/multi.py b/pandas/core/indexes/multi.py
index f0b0ec23dba1a..9cbf3b6167305 100644
--- a/pandas/core/indexes/multi.py
+++ b/pandas/core/indexes/multi.py
@@ -841,7 +841,9 @@ def _set_levels(
self._reset_cache()
- def set_levels(self, levels, *, level=None, verify_integrity: bool = True):
+ def set_levels(
+ self, levels, *, level=None, verify_integrity: bool = True
+ ) -> MultiIndex:
"""
Set new levels on MultiIndex. Defaults to returning new index.
@@ -856,8 +858,7 @@ def set_levels(self, levels, *, level=None, verify_integrity: bool = True):
Returns
-------
- new index (of same type and class...etc) or None
- The same type as the caller or None if ``inplace=True``.
+ MultiIndex
Examples
--------
diff --git a/pandas/core/internals/array_manager.py b/pandas/core/internals/array_manager.py
index feca755fd43db..91216a9618365 100644
--- a/pandas/core/internals/array_manager.py
+++ b/pandas/core/internals/array_manager.py
@@ -758,9 +758,9 @@ def fast_xs(self, loc: int) -> SingleArrayManager:
result = dtype.construct_array_type()._from_sequence(values, dtype=dtype)
# for datetime64/timedelta64, the np.ndarray constructor cannot handle pd.NaT
elif is_datetime64_ns_dtype(dtype):
- result = DatetimeArray._from_sequence(values, dtype=dtype)._data
+ result = DatetimeArray._from_sequence(values, dtype=dtype)._ndarray
elif is_timedelta64_ns_dtype(dtype):
- result = TimedeltaArray._from_sequence(values, dtype=dtype)._data
+ result = TimedeltaArray._from_sequence(values, dtype=dtype)._ndarray
else:
result = np.array(values, dtype=dtype)
return SingleArrayManager([result], [self._axes[1]])
diff --git a/pandas/core/internals/blocks.py b/pandas/core/internals/blocks.py
index f1856fce83160..c8a6750e165ea 100644
--- a/pandas/core/internals/blocks.py
+++ b/pandas/core/internals/blocks.py
@@ -2291,6 +2291,6 @@ def external_values(values: ArrayLike) -> ArrayLike:
# NB: for datetime64tz this is different from np.asarray(values), since
# that returns an object-dtype ndarray of Timestamps.
# Avoid raising in .astype in casting from dt64tz to dt64
- return values._data
+ return values._ndarray
else:
return values
diff --git a/pandas/core/ops/__init__.py b/pandas/core/ops/__init__.py
index bfedaca093a8e..76d5fc8128a8f 100644
--- a/pandas/core/ops/__init__.py
+++ b/pandas/core/ops/__init__.py
@@ -6,7 +6,10 @@
from __future__ import annotations
import operator
-from typing import TYPE_CHECKING
+from typing import (
+ TYPE_CHECKING,
+ cast,
+)
import numpy as np
@@ -312,7 +315,7 @@ def to_series(right):
def should_reindex_frame_op(
- left: DataFrame, right, op, axis, default_axis, fill_value, level
+ left: DataFrame, right, op, axis: int, fill_value, level
) -> bool:
"""
Check if this is an operation between DataFrames that will need to reindex.
@@ -326,7 +329,7 @@ def should_reindex_frame_op(
if not isinstance(right, ABCDataFrame):
return False
- if fill_value is None and level is None and axis is default_axis:
+ if fill_value is None and level is None and axis == 1:
# TODO: any other cases we should handle here?
# Intersection is always unique so we have to check the unique columns
@@ -411,17 +414,16 @@ def _maybe_align_series_as_frame(frame: DataFrame, series: Series, axis: AxisInt
def flex_arith_method_FRAME(op):
op_name = op.__name__.strip("_")
- default_axis = "columns"
na_op = get_array_op(op)
doc = make_flex_doc(op_name, "dataframe")
@Appender(doc)
- def f(self, other, axis=default_axis, level=None, fill_value=None):
+ def f(self, other, axis: Axis = "columns", level=None, fill_value=None):
+ axis = self._get_axis_number(axis) if axis is not None else 1
+ axis = cast(int, axis)
- if should_reindex_frame_op(
- self, other, op, axis, default_axis, fill_value, level
- ):
+ if should_reindex_frame_op(self, other, op, axis, fill_value, level):
return frame_arith_method_with_reindex(self, other, op)
if isinstance(other, ABCSeries) and fill_value is not None:
@@ -429,8 +431,6 @@ def f(self, other, axis=default_axis, level=None, fill_value=None):
# through the DataFrame path
raise NotImplementedError(f"fill_value {fill_value} not supported.")
- axis = self._get_axis_number(axis) if axis is not None else 1
-
other = maybe_prepare_scalar_for_op(other, self.shape)
self, other = align_method_FRAME(self, other, axis, flex=True, level=level)
@@ -456,14 +456,13 @@ def f(self, other, axis=default_axis, level=None, fill_value=None):
def flex_comp_method_FRAME(op):
op_name = op.__name__.strip("_")
- default_axis = "columns" # because we are "flex"
doc = _flex_comp_doc_FRAME.format(
op_name=op_name, desc=_op_descriptions[op_name]["desc"]
)
@Appender(doc)
- def f(self, other, axis=default_axis, level=None):
+ def f(self, other, axis: Axis = "columns", level=None):
axis = self._get_axis_number(axis) if axis is not None else 1
self, other = align_method_FRAME(self, other, axis, flex=True, level=level)
diff --git a/pandas/tests/apply/test_invalid_arg.py b/pandas/tests/apply/test_invalid_arg.py
index 6ed962c8f68e6..252eff8f9a823 100644
--- a/pandas/tests/apply/test_invalid_arg.py
+++ b/pandas/tests/apply/test_invalid_arg.py
@@ -355,7 +355,6 @@ def test_transform_wont_agg_series(string_series, func):
@pytest.mark.parametrize(
"op_wrapper", [lambda x: x, lambda x: [x], lambda x: {"A": x}, lambda x: {"A": [x]}]
)
-@pytest.mark.filterwarnings("ignore:.*Select only valid:FutureWarning")
def test_transform_reducer_raises(all_reductions, frame_or_series, op_wrapper):
# GH 35964
op = op_wrapper(all_reductions)
diff --git a/pandas/tests/arithmetic/test_datetime64.py b/pandas/tests/arithmetic/test_datetime64.py
index b4f1c5404d178..c35962d7d2e96 100644
--- a/pandas/tests/arithmetic/test_datetime64.py
+++ b/pandas/tests/arithmetic/test_datetime64.py
@@ -2437,7 +2437,7 @@ def test_dt64arr_addsub_object_dtype_2d():
assert isinstance(result, DatetimeArray)
assert result.freq is None
- tm.assert_numpy_array_equal(result._data, expected._data)
+ tm.assert_numpy_array_equal(result._ndarray, expected._ndarray)
with tm.assert_produces_warning(PerformanceWarning):
# Case where we expect to get a TimedeltaArray back
diff --git a/pandas/tests/arrays/datetimes/test_constructors.py b/pandas/tests/arrays/datetimes/test_constructors.py
index 992d047b1afef..6670d07a4c075 100644
--- a/pandas/tests/arrays/datetimes/test_constructors.py
+++ b/pandas/tests/arrays/datetimes/test_constructors.py
@@ -122,10 +122,10 @@ def test_freq_infer_raises(self):
def test_copy(self):
data = np.array([1, 2, 3], dtype="M8[ns]")
arr = DatetimeArray(data, copy=False)
- assert arr._data is data
+ assert arr._ndarray is data
arr = DatetimeArray(data, copy=True)
- assert arr._data is not data
+ assert arr._ndarray is not data
class TestSequenceToDT64NS:
diff --git a/pandas/tests/arrays/period/test_astype.py b/pandas/tests/arrays/period/test_astype.py
index e9245c9ca786b..475a85ca4b644 100644
--- a/pandas/tests/arrays/period/test_astype.py
+++ b/pandas/tests/arrays/period/test_astype.py
@@ -42,12 +42,12 @@ def test_astype_copies():
result = arr.astype(np.int64, copy=False)
# Add the `.base`, since we now use `.asi8` which returns a view.
- # We could maybe override it in PeriodArray to return ._data directly.
- assert result.base is arr._data
+ # We could maybe override it in PeriodArray to return ._ndarray directly.
+ assert result.base is arr._ndarray
result = arr.astype(np.int64, copy=True)
- assert result is not arr._data
- tm.assert_numpy_array_equal(result, arr._data.view("i8"))
+ assert result is not arr._ndarray
+ tm.assert_numpy_array_equal(result, arr._ndarray.view("i8"))
def test_astype_categorical():
diff --git a/pandas/tests/arrays/test_datetimelike.py b/pandas/tests/arrays/test_datetimelike.py
index 3f310d0efa2ca..fbd6f362bd9e7 100644
--- a/pandas/tests/arrays/test_datetimelike.py
+++ b/pandas/tests/arrays/test_datetimelike.py
@@ -220,7 +220,7 @@ def test_unbox_scalar(self):
data = np.arange(10, dtype="i8") * 24 * 3600 * 10**9
arr = self.array_cls(data, freq="D")
result = arr._unbox_scalar(arr[0])
- expected = arr._data.dtype.type
+ expected = arr._ndarray.dtype.type
assert isinstance(result, expected)
result = arr._unbox_scalar(NaT)
@@ -350,13 +350,13 @@ def test_getitem_near_implementation_bounds(self):
def test_getitem_2d(self, arr1d):
# 2d slicing on a 1D array
- expected = type(arr1d)(arr1d._data[:, np.newaxis], dtype=arr1d.dtype)
+ expected = type(arr1d)(arr1d._ndarray[:, np.newaxis], dtype=arr1d.dtype)
result = arr1d[:, np.newaxis]
tm.assert_equal(result, expected)
# Lookup on a 2D array
arr2d = expected
- expected = type(arr2d)(arr2d._data[:3, 0], dtype=arr2d.dtype)
+ expected = type(arr2d)(arr2d._ndarray[:3, 0], dtype=arr2d.dtype)
result = arr2d[:3, 0]
tm.assert_equal(result, expected)
@@ -366,7 +366,7 @@ def test_getitem_2d(self, arr1d):
assert result == expected
def test_iter_2d(self, arr1d):
- data2d = arr1d._data[:3, np.newaxis]
+ data2d = arr1d._ndarray[:3, np.newaxis]
arr2d = type(arr1d)._simple_new(data2d, dtype=arr1d.dtype)
result = list(arr2d)
assert len(result) == 3
@@ -376,7 +376,7 @@ def test_iter_2d(self, arr1d):
assert x.dtype == arr1d.dtype
def test_repr_2d(self, arr1d):
- data2d = arr1d._data[:3, np.newaxis]
+ data2d = arr1d._ndarray[:3, np.newaxis]
arr2d = type(arr1d)._simple_new(data2d, dtype=arr1d.dtype)
result = repr(arr2d)
@@ -632,7 +632,7 @@ def test_array_interface(self, datetime_index):
# default asarray gives the same underlying data (for tz naive)
result = np.asarray(arr)
- expected = arr._data
+ expected = arr._ndarray
assert result is expected
tm.assert_numpy_array_equal(result, expected)
result = np.array(arr, copy=False)
@@ -641,7 +641,7 @@ def test_array_interface(self, datetime_index):
# specifying M8[ns] gives the same result as default
result = np.asarray(arr, dtype="datetime64[ns]")
- expected = arr._data
+ expected = arr._ndarray
assert result is expected
tm.assert_numpy_array_equal(result, expected)
result = np.array(arr, dtype="datetime64[ns]", copy=False)
@@ -720,13 +720,13 @@ def test_array_i8_dtype(self, arr1d):
assert result.base is None
def test_from_array_keeps_base(self):
- # Ensure that DatetimeArray._data.base isn't lost.
+ # Ensure that DatetimeArray._ndarray.base isn't lost.
arr = np.array(["2000-01-01", "2000-01-02"], dtype="M8[ns]")
dta = DatetimeArray(arr)
- assert dta._data is arr
+ assert dta._ndarray is arr
dta = DatetimeArray(arr[:0])
- assert dta._data.base is arr
+ assert dta._ndarray.base is arr
def test_from_dti(self, arr1d):
arr = arr1d
@@ -941,7 +941,7 @@ def test_array_interface(self, timedelta_index):
# default asarray gives the same underlying data
result = np.asarray(arr)
- expected = arr._data
+ expected = arr._ndarray
assert result is expected
tm.assert_numpy_array_equal(result, expected)
result = np.array(arr, copy=False)
@@ -950,7 +950,7 @@ def test_array_interface(self, timedelta_index):
# specifying m8[ns] gives the same result as default
result = np.asarray(arr, dtype="timedelta64[ns]")
- expected = arr._data
+ expected = arr._ndarray
assert result is expected
tm.assert_numpy_array_equal(result, expected)
result = np.array(arr, dtype="timedelta64[ns]", copy=False)
diff --git a/pandas/tests/arrays/test_datetimes.py b/pandas/tests/arrays/test_datetimes.py
index 89c9ba85fcfa9..cd58afe368960 100644
--- a/pandas/tests/arrays/test_datetimes.py
+++ b/pandas/tests/arrays/test_datetimes.py
@@ -659,7 +659,7 @@ def test_shift_fill_value(self):
dti = pd.date_range("2016-01-01", periods=3)
dta = dti._data
- expected = DatetimeArray(np.roll(dta._data, 1))
+ expected = DatetimeArray(np.roll(dta._ndarray, 1))
fv = dta[-1]
for fill_value in [fv, fv.to_pydatetime(), fv.to_datetime64()]:
diff --git a/pandas/tests/arrays/timedeltas/test_constructors.py b/pandas/tests/arrays/timedeltas/test_constructors.py
index d24fabfeecb26..3a076a6828a98 100644
--- a/pandas/tests/arrays/timedeltas/test_constructors.py
+++ b/pandas/tests/arrays/timedeltas/test_constructors.py
@@ -51,11 +51,11 @@ def test_incorrect_dtype_raises(self):
def test_copy(self):
data = np.array([1, 2, 3], dtype="m8[ns]")
arr = TimedeltaArray(data, copy=False)
- assert arr._data is data
+ assert arr._ndarray is data
arr = TimedeltaArray(data, copy=True)
- assert arr._data is not data
- assert arr._data.base is not data
+ assert arr._ndarray is not data
+ assert arr._ndarray.base is not data
def test_from_sequence_dtype(self):
msg = "dtype .*object.* cannot be converted to timedelta64"
diff --git a/pandas/tests/base/test_conversion.py b/pandas/tests/base/test_conversion.py
index 703ac6c89fca8..f244b348c6763 100644
--- a/pandas/tests/base/test_conversion.py
+++ b/pandas/tests/base/test_conversion.py
@@ -237,11 +237,11 @@ def test_numpy_array_all_dtypes(any_numpy_dtype):
"arr, attr",
[
(pd.Categorical(["a", "b"]), "_codes"),
- (pd.core.arrays.period_array(["2000", "2001"], freq="D"), "_data"),
+ (pd.core.arrays.period_array(["2000", "2001"], freq="D"), "_ndarray"),
(pd.array([0, np.nan], dtype="Int64"), "_data"),
(IntervalArray.from_breaks([0, 1]), "_left"),
(SparseArray([0, 1]), "_sparse_values"),
- (DatetimeArray(np.array([1, 2], dtype="datetime64[ns]")), "_data"),
+ (DatetimeArray(np.array([1, 2], dtype="datetime64[ns]")), "_ndarray"),
# tz-aware Datetime
(
DatetimeArray(
@@ -250,20 +250,14 @@ def test_numpy_array_all_dtypes(any_numpy_dtype):
),
dtype=DatetimeTZDtype(tz="US/Central"),
),
- "_data",
+ "_ndarray",
),
],
)
def test_array(arr, attr, index_or_series, request):
box = index_or_series
- warn = None
- if arr.dtype.name in ("Sparse[int64, 0]") and box is pd.Index:
- mark = pytest.mark.xfail(reason="Index cannot yet store sparse dtype")
- request.node.add_marker(mark)
- warn = FutureWarning
- with tm.assert_produces_warning(warn):
- result = box(arr, copy=False).array
+ result = box(arr, copy=False).array
if attr:
arr = getattr(arr, attr)
diff --git a/pandas/tests/extension/test_arrow.py b/pandas/tests/extension/test_arrow.py
index e6f1675bb8bc8..eb6ad4b575414 100644
--- a/pandas/tests/extension/test_arrow.py
+++ b/pandas/tests/extension/test_arrow.py
@@ -443,15 +443,12 @@ def test_reduce_series(
if not pa.types.is_boolean(pa_dtype):
request.node.add_marker(xfail_mark)
op_name = all_boolean_reductions
- s = pd.Series(data)
- result = getattr(s, op_name)(skipna=skipna)
+ ser = pd.Series(data)
+ result = getattr(ser, op_name)(skipna=skipna)
assert result is (op_name == "any")
class TestBaseGroupby(base.BaseGroupbyTests):
- def test_groupby_agg_extension(self, data_for_grouping, request):
- super().test_groupby_agg_extension(data_for_grouping)
-
def test_groupby_extension_no_sort(self, data_for_grouping, request):
pa_dtype = data_for_grouping.dtype.pyarrow_dtype
if pa.types.is_boolean(pa_dtype):
@@ -515,9 +512,6 @@ def test_in_numeric_groupby(self, data_for_grouping, request):
)
super().test_in_numeric_groupby(data_for_grouping)
- @pytest.mark.filterwarnings(
- "ignore:The default value of numeric_only:FutureWarning"
- )
@pytest.mark.parametrize("as_index", [True, False])
def test_groupby_extension_agg(self, as_index, data_for_grouping, request):
pa_dtype = data_for_grouping.dtype.pyarrow_dtype
@@ -638,15 +632,17 @@ class TestBaseIndex(base.BaseIndexTests):
class TestBaseInterface(base.BaseInterfaceTests):
- @pytest.mark.xfail(reason="pyarrow.ChunkedArray does not support views.")
+ @pytest.mark.xfail(reason="GH 45419: pyarrow.ChunkedArray does not support views.")
def test_view(self, data):
super().test_view(data)
class TestBaseMissing(base.BaseMissingTests):
- @pytest.mark.filterwarnings("ignore:Falling back:pandas.errors.PerformanceWarning")
def test_dropna_array(self, data_missing):
- super().test_dropna_array(data_missing)
+ with tm.maybe_produces_warning(
+ PerformanceWarning, pa_version_under6p0, check_stacklevel=False
+ ):
+ super().test_dropna_array(data_missing)
def test_fillna_no_op_returns_copy(self, data):
with tm.maybe_produces_warning(
@@ -949,14 +945,26 @@ def test_combine_le(self, data_repeated):
def test_combine_add(self, data_repeated, request):
pa_dtype = next(data_repeated(1)).dtype.pyarrow_dtype
- if pa.types.is_temporal(pa_dtype):
- request.node.add_marker(
- pytest.mark.xfail(
- raises=TypeError,
- reason=f"{pa_dtype} cannot be added to {pa_dtype}",
- )
- )
- super().test_combine_add(data_repeated)
+ if pa.types.is_duration(pa_dtype):
+ # TODO: this fails on the scalar addition constructing 'expected'
+ # but not in the actual 'combine' call, so may be salvage-able
+ mark = pytest.mark.xfail(
+ raises=TypeError,
+ reason=f"{pa_dtype} cannot be added to {pa_dtype}",
+ )
+ request.node.add_marker(mark)
+ super().test_combine_add(data_repeated)
+
+ elif pa.types.is_temporal(pa_dtype):
+ # analogous to datetime64, these cannot be added
+ orig_data1, orig_data2 = data_repeated(2)
+ s1 = pd.Series(orig_data1)
+ s2 = pd.Series(orig_data2)
+ with pytest.raises(TypeError):
+ s1.combine(s2, lambda x1, x2: x1 + x2)
+
+ else:
+ super().test_combine_add(data_repeated)
def test_searchsorted(self, data_for_sorting, as_series, request):
pa_dtype = data_for_sorting.dtype.pyarrow_dtype
diff --git a/pandas/tests/frame/indexing/test_setitem.py b/pandas/tests/frame/indexing/test_setitem.py
index 8331bed881ce1..e27f9fe9995ad 100644
--- a/pandas/tests/frame/indexing/test_setitem.py
+++ b/pandas/tests/frame/indexing/test_setitem.py
@@ -340,8 +340,8 @@ def test_setitem_dt64tz(self, timezone_frame):
v1 = df._mgr.arrays[1]
v2 = df._mgr.arrays[2]
tm.assert_extension_array_equal(v1, v2)
- v1base = v1._data.base
- v2base = v2._data.base
+ v1base = v1._ndarray.base
+ v2base = v2._ndarray.base
assert v1base is None or (id(v1base) != id(v2base))
# with nan
diff --git a/pandas/tests/frame/methods/test_rank.py b/pandas/tests/frame/methods/test_rank.py
index 5f648c76d0aa4..b533fc12f4a79 100644
--- a/pandas/tests/frame/methods/test_rank.py
+++ b/pandas/tests/frame/methods/test_rank.py
@@ -247,7 +247,6 @@ def test_rank_methods_frame(self):
tm.assert_frame_equal(result, expected)
@pytest.mark.parametrize("dtype", ["O", "f8", "i8"])
- @pytest.mark.filterwarnings("ignore:.*Select only valid:FutureWarning")
def test_rank_descending(self, method, dtype):
if "i" in dtype:
df = self.df.dropna().astype(dtype)
diff --git a/pandas/tests/frame/test_arithmetic.py b/pandas/tests/frame/test_arithmetic.py
index 8aedac036c2c9..ca1c7b8d71071 100644
--- a/pandas/tests/frame/test_arithmetic.py
+++ b/pandas/tests/frame/test_arithmetic.py
@@ -1185,7 +1185,6 @@ def test_zero_len_frame_with_series_corner_cases():
tm.assert_frame_equal(result, expected)
-@pytest.mark.filterwarnings("ignore:.*Select only valid:FutureWarning")
def test_frame_single_columns_object_sum_axis_1():
# GH 13758
data = {
diff --git a/pandas/tests/frame/test_reductions.py b/pandas/tests/frame/test_reductions.py
index 6c6a923e363ae..4be754994bb28 100644
--- a/pandas/tests/frame/test_reductions.py
+++ b/pandas/tests/frame/test_reductions.py
@@ -145,7 +145,6 @@ class TestDataFrameAnalytics:
# ---------------------------------------------------------------------
# Reductions
- @pytest.mark.filterwarnings("ignore:Dropping of nuisance:FutureWarning")
@pytest.mark.parametrize("axis", [0, 1])
@pytest.mark.parametrize(
"opname",
@@ -186,7 +185,6 @@ def test_stat_op_api_float_string_frame(self, float_string_frame, axis, opname):
if opname != "nunique":
getattr(float_string_frame, opname)(axis=axis, numeric_only=True)
- @pytest.mark.filterwarnings("ignore:Dropping of nuisance:FutureWarning")
@pytest.mark.parametrize("axis", [0, 1])
@pytest.mark.parametrize(
"opname",
@@ -283,9 +281,6 @@ def kurt(x):
assert_stat_op_calc("skew", skewness, float_frame_with_na)
assert_stat_op_calc("kurt", kurt, float_frame_with_na)
- # TODO: Ensure warning isn't emitted in the first place
- # ignore mean of empty slice and all-NaN
- @pytest.mark.filterwarnings("ignore::RuntimeWarning")
def test_median(self, float_frame_with_na, int_frame):
def wrapper(x):
if isna(x).any():
diff --git a/pandas/tests/frame/test_repr_info.py b/pandas/tests/frame/test_repr_info.py
index 86c8e36cb7bd4..4664052196797 100644
--- a/pandas/tests/frame/test_repr_info.py
+++ b/pandas/tests/frame/test_repr_info.py
@@ -286,7 +286,9 @@ def test_repr_column_name_unicode_truncation_bug(self):
with option_context("display.max_columns", 20):
assert "StringCol" in repr(df)
- @pytest.mark.filterwarnings("ignore::FutureWarning")
+ @pytest.mark.filterwarnings(
+ "ignore:.*DataFrame.to_latex` is expected to utilise:FutureWarning"
+ )
def test_latex_repr(self):
result = r"""\begin{tabular}{llll}
\toprule
diff --git a/pandas/tests/frame/test_subclass.py b/pandas/tests/frame/test_subclass.py
index a06304af7a2d0..f1adff58325ce 100644
--- a/pandas/tests/frame/test_subclass.py
+++ b/pandas/tests/frame/test_subclass.py
@@ -577,7 +577,6 @@ def stretch(row):
assert not isinstance(result, tm.SubclassedDataFrame)
tm.assert_series_equal(result, expected)
- @pytest.mark.filterwarnings("ignore:.*None will no longer:FutureWarning")
def test_subclassed_reductions(self, all_reductions):
# GH 25596
diff --git a/pandas/tests/groupby/aggregate/test_aggregate.py b/pandas/tests/groupby/aggregate/test_aggregate.py
index 03b917edd357b..b1a4eb3fb7dd8 100644
--- a/pandas/tests/groupby/aggregate/test_aggregate.py
+++ b/pandas/tests/groupby/aggregate/test_aggregate.py
@@ -1075,7 +1075,6 @@ def test_mangle_series_groupby(self):
tm.assert_frame_equal(result, expected)
@pytest.mark.xfail(reason="GH-26611. kwargs for multi-agg.")
- @pytest.mark.filterwarnings("ignore:Dropping invalid columns:FutureWarning")
def test_with_kwargs(self):
f1 = lambda x, y, b=1: x.sum() + y + b
f2 = lambda x, y, b=2: x.sum() + y * b
diff --git a/pandas/tests/groupby/aggregate/test_other.py b/pandas/tests/groupby/aggregate/test_other.py
index 6a89c72354d04..eb667016b1e62 100644
--- a/pandas/tests/groupby/aggregate/test_other.py
+++ b/pandas/tests/groupby/aggregate/test_other.py
@@ -25,7 +25,6 @@
from pandas.io.formats.printing import pprint_thing
-@pytest.mark.filterwarnings("ignore:Dropping invalid columns:FutureWarning")
def test_agg_partial_failure_raises():
# GH#43741
diff --git a/pandas/tests/groupby/test_allowlist.py b/pandas/tests/groupby/test_allowlist.py
index 5a130da4937fd..1d18e7dc6c2cf 100644
--- a/pandas/tests/groupby/test_allowlist.py
+++ b/pandas/tests/groupby/test_allowlist.py
@@ -70,7 +70,6 @@ def raw_frame():
@pytest.mark.parametrize("axis", [0, 1])
@pytest.mark.parametrize("skipna", [True, False])
@pytest.mark.parametrize("sort", [True, False])
-@pytest.mark.filterwarnings("ignore:The default value of numeric_only:FutureWarning")
def test_regression_allowlist_methods(raw_frame, op, axis, skipna, sort):
# GH6944
# GH 17537
diff --git a/pandas/tests/groupby/test_any_all.py b/pandas/tests/groupby/test_any_all.py
index 3f61a4ece66c0..e49238a9e6656 100644
--- a/pandas/tests/groupby/test_any_all.py
+++ b/pandas/tests/groupby/test_any_all.py
@@ -171,7 +171,6 @@ def test_object_type_missing_vals(bool_agg_func, data, expected_res, frame_or_se
tm.assert_equal(result, expected)
-@pytest.mark.filterwarnings("ignore:Dropping invalid columns:FutureWarning")
@pytest.mark.parametrize("bool_agg_func", ["any", "all"])
def test_object_NA_raises_with_skipna_false(bool_agg_func):
# GH#37501
diff --git a/pandas/tests/groupby/test_categorical.py b/pandas/tests/groupby/test_categorical.py
index b35c4158bf420..b9e2bba0b0d31 100644
--- a/pandas/tests/groupby/test_categorical.py
+++ b/pandas/tests/groupby/test_categorical.py
@@ -311,7 +311,6 @@ def test_apply(ordered):
tm.assert_series_equal(result, expected)
-@pytest.mark.filterwarnings("ignore:.*value of numeric_only.*:FutureWarning")
def test_observed(observed):
# multiple groupers, don't re-expand the output space
# of the grouper
@@ -1316,7 +1315,6 @@ def test_groupby_categorical_axis_1(code):
tm.assert_frame_equal(result, expected)
-@pytest.mark.filterwarnings("ignore:.*Select only valid:FutureWarning")
def test_groupby_cat_preserves_structure(observed, ordered):
# GH 28787
df = DataFrame(
diff --git a/pandas/tests/groupby/test_function.py b/pandas/tests/groupby/test_function.py
index ef39aabd83d22..7e5bfff53054a 100644
--- a/pandas/tests/groupby/test_function.py
+++ b/pandas/tests/groupby/test_function.py
@@ -137,9 +137,6 @@ def df(self):
)
return df
- @pytest.mark.filterwarnings(
- "ignore:The default value of numeric_only:FutureWarning"
- )
@pytest.mark.parametrize("method", ["mean", "median"])
def test_averages(self, df, method):
# mean / median
@@ -217,9 +214,6 @@ def test_first_last(self, df, method):
self._check(df, method, expected_columns, expected_columns_numeric)
- @pytest.mark.filterwarnings(
- "ignore:The default value of numeric_only:FutureWarning"
- )
@pytest.mark.parametrize("method", ["sum", "cumsum"])
def test_sum_cumsum(self, df, method):
@@ -233,9 +227,6 @@ def test_sum_cumsum(self, df, method):
self._check(df, method, expected_columns, expected_columns_numeric)
- @pytest.mark.filterwarnings(
- "ignore:The default value of numeric_only:FutureWarning"
- )
@pytest.mark.parametrize("method", ["prod", "cumprod"])
def test_prod_cumprod(self, df, method):
@@ -496,7 +487,6 @@ def test_groupby_non_arithmetic_agg_int_like_precision(i):
],
)
@pytest.mark.parametrize("numeric_only", [True, False])
-@pytest.mark.filterwarnings("ignore:.*Select only valid:FutureWarning")
def test_idxmin_idxmax_returns_int_types(func, values, numeric_only):
# GH 25444
df = DataFrame(
@@ -1610,7 +1600,6 @@ def test_corrwith_with_1_axis():
tm.assert_series_equal(result, expected)
-@pytest.mark.filterwarnings("ignore:.* is deprecated:FutureWarning")
def test_multiindex_group_all_columns_when_empty(groupby_func):
# GH 32464
df = DataFrame({"a": [], "b": [], "c": []}).set_index(["a", "b", "c"])
diff --git a/pandas/tests/groupby/test_groupby.py b/pandas/tests/groupby/test_groupby.py
index a7104c2e21049..59a8141be7db4 100644
--- a/pandas/tests/groupby/test_groupby.py
+++ b/pandas/tests/groupby/test_groupby.py
@@ -1263,8 +1263,6 @@ def test_groupby_mixed_type_columns():
tm.assert_frame_equal(result, expected)
-# TODO: Ensure warning isn't emitted in the first place
-@pytest.mark.filterwarnings("ignore:Mean of:RuntimeWarning")
def test_cython_grouper_series_bug_noncontig():
arr = np.empty((100, 100))
arr.fill(np.nan)
@@ -1879,9 +1877,6 @@ def test_pivot_table_values_key_error():
@pytest.mark.parametrize(
"op", ["idxmax", "idxmin", "min", "max", "sum", "prod", "skew"]
)
-@pytest.mark.filterwarnings("ignore:The default value of numeric_only:FutureWarning")
-@pytest.mark.filterwarnings("ignore:Dropping invalid columns:FutureWarning")
-@pytest.mark.filterwarnings("ignore:.*Select only valid:FutureWarning")
def test_empty_groupby(columns, keys, values, method, op, request, using_array_manager):
# GH8093 & GH26411
override_dtype = None
diff --git a/pandas/tests/indexes/timedeltas/test_constructors.py b/pandas/tests/indexes/timedeltas/test_constructors.py
index 5c23d1dfd83c8..73b742591cd10 100644
--- a/pandas/tests/indexes/timedeltas/test_constructors.py
+++ b/pandas/tests/indexes/timedeltas/test_constructors.py
@@ -47,7 +47,7 @@ def test_int64_nocopy(self):
# and copy=False
arr = np.arange(10, dtype=np.int64)
tdi = TimedeltaIndex(arr, copy=False)
- assert tdi._data._data.base is arr
+ assert tdi._data._ndarray.base is arr
def test_infer_from_tdi(self):
# GH#23539
diff --git a/pandas/tests/indexing/test_indexing.py b/pandas/tests/indexing/test_indexing.py
index b3e59da4b0130..727d0bade2c2c 100644
--- a/pandas/tests/indexing/test_indexing.py
+++ b/pandas/tests/indexing/test_indexing.py
@@ -883,7 +883,7 @@ def test_setitem_dt64_string_scalar(self, tz_naive_fixture, indexer_sli):
if tz is None:
# TODO(EA2D): we can make this no-copy in tz-naive case too
assert ser.dtype == dti.dtype
- assert ser._values._data is values._data
+ assert ser._values._ndarray is values._ndarray
else:
assert ser._values is values
@@ -911,7 +911,7 @@ def test_setitem_dt64_string_values(self, tz_naive_fixture, indexer_sli, key, bo
if tz is None:
# TODO(EA2D): we can make this no-copy in tz-naive case too
assert ser.dtype == dti.dtype
- assert ser._values._data is values._data
+ assert ser._values._ndarray is values._ndarray
else:
assert ser._values is values
@@ -925,7 +925,7 @@ def test_setitem_td64_scalar(self, indexer_sli, scalar):
values._validate_setitem_value(scalar)
indexer_sli(ser)[0] = scalar
- assert ser._values._data is values._data
+ assert ser._values._ndarray is values._ndarray
@pytest.mark.parametrize("box", [list, np.array, pd.array, pd.Categorical, Index])
@pytest.mark.parametrize(
@@ -945,7 +945,7 @@ def test_setitem_td64_string_values(self, indexer_sli, key, box):
values._validate_setitem_value(newvals)
indexer_sli(ser)[key] = newvals
- assert ser._values._data is values._data
+ assert ser._values._ndarray is values._ndarray
def test_extension_array_cross_section():
diff --git a/pandas/tests/internals/test_internals.py b/pandas/tests/internals/test_internals.py
index ecf247efd74bf..8d2d165e991f5 100644
--- a/pandas/tests/internals/test_internals.py
+++ b/pandas/tests/internals/test_internals.py
@@ -469,7 +469,7 @@ def test_copy(self, mgr):
assert cp_blk.values.base is blk.values.base
else:
# DatetimeTZBlock has DatetimeIndex values
- assert cp_blk.values._data.base is blk.values._data.base
+ assert cp_blk.values._ndarray.base is blk.values._ndarray.base
# copy(deep=True) consolidates, so the block-wise assertions will
# fail is mgr is not consolidated
diff --git a/pandas/tests/io/test_feather.py b/pandas/tests/io/test_feather.py
index eaeb769a94c38..e58df00c65608 100644
--- a/pandas/tests/io/test_feather.py
+++ b/pandas/tests/io/test_feather.py
@@ -10,10 +10,6 @@
pyarrow = pytest.importorskip("pyarrow", minversion="1.0.1")
-filter_sparse = pytest.mark.filterwarnings("ignore:The Sparse")
-
-
-@filter_sparse
@pytest.mark.single_cpu
class TestFeather:
def check_error_on_write(self, df, exc, err_msg):
diff --git a/pandas/tests/series/indexing/test_getitem.py b/pandas/tests/series/indexing/test_getitem.py
index faaa61e84a351..86fabf0ed0ef2 100644
--- a/pandas/tests/series/indexing/test_getitem.py
+++ b/pandas/tests/series/indexing/test_getitem.py
@@ -471,7 +471,7 @@ def test_getitem_boolean_dt64_copies(self):
ser = Series(dti._data)
res = ser[key]
- assert res._values._data.base is None
+ assert res._values._ndarray.base is None
# compare with numeric case for reference
ser2 = Series(range(4))
diff --git a/pandas/tests/series/indexing/test_setitem.py b/pandas/tests/series/indexing/test_setitem.py
index 7d77a755e082b..d731aeee1b39b 100644
--- a/pandas/tests/series/indexing/test_setitem.py
+++ b/pandas/tests/series/indexing/test_setitem.py
@@ -418,14 +418,14 @@ def test_setitem_invalidates_datetime_index_freq(self):
ts = dti[1]
ser = Series(dti)
assert ser._values is not dti
- assert ser._values._data.base is not dti._data._data.base
+ assert ser._values._ndarray.base is not dti._data._ndarray.base
assert dti.freq == "D"
ser.iloc[1] = NaT
assert ser._values.freq is None
# check that the DatetimeIndex was not altered in place
assert ser._values is not dti
- assert ser._values._data.base is not dti._data._data.base
+ assert ser._values._ndarray.base is not dti._data._ndarray.base
assert dti[1] == ts
assert dti.freq == "D"
@@ -435,9 +435,9 @@ def test_dt64tz_setitem_does_not_mutate_dti(self):
ts = dti[0]
ser = Series(dti)
assert ser._values is not dti
- assert ser._values._data.base is not dti._data._data.base
+ assert ser._values._ndarray.base is not dti._data._ndarray.base
assert ser._mgr.arrays[0] is not dti
- assert ser._mgr.arrays[0]._data.base is not dti._data._data.base
+ assert ser._mgr.arrays[0]._ndarray.base is not dti._data._ndarray.base
ser[::3] = NaT
assert ser[0] is NaT
diff --git a/pandas/tests/series/indexing/test_xs.py b/pandas/tests/series/indexing/test_xs.py
index aaccad0f2bd70..a67f3ec708f24 100644
--- a/pandas/tests/series/indexing/test_xs.py
+++ b/pandas/tests/series/indexing/test_xs.py
@@ -11,7 +11,7 @@
def test_xs_datetimelike_wrapping():
# GH#31630 a case where we shouldn't wrap datetime64 in Timestamp
- arr = date_range("2016-01-01", periods=3)._data._data
+ arr = date_range("2016-01-01", periods=3)._data._ndarray
ser = Series(arr, dtype=object)
for i in range(len(ser)):
diff --git a/pandas/tests/util/test_show_versions.py b/pandas/tests/util/test_show_versions.py
index 99c7e0a1a8956..439971084fba8 100644
--- a/pandas/tests/util/test_show_versions.py
+++ b/pandas/tests/util/test_show_versions.py
@@ -20,14 +20,6 @@
# html5lib
"ignore:Using or importing the ABCs from:DeprecationWarning"
)
-@pytest.mark.filterwarnings(
- # fastparquet
- "ignore:pandas.core.index is deprecated:FutureWarning"
-)
-@pytest.mark.filterwarnings(
- # pandas_datareader
- "ignore:pandas.util.testing is deprecated:FutureWarning"
-)
@pytest.mark.filterwarnings(
# https://github.com/pandas-dev/pandas/issues/35252
"ignore:Distutils:UserWarning"
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/49979 | 2022-11-30T23:11:51Z | 2022-12-10T19:16:30Z | 2022-12-10T19:16:30Z | 2022-12-10T22:22:55Z |
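The marks this PR removes (e.g. `@pytest.mark.filterwarnings("ignore:Dropping invalid columns:FutureWarning")`) are a thin pytest layer over the standard library's `warnings` filter machinery. A minimal stdlib-only sketch of the same `ignore:<message>:<category>` behavior (the warning text here is just the one from the removed marks, used for illustration):

```python
import warnings

def emit():
    # Stand-in for library code that used to raise this FutureWarning.
    warnings.warn("Dropping invalid columns", FutureWarning)

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    # Equivalent of the "ignore:Dropping invalid columns:FutureWarning" mark:
    # filterwarnings() prepends the filter, so it wins over "always".
    warnings.filterwarnings(
        "ignore", message="Dropping invalid columns", category=FutureWarning
    )
    emit()

print(len(caught))  # 0: the warning was filtered out, nothing recorded
```

Once the upstream code no longer emits the warning, such a filter silently matches nothing, which is why the marks can simply be deleted.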
REF: Refactor merge array conversions for factorizer | diff --git a/pandas/core/reshape/merge.py b/pandas/core/reshape/merge.py
index e818e367ca83d..b6fa5da857910 100644
--- a/pandas/core/reshape/merge.py
+++ b/pandas/core/reshape/merge.py
@@ -2364,35 +2364,7 @@ def _factorize_keys(
# "_values_for_factorize"
rk, _ = rk._values_for_factorize() # type: ignore[union-attr]
- klass: type[libhashtable.Factorizer]
- if is_numeric_dtype(lk.dtype):
- if not is_dtype_equal(lk, rk):
- dtype = find_common_type([lk.dtype, rk.dtype])
- if isinstance(dtype, ExtensionDtype):
- cls = dtype.construct_array_type()
- if not isinstance(lk, ExtensionArray):
- lk = cls._from_sequence(lk, dtype=dtype, copy=False)
- else:
- lk = lk.astype(dtype)
-
- if not isinstance(rk, ExtensionArray):
- rk = cls._from_sequence(rk, dtype=dtype, copy=False)
- else:
- rk = rk.astype(dtype)
- else:
- lk = lk.astype(dtype)
- rk = rk.astype(dtype)
- if isinstance(lk, BaseMaskedArray):
- # Invalid index type "type" for "Dict[Type[object], Type[Factorizer]]";
- # expected type "Type[object]"
- klass = _factorizers[lk.dtype.type] # type: ignore[index]
- else:
- klass = _factorizers[lk.dtype.type]
-
- else:
- klass = libhashtable.ObjectFactorizer
- lk = ensure_object(lk)
- rk = ensure_object(rk)
+ klass, lk, rk = _convert_arrays_and_get_rizer_klass(lk, rk)
rizer = klass(max(len(lk), len(rk)))
@@ -2433,6 +2405,41 @@ def _factorize_keys(
return llab, rlab, count
+def _convert_arrays_and_get_rizer_klass(
+ lk: ArrayLike, rk: ArrayLike
+) -> tuple[type[libhashtable.Factorizer], ArrayLike, ArrayLike]:
+ klass: type[libhashtable.Factorizer]
+ if is_numeric_dtype(lk.dtype):
+ if not is_dtype_equal(lk, rk):
+ dtype = find_common_type([lk.dtype, rk.dtype])
+ if isinstance(dtype, ExtensionDtype):
+ cls = dtype.construct_array_type()
+ if not isinstance(lk, ExtensionArray):
+ lk = cls._from_sequence(lk, dtype=dtype, copy=False)
+ else:
+ lk = lk.astype(dtype)
+
+ if not isinstance(rk, ExtensionArray):
+ rk = cls._from_sequence(rk, dtype=dtype, copy=False)
+ else:
+ rk = rk.astype(dtype)
+ else:
+ lk = lk.astype(dtype)
+ rk = rk.astype(dtype)
+ if isinstance(lk, BaseMaskedArray):
+ # Invalid index type "type" for "Dict[Type[object], Type[Factorizer]]";
+ # expected type "Type[object]"
+ klass = _factorizers[lk.dtype.type] # type: ignore[index]
+ else:
+ klass = _factorizers[lk.dtype.type]
+
+ else:
+ klass = libhashtable.ObjectFactorizer
+ lk = ensure_object(lk)
+ rk = ensure_object(rk)
+ return klass, lk, rk
+
+
def _sort_labels(
uniques: np.ndarray, left: npt.NDArray[np.intp], right: npt.NDArray[np.intp]
) -> tuple[npt.NDArray[np.intp], npt.NDArray[np.intp]]:
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/49978 | 2022-11-30T21:51:04Z | 2022-12-01T18:59:57Z | 2022-12-01T18:59:56Z | 2022-12-10T20:00:59Z |
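The refactor above extracts the dtype-coercion-plus-factorizer-selection logic into `_convert_arrays_and_get_rizer_klass`, which returns the chosen factorizer class alongside the (possibly converted) key arrays. A simplified sketch of that dispatch shape, with hypothetical stand-in classes (the real code maps numpy scalar types to `pandas._libs.hashtable` Factorizer classes and coerces via `ensure_object`):

```python
# Hypothetical stand-ins for the real hashtable Factorizer classes.
class Int64Factorizer:
    pass

class ObjectFactorizer:
    pass

# Mirrors the _factorizers lookup keyed by element type.
_factorizers = {int: Int64Factorizer}

def get_rizer_klass(lk, rk):
    """Pick a factorizer class for the key arrays; return it with the keys."""
    key_type = type(lk[0]) if lk else object
    if key_type in _factorizers:
        # Numeric keys get a typed factorizer.
        return _factorizers[key_type], lk, rk
    # Everything else falls back to the object factorizer
    # (the real code calls ensure_object on both sides here).
    return ObjectFactorizer, lk, rk

klass, lk, rk = get_rizer_klass([1, 2, 3], [2, 3, 4])
print(klass.__name__)  # Int64Factorizer
```

Returning `(klass, lk, rk)` as a tuple keeps the caller in `_factorize_keys` to a single line, which is the whole point of the extraction.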
DEP: Enforce deprecation of mangle_dupe_cols | diff --git a/doc/source/user_guide/io.rst b/doc/source/user_guide/io.rst
index 53bcf6ffd7a8a..a073087f6ec8f 100644
--- a/doc/source/user_guide/io.rst
+++ b/doc/source/user_guide/io.rst
@@ -155,15 +155,6 @@ usecols : list-like or callable, default ``None``
when using the c engine. The Python engine loads the data first before deciding
which columns to drop.
-mangle_dupe_cols : boolean, default ``True``
- Duplicate columns will be specified as 'X', 'X.1'...'X.N', rather than 'X'...'X'.
- Passing in ``False`` will cause data to be overwritten if there are duplicate
- names in the columns.
-
- .. deprecated:: 1.5.0
- The argument was never implemented, and a new argument where the
- renaming pattern can be specified will be added instead.
-
General parsing configuration
+++++++++++++++++++++++++++++
@@ -587,10 +578,6 @@ If the header is in a row other than the first, pass the row number to
Duplicate names parsing
'''''''''''''''''''''''
- .. deprecated:: 1.5.0
- ``mangle_dupe_cols`` was never implemented, and a new argument where the
- renaming pattern can be specified will be added instead.
-
If the file or header contains duplicate names, pandas will by default
distinguish between them so as to prevent overwriting data:
@@ -599,8 +586,7 @@ distinguish between them so as to prevent overwriting data:
data = "a,b,a\n0,1,2\n3,4,5"
pd.read_csv(StringIO(data))
-There is no more duplicate data because ``mangle_dupe_cols=True`` by default,
-which modifies a series of duplicate columns 'X', ..., 'X' to become
+There is no more duplicate data because duplicate columns 'X', ..., 'X' become
'X', 'X.1', ..., 'X.N'.
.. _io.usecols:
diff --git a/doc/source/whatsnew/v2.0.0.rst b/doc/source/whatsnew/v2.0.0.rst
index 1fb9a81e85a83..df82bcd37e971 100644
--- a/doc/source/whatsnew/v2.0.0.rst
+++ b/doc/source/whatsnew/v2.0.0.rst
@@ -434,6 +434,7 @@ Removal of prior version deprecations/changes
- Removed argument ``inplace`` from :meth:`Categorical.remove_unused_categories` (:issue:`37918`)
- Disallow passing non-round floats to :class:`Timestamp` with ``unit="M"`` or ``unit="Y"`` (:issue:`47266`)
- Remove keywords ``convert_float`` and ``mangle_dupe_cols`` from :func:`read_excel` (:issue:`41176`)
+- Remove keyword ``mangle_dupe_cols`` from :func:`read_csv` and :func:`read_table` (:issue:`48137`)
- Removed ``errors`` keyword from :meth:`DataFrame.where`, :meth:`Series.where`, :meth:`DataFrame.mask` and :meth:`Series.mask` (:issue:`47728`)
- Disallow passing non-keyword arguments to :func:`read_excel` except ``io`` and ``sheet_name`` (:issue:`34418`)
- Disallow passing non-keyword arguments to :meth:`DataFrame.drop` and :meth:`Series.drop` except ``labels`` (:issue:`41486`)
diff --git a/pandas/_libs/parsers.pyi b/pandas/_libs/parsers.pyi
index d888511916e27..60f5304c39ad9 100644
--- a/pandas/_libs/parsers.pyi
+++ b/pandas/_libs/parsers.pyi
@@ -58,7 +58,6 @@ class TextReader:
skiprows=...,
skipfooter: int = ..., # int64_t
verbose: bool = ...,
- mangle_dupe_cols: bool = ...,
float_precision: Literal["round_trip", "legacy", "high"] | None = ...,
skip_blank_lines: bool = ...,
encoding_errors: bytes | str = ...,
diff --git a/pandas/_libs/parsers.pyx b/pandas/_libs/parsers.pyx
index 92874ef201246..85d74e201d5bb 100644
--- a/pandas/_libs/parsers.pyx
+++ b/pandas/_libs/parsers.pyx
@@ -317,7 +317,7 @@ cdef class TextReader:
object handle
object orig_header
bint na_filter, keep_default_na, verbose, has_usecols, has_mi_columns
- bint mangle_dupe_cols, allow_leading_cols
+ bint allow_leading_cols
uint64_t parser_start # this is modified after __init__
list clocks
const char *encoding_errors
@@ -373,7 +373,6 @@ cdef class TextReader:
skiprows=None,
skipfooter=0, # int64_t
bint verbose=False,
- bint mangle_dupe_cols=True,
float_precision=None,
bint skip_blank_lines=True,
encoding_errors=b"strict",
@@ -390,8 +389,6 @@ cdef class TextReader:
self.parser = parser_new()
self.parser.chunksize = tokenize_chunksize
- self.mangle_dupe_cols = mangle_dupe_cols
-
# For timekeeping
self.clocks = []
@@ -680,7 +677,7 @@ cdef class TextReader:
this_header.append(name)
- if not self.has_mi_columns and self.mangle_dupe_cols:
+ if not self.has_mi_columns:
# Ensure that regular columns are used before unnamed ones
# to keep given names and mangle unnamed columns
col_loop_order = [i for i in range(len(this_header))
diff --git a/pandas/io/parsers/base_parser.py b/pandas/io/parsers/base_parser.py
index b0f3754271894..7b9794dd434e6 100644
--- a/pandas/io/parsers/base_parser.py
+++ b/pandas/io/parsers/base_parser.py
@@ -125,7 +125,6 @@ def __init__(self, kwds) -> None:
self.true_values = kwds.get("true_values")
self.false_values = kwds.get("false_values")
- self.mangle_dupe_cols = kwds.get("mangle_dupe_cols", True)
self.infer_datetime_format = kwds.pop("infer_datetime_format", False)
self.cache_dates = kwds.pop("cache_dates", True)
@@ -333,34 +332,28 @@ def extract(r):
return names, index_names, col_names, passed_names
@final
- def _maybe_dedup_names(self, names: Sequence[Hashable]) -> Sequence[Hashable]:
- # see gh-7160 and gh-9424: this helps to provide
- # immediate alleviation of the duplicate names
- # issue and appears to be satisfactory to users,
- # but ultimately, not needing to butcher the names
- # would be nice!
- if self.mangle_dupe_cols:
- names = list(names) # so we can index
- counts: DefaultDict[Hashable, int] = defaultdict(int)
- is_potential_mi = _is_potential_multi_index(names, self.index_col)
-
- for i, col in enumerate(names):
- cur_count = counts[col]
-
- while cur_count > 0:
- counts[col] = cur_count + 1
+ def _dedup_names(self, names: Sequence[Hashable]) -> Sequence[Hashable]:
+ names = list(names) # so we can index
+ counts: DefaultDict[Hashable, int] = defaultdict(int)
+ is_potential_mi = _is_potential_multi_index(names, self.index_col)
- if is_potential_mi:
- # for mypy
- assert isinstance(col, tuple)
- col = col[:-1] + (f"{col[-1]}.{cur_count}",)
- else:
- col = f"{col}.{cur_count}"
- cur_count = counts[col]
+ for i, col in enumerate(names):
+ cur_count = counts[col]
- names[i] = col
+ while cur_count > 0:
counts[col] = cur_count + 1
+ if is_potential_mi:
+ # for mypy
+ assert isinstance(col, tuple)
+ col = col[:-1] + (f"{col[-1]}.{cur_count}",)
+ else:
+ col = f"{col}.{cur_count}"
+ cur_count = counts[col]
+
+ names[i] = col
+ counts[col] = cur_count + 1
+
return names
@final
@@ -1182,7 +1175,6 @@ def converter(*date_cols):
"verbose": False,
"encoding": None,
"compression": None,
- "mangle_dupe_cols": True,
"infer_datetime_format": False,
"skip_blank_lines": True,
"encoding_errors": "strict",
diff --git a/pandas/io/parsers/c_parser_wrapper.py b/pandas/io/parsers/c_parser_wrapper.py
index c1f2e6ddb2388..e0daf157d3d3a 100644
--- a/pandas/io/parsers/c_parser_wrapper.py
+++ b/pandas/io/parsers/c_parser_wrapper.py
@@ -227,7 +227,7 @@ def read(
except StopIteration:
if self._first_chunk:
self._first_chunk = False
- names = self._maybe_dedup_names(self.orig_names)
+ names = self._dedup_names(self.orig_names)
index, columns, col_dict = self._get_empty_meta(
names,
self.index_col,
@@ -281,7 +281,7 @@ def read(
if self.usecols is not None:
names = self._filter_usecols(names)
- names = self._maybe_dedup_names(names)
+ names = self._dedup_names(names)
# rename dict keys
data_tups = sorted(data.items())
@@ -303,7 +303,7 @@ def read(
# assert for mypy, orig_names is List or None, None would error in list(...)
assert self.orig_names is not None
names = list(self.orig_names)
- names = self._maybe_dedup_names(names)
+ names = self._dedup_names(names)
if self.usecols is not None:
names = self._filter_usecols(names)
diff --git a/pandas/io/parsers/python_parser.py b/pandas/io/parsers/python_parser.py
index 121c52ba1c323..aebf285e669bb 100644
--- a/pandas/io/parsers/python_parser.py
+++ b/pandas/io/parsers/python_parser.py
@@ -259,7 +259,7 @@ def read(
columns: Sequence[Hashable] = list(self.orig_names)
if not len(content): # pragma: no cover
# DataFrame with the right metadata, even though it's length 0
- names = self._maybe_dedup_names(self.orig_names)
+ names = self._dedup_names(self.orig_names)
# error: Cannot determine type of 'index_col'
index, columns, col_dict = self._get_empty_meta(
names,
@@ -293,7 +293,7 @@ def _exclude_implicit_index(
self,
alldata: list[np.ndarray],
) -> tuple[Mapping[Hashable, np.ndarray], Sequence[Hashable]]:
- names = self._maybe_dedup_names(self.orig_names)
+ names = self._dedup_names(self.orig_names)
offset = 0
if self._implicit_index:
@@ -424,7 +424,7 @@ def _infer_columns(
else:
this_columns.append(c)
- if not have_mi_columns and self.mangle_dupe_cols:
+ if not have_mi_columns:
counts: DefaultDict = defaultdict(int)
# Ensure that regular columns are used before unnamed ones
# to keep given names and mangle unnamed columns
diff --git a/pandas/io/parsers/readers.py b/pandas/io/parsers/readers.py
index 575390e9b97a4..d9c2403a19d0c 100644
--- a/pandas/io/parsers/readers.py
+++ b/pandas/io/parsers/readers.py
@@ -41,10 +41,7 @@
AbstractMethodError,
ParserWarning,
)
-from pandas.util._decorators import (
- Appender,
- deprecate_kwarg,
-)
+from pandas.util._decorators import Appender
from pandas.util._exceptions import find_stack_level
from pandas.core.dtypes.common import (
@@ -152,14 +149,6 @@
example of a valid callable argument would be ``lambda x: x.upper() in
['AAA', 'BBB', 'DDD']``. Using this parameter results in much faster
parsing time and lower memory usage.
-mangle_dupe_cols : bool, default True
- Duplicate columns will be specified as 'X', 'X.1', ...'X.N', rather than
- 'X'...'X'. Passing in False will cause data to be overwritten if there
- are duplicate names in the columns.
-
- .. deprecated:: 1.5.0
- Not implemented, and a new argument to specify the pattern for the
- names of duplicated columns will be added instead
dtype : Type name or dict of column -> type, optional
Data type for data or columns. E.g. {{'a': np.float64, 'b': np.int32,
'c': 'Int64'}}
@@ -604,7 +593,6 @@ def read_csv(
names: Sequence[Hashable] | None | lib.NoDefault = ...,
index_col: IndexLabel | Literal[False] | None = ...,
usecols=...,
- mangle_dupe_cols: bool = ...,
dtype: DtypeArg | None = ...,
engine: CSVEngine | None = ...,
converters=...,
@@ -661,7 +649,6 @@ def read_csv(
names: Sequence[Hashable] | None | lib.NoDefault = ...,
index_col: IndexLabel | Literal[False] | None = ...,
usecols=...,
- mangle_dupe_cols: bool = ...,
dtype: DtypeArg | None = ...,
engine: CSVEngine | None = ...,
converters=...,
@@ -718,7 +705,6 @@ def read_csv(
names: Sequence[Hashable] | None | lib.NoDefault = ...,
index_col: IndexLabel | Literal[False] | None = ...,
usecols=...,
- mangle_dupe_cols: bool = ...,
dtype: DtypeArg | None = ...,
engine: CSVEngine | None = ...,
converters=...,
@@ -775,7 +761,6 @@ def read_csv(
names: Sequence[Hashable] | None | lib.NoDefault = ...,
index_col: IndexLabel | Literal[False] | None = ...,
usecols=...,
- mangle_dupe_cols: bool = ...,
dtype: DtypeArg | None = ...,
engine: CSVEngine | None = ...,
converters=...,
@@ -821,7 +806,6 @@ def read_csv(
...
-@deprecate_kwarg(old_arg_name="mangle_dupe_cols", new_arg_name=None)
@Appender(
_doc_read_csv_and_table.format(
func_name="read_csv",
@@ -842,7 +826,6 @@ def read_csv(
names: Sequence[Hashable] | None | lib.NoDefault = lib.no_default,
index_col: IndexLabel | Literal[False] | None = None,
usecols=None,
- mangle_dupe_cols: bool = True,
# General Parsing Configuration
dtype: DtypeArg | None = None,
engine: CSVEngine | None = None,
@@ -923,7 +906,6 @@ def read_table(
names: Sequence[Hashable] | None | lib.NoDefault = ...,
index_col: IndexLabel | Literal[False] | None = ...,
usecols=...,
- mangle_dupe_cols: bool = ...,
dtype: DtypeArg | None = ...,
engine: CSVEngine | None = ...,
converters=...,
@@ -980,7 +962,6 @@ def read_table(
names: Sequence[Hashable] | None | lib.NoDefault = ...,
index_col: IndexLabel | Literal[False] | None = ...,
usecols=...,
- mangle_dupe_cols: bool = ...,
dtype: DtypeArg | None = ...,
engine: CSVEngine | None = ...,
converters=...,
@@ -1037,7 +1018,6 @@ def read_table(
names: Sequence[Hashable] | None | lib.NoDefault = ...,
index_col: IndexLabel | Literal[False] | None = ...,
usecols=...,
- mangle_dupe_cols: bool = ...,
dtype: DtypeArg | None = ...,
engine: CSVEngine | None = ...,
converters=...,
@@ -1094,7 +1074,6 @@ def read_table(
names: Sequence[Hashable] | None | lib.NoDefault = ...,
index_col: IndexLabel | Literal[False] | None = ...,
usecols=...,
- mangle_dupe_cols: bool = ...,
dtype: DtypeArg | None = ...,
engine: CSVEngine | None = ...,
converters=...,
@@ -1140,7 +1119,6 @@ def read_table(
...
-@deprecate_kwarg(old_arg_name="mangle_dupe_cols", new_arg_name=None)
@Appender(
_doc_read_csv_and_table.format(
func_name="read_table",
@@ -1161,7 +1139,6 @@ def read_table(
names: Sequence[Hashable] | None | lib.NoDefault = lib.no_default,
index_col: IndexLabel | Literal[False] | None = None,
usecols=None,
- mangle_dupe_cols: bool = True,
# General Parsing Configuration
dtype: DtypeArg | None = None,
engine: CSVEngine | None = None,
@@ -1406,9 +1383,6 @@ def _get_options_with_defaults(self, engine: CSVEngine) -> dict[str, Any]:
f"The {repr(argname)} option is not supported with the "
f"'pyarrow' engine"
)
- if argname == "mangle_dupe_cols" and value is False:
- # GH12935
- raise ValueError("Setting mangle_dupe_cols=False is not supported yet")
options[argname] = value
for argname, default in _c_parser_defaults.items():
diff --git a/pandas/tests/io/parser/test_mangle_dupes.py b/pandas/tests/io/parser/test_mangle_dupes.py
index 13b419c3390fc..5709e7e4027e8 100644
--- a/pandas/tests/io/parser/test_mangle_dupes.py
+++ b/pandas/tests/io/parser/test_mangle_dupes.py
@@ -14,22 +14,11 @@
@skip_pyarrow
-@pytest.mark.parametrize("kwargs", [{}, {"mangle_dupe_cols": True}])
-def test_basic(all_parsers, kwargs):
- # TODO: add test for condition "mangle_dupe_cols=False"
- # once it is actually supported (gh-12935)
+def test_basic(all_parsers):
parser = all_parsers
data = "a,a,b,b,b\n1,2,3,4,5"
- if "mangle_dupe_cols" in kwargs:
- with tm.assert_produces_warning(
- FutureWarning,
- match="the 'mangle_dupe_cols' keyword is deprecated",
- check_stacklevel=False,
- ):
- result = parser.read_csv(StringIO(data), sep=",", **kwargs)
- else:
- result = parser.read_csv(StringIO(data), sep=",", **kwargs)
+ result = parser.read_csv(StringIO(data), sep=",")
expected = DataFrame([[1, 2, 3, 4, 5]], columns=["a", "a.1", "b", "b.1", "b.2"])
tm.assert_frame_equal(result, expected)
diff --git a/pandas/tests/io/parser/test_unsupported.py b/pandas/tests/io/parser/test_unsupported.py
index 578cea44a8ed6..185dc733df3c2 100644
--- a/pandas/tests/io/parser/test_unsupported.py
+++ b/pandas/tests/io/parser/test_unsupported.py
@@ -34,14 +34,10 @@ class TestUnsupportedFeatures:
def test_mangle_dupe_cols_false(self):
# see gh-12935
data = "a b c\n1 2 3"
- msg = "is not supported"
for engine in ("c", "python"):
- with tm.assert_produces_warning(
- FutureWarning, match="the 'mangle_dupe_cols' keyword is deprecated"
- ):
- with pytest.raises(ValueError, match=msg):
- read_csv(StringIO(data), engine=engine, mangle_dupe_cols=False)
+ with pytest.raises(TypeError, match="unexpected keyword"):
+ read_csv(StringIO(data), engine=engine, mangle_dupe_cols=True)
def test_c_engine(self):
# see gh-6607
| #48137 | https://api.github.com/repos/pandas-dev/pandas/pulls/49977 | 2022-11-30T21:42:53Z | 2022-11-30T23:40:46Z | 2022-11-30T23:40:46Z | 2022-12-10T20:00:16Z |
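With `mangle_dupe_cols` gone, the renamed `_dedup_names` always mangles duplicates. A plain-Python sketch of the counting loop shown in the diff (the MultiIndex branch is omitted for brevity), reproducing the `'X', 'X.1', ..., 'X.N'` naming that `read_csv` applies:

```python
from collections import defaultdict

def dedup_names(names):
    """Rename duplicate entries 'a', 'a' -> 'a', 'a.1' (and so on)."""
    names = list(names)  # so we can index
    counts = defaultdict(int)
    for i, col in enumerate(names):
        cur_count = counts[col]
        while cur_count > 0:
            # Bump the suffix until we land on a name not seen yet.
            counts[col] = cur_count + 1
            col = f"{col}.{cur_count}"
            cur_count = counts[col]
        names[i] = col
        counts[col] = cur_count + 1
    return names

print(dedup_names(["a", "a", "b", "b", "b"]))
# ['a', 'a.1', 'b', 'b.1', 'b.2']
```

This matches the expected columns in the updated `test_basic` above, where `"a,a,b,b,b"` parses to `["a", "a.1", "b", "b.1", "b.2"]`.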
PERF: ArrowExtensionArray.to_numpy | diff --git a/asv_bench/benchmarks/array.py b/asv_bench/benchmarks/array.py
index cb949637ea745..924040ff0648b 100644
--- a/asv_bench/benchmarks/array.py
+++ b/asv_bench/benchmarks/array.py
@@ -92,3 +92,41 @@ def time_setitem_slice(self, multiple_chunks):
def time_tolist(self, multiple_chunks):
self.array.tolist()
+
+
+class ArrowExtensionArray:
+
+ params = [
+ [
+ "boolean[pyarrow]",
+ "float64[pyarrow]",
+ "int64[pyarrow]",
+ "string[pyarrow]",
+ "timestamp[ns][pyarrow]",
+ ],
+ [False, True],
+ ]
+ param_names = ["dtype", "hasna"]
+
+ def setup(self, dtype, hasna):
+ N = 100_000
+ if dtype == "boolean[pyarrow]":
+ data = np.random.choice([True, False], N, replace=True)
+ elif dtype == "float64[pyarrow]":
+ data = np.random.randn(N)
+ elif dtype == "int64[pyarrow]":
+ data = np.arange(N)
+ elif dtype == "string[pyarrow]":
+ data = tm.rands_array(10, N)
+ elif dtype == "timestamp[ns][pyarrow]":
+ data = pd.date_range("2000-01-01", freq="s", periods=N)
+ else:
+ raise NotImplementedError
+
+ arr = pd.array(data, dtype=dtype)
+ if hasna:
+ arr[::2] = pd.NA
+ self.arr = arr
+
+ def time_to_numpy(self, dtype, hasna):
+ self.arr.to_numpy()
diff --git a/doc/source/whatsnew/v2.0.0.rst b/doc/source/whatsnew/v2.0.0.rst
index 215e9c2a85bba..b828d18d1d700 100644
--- a/doc/source/whatsnew/v2.0.0.rst
+++ b/doc/source/whatsnew/v2.0.0.rst
@@ -748,6 +748,7 @@ Performance improvements
- Reduce memory usage of :meth:`DataFrame.to_pickle`/:meth:`Series.to_pickle` when using BZ2 or LZMA (:issue:`49068`)
- Performance improvement for :class:`~arrays.StringArray` constructor passing a numpy array with type ``np.str_`` (:issue:`49109`)
- Performance improvement in :meth:`~arrays.ArrowExtensionArray.factorize` (:issue:`49177`)
+- Performance improvement in :meth:`~arrays.ArrowExtensionArray.to_numpy` (:issue:`49973`)
- Performance improvement in :meth:`DataFrame.join` when joining on a subset of a :class:`MultiIndex` (:issue:`48611`)
- Performance improvement for :meth:`MultiIndex.intersection` (:issue:`48604`)
- Performance improvement in ``var`` for nullable dtypes (:issue:`48379`).
diff --git a/pandas/core/arrays/arrow/array.py b/pandas/core/arrays/arrow/array.py
index 254ff8894b36c..d698c5eb11751 100644
--- a/pandas/core/arrays/arrow/array.py
+++ b/pandas/core/arrays/arrow/array.py
@@ -9,11 +9,13 @@
import numpy as np
+from pandas._libs import lib
from pandas._typing import (
ArrayLike,
Dtype,
FillnaOptions,
Iterator,
+ NpDtype,
PositionalIndexer,
SortKind,
TakeIndexer,
@@ -31,6 +33,7 @@
is_bool_dtype,
is_integer,
is_integer_dtype,
+ is_object_dtype,
is_scalar,
)
from pandas.core.dtypes.missing import isna
@@ -351,6 +354,10 @@ def __arrow_array__(self, type=None):
"""Convert myself to a pyarrow ChunkedArray."""
return self._data
+ def __array__(self, dtype: NpDtype | None = None) -> np.ndarray:
+ """Correctly construct numpy arrays when passed to `np.asarray()`."""
+ return self.to_numpy(dtype=dtype)
+
def __invert__(self: ArrowExtensionArrayT) -> ArrowExtensionArrayT:
return type(self)(pc.invert(self._data))
@@ -749,6 +756,33 @@ def take(
indices_array[indices_array < 0] += len(self._data)
return type(self)(self._data.take(indices_array))
+ @doc(ExtensionArray.to_numpy)
+ def to_numpy(
+ self,
+ dtype: npt.DTypeLike | None = None,
+ copy: bool = False,
+ na_value: object = lib.no_default,
+ ) -> np.ndarray:
+ if dtype is None and self._hasna:
+ dtype = object
+ if na_value is lib.no_default:
+ na_value = self.dtype.na_value
+
+ pa_type = self._data.type
+ if (
+ is_object_dtype(dtype)
+ or pa.types.is_timestamp(pa_type)
+ or pa.types.is_duration(pa_type)
+ ):
+ result = np.array(list(self), dtype=dtype)
+ else:
+ result = np.asarray(self._data, dtype=dtype)
+ if copy or self._hasna:
+ result = result.copy()
+ if self._hasna:
+ result[self.isna()] = na_value
+ return result
+
def unique(self: ArrowExtensionArrayT) -> ArrowExtensionArrayT:
"""
Compute the ArrowExtensionArray of unique values.
diff --git a/pandas/core/arrays/string_arrow.py b/pandas/core/arrays/string_arrow.py
index b8b1d64d7a093..c79e2f752c5a8 100644
--- a/pandas/core/arrays/string_arrow.py
+++ b/pandas/core/arrays/string_arrow.py
@@ -12,7 +12,6 @@
)
from pandas._typing import (
Dtype,
- NpDtype,
Scalar,
npt,
)
@@ -151,31 +150,6 @@ def dtype(self) -> StringDtype: # type: ignore[override]
"""
return self._dtype
- def __array__(self, dtype: NpDtype | None = None) -> np.ndarray:
- """Correctly construct numpy arrays when passed to `np.asarray()`."""
- return self.to_numpy(dtype=dtype)
-
- def to_numpy(
- self,
- dtype: npt.DTypeLike | None = None,
- copy: bool = False,
- na_value=lib.no_default,
- ) -> np.ndarray:
- """
- Convert to a NumPy ndarray.
- """
- # TODO: copy argument is ignored
-
- result = np.array(self._data, dtype=dtype)
- if self._data.null_count > 0:
- if na_value is lib.no_default:
- if dtype and np.issubdtype(dtype, np.floating):
- return result
- na_value = self._dtype.na_value
- mask = self.isna()
- result[mask] = na_value
- return result
-
def insert(self, loc: int, item) -> ArrowStringArray:
if not isinstance(item, str) and item is not libmissing.NA:
raise TypeError("Scalar must be NA or str")
@@ -219,10 +193,11 @@ def astype(self, dtype, copy: bool = True):
if copy:
return self.copy()
return self
-
elif isinstance(dtype, NumericDtype):
data = self._data.cast(pa.from_numpy_dtype(dtype.numpy_dtype))
return dtype.__from_arrow__(data)
+ elif isinstance(dtype, np.dtype) and np.issubdtype(dtype, np.floating):
+ return self.to_numpy(dtype=dtype, na_value=np.nan)
return super().astype(dtype, copy=copy)
diff --git a/pandas/tests/extension/test_arrow.py b/pandas/tests/extension/test_arrow.py
index 557cdd96bf00c..c36b129f919e8 100644
--- a/pandas/tests/extension/test_arrow.py
+++ b/pandas/tests/extension/test_arrow.py
@@ -1421,3 +1421,20 @@ def test_astype_from_non_pyarrow(data):
assert not isinstance(pd_array.dtype, ArrowDtype)
assert isinstance(result.dtype, ArrowDtype)
tm.assert_extension_array_equal(result, data)
+
+
+def test_to_numpy_with_defaults(data):
+ # GH49973
+ result = data.to_numpy()
+
+ pa_type = data._data.type
+ if pa.types.is_duration(pa_type) or pa.types.is_timestamp(pa_type):
+ expected = np.array(list(data))
+ else:
+ expected = np.array(data._data)
+
+ if data._hasna:
+ expected = expected.astype(object)
+ expected[pd.isna(data)] = pd.NA
+
+ tm.assert_numpy_array_equal(result, expected)
| - [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/v2.0.0.rst` file if fixing a bug or adding a new feature.
Moved and refactored `to_numpy` from `ArrowStringArray` to `ArrowExtensionArray` to improve performance for other pyarrow types.
```
before after ratio
<main> <arrow-ea-numpy>
- 63.3±0.5ms 3.17±0.2ms 0.05 array.ArrowExtensionArray.time_to_numpy('float64[pyarrow]', True)
- 63.8±1ms 3.10±0.2ms 0.05 array.ArrowExtensionArray.time_to_numpy('int64[pyarrow]', True)
- 62.1±1ms 1.42±0.02ms 0.02 array.ArrowExtensionArray.time_to_numpy('boolean[pyarrow]', True)
- 31.2±1ms 122±0.3μs 0.00 array.ArrowExtensionArray.time_to_numpy('boolean[pyarrow]', False)
- 31.0±0.4ms 33.9±0.8μs 0.00 array.ArrowExtensionArray.time_to_numpy('int64[pyarrow]', False)
- 31.7±1ms 30.6±1μs 0.00 array.ArrowExtensionArray.time_to_numpy('float64[pyarrow]', False)
```
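The moved `to_numpy` logic can be sketched in plain numpy terms (the function and variable names below are illustrative, not the actual pandas internals): when the array holds missing values and no explicit dtype is requested, the PR falls back to `object` dtype so the NA sentinel fits, and only copies when it must write the sentinel in.

```python
import numpy as np

def to_numpy_sketch(values, mask, dtype=None, na_value=None):
    # Mirrors the branch logic added in the PR: with missing values
    # and no explicit dtype, fall back to object so the NA sentinel
    # can be stored; copy only when the result must be mutated.
    hasna = mask.any()
    if dtype is None and hasna:
        dtype = object
    result = np.asarray(values, dtype=dtype)
    if hasna:
        result = result.copy()
        result[mask] = na_value
    return result

vals = np.array([1.0, 2.0, 3.0])
mask = np.array([False, True, False])
out = to_numpy_sketch(vals, mask, na_value=None)
# out is an object-dtype array with the sentinel at the masked slot
```

The real implementation additionally special-cases timestamp/duration types (which round-trip through `list(self)`), but the dtype-fallback shape is the same.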
This results in perf improvements for a number of ops that go through numpy:
```
import pandas as pd
import numpy as np
df = pd.DataFrame(np.random.randn(10**5, 10), dtype="float64[pyarrow]")
%timeit df.corr()
402 ms ± 17.1 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) <- main
26 ms ± 1.38 ms per loop (mean ± std. dev. of 7 runs, 10 loops each) <- PR
%timeit df.rolling(100).sum()
378 ms ± 9.79 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) <- main
15.2 ms ± 503 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) <- PR
```
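These downstream speedups come from `np.asarray()` dispatching to the new `__array__` hook, so every numpy-based op inherits the fast `to_numpy` path. A toy illustration of that dispatch (the `Wrapper` class here is hypothetical, standing in for `ArrowExtensionArray`):

```python
import numpy as np

class Wrapper:
    # Stand-in for an extension array: np.asarray() calls __array__,
    # so routing it through a fast conversion speeds up every
    # numpy-based operation on the wrapper.
    def __init__(self, data):
        self._data = data

    def __array__(self, dtype=None):
        return np.asarray(self._data, dtype=dtype)

w = Wrapper([1, 2, 3])
arr = np.asarray(w)  # dispatches to Wrapper.__array__
```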
| https://api.github.com/repos/pandas-dev/pandas/pulls/49973 | 2022-11-30T12:52:21Z | 2022-12-16T03:26:13Z | 2022-12-16T03:26:13Z | 2022-12-20T00:46:16Z |
Fix `Styler.format()` Excel example | diff --git a/pandas/io/formats/style_render.py b/pandas/io/formats/style_render.py
index 9244a8c5e672d..d7cb70b0f5110 100644
--- a/pandas/io/formats/style_render.py
+++ b/pandas/io/formats/style_render.py
@@ -1110,7 +1110,7 @@ def format(
>>> df = pd.DataFrame({"A": [1, 0, -1]})
>>> pseudo_css = "number-format: 0§[Red](0)§-§@;"
- >>> df.style.applymap(lambda v: css).to_excel("formatted_file.xlsx")
+ >>> df.style.applymap(lambda v: pseudo_css).to_excel("formatted_file.xlsx")
... # doctest: +SKIP
.. figure:: ../../_static/style/format_excel_css.png
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
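For context on why the lambda's signature matters: `Styler.applymap` calls the styling function once per cell, passing the cell value, so the function must accept one argument. A minimal sketch of that calling convention (pure Python, no Styler involved):

```python
# applymap() passes each cell value to the styling function,
# so the function must take exactly one argument.
pseudo_css = "number-format: 0§[Red](0)§-§@;"  # Excel pseudo-CSS from the docstring

fixed = lambda v: pseudo_css   # accepts the cell value, as applymap requires
broken = lambda: pseudo_css    # zero-arg version fails when handed a value

result = [fixed(cell) for cell in [1, 0, -1]]  # one CSS string per cell
```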
| https://api.github.com/repos/pandas-dev/pandas/pulls/49971 | 2022-11-30T04:38:27Z | 2022-11-30T23:35:41Z | 2022-11-30T23:35:41Z | 2022-12-01T21:14:28Z |