| title | diff | body | url | created_at | closed_at | merged_at | updated_at |
|---|---|---|---|---|---|---|---|
Backport PR #31704 on branch 1.0.x (DOC: fix contributors listing for v1.0.1) | diff --git a/doc/source/whatsnew/v1.0.1.rst b/doc/source/whatsnew/v1.0.1.rst
index 5863ff7367626..ef3bb8161d13f 100644
--- a/doc/source/whatsnew/v1.0.1.rst
+++ b/doc/source/whatsnew/v1.0.1.rst
@@ -76,4 +76,4 @@ Bug fixes
Contributors
~~~~~~~~~~~~
-.. contributors:: v0.25.3..v1.0.1|HEAD
+.. contributors:: v1.0.0..v1.0.1|HEAD
| Backport PR #31704: DOC: fix contributors listing for v1.0.1 | https://api.github.com/repos/pandas-dev/pandas/pulls/31706 | 2020-02-05T15:54:50Z | 2020-02-05T15:57:47Z | 2020-02-05T15:57:47Z | 2020-02-05T15:57:47Z |
DOC: fix contributors listing for v1.0.1 | diff --git a/doc/source/whatsnew/v1.0.1.rst b/doc/source/whatsnew/v1.0.1.rst
index 5863ff7367626..ef3bb8161d13f 100644
--- a/doc/source/whatsnew/v1.0.1.rst
+++ b/doc/source/whatsnew/v1.0.1.rst
@@ -76,4 +76,4 @@ Bug fixes
Contributors
~~~~~~~~~~~~
-.. contributors:: v0.25.3..v1.0.1|HEAD
+.. contributors:: v1.0.0..v1.0.1|HEAD
| Follow-up on https://github.com/pandas-dev/pandas/pull/31699 | https://api.github.com/repos/pandas-dev/pandas/pulls/31704 | 2020-02-05T15:28:05Z | 2020-02-05T15:54:39Z | 2020-02-05T15:54:39Z | 2020-02-05T15:54:43Z |
Backport PR #31616 on branch 1.0.x (REGR: Fixed AssertionError in groupby) | diff --git a/doc/source/whatsnew/v1.0.1.rst b/doc/source/whatsnew/v1.0.1.rst
index 222690b2aed8d..6b26e46e7eb55 100644
--- a/doc/source/whatsnew/v1.0.1.rst
+++ b/doc/source/whatsnew/v1.0.1.rst
@@ -19,6 +19,7 @@ Fixed regressions
- Fixed regression when indexing a ``Series`` or ``DataFrame`` indexed by ``DatetimeIndex`` with a slice containg a :class:`datetime.date` (:issue:`31501`)
- Fixed regression in ``DataFrame.__setitem__`` raising an ``AttributeError`` with a :class:`MultiIndex` and a non-monotonic indexer (:issue:`31449`)
- Fixed regression in :class:`Series` multiplication when multiplying a numeric :class:`Series` with >10000 elements with a timedelta-like scalar (:issue:`31457`)
+- Fixed regression in ``.groupby().agg()`` raising an ``AssertionError`` for some reductions like ``min`` on object-dtype columns (:issue:`31522`)
- Fixed regression in ``.groupby()`` aggregations with categorical dtype using Cythonized reduction functions (e.g. ``first``) (:issue:`31450`)
- Fixed regression in :meth:`GroupBy.apply` if called with a function which returned a non-pandas non-scalar object (e.g. a list or numpy array) (:issue:`31441`)
- Fixed regression in :meth:`DataFrame.groupby` whereby taking the minimum or maximum of a column with period dtype would raise a ``TypeError``. (:issue:`31471`)
diff --git a/pandas/core/groupby/generic.py b/pandas/core/groupby/generic.py
index c49677fa27a31..1014723c53a7a 100644
--- a/pandas/core/groupby/generic.py
+++ b/pandas/core/groupby/generic.py
@@ -1019,6 +1019,10 @@ def _cython_agg_blocks(
agg_blocks: List[Block] = []
new_items: List[np.ndarray] = []
deleted_items: List[np.ndarray] = []
+ # Some object-dtype blocks might be split into List[Block[T], Block[U]]
+ split_items: List[np.ndarray] = []
+ split_frames: List[DataFrame] = []
+
no_result = object()
for block in data.blocks:
# Avoid inheriting result from earlier in the loop
@@ -1058,40 +1062,56 @@ def _cython_agg_blocks(
else:
result = cast(DataFrame, result)
# unwrap DataFrame to get array
+ if len(result._data.blocks) != 1:
+ # We've split an object block! Everything we've assumed
+ # about a single block input returning a single block output
+ # is a lie. To keep the code-path for the typical non-split case
+ # clean, we choose to clean up this mess later on.
+ split_items.append(locs)
+ split_frames.append(result)
+ continue
+
assert len(result._data.blocks) == 1
result = result._data.blocks[0].values
if isinstance(result, np.ndarray) and result.ndim == 1:
result = result.reshape(1, -1)
- finally:
- assert not isinstance(result, DataFrame)
-
- if result is not no_result:
- # see if we can cast the block back to the original dtype
- result = maybe_downcast_numeric(result, block.dtype)
-
- if block.is_extension and isinstance(result, np.ndarray):
- # e.g. block.values was an IntegerArray
- # (1, N) case can occur if block.values was Categorical
- # and result is ndarray[object]
- assert result.ndim == 1 or result.shape[0] == 1
- try:
- # Cast back if feasible
- result = type(block.values)._from_sequence(
- result.ravel(), dtype=block.values.dtype
- )
- except ValueError:
- # reshape to be valid for non-Extension Block
- result = result.reshape(1, -1)
+ assert not isinstance(result, DataFrame)
+
+ if result is not no_result:
+ # see if we can cast the block back to the original dtype
+ result = maybe_downcast_numeric(result, block.dtype)
+
+ if block.is_extension and isinstance(result, np.ndarray):
+ # e.g. block.values was an IntegerArray
+ # (1, N) case can occur if block.values was Categorical
+ # and result is ndarray[object]
+ assert result.ndim == 1 or result.shape[0] == 1
+ try:
+ # Cast back if feasible
+ result = type(block.values)._from_sequence(
+ result.ravel(), dtype=block.values.dtype
+ )
+ except ValueError:
+ # reshape to be valid for non-Extension Block
+ result = result.reshape(1, -1)
- agg_block: Block = block.make_block(result)
+ agg_block: Block = block.make_block(result)
new_items.append(locs)
agg_blocks.append(agg_block)
- if not agg_blocks:
+ if not (agg_blocks or split_frames):
raise DataError("No numeric types to aggregate")
+ if split_items:
+ # Clean up the mess left over from split blocks.
+ for locs, result in zip(split_items, split_frames):
+ assert len(locs) == result.shape[1]
+ for i, loc in enumerate(locs):
+ new_items.append(np.array([loc], dtype=locs.dtype))
+ agg_blocks.append(result.iloc[:, [i]]._data.blocks[0])
+
# reset the locs in the blocks to correspond to our
# current ordering
indexer = np.concatenate(new_items)
diff --git a/pandas/tests/groupby/aggregate/test_aggregate.py b/pandas/tests/groupby/aggregate/test_aggregate.py
index 3ce91663c2493..d521a3a500f97 100644
--- a/pandas/tests/groupby/aggregate/test_aggregate.py
+++ b/pandas/tests/groupby/aggregate/test_aggregate.py
@@ -378,6 +378,49 @@ def test_agg_index_has_complex_internals(index):
tm.assert_frame_equal(result, expected)
+def test_agg_split_block():
+ # https://github.com/pandas-dev/pandas/issues/31522
+ df = pd.DataFrame(
+ {
+ "key1": ["a", "a", "b", "b", "a"],
+ "key2": ["one", "two", "one", "two", "one"],
+ "key3": ["three", "three", "three", "six", "six"],
+ }
+ )
+ result = df.groupby("key1").min()
+ expected = pd.DataFrame(
+ {"key2": ["one", "one"], "key3": ["six", "six"]},
+ index=pd.Index(["a", "b"], name="key1"),
+ )
+ tm.assert_frame_equal(result, expected)
+
+
+def test_agg_split_object_part_datetime():
+ # https://github.com/pandas-dev/pandas/pull/31616
+ df = pd.DataFrame(
+ {
+ "A": pd.date_range("2000", periods=4),
+ "B": ["a", "b", "c", "d"],
+ "C": [1, 2, 3, 4],
+ "D": ["b", "c", "d", "e"],
+ "E": pd.date_range("2000", periods=4),
+ "F": [1, 2, 3, 4],
+ }
+ ).astype(object)
+ result = df.groupby([0, 0, 0, 0]).min()
+ expected = pd.DataFrame(
+ {
+ "A": [pd.Timestamp("2000")],
+ "B": ["a"],
+ "C": [1],
+ "D": ["b"],
+ "E": [pd.Timestamp("2000")],
+ "F": [1],
+ }
+ )
+ tm.assert_frame_equal(result, expected)
+
+
def test_agg_cython_category_not_implemented_fallback():
# https://github.com/pandas-dev/pandas/issues/31450
df = pd.DataFrame({"col_num": [1, 1, 2, 3]})
| Backport PR #31616: REGR: Fixed AssertionError in groupby | https://api.github.com/repos/pandas-dev/pandas/pulls/31703 | 2020-02-05T14:55:38Z | 2020-02-05T15:40:56Z | 2020-02-05T15:40:56Z | 2020-02-05T15:40:56Z |
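The regression fixed in the patch above can be exercised directly. This is a minimal sketch, assuming a pandas version containing the fix (>= 1.0.1); it reproduces the `test_agg_split_block` scenario where reducing several object-dtype columns at once used to raise an `AssertionError`:

```python
import pandas as pd

# Reducing multiple object-dtype columns with ``min`` used to hit an
# AssertionError when the result block got split (GH 31522).
df = pd.DataFrame(
    {
        "key1": ["a", "a", "b", "b", "a"],
        "key2": ["one", "two", "one", "two", "one"],
        "key3": ["three", "three", "three", "six", "six"],
    }
)

result = df.groupby("key1").min()
print(result)
# Lexicographic minimum per group, e.g. result.loc["a", "key2"] == "one"
```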
Backport PR #31699 on branch 1.0.x (DOC: Update 1.0.1 release notes) | diff --git a/doc/source/whatsnew/v1.0.1.rst b/doc/source/whatsnew/v1.0.1.rst
index 222690b2aed8d..bf9fa5f9d0708 100644
--- a/doc/source/whatsnew/v1.0.1.rst
+++ b/doc/source/whatsnew/v1.0.1.rst
@@ -1,7 +1,7 @@
.. _whatsnew_101:
-What's new in 1.0.1 (??)
-------------------------
+What's new in 1.0.1 (February 5, 2020)
+--------------------------------------
These are the changes in pandas 1.0.1. See :ref:`release` for a full changelog
including other versions of pandas.
@@ -74,3 +74,5 @@ Bug fixes
Contributors
~~~~~~~~~~~~
+
+.. contributors:: v0.25.3..v1.0.1|HEAD
diff --git a/doc/sphinxext/announce.py b/doc/sphinxext/announce.py
index fdc5a6b283ba8..9bc353b115852 100755
--- a/doc/sphinxext/announce.py
+++ b/doc/sphinxext/announce.py
@@ -57,6 +57,16 @@ def get_authors(revision_range):
pat = "^.*\\t(.*)$"
lst_release, cur_release = [r.strip() for r in revision_range.split("..")]
+ if "|" in cur_release:
+ # e.g. v1.0.1|HEAD
+ maybe_tag, head = cur_release.split("|")
+ assert head == "HEAD"
+ if maybe_tag in this_repo.tags:
+ cur_release = maybe_tag
+ else:
+ cur_release = head
+ revision_range = f"{lst_release}..{cur_release}"
+
# authors, in current release and previous to current release.
cur = set(re.findall(pat, this_repo.git.shortlog("-s", revision_range), re.M))
pre = set(re.findall(pat, this_repo.git.shortlog("-s", lst_release), re.M))
diff --git a/doc/sphinxext/contributors.py b/doc/sphinxext/contributors.py
index d9ba2bb2cfb07..c2b21e40cadad 100644
--- a/doc/sphinxext/contributors.py
+++ b/doc/sphinxext/contributors.py
@@ -6,7 +6,13 @@
This will be replaced with a message indicating the number of
code contributors and commits, and then list each contributor
-individually.
+individually. For development versions (before a tag is available)
+use::
+
+ .. contributors:: v0.23.0..v0.23.1|HEAD
+
+While the v0.23.1 tag does not exist, that will use the HEAD of the
+branch as the end of the revision range.
"""
from announce import build_components
from docutils import nodes
| Backport PR #31699: DOC: Update 1.0.1 release notes | https://api.github.com/repos/pandas-dev/pandas/pulls/31702 | 2020-02-05T14:43:33Z | 2020-02-05T15:22:36Z | 2020-02-05T15:22:36Z | 2020-02-05T15:22:37Z |
started to fixturize pandas/tests/base | diff --git a/pandas/conftest.py b/pandas/conftest.py
index 131a011c5a101..78feaa817b531 100644
--- a/pandas/conftest.py
+++ b/pandas/conftest.py
@@ -17,6 +17,7 @@
from pandas import DataFrame
import pandas._testing as tm
from pandas.core import ops
+from pandas.core.indexes.api import Index, MultiIndex
hypothesis.settings.register_profile(
"ci",
@@ -953,3 +954,69 @@ def __len__(self):
return self._data.__len__()
return TestNonDictMapping
+
+
+indices_dict = {
+ "unicode": tm.makeUnicodeIndex(100),
+ "string": tm.makeStringIndex(100),
+ "datetime": tm.makeDateIndex(100),
+ "datetime-tz": tm.makeDateIndex(100, tz="US/Pacific"),
+ "period": tm.makePeriodIndex(100),
+ "timedelta": tm.makeTimedeltaIndex(100),
+ "int": tm.makeIntIndex(100),
+ "uint": tm.makeUIntIndex(100),
+ "range": tm.makeRangeIndex(100),
+ "float": tm.makeFloatIndex(100),
+ "bool": tm.makeBoolIndex(2),
+ "categorical": tm.makeCategoricalIndex(100),
+ "interval": tm.makeIntervalIndex(100),
+ "empty": Index([]),
+ "tuples": MultiIndex.from_tuples(zip(["foo", "bar", "baz"], [1, 2, 3])),
+ "repeats": Index([0, 0, 1, 1, 2, 2]),
+}
+
+
+@pytest.fixture(params=indices_dict.keys())
+def indices(request):
+ # copy to avoid mutation, e.g. setting .name
+ return indices_dict[request.param].copy()
+
+
+def _create_series(index):
+ """ Helper for the _series dict """
+ size = len(index)
+ data = np.random.randn(size)
+ return pd.Series(data, index=index, name="a")
+
+
+_series = {
+ f"series-with-{index_id}-index": _create_series(index)
+ for index_id, index in indices_dict.items()
+}
+
+
+_narrow_dtypes = [
+ np.float16,
+ np.float32,
+ np.int8,
+ np.int16,
+ np.int32,
+ np.uint8,
+ np.uint16,
+ np.uint32,
+]
+_narrow_series = {
+ f"{dtype.__name__}-series": tm.makeFloatSeries(name="a").astype(dtype)
+ for dtype in _narrow_dtypes
+}
+
+_index_or_series_objs = {**indices_dict, **_series, **_narrow_series}
+
+
+@pytest.fixture(params=_index_or_series_objs.keys())
+def index_or_series_obj(request):
+ """
+ Fixture for tests on indexes, series and series with a narrow dtype
+ copy to avoid mutation, e.g. setting .name
+ """
+ return _index_or_series_objs[request.param].copy(deep=True)
diff --git a/pandas/tests/base/test_ops.py b/pandas/tests/base/test_ops.py
index e522c7f743a05..9deb56f070d56 100644
--- a/pandas/tests/base/test_ops.py
+++ b/pandas/tests/base/test_ops.py
@@ -109,26 +109,26 @@ def test_binary_ops(klass, op_name, op):
assert expected_str in getattr(klass, "r" + op_name).__doc__
-class TestTranspose(Ops):
+class TestTranspose:
errmsg = "the 'axes' parameter is not supported"
- def test_transpose(self):
- for obj in self.objs:
- tm.assert_equal(obj.transpose(), obj)
+ def test_transpose(self, index_or_series_obj):
+ obj = index_or_series_obj
+ tm.assert_equal(obj.transpose(), obj)
- def test_transpose_non_default_axes(self):
- for obj in self.objs:
- with pytest.raises(ValueError, match=self.errmsg):
- obj.transpose(1)
- with pytest.raises(ValueError, match=self.errmsg):
- obj.transpose(axes=1)
+ def test_transpose_non_default_axes(self, index_or_series_obj):
+ obj = index_or_series_obj
+ with pytest.raises(ValueError, match=self.errmsg):
+ obj.transpose(1)
+ with pytest.raises(ValueError, match=self.errmsg):
+ obj.transpose(axes=1)
- def test_numpy_transpose(self):
- for obj in self.objs:
- tm.assert_equal(np.transpose(obj), obj)
+ def test_numpy_transpose(self, index_or_series_obj):
+ obj = index_or_series_obj
+ tm.assert_equal(np.transpose(obj), obj)
- with pytest.raises(ValueError, match=self.errmsg):
- np.transpose(obj, axes=1)
+ with pytest.raises(ValueError, match=self.errmsg):
+ np.transpose(obj, axes=1)
class TestIndexOps(Ops):
diff --git a/pandas/tests/indexes/conftest.py b/pandas/tests/indexes/conftest.py
deleted file mode 100644
index 57174f206b70d..0000000000000
--- a/pandas/tests/indexes/conftest.py
+++ /dev/null
@@ -1,29 +0,0 @@
-import pytest
-
-import pandas._testing as tm
-from pandas.core.indexes.api import Index, MultiIndex
-
-indices_dict = {
- "unicode": tm.makeUnicodeIndex(100),
- "string": tm.makeStringIndex(100),
- "datetime": tm.makeDateIndex(100),
- "datetime-tz": tm.makeDateIndex(100, tz="US/Pacific"),
- "period": tm.makePeriodIndex(100),
- "timedelta": tm.makeTimedeltaIndex(100),
- "int": tm.makeIntIndex(100),
- "uint": tm.makeUIntIndex(100),
- "range": tm.makeRangeIndex(100),
- "float": tm.makeFloatIndex(100),
- "bool": tm.makeBoolIndex(2),
- "categorical": tm.makeCategoricalIndex(100),
- "interval": tm.makeIntervalIndex(100),
- "empty": Index([]),
- "tuples": MultiIndex.from_tuples(zip(["foo", "bar", "baz"], [1, 2, 3])),
- "repeats": Index([0, 0, 1, 1, 2, 2]),
-}
-
-
-@pytest.fixture(params=indices_dict.keys())
-def indices(request):
- # copy to avoid mutation, e.g. setting .name
- return indices_dict[request.param].copy()
diff --git a/pandas/tests/indexes/test_base.py b/pandas/tests/indexes/test_base.py
index 04af9b09bbf89..c64a70af6f2a4 100644
--- a/pandas/tests/indexes/test_base.py
+++ b/pandas/tests/indexes/test_base.py
@@ -34,6 +34,7 @@
period_range,
)
import pandas._testing as tm
+from pandas.conftest import indices_dict
from pandas.core.indexes.api import (
Index,
MultiIndex,
@@ -42,7 +43,6 @@
ensure_index_from_sequences,
)
from pandas.tests.indexes.common import Base
-from pandas.tests.indexes.conftest import indices_dict
class TestIndex(Base):
diff --git a/pandas/tests/indexes/test_setops.py b/pandas/tests/indexes/test_setops.py
index abfa413d56655..d0cbb2ab75f72 100644
--- a/pandas/tests/indexes/test_setops.py
+++ b/pandas/tests/indexes/test_setops.py
@@ -13,7 +13,7 @@
from pandas import Float64Index, Int64Index, RangeIndex, UInt64Index
import pandas._testing as tm
from pandas.api.types import pandas_dtype
-from pandas.tests.indexes.conftest import indices_dict
+from pandas.conftest import indices_dict
COMPATIBLE_INCONSISTENT_PAIRS = {
(Int64Index, RangeIndex): (tm.makeIntIndex, tm.makeRangeIndex),
| Part of #23877
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
-------------------------------------------
I didn't apply the fixtures to the rest of `test_ops.py` yet since @jreback voiced some [concerns in this comment](https://github.com/pandas-dev/pandas/pull/30147#discussion_r356587642). | https://api.github.com/repos/pandas-dev/pandas/pulls/31701 | 2020-02-05T14:32:07Z | 2020-02-15T01:12:49Z | 2020-02-15T01:12:49Z | 2020-02-15T01:22:29Z |
TYP: remove type:ignore from pandas/io/common.py | diff --git a/pandas/io/common.py b/pandas/io/common.py
index e506cc155d48d..c4772895afd1e 100644
--- a/pandas/io/common.py
+++ b/pandas/io/common.py
@@ -7,7 +7,19 @@
import mmap
import os
import pathlib
-from typing import IO, Any, AnyStr, Dict, List, Mapping, Optional, Tuple, Union
+from typing import (
+ IO,
+ TYPE_CHECKING,
+ Any,
+ AnyStr,
+ Dict,
+ List,
+ Mapping,
+ Optional,
+ Tuple,
+ Type,
+ Union,
+)
from urllib.parse import ( # noqa
urlencode,
urljoin,
@@ -37,6 +49,10 @@
_VALID_URLS.discard("")
+if TYPE_CHECKING:
+ from io import IOBase # noqa: F401
+
+
def is_url(url) -> bool:
"""
Check to see if a URL has a valid protocol.
@@ -356,12 +372,13 @@ def get_handle(
handles : list of file-like objects
A list of file-like object that were opened in this function.
"""
+ need_text_wrapping: Tuple[Type["IOBase"], ...]
try:
from s3fs import S3File
need_text_wrapping = (BufferedIOBase, RawIOBase, S3File)
except ImportError:
- need_text_wrapping = (BufferedIOBase, RawIOBase) # type: ignore
+ need_text_wrapping = (BufferedIOBase, RawIOBase)
handles: List[IO] = list()
f = path_or_buf
| https://api.github.com/repos/pandas-dev/pandas/pulls/31700 | 2020-02-05T14:19:24Z | 2020-02-05T15:12:31Z | 2020-02-05T15:12:31Z | 2020-02-05T15:20:32Z | |
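The pattern in the diff above — pre-declaring the variable's annotation before a `try`/`except ImportError` — is what removes the need for `# type: ignore`. A minimal sketch with stdlib types only (`some_optional_dep` is a hypothetical optional dependency standing in for `s3fs`):

```python
from io import BufferedIOBase, IOBase, RawIOBase
from typing import Tuple, Type

# Declaring the annotation once, up front, lets both branches assign a plain
# tuple; without it, mypy infers a narrower type from the first assignment
# and flags the other branch, forcing a ``# type: ignore``.
need_text_wrapping: Tuple[Type[IOBase], ...]
try:
    from some_optional_dep import ExtraFile  # hypothetical optional import
    need_text_wrapping = (BufferedIOBase, RawIOBase, ExtraFile)
except ImportError:
    need_text_wrapping = (BufferedIOBase, RawIOBase)

print(need_text_wrapping)
```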
DOC: Update 1.0.1 release notes | diff --git a/doc/source/whatsnew/v1.0.1.rst b/doc/source/whatsnew/v1.0.1.rst
index 222690b2aed8d..bf9fa5f9d0708 100644
--- a/doc/source/whatsnew/v1.0.1.rst
+++ b/doc/source/whatsnew/v1.0.1.rst
@@ -1,7 +1,7 @@
.. _whatsnew_101:
-What's new in 1.0.1 (??)
-------------------------
+What's new in 1.0.1 (February 5, 2020)
+--------------------------------------
These are the changes in pandas 1.0.1. See :ref:`release` for a full changelog
including other versions of pandas.
@@ -74,3 +74,5 @@ Bug fixes
Contributors
~~~~~~~~~~~~
+
+.. contributors:: v0.25.3..v1.0.1|HEAD
diff --git a/doc/sphinxext/announce.py b/doc/sphinxext/announce.py
index f394aac5c545b..e4859157f73de 100755
--- a/doc/sphinxext/announce.py
+++ b/doc/sphinxext/announce.py
@@ -57,6 +57,16 @@ def get_authors(revision_range):
pat = "^.*\\t(.*)$"
lst_release, cur_release = [r.strip() for r in revision_range.split("..")]
+ if "|" in cur_release:
+ # e.g. v1.0.1|HEAD
+ maybe_tag, head = cur_release.split("|")
+ assert head == "HEAD"
+ if maybe_tag in this_repo.tags:
+ cur_release = maybe_tag
+ else:
+ cur_release = head
+ revision_range = f"{lst_release}..{cur_release}"
+
# authors, in current release and previous to current release.
cur = set(re.findall(pat, this_repo.git.shortlog("-s", revision_range), re.M))
pre = set(re.findall(pat, this_repo.git.shortlog("-s", lst_release), re.M))
diff --git a/doc/sphinxext/contributors.py b/doc/sphinxext/contributors.py
index d9ba2bb2cfb07..c2b21e40cadad 100644
--- a/doc/sphinxext/contributors.py
+++ b/doc/sphinxext/contributors.py
@@ -6,7 +6,13 @@
This will be replaced with a message indicating the number of
code contributors and commits, and then list each contributor
-individually.
+individually. For development versions (before a tag is available)
+use::
+
+ .. contributors:: v0.23.0..v0.23.1|HEAD
+
+While the v0.23.1 tag does not exist, that will use the HEAD of the
+branch as the end of the revision range.
"""
from announce import build_components
from docutils import nodes
| cc @jorisvandenbossche | https://api.github.com/repos/pandas-dev/pandas/pulls/31699 | 2020-02-05T13:51:06Z | 2020-02-05T14:43:22Z | 2020-02-05T14:43:22Z | 2020-02-05T15:28:25Z |
Backport PR #31596 on branch 1.0.x (BUG: read_csv used in file like object RawIOBase is not recognize encoding option) | diff --git a/doc/source/whatsnew/v1.0.1.rst b/doc/source/whatsnew/v1.0.1.rst
index 2ca836d9c5508..222690b2aed8d 100644
--- a/doc/source/whatsnew/v1.0.1.rst
+++ b/doc/source/whatsnew/v1.0.1.rst
@@ -33,6 +33,7 @@ Fixed regressions
- Fixed regression in :meth:`qcut` when passed a nullable integer. (:issue:`31389`)
- Fixed regression in assigning to a :class:`Series` using a nullable integer dtype (:issue:`31446`)
- Fixed performance regression when indexing a ``DataFrame`` or ``Series`` with a :class:`MultiIndex` for the index using a list of labels (:issue:`31648`)
+- Fixed regression in :meth:`read_csv` used in file like object ``RawIOBase`` is not recognize ``encoding`` option (:issue:`31575`)
.. ---------------------------------------------------------------------------
diff --git a/pandas/_libs/parsers.pyx b/pandas/_libs/parsers.pyx
index 377d49f2bbd29..3077f73a8d1a4 100644
--- a/pandas/_libs/parsers.pyx
+++ b/pandas/_libs/parsers.pyx
@@ -638,7 +638,7 @@ cdef class TextReader:
raise ValueError(f'Unrecognized compression type: '
f'{self.compression}')
- if self.encoding and isinstance(source, io.BufferedIOBase):
+ if self.encoding and isinstance(source, (io.BufferedIOBase, io.RawIOBase)):
source = io.TextIOWrapper(
source, self.encoding.decode('utf-8'), newline='')
diff --git a/pandas/io/common.py b/pandas/io/common.py
index 771a302d647ec..9617965915aa5 100644
--- a/pandas/io/common.py
+++ b/pandas/io/common.py
@@ -3,7 +3,7 @@
import bz2
from collections import abc
import gzip
-from io import BufferedIOBase, BytesIO
+from io import BufferedIOBase, BytesIO, RawIOBase
import mmap
import os
import pathlib
@@ -361,9 +361,9 @@ def get_handle(
try:
from s3fs import S3File
- need_text_wrapping = (BufferedIOBase, S3File)
+ need_text_wrapping = (BufferedIOBase, RawIOBase, S3File)
except ImportError:
- need_text_wrapping = BufferedIOBase # type: ignore
+ need_text_wrapping = (BufferedIOBase, RawIOBase) # type: ignore
handles: List[IO] = list()
f = path_or_buf
@@ -439,7 +439,7 @@ def get_handle(
from io import TextIOWrapper
g = TextIOWrapper(f, encoding=encoding, newline="")
- if not isinstance(f, BufferedIOBase):
+ if not isinstance(f, (BufferedIOBase, RawIOBase)):
handles.append(g)
f = g
diff --git a/pandas/io/parsers.py b/pandas/io/parsers.py
index b4eb2fb1411d0..cb108362f4dc7 100755
--- a/pandas/io/parsers.py
+++ b/pandas/io/parsers.py
@@ -5,7 +5,7 @@
from collections import abc, defaultdict
import csv
import datetime
-from io import BufferedIOBase, StringIO, TextIOWrapper
+from io import BufferedIOBase, RawIOBase, StringIO, TextIOWrapper
import re
import sys
from textwrap import fill
@@ -1876,7 +1876,7 @@ def __init__(self, src, **kwds):
# Handle the file object with universal line mode enabled.
# We will handle the newline character ourselves later on.
- if isinstance(src, BufferedIOBase):
+ if isinstance(src, (BufferedIOBase, RawIOBase)):
src = TextIOWrapper(src, encoding=encoding, newline="")
kwds["encoding"] = "utf-8"
diff --git a/pandas/tests/io/parser/test_encoding.py b/pandas/tests/io/parser/test_encoding.py
index 33abf4bb7d9ee..620f837935718 100644
--- a/pandas/tests/io/parser/test_encoding.py
+++ b/pandas/tests/io/parser/test_encoding.py
@@ -142,6 +142,7 @@ def test_read_csv_utf_aliases(all_parsers, utf_value, encoding_fmt):
)
def test_binary_mode_file_buffers(all_parsers, csv_dir_path, fname, encoding):
# gh-23779: Python csv engine shouldn't error on files opened in binary.
+ # gh-31575: Python csv engine shouldn't error on files opened in raw binary.
parser = all_parsers
fpath = os.path.join(csv_dir_path, fname)
@@ -155,6 +156,10 @@ def test_binary_mode_file_buffers(all_parsers, csv_dir_path, fname, encoding):
result = parser.read_csv(fb, encoding=encoding)
tm.assert_frame_equal(expected, result)
+ with open(fpath, mode="rb", buffering=0) as fb:
+ result = parser.read_csv(fb, encoding=encoding)
+ tm.assert_frame_equal(expected, result)
+
@pytest.mark.parametrize("pass_encoding", [True, False])
def test_encoding_temp_file(all_parsers, utf_value, encoding_fmt, pass_encoding):
| Backport PR #31596: BUG: read_csv used in file like object RawIOBase is not recognize encoding option | https://api.github.com/repos/pandas-dev/pandas/pulls/31698 | 2020-02-05T13:27:18Z | 2020-02-05T14:16:20Z | 2020-02-05T14:16:20Z | 2020-02-05T14:16:21Z |
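The core of the fix above is that a file opened with `buffering=0` is an `io.RawIOBase`, not an `io.BufferedIOBase`, so a check for `BufferedIOBase` alone skipped the encoding wrapper. A stdlib-only sketch of the wrapping logic (the file path is a throwaway temp file):

```python
import io
import os
import tempfile

# Write a small CSV to a temp file.
path = os.path.join(tempfile.mkdtemp(), "data.csv")
with open(path, "w", encoding="utf-8") as fh:
    fh.write("a,b\n1,2\n")

# ``buffering=0`` yields a raw, unbuffered binary handle (GH 31575).
with open(path, mode="rb", buffering=0) as fb:
    assert isinstance(fb, io.RawIOBase)
    assert not isinstance(fb, io.BufferedIOBase)
    # The fix: check for RawIOBase too before applying the text wrapper.
    if isinstance(fb, (io.BufferedIOBase, io.RawIOBase)):
        text = io.TextIOWrapper(fb, encoding="utf-8", newline="")
        print(text.read())  # prints the CSV text
```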
Backport PR #31690 on branch 1.0.x (TST: add test for regression in groupby with empty MultiIndex level) | diff --git a/doc/source/whatsnew/v1.0.1.rst b/doc/source/whatsnew/v1.0.1.rst
index c82a58e5d3c45..2ca836d9c5508 100644
--- a/doc/source/whatsnew/v1.0.1.rst
+++ b/doc/source/whatsnew/v1.0.1.rst
@@ -22,6 +22,7 @@ Fixed regressions
- Fixed regression in ``.groupby()`` aggregations with categorical dtype using Cythonized reduction functions (e.g. ``first``) (:issue:`31450`)
- Fixed regression in :meth:`GroupBy.apply` if called with a function which returned a non-pandas non-scalar object (e.g. a list or numpy array) (:issue:`31441`)
- Fixed regression in :meth:`DataFrame.groupby` whereby taking the minimum or maximum of a column with period dtype would raise a ``TypeError``. (:issue:`31471`)
+- Fixed regression in :meth:`DataFrame.groupby` with an empty DataFrame grouping by a level of a MultiIndex (:issue:`31670`).
- Fixed regression in :meth:`DataFrame.apply` with object dtype and non-reducing function (:issue:`31505`)
- Fixed regression in :meth:`to_datetime` when parsing non-nanosecond resolution datetimes (:issue:`31491`)
- Fixed regression in :meth:`~DataFrame.to_csv` where specifying an ``na_rep`` might truncate the values written (:issue:`31447`)
diff --git a/pandas/tests/groupby/test_grouping.py b/pandas/tests/groupby/test_grouping.py
index 70ba21d89d22f..7245d6f481d99 100644
--- a/pandas/tests/groupby/test_grouping.py
+++ b/pandas/tests/groupby/test_grouping.py
@@ -676,6 +676,19 @@ def test_groupby_level_index_value_all_na(self):
)
tm.assert_frame_equal(result, expected)
+ def test_groupby_multiindex_level_empty(self):
+ # https://github.com/pandas-dev/pandas/issues/31670
+ df = pd.DataFrame(
+ [[123, "a", 1.0], [123, "b", 2.0]], columns=["id", "category", "value"]
+ )
+ df = df.set_index(["id", "category"])
+ empty = df[df.value < 0]
+ result = empty.groupby("id").sum()
+ expected = pd.DataFrame(
+ dtype="float64", columns=["value"], index=pd.Int64Index([], name="id")
+ )
+ tm.assert_frame_equal(result, expected)
+
# get_group
# --------------------------------
| Backport PR #31690: TST: add test for regression in groupby with empty MultiIndex level | https://api.github.com/repos/pandas-dev/pandas/pulls/31692 | 2020-02-05T10:17:17Z | 2020-02-05T11:09:51Z | 2020-02-05T11:09:51Z | 2020-02-05T11:09:51Z |
TST: add test for regression in groupby with empty MultiIndex level | diff --git a/doc/source/whatsnew/v1.0.1.rst b/doc/source/whatsnew/v1.0.1.rst
index c82a58e5d3c45..2ca836d9c5508 100644
--- a/doc/source/whatsnew/v1.0.1.rst
+++ b/doc/source/whatsnew/v1.0.1.rst
@@ -22,6 +22,7 @@ Fixed regressions
- Fixed regression in ``.groupby()`` aggregations with categorical dtype using Cythonized reduction functions (e.g. ``first``) (:issue:`31450`)
- Fixed regression in :meth:`GroupBy.apply` if called with a function which returned a non-pandas non-scalar object (e.g. a list or numpy array) (:issue:`31441`)
- Fixed regression in :meth:`DataFrame.groupby` whereby taking the minimum or maximum of a column with period dtype would raise a ``TypeError``. (:issue:`31471`)
+- Fixed regression in :meth:`DataFrame.groupby` with an empty DataFrame grouping by a level of a MultiIndex (:issue:`31670`).
- Fixed regression in :meth:`DataFrame.apply` with object dtype and non-reducing function (:issue:`31505`)
- Fixed regression in :meth:`to_datetime` when parsing non-nanosecond resolution datetimes (:issue:`31491`)
- Fixed regression in :meth:`~DataFrame.to_csv` where specifying an ``na_rep`` might truncate the values written (:issue:`31447`)
diff --git a/pandas/tests/groupby/test_grouping.py b/pandas/tests/groupby/test_grouping.py
index 4273139b32828..efcd22f9c0c82 100644
--- a/pandas/tests/groupby/test_grouping.py
+++ b/pandas/tests/groupby/test_grouping.py
@@ -676,6 +676,19 @@ def test_groupby_level_index_value_all_na(self):
)
tm.assert_frame_equal(result, expected)
+ def test_groupby_multiindex_level_empty(self):
+ # https://github.com/pandas-dev/pandas/issues/31670
+ df = pd.DataFrame(
+ [[123, "a", 1.0], [123, "b", 2.0]], columns=["id", "category", "value"]
+ )
+ df = df.set_index(["id", "category"])
+ empty = df[df.value < 0]
+ result = empty.groupby("id").sum()
+ expected = pd.DataFrame(
+ dtype="float64", columns=["value"], index=pd.Int64Index([], name="id")
+ )
+ tm.assert_frame_equal(result, expected)
+
# get_group
# --------------------------------
| Closes https://github.com/pandas-dev/pandas/issues/31670 (the actual fix was already in #29243) | https://api.github.com/repos/pandas-dev/pandas/pulls/31690 | 2020-02-05T09:00:05Z | 2020-02-05T10:17:06Z | 2020-02-05T10:17:06Z | 2020-02-05T10:17:11Z |
Backport PR #29243 on branch 1.0.x (Follow up PR: #28097 Simplify branch statement) | diff --git a/pandas/core/indexes/multi.py b/pandas/core/indexes/multi.py
index 6c125e17e5549..0c57b27f325d0 100644
--- a/pandas/core/indexes/multi.py
+++ b/pandas/core/indexes/multi.py
@@ -1270,10 +1270,10 @@ def _get_grouper_for_level(self, mapper, level):
# on the output of a groupby doesn't reflect back here.
level_index = level_index.copy()
- if len(level_index):
- grouper = level_index.take(codes)
- else:
+ if level_index._can_hold_na:
grouper = level_index.take(codes, fill_value=True)
+ else:
+ grouper = level_index.take(codes)
return grouper, codes, level_index
| Backport PR #29243: Follow up PR: #28097 Simplify branch statement | https://api.github.com/repos/pandas-dev/pandas/pulls/31689 | 2020-02-05T08:47:51Z | 2020-02-05T09:47:56Z | 2020-02-05T09:47:56Z | 2020-02-05T09:47:57Z |
Backport PR #31666: BUG: Block.iget not wrapping timedelta64/datetime64 | diff --git a/doc/source/whatsnew/v1.0.1.rst b/doc/source/whatsnew/v1.0.1.rst
index be888f1952636..9aa9ece9a5267 100644
--- a/doc/source/whatsnew/v1.0.1.rst
+++ b/doc/source/whatsnew/v1.0.1.rst
@@ -25,6 +25,7 @@ Fixed regressions
- Fixed regression in :meth:`to_datetime` when parsing non-nanosecond resolution datetimes (:issue:`31491`)
- Fixed regression in :meth:`~DataFrame.to_csv` where specifying an ``na_rep`` might truncate the values written (:issue:`31447`)
- Fixed regression in :class:`Categorical` construction with ``numpy.str_`` categories (:issue:`31499`)
+- Fixed regression in :meth:`DataFrame.loc` and :meth:`DataFrame.iloc` when selecting a row containing a single ``datetime64`` or ``timedelta64`` column (:issue:`31649`)
- Fixed regression where setting :attr:`pd.options.display.max_colwidth` was not accepting negative integer. In addition, this behavior has been deprecated in favor of using ``None`` (:issue:`31532`)
- Fixed regression in objTOJSON.c fix return-type warning (:issue:`31463`)
- Fixed regression in :meth:`qcut` when passed a nullable integer. (:issue:`31389`)
diff --git a/pandas/core/internals/blocks.py b/pandas/core/internals/blocks.py
index 6f2bcdb495d01..1229f2ad061b8 100644
--- a/pandas/core/internals/blocks.py
+++ b/pandas/core/internals/blocks.py
@@ -7,7 +7,7 @@
import numpy as np
-from pandas._libs import NaT, algos as libalgos, lib, tslib, writers
+from pandas._libs import NaT, Timestamp, algos as libalgos, lib, tslib, writers
from pandas._libs.index import convert_scalar
import pandas._libs.internals as libinternals
from pandas._libs.tslibs import Timedelta, conversion
@@ -2148,6 +2148,16 @@ def get_values(self, dtype=None):
return result.reshape(self.values.shape)
return self.values
+ def iget(self, key):
+ # GH#31649 we need to wrap scalars in Timestamp/Timedelta
+ # TODO: this can be removed if we ever have 2D EA
+ result = super().iget(key)
+ if isinstance(result, np.datetime64):
+ result = Timestamp(result)
+ elif isinstance(result, np.timedelta64):
+ result = Timedelta(result)
+ return result
+
class DatetimeBlock(DatetimeLikeBlockMixin, Block):
__slots__ = ()
diff --git a/pandas/tests/frame/indexing/test_indexing.py b/pandas/tests/frame/indexing/test_indexing.py
index 46be2f7d8fe89..928b4dc7fecb2 100644
--- a/pandas/tests/frame/indexing/test_indexing.py
+++ b/pandas/tests/frame/indexing/test_indexing.py
@@ -2276,3 +2276,41 @@ def test_transpose(self, uint64_frame):
expected = DataFrame(uint64_frame.values.T)
expected.index = ["A", "B"]
tm.assert_frame_equal(result, expected)
+
+
+def test_object_casting_indexing_wraps_datetimelike():
+ # GH#31649, check the indexing methods all the way down the stack
+ df = pd.DataFrame(
+ {
+ "A": [1, 2],
+ "B": pd.date_range("2000", periods=2),
+ "C": pd.timedelta_range("1 Day", periods=2),
+ }
+ )
+
+ ser = df.loc[0]
+ assert isinstance(ser.values[1], pd.Timestamp)
+ assert isinstance(ser.values[2], pd.Timedelta)
+
+ ser = df.iloc[0]
+ assert isinstance(ser.values[1], pd.Timestamp)
+ assert isinstance(ser.values[2], pd.Timedelta)
+
+ ser = df.xs(0, axis=0)
+ assert isinstance(ser.values[1], pd.Timestamp)
+ assert isinstance(ser.values[2], pd.Timedelta)
+
+ mgr = df._data
+ arr = mgr.fast_xs(0)
+ assert isinstance(arr[1], pd.Timestamp)
+ assert isinstance(arr[2], pd.Timedelta)
+
+ blk = mgr.blocks[mgr._blknos[1]]
+ assert blk.dtype == "M8[ns]" # we got the right block
+ val = blk.iget((0, 0))
+ assert isinstance(val, pd.Timestamp)
+
+ blk = mgr.blocks[mgr._blknos[2]]
+ assert blk.dtype == "m8[ns]" # we got the right block
+ val = blk.iget((0, 0))
+ assert isinstance(val, pd.Timedelta)
| Backport of https://github.com/pandas-dev/pandas/pull/31666 | https://api.github.com/repos/pandas-dev/pandas/pulls/31688 | 2020-02-05T08:45:09Z | 2020-02-05T09:47:31Z | 2020-02-05T09:47:31Z | 2020-02-05T09:47:35Z |
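The user-facing behavior fixed by the record above (GH#31649) can be distilled from its own test into a short check: selecting a single row from a frame with ``datetime64``/``timedelta64`` columns should yield boxed ``Timestamp``/``Timedelta`` scalars, not raw numpy values.

```python
import pandas as pd

# GH#31649: row selection from a frame with datetime64/timedelta64 columns
# should box the scalars as Timestamp/Timedelta, not leave raw numpy values.
df = pd.DataFrame(
    {
        "A": [1, 2],
        "B": pd.date_range("2000", periods=2),
        "C": pd.timedelta_range("1 Day", periods=2),
    }
)
row = df.loc[0]
assert isinstance(row["B"], pd.Timestamp)
assert isinstance(row["C"], pd.Timedelta)
```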
DOC: fixup v1.0.1 whatsnew entries | diff --git a/doc/source/whatsnew/v1.0.1.rst b/doc/source/whatsnew/v1.0.1.rst
index 0e36fd149d9cc..c82a58e5d3c45 100644
--- a/doc/source/whatsnew/v1.0.1.rst
+++ b/doc/source/whatsnew/v1.0.1.rst
@@ -20,9 +20,12 @@ Fixed regressions
- Fixed regression in ``DataFrame.__setitem__`` raising an ``AttributeError`` with a :class:`MultiIndex` and a non-monotonic indexer (:issue:`31449`)
- Fixed regression in :class:`Series` multiplication when multiplying a numeric :class:`Series` with >10000 elements with a timedelta-like scalar (:issue:`31457`)
- Fixed regression in ``.groupby()`` aggregations with categorical dtype using Cythonized reduction functions (e.g. ``first``) (:issue:`31450`)
-- Fixed regression in :meth:`DataFrame.groupby` whereby taking the minimum or maximum of a column with period dtype would raise a ``TypeError``. (:issue:`31471`)
- Fixed regression in :meth:`GroupBy.apply` if called with a function which returned a non-pandas non-scalar object (e.g. a list or numpy array) (:issue:`31441`)
+- Fixed regression in :meth:`DataFrame.groupby` whereby taking the minimum or maximum of a column with period dtype would raise a ``TypeError``. (:issue:`31471`)
+- Fixed regression in :meth:`DataFrame.apply` with object dtype and non-reducing function (:issue:`31505`)
+- Fixed regression in :meth:`to_datetime` when parsing non-nanosecond resolution datetimes (:issue:`31491`)
- Fixed regression in :meth:`~DataFrame.to_csv` where specifying an ``na_rep`` might truncate the values written (:issue:`31447`)
+- Fixed regression in :class:`Categorical` construction with ``numpy.str_`` categories (:issue:`31499`)
- Fixed regression in :meth:`DataFrame.loc` and :meth:`DataFrame.iloc` when selecting a row containing a single ``datetime64`` or ``timedelta64`` column (:issue:`31649`)
- Fixed regression where setting :attr:`pd.options.display.max_colwidth` was not accepting negative integer. In addition, this behavior has been deprecated in favor of using ``None`` (:issue:`31532`)
- Fixed regression in objTOJSON.c fix return-type warning (:issue:`31463`)
| Fixing the merge of https://github.com/pandas-dev/pandas/pull/31614 | https://api.github.com/repos/pandas-dev/pandas/pulls/31686 | 2020-02-05T08:36:47Z | 2020-02-05T08:39:55Z | 2020-02-05T08:39:55Z | 2020-02-05T08:40:18Z |
Backport PR #31614: REGR: fix non-reduction apply with tz-aware objects | diff --git a/doc/source/whatsnew/v1.0.1.rst b/doc/source/whatsnew/v1.0.1.rst
index be888f1952636..df8cf1932f793 100644
--- a/doc/source/whatsnew/v1.0.1.rst
+++ b/doc/source/whatsnew/v1.0.1.rst
@@ -22,6 +22,7 @@ Fixed regressions
- Fixed regression in ``.groupby()`` aggregations with categorical dtype using Cythonized reduction functions (e.g. ``first``) (:issue:`31450`)
- Fixed regression in :meth:`GroupBy.apply` if called with a function which returned a non-pandas non-scalar object (e.g. a list or numpy array) (:issue:`31441`)
- Fixed regression in :meth:`DataFrame.groupby` whereby taking the minimum or maximum of a column with period dtype would raise a ``TypeError``. (:issue:`31471`)
+- Fixed regression in :meth:`DataFrame.apply` with object dtype and non-reducing function (:issue:`31505`)
- Fixed regression in :meth:`to_datetime` when parsing non-nanosecond resolution datetimes (:issue:`31491`)
- Fixed regression in :meth:`~DataFrame.to_csv` where specifying an ``na_rep`` might truncate the values written (:issue:`31447`)
- Fixed regression in :class:`Categorical` construction with ``numpy.str_`` categories (:issue:`31499`)
diff --git a/pandas/_libs/reduction.pyx b/pandas/_libs/reduction.pyx
index 89164c527002a..43d253f632f0f 100644
--- a/pandas/_libs/reduction.pyx
+++ b/pandas/_libs/reduction.pyx
@@ -114,7 +114,8 @@ cdef class Reducer:
if self.typ is not None:
# In this case, we also have self.index
name = labels[i]
- cached_typ = self.typ(chunk, index=self.index, name=name)
+ cached_typ = self.typ(
+ chunk, index=self.index, name=name, dtype=arr.dtype)
# use the cached_typ if possible
if cached_typ is not None:
diff --git a/pandas/tests/frame/test_apply.py b/pandas/tests/frame/test_apply.py
index e98f74e133ea9..fe6abef97acc4 100644
--- a/pandas/tests/frame/test_apply.py
+++ b/pandas/tests/frame/test_apply.py
@@ -703,6 +703,14 @@ def apply_list(row):
)
tm.assert_series_equal(result, expected)
+ def test_apply_noreduction_tzaware_object(self):
+ # https://github.com/pandas-dev/pandas/issues/31505
+ df = pd.DataFrame({"foo": [pd.Timestamp("2020", tz="UTC")]}, dtype="object")
+ result = df.apply(lambda x: x)
+ tm.assert_frame_equal(result, df)
+ result = df.apply(lambda x: x.copy())
+ tm.assert_frame_equal(result, df)
+
class TestInferOutputShape:
# the user has supplied an opaque UDF where
| backport https://github.com/pandas-dev/pandas/pull/31614 | https://api.github.com/repos/pandas-dev/pandas/pulls/31685 | 2020-02-05T08:30:02Z | 2020-02-05T09:35:44Z | 2020-02-05T09:35:44Z | 2020-02-05T09:35:53Z |
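The regression fixed above (GH#31505) is easiest to see from the test it adds: a non-reducing ``apply`` over an object-dtype column of tz-aware Timestamps should round-trip the frame unchanged.

```python
import pandas as pd

# GH#31505: identity apply on an object-dtype column holding tz-aware
# Timestamps must return the frame unchanged instead of mangling the dtype.
df = pd.DataFrame({"foo": [pd.Timestamp("2020", tz="UTC")]}, dtype="object")
result = df.apply(lambda x: x)
pd.testing.assert_frame_equal(result, df)
```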
BUG: string methods with NA | diff --git a/doc/source/whatsnew/v1.0.2.rst b/doc/source/whatsnew/v1.0.2.rst
index 70aaaa6d0a60d..ac0a3ae42f6db 100644
--- a/doc/source/whatsnew/v1.0.2.rst
+++ b/doc/source/whatsnew/v1.0.2.rst
@@ -29,6 +29,10 @@ Bug fixes
- Using ``pd.NA`` with :meth:`DataFrame.to_json` now correctly outputs a null value instead of an empty object (:issue:`31615`)
+**Strings**
+
+- Using ``pd.NA`` with :meth:`Series.str.repeat` now correctly outputs a null value instead of raising error for vector inputs (:issue:`31632`)
+
.. ---------------------------------------------------------------------------
.. _whatsnew_102.contributors:
diff --git a/pandas/core/strings.py b/pandas/core/strings.py
index 18c7504f2c2f8..9ef066d55689f 100644
--- a/pandas/core/strings.py
+++ b/pandas/core/strings.py
@@ -778,6 +778,8 @@ def scalar_rep(x):
else:
def rep(x, r):
+ if x is libmissing.NA:
+ return x
try:
return bytes.__mul__(x, r)
except TypeError:
diff --git a/pandas/tests/test_strings.py b/pandas/tests/test_strings.py
index 62d26dacde67b..faa7b78b07301 100644
--- a/pandas/tests/test_strings.py
+++ b/pandas/tests/test_strings.py
@@ -1153,6 +1153,18 @@ def test_repeat(self):
assert isinstance(rs, Series)
tm.assert_series_equal(rs, xp)
+ def test_repeat_with_null(self):
+ # GH: 31632
+ values = Series(["a", None], dtype="string")
+ result = values.str.repeat([3, 4])
+ exp = Series(["aaa", None], dtype="string")
+ tm.assert_series_equal(result, exp)
+
+ values = Series(["a", "b"], dtype="string")
+ result = values.str.repeat([3, None])
+ exp = Series(["aaa", None], dtype="string")
+ tm.assert_series_equal(result, exp)
+
def test_match(self):
# New match behavior introduced in 0.13
values = Series(["fooBAD__barBAD", np.nan, "foo"])
| - [x] closes #31632
- [ ] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/31684 | 2020-02-05T06:41:15Z | 2020-03-10T14:08:01Z | 2020-03-10T14:08:01Z | 2020-03-10T14:09:34Z |
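The fix in the record above (GH#31632) makes ``Series.str.repeat`` with vector repeats propagate missing values for the nullable ``string`` dtype instead of raising; a minimal check lifted from its test:

```python
import pandas as pd

# GH#31632: pd.NA values pass through str.repeat with a list of repeats
# instead of raising a TypeError.
values = pd.Series(["a", None], dtype="string")
result = values.str.repeat([3, 4])
expected = pd.Series(["aaa", None], dtype="string")
pd.testing.assert_series_equal(result, expected)
```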
REF: Use _maybe_cast_indexer and dont override Categorical.get_loc | diff --git a/pandas/core/indexes/category.py b/pandas/core/indexes/category.py
index 718b6afb70e06..2eda54ec8d4ed 100644
--- a/pandas/core/indexes/category.py
+++ b/pandas/core/indexes/category.py
@@ -439,44 +439,10 @@ def _to_safe_for_reshape(self):
""" convert to object if we are a categorical """
return self.astype("object")
- def get_loc(self, key, method=None):
- """
- Get integer location, slice or boolean mask for requested label.
-
- Parameters
- ----------
- key : label
- method : {None}
- * default: exact matches only.
-
- Returns
- -------
- loc : int if unique index, slice if monotonic index, else mask
-
- Raises
- ------
- KeyError : if the key is not in the index
-
- Examples
- --------
- >>> unique_index = pd.CategoricalIndex(list('abc'))
- >>> unique_index.get_loc('b')
- 1
-
- >>> monotonic_index = pd.CategoricalIndex(list('abbc'))
- >>> monotonic_index.get_loc('b')
- slice(1, 3, None)
-
- >>> non_monotonic_index = pd.CategoricalIndex(list('abcb'))
- >>> non_monotonic_index.get_loc('b')
- array([False, True, False, True], dtype=bool)
- """
+ def _maybe_cast_indexer(self, key):
code = self.categories.get_loc(key)
code = self.codes.dtype.type(code)
- try:
- return self._engine.get_loc(code)
- except KeyError:
- raise KeyError(key)
+ return code
def get_value(self, series: "Series", key: Any):
"""
diff --git a/pandas/tests/indexes/categorical/test_indexing.py b/pandas/tests/indexes/categorical/test_indexing.py
index 6fce6542d228e..507e38d9acac2 100644
--- a/pandas/tests/indexes/categorical/test_indexing.py
+++ b/pandas/tests/indexes/categorical/test_indexing.py
@@ -172,6 +172,23 @@ def test_get_loc(self):
with pytest.raises(KeyError, match="'c'"):
i.get_loc("c")
+ def test_get_loc_unique(self):
+ cidx = pd.CategoricalIndex(list("abc"))
+ result = cidx.get_loc("b")
+ assert result == 1
+
+ def test_get_loc_monotonic_nonunique(self):
+ cidx = pd.CategoricalIndex(list("abbc"))
+ result = cidx.get_loc("b")
+ expected = slice(1, 3, None)
+ assert result == expected
+
+ def test_get_loc_nonmonotonic_nonunique(self):
+ cidx = pd.CategoricalIndex(list("abcb"))
+ result = cidx.get_loc("b")
+ expected = np.array([False, True, False, True], dtype=bool)
+ tm.assert_numpy_array_equal(result, expected)
+
class TestGetIndexer:
def test_get_indexer_base(self):
| https://api.github.com/repos/pandas-dev/pandas/pulls/31681 | 2020-02-05T03:55:04Z | 2020-02-06T23:59:31Z | 2020-02-06T23:59:31Z | 2020-02-07T00:05:47Z | |
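The behavior preserved by the refactor above (moving the categorical-code cast into ``_maybe_cast_indexer``) is exactly what the new tests pin down: ``get_loc`` returns an int for a unique index, a slice for a monotonic non-unique index, and a boolean mask otherwise.

```python
import numpy as np
import pandas as pd

# get_loc return types on a CategoricalIndex, per the tests added above.
assert pd.CategoricalIndex(list("abc")).get_loc("b") == 1
assert pd.CategoricalIndex(list("abbc")).get_loc("b") == slice(1, 3, None)
mask = pd.CategoricalIndex(list("abcb")).get_loc("b")
np.testing.assert_array_equal(mask, np.array([False, True, False, True]))
```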
CLN: remove libindex.get_value_at | diff --git a/pandas/_libs/index.pyx b/pandas/_libs/index.pyx
index 5ea0108d87c9a..4185cc2084469 100644
--- a/pandas/_libs/index.pyx
+++ b/pandas/_libs/index.pyx
@@ -1,17 +1,12 @@
-from datetime import datetime, timedelta, date
import warnings
-import cython
-
import numpy as np
cimport numpy as cnp
from numpy cimport (ndarray, intp_t,
float64_t, float32_t,
int64_t, int32_t, int16_t, int8_t,
- uint64_t, uint32_t, uint16_t, uint8_t,
- # Note: NPY_DATETIME, NPY_TIMEDELTA are only available
- # for cimport in cython>=0.27.3
- NPY_DATETIME, NPY_TIMEDELTA)
+ uint64_t, uint32_t, uint16_t, uint8_t
+)
cnp.import_array()
@@ -23,7 +18,7 @@ from pandas._libs.tslibs.c_timestamp cimport _Timestamp
from pandas._libs.hashtable cimport HashTable
from pandas._libs import algos, hashtable as _hash
-from pandas._libs.tslibs import Timestamp, Timedelta, period as periodlib
+from pandas._libs.tslibs import Timedelta, period as periodlib
from pandas._libs.missing import checknull
@@ -35,16 +30,6 @@ cdef inline bint is_definitely_invalid_key(object val):
return False
-cpdef get_value_at(ndarray arr, object loc, object tz=None):
- obj = util.get_value_at(arr, loc)
-
- if arr.descr.type_num == NPY_DATETIME:
- return Timestamp(obj, tz=tz)
- elif arr.descr.type_num == NPY_TIMEDELTA:
- return Timedelta(obj)
- return obj
-
-
# Don't populate hash tables in monotonic indexes larger than this
_SIZE_CUTOFF = 1_000_000
@@ -72,21 +57,6 @@ cdef class IndexEngine:
self._ensure_mapping_populated()
return val in self.mapping
- cpdef get_value(self, ndarray arr, object key, object tz=None):
- """
- Parameters
- ----------
- arr : 1-dimensional ndarray
- """
- cdef:
- object loc
-
- loc = self.get_loc(key)
- if isinstance(loc, slice) or util.is_array(loc):
- return arr[loc]
- else:
- return get_value_at(arr, loc, tz=tz)
-
cpdef get_loc(self, object val):
cdef:
Py_ssize_t loc
diff --git a/pandas/_libs/util.pxd b/pandas/_libs/util.pxd
index 15fedbb20beec..828bccf7d5641 100644
--- a/pandas/_libs/util.pxd
+++ b/pandas/_libs/util.pxd
@@ -1,7 +1,5 @@
from pandas._libs.tslibs.util cimport *
-from cython cimport Py_ssize_t
-
cimport numpy as cnp
from numpy cimport ndarray
@@ -51,49 +49,3 @@ cdef inline void set_array_not_contiguous(ndarray ao) nogil:
PyArray_CLEARFLAGS(ao,
(NPY_ARRAY_C_CONTIGUOUS | NPY_ARRAY_F_CONTIGUOUS))
-
-cdef inline Py_ssize_t validate_indexer(ndarray arr, object loc) except -1:
- """
- Cast the given indexer `loc` to an integer. If it is negative, i.e. a
- python-style indexing-from-the-end indexer, translate it to a
- from-the-front indexer. Raise if this is not possible.
-
- Parameters
- ----------
- arr : ndarray
- loc : object
-
- Returns
- -------
- idx : Py_ssize_t
-
- Raises
- ------
- IndexError
- """
- cdef:
- Py_ssize_t idx, size
- int casted
-
- if is_float_object(loc):
- casted = int(loc)
- if casted == loc:
- loc = casted
-
- idx = <Py_ssize_t>loc
- size = cnp.PyArray_SIZE(arr)
-
- if idx < 0 and size > 0:
- idx += size
- if idx >= size or size == 0 or idx < 0:
- raise IndexError('index out of bounds')
-
- return idx
-
-
-cdef inline object get_value_at(ndarray arr, object loc):
- cdef:
- Py_ssize_t i
-
- i = validate_indexer(arr, loc)
- return arr[i]
diff --git a/pandas/core/arrays/sparse/array.py b/pandas/core/arrays/sparse/array.py
index b476a019c66cc..8008805ddcf87 100644
--- a/pandas/core/arrays/sparse/array.py
+++ b/pandas/core/arrays/sparse/array.py
@@ -9,7 +9,7 @@
import numpy as np
-from pandas._libs import index as libindex, lib
+from pandas._libs import lib
import pandas._libs.sparse as splib
from pandas._libs.sparse import BlockIndex, IntIndex, SparseIndex
from pandas._libs.tslibs import NaT
@@ -794,7 +794,9 @@ def _get_val_at(self, loc):
if sp_loc == -1:
return self.fill_value
else:
- return libindex.get_value_at(self.sp_values, sp_loc)
+ val = self.sp_values[sp_loc]
+ val = com.maybe_box_datetimelike(val, self.sp_values.dtype)
+ return val
def take(self, indices, allow_fill=False, fill_value=None):
if is_scalar(indices):
diff --git a/pandas/core/common.py b/pandas/core/common.py
index 745a56ce2be7f..00c7a41477017 100644
--- a/pandas/core/common.py
+++ b/pandas/core/common.py
@@ -72,8 +72,12 @@ def consensus_name_attr(objs):
return name
-def maybe_box_datetimelike(value):
+def maybe_box_datetimelike(value, dtype=None):
# turn a datetime like into a Timestamp/timedelta as needed
+ if dtype == object:
+ # If we dont have datetime64/timedelta64 dtype, we dont want to
+ # box datetimelike scalars
+ return value
if isinstance(value, (np.datetime64, datetime)):
value = tslibs.Timestamp(value)
diff --git a/pandas/tests/indexes/multi/test_get_set.py b/pandas/tests/indexes/multi/test_get_set.py
index 57d16a739c213..675a1e2e832f3 100644
--- a/pandas/tests/indexes/multi/test_get_set.py
+++ b/pandas/tests/indexes/multi/test_get_set.py
@@ -57,8 +57,6 @@ def test_get_value_duplicates():
)
assert index.get_loc("D") == slice(0, 3)
- with pytest.raises(KeyError, match=r"^'D'$"):
- index._engine.get_value(np.array([]), "D")
def test_get_level_values_all_na():
| There was a usage in MultiIndex.get_value that was just removed, leaving just the one in sparse.array that this changes to handle in python-space. | https://api.github.com/repos/pandas-dev/pandas/pulls/31680 | 2020-02-05T03:29:43Z | 2020-02-07T00:04:21Z | 2020-02-07T00:04:21Z | 2020-02-07T00:06:45Z |
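The dtype-aware boxing that replaces ``libindex.get_value_at`` above can be sketched as a standalone function. This is a simplified copy of ``pandas.core.common.maybe_box_datetimelike`` after the change; the timedelta branch is filled in from the surrounding pandas source, not shown in the hunk.

```python
from datetime import datetime, timedelta

import numpy as np
import pandas as pd

def maybe_box_datetimelike(value, dtype=None):
    # Simplified sketch: object dtype means "don't box"; otherwise wrap
    # datetime-like scalars in Timestamp/Timedelta.
    if dtype == object:
        return value
    if isinstance(value, (np.datetime64, datetime)):
        value = pd.Timestamp(value)
    elif isinstance(value, (np.timedelta64, timedelta)):
        value = pd.Timedelta(value)
    return value

raw = np.datetime64("2000-01-01")
assert isinstance(maybe_box_datetimelike(raw), pd.Timestamp)
# with object dtype the scalar passes through unboxed
assert maybe_box_datetimelike(raw, dtype=object) is raw
assert isinstance(maybe_box_datetimelike(np.timedelta64(1, "D")), pd.Timedelta)
```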
REGR: fix op(frame, frame2) with reindex | diff --git a/doc/source/whatsnew/v1.0.2.rst b/doc/source/whatsnew/v1.0.2.rst
index 19358689a2186..c9031ac1ae9fe 100644
--- a/doc/source/whatsnew/v1.0.2.rst
+++ b/doc/source/whatsnew/v1.0.2.rst
@@ -19,6 +19,7 @@ Fixed regressions
- Fixed regression in :meth:`Series.align` when ``other`` is a DataFrame and ``method`` is not None (:issue:`31785`)
- Fixed regression in :meth:`pandas.core.groupby.RollingGroupby.apply` where the ``raw`` parameter was ignored (:issue:`31754`)
- Fixed regression in :meth:`rolling(..).corr() <pandas.core.window.Rolling.corr>` when using a time offset (:issue:`31789`)
+- Fixed regression in :class:`DataFrame` arithmetic operations with mis-matched columns (:issue:`31623`)
-
.. ---------------------------------------------------------------------------
diff --git a/pandas/core/ops/__init__.py b/pandas/core/ops/__init__.py
index f3c1a609d50a1..b74dea686a89f 100644
--- a/pandas/core/ops/__init__.py
+++ b/pandas/core/ops/__init__.py
@@ -5,7 +5,7 @@
"""
import datetime
import operator
-from typing import Optional, Set, Tuple, Union
+from typing import TYPE_CHECKING, Optional, Set, Tuple, Union
import numpy as np
@@ -61,6 +61,9 @@
rxor,
)
+if TYPE_CHECKING:
+ from pandas import DataFrame # noqa:F401
+
# -----------------------------------------------------------------------------
# constants
ARITHMETIC_BINOPS: Set[str] = {
@@ -703,6 +706,58 @@ def to_series(right):
return left, right
+def _should_reindex_frame_op(
+ left: "DataFrame", right, axis, default_axis: int, fill_value, level
+) -> bool:
+ """
+ Check if this is an operation between DataFrames that will need to reindex.
+ """
+ assert isinstance(left, ABCDataFrame)
+
+ if not isinstance(right, ABCDataFrame):
+ return False
+
+ if fill_value is None and level is None and axis is default_axis:
+ # TODO: any other cases we should handle here?
+ cols = left.columns.intersection(right.columns)
+ if not (cols.equals(left.columns) and cols.equals(right.columns)):
+ return True
+
+ return False
+
+
+def _frame_arith_method_with_reindex(
+ left: "DataFrame", right: "DataFrame", op
+) -> "DataFrame":
+ """
+ For DataFrame-with-DataFrame operations that require reindexing,
+ operate only on shared columns, then reindex.
+
+ Parameters
+ ----------
+ left : DataFrame
+ right : DataFrame
+ op : binary operator
+
+ Returns
+ -------
+ DataFrame
+ """
+ # GH#31623, only operate on shared columns
+ cols = left.columns.intersection(right.columns)
+
+ new_left = left[cols]
+ new_right = right[cols]
+ result = op(new_left, new_right)
+
+ # Do the join on the columns instead of using _align_method_FRAME
+ # to avoid constructing two potentially large/sparse DataFrames
+ join_columns, _, _ = left.columns.join(
+ right.columns, how="outer", level=None, return_indexers=True
+ )
+ return result.reindex(join_columns, axis=1)
+
+
def _arith_method_FRAME(cls, op, special):
str_rep = _get_opstr(op)
op_name = _get_op_name(op, special)
@@ -720,6 +775,9 @@ def _arith_method_FRAME(cls, op, special):
@Appender(doc)
def f(self, other, axis=default_axis, level=None, fill_value=None):
+ if _should_reindex_frame_op(self, other, axis, default_axis, fill_value, level):
+ return _frame_arith_method_with_reindex(self, other, op)
+
self, other = _align_method_FRAME(self, other, axis, flex=True, level=level)
if isinstance(other, ABCDataFrame):
diff --git a/pandas/tests/frame/test_arithmetic.py b/pandas/tests/frame/test_arithmetic.py
index c6eacf2bbcd84..44ad55517dcea 100644
--- a/pandas/tests/frame/test_arithmetic.py
+++ b/pandas/tests/frame/test_arithmetic.py
@@ -711,6 +711,25 @@ def test_operations_with_interval_categories_index(self, all_arithmetic_operator
expected = pd.DataFrame([[getattr(n, op)(num) for n in data]], columns=ind)
tm.assert_frame_equal(result, expected)
+ def test_frame_with_frame_reindex(self):
+ # GH#31623
+ df = pd.DataFrame(
+ {
+ "foo": [pd.Timestamp("2019"), pd.Timestamp("2020")],
+ "bar": [pd.Timestamp("2018"), pd.Timestamp("2021")],
+ },
+ columns=["foo", "bar"],
+ )
+ df2 = df[["foo"]]
+
+ result = df - df2
+
+ expected = pd.DataFrame(
+ {"foo": [pd.Timedelta(0), pd.Timedelta(0)], "bar": [np.nan, np.nan]},
+ columns=["bar", "foo"],
+ )
+ tm.assert_frame_equal(result, expected)
+
def test_frame_with_zero_len_series_corner_cases():
# GH#28600
| - [x] closes #31623
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [ ] whatsnew entry
cc @TomAugspurger this is pretty ugly, and I'm not sure how well it will behave if either frame has MultiIndex columns.
On the plus side, it could improve perf in the many-columns-but-small-intersection case.
The ugliness might be improved by moving this check to before the _align_method_FRAME call | https://api.github.com/repos/pandas-dev/pandas/pulls/31679 | 2020-02-05T00:55:52Z | 2020-02-19T00:26:28Z | 2020-02-19T00:26:28Z | 2020-02-19T10:00:44Z |
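The reindex path described in the record above (GH#31623) can be exercised directly: the op runs only on the shared columns, then reindexes to the outer join of both column sets, leaving the non-shared column all-NA.

```python
import pandas as pd

# GH#31623: frame op with mismatched columns operates on shared columns
# ("foo") and reindexes, so "bar" comes back all-NA instead of raising.
df = pd.DataFrame(
    {
        "foo": [pd.Timestamp("2019"), pd.Timestamp("2020")],
        "bar": [pd.Timestamp("2018"), pd.Timestamp("2021")],
    },
    columns=["foo", "bar"],
)
result = df - df[["foo"]]
assert list(result["foo"]) == [pd.Timedelta(0), pd.Timedelta(0)]
assert result["bar"].isna().all()
```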
Backport PR #31668 on branch 1.0.x (REGR: Fixed handling of Categorical in cython ops) | diff --git a/doc/source/whatsnew/v1.0.1.rst b/doc/source/whatsnew/v1.0.1.rst
index 20cfcfbde389c..be888f1952636 100644
--- a/doc/source/whatsnew/v1.0.1.rst
+++ b/doc/source/whatsnew/v1.0.1.rst
@@ -19,6 +19,7 @@ Fixed regressions
- Fixed regression when indexing a ``Series`` or ``DataFrame`` indexed by ``DatetimeIndex`` with a slice containg a :class:`datetime.date` (:issue:`31501`)
- Fixed regression in ``DataFrame.__setitem__`` raising an ``AttributeError`` with a :class:`MultiIndex` and a non-monotonic indexer (:issue:`31449`)
- Fixed regression in :class:`Series` multiplication when multiplying a numeric :class:`Series` with >10000 elements with a timedelta-like scalar (:issue:`31457`)
+- Fixed regression in ``.groupby()`` aggregations with categorical dtype using Cythonized reduction functions (e.g. ``first``) (:issue:`31450`)
- Fixed regression in :meth:`GroupBy.apply` if called with a function which returned a non-pandas non-scalar object (e.g. a list or numpy array) (:issue:`31441`)
- Fixed regression in :meth:`DataFrame.groupby` whereby taking the minimum or maximum of a column with period dtype would raise a ``TypeError``. (:issue:`31471`)
- Fixed regression in :meth:`to_datetime` when parsing non-nanosecond resolution datetimes (:issue:`31491`)
diff --git a/pandas/core/groupby/groupby.py b/pandas/core/groupby/groupby.py
index a1e5692af9b2e..6864260809bfb 100644
--- a/pandas/core/groupby/groupby.py
+++ b/pandas/core/groupby/groupby.py
@@ -1379,7 +1379,9 @@ def f(self, **kwargs):
except DataError:
pass
except NotImplementedError as err:
- if "function is not implemented for this dtype" in str(err):
+ if "function is not implemented for this dtype" in str(
+ err
+ ) or "category dtype not supported" in str(err):
# raised in _get_cython_function, in some cases can
# be trimmed by implementing cython funcs for more dtypes
pass
diff --git a/pandas/tests/groupby/aggregate/test_aggregate.py b/pandas/tests/groupby/aggregate/test_aggregate.py
index 3716fa7f3d87f..3ce91663c2493 100644
--- a/pandas/tests/groupby/aggregate/test_aggregate.py
+++ b/pandas/tests/groupby/aggregate/test_aggregate.py
@@ -378,6 +378,22 @@ def test_agg_index_has_complex_internals(index):
tm.assert_frame_equal(result, expected)
+def test_agg_cython_category_not_implemented_fallback():
+ # https://github.com/pandas-dev/pandas/issues/31450
+ df = pd.DataFrame({"col_num": [1, 1, 2, 3]})
+ df["col_cat"] = df["col_num"].astype("category")
+
+ result = df.groupby("col_num").col_cat.first()
+ expected = pd.Series(
+ [1, 2, 3], index=pd.Index([1, 2, 3], name="col_num"), name="col_cat"
+ )
+ tm.assert_series_equal(result, expected)
+
+ result = df.groupby("col_num").agg({"col_cat": "first"})
+ expected = expected.to_frame()
+ tm.assert_frame_equal(result, expected)
+
+
class TestNamedAggregationSeries:
def test_series_named_agg(self):
df = pd.Series([1, 2, 3, 4])
| Backport PR #31668: REGR: Fixed handling of Categorical in cython ops | https://api.github.com/repos/pandas-dev/pandas/pulls/31678 | 2020-02-05T00:31:37Z | 2020-02-05T07:31:27Z | 2020-02-05T07:31:27Z | 2020-02-05T07:31:27Z |
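The fallback added above (GH#31450) means cythonized reductions like ``first`` on a categorical value column degrade gracefully to the Python path instead of raising; values-only assertions keep this check stable across pandas versions, since the result dtype of the reduction has varied.

```python
import pandas as pd

# GH#31450: "first" on a categorical column falls back instead of raising.
df = pd.DataFrame({"col_num": [1, 1, 2, 3]})
df["col_cat"] = df["col_num"].astype("category")
result = df.groupby("col_num").col_cat.first()
assert list(result) == [1, 2, 3]
```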
REF: make _convert_scalar_indexer require a scalar | diff --git a/pandas/core/indexes/datetimelike.py b/pandas/core/indexes/datetimelike.py
index 718cd0957cdc6..fe6f8d55baa87 100644
--- a/pandas/core/indexes/datetimelike.py
+++ b/pandas/core/indexes/datetimelike.py
@@ -398,15 +398,17 @@ def _convert_scalar_indexer(self, key, kind=None):
assert kind in ["loc", "getitem", "iloc", None]
+ if not is_scalar(key):
+ raise TypeError(key)
+
# we don't allow integer/float indexing for loc
- # we don't allow float indexing for ix/getitem
- if is_scalar(key):
- is_int = is_integer(key)
- is_flt = is_float(key)
- if kind in ["loc"] and (is_int or is_flt):
- self._invalid_indexer("index", key)
- elif kind in ["getitem"] and is_flt:
- self._invalid_indexer("index", key)
+ # we don't allow float indexing for getitem
+ is_int = is_integer(key)
+ is_flt = is_float(key)
+ if kind == "loc" and (is_int or is_flt):
+ self._invalid_indexer("index", key)
+ elif kind == "getitem" and is_flt:
+ self._invalid_indexer("index", key)
return super()._convert_scalar_indexer(key, kind=kind)
diff --git a/pandas/core/indexing.py b/pandas/core/indexing.py
index 91e86112f5efd..c818112e3e7d0 100755
--- a/pandas/core/indexing.py
+++ b/pandas/core/indexing.py
@@ -599,7 +599,7 @@ def _slice(self, obj, axis: int, kind=None):
def _get_setitem_indexer(self, key):
if self.axis is not None:
- return self._convert_tuple(key)
+ return self._convert_tuple(key, setting=True)
ax = self.obj._get_axis(0)
@@ -612,7 +612,7 @@ def _get_setitem_indexer(self, key):
if isinstance(key, tuple):
try:
- return self._convert_tuple(key)
+ return self._convert_tuple(key, setting=True)
except IndexingError:
pass
@@ -620,7 +620,7 @@ def _get_setitem_indexer(self, key):
return list(key)
try:
- return self._convert_to_indexer(key, axis=0)
+ return self._convert_to_indexer(key, axis=0, setting=True)
except TypeError as e:
# invalid indexer type vs 'other' indexing errors
@@ -683,20 +683,22 @@ def _is_nested_tuple_indexer(self, tup: Tuple) -> bool:
return any(is_nested_tuple(tup, ax) for ax in self.obj.axes)
return False
- def _convert_tuple(self, key):
+ def _convert_tuple(self, key, setting: bool = False):
keyidx = []
if self.axis is not None:
axis = self.obj._get_axis_number(self.axis)
for i in range(self.ndim):
if i == axis:
- keyidx.append(self._convert_to_indexer(key, axis=axis))
+ keyidx.append(
+ self._convert_to_indexer(key, axis=axis, setting=setting)
+ )
else:
keyidx.append(slice(None))
else:
for i, k in enumerate(key):
if i >= self.ndim:
raise IndexingError("Too many indexers")
- idx = self._convert_to_indexer(k, axis=i)
+ idx = self._convert_to_indexer(k, axis=i, setting=setting)
keyidx.append(idx)
return tuple(keyidx)
@@ -1566,7 +1568,7 @@ def _validate_read_indexer(
"https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#deprecate-loc-reindex-listlike" # noqa:E501
)
- def _convert_to_indexer(self, key, axis: int):
+ def _convert_to_indexer(self, key, axis: int, setting: bool = False):
raise AbstractMethodError(self)
def __getitem__(self, key):
@@ -1775,7 +1777,7 @@ def _get_slice_axis(self, slice_obj: slice, axis: int):
# return a DatetimeIndex instead of a slice object.
return self.obj.take(indexer, axis=axis)
- def _convert_to_indexer(self, key, axis: int):
+ def _convert_to_indexer(self, key, axis: int, setting: bool = False):
"""
Convert indexing key into something we can use to do actual fancy
indexing on a ndarray.
@@ -1795,12 +1797,14 @@ def _convert_to_indexer(self, key, axis: int):
if isinstance(key, slice):
return self._convert_slice_indexer(key, axis)
- # try to find out correct indexer, if not type correct raise
- try:
- key = self._convert_scalar_indexer(key, axis)
- except TypeError:
- # but we will allow setting
- pass
+ if is_scalar(key):
+ # try to find out correct indexer, if not type correct raise
+ try:
+ key = self._convert_scalar_indexer(key, axis)
+ except TypeError:
+ # but we will allow setting
+ if not setting:
+ raise
# see if we are positional in nature
is_int_index = labels.is_integer()
@@ -2032,7 +2036,7 @@ def _get_slice_axis(self, slice_obj: slice, axis: int):
indexer = self._convert_slice_indexer(slice_obj, axis)
return self._slice(indexer, axis=axis, kind="iloc")
- def _convert_to_indexer(self, key, axis: int):
+ def _convert_to_indexer(self, key, axis: int, setting: bool = False):
"""
Much simpler as we only have to deal with our valid types.
"""
| https://api.github.com/repos/pandas-dev/pandas/pulls/31676 | 2020-02-04T23:23:40Z | 2020-02-05T00:56:42Z | 2020-02-05T00:56:42Z | 2020-02-05T01:08:33Z | |
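The invariant being tightened above — float labels are rejected for datetime-like indexes under ``.loc`` — can be observed directly. The exact exception type has varied across pandas versions (``TypeError`` in the 1.0-era code shown, ``KeyError`` later), so both are caught here.

```python
import pandas as pd

# Float labels make no sense for a DatetimeIndex; .loc must reject them.
ser = pd.Series(range(3), index=pd.date_range("2020", periods=3))
try:
    ser.loc[3.0]
except (TypeError, KeyError):
    rejected = True
else:
    rejected = False
assert rejected
```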
CLN: Remove CategoricalAccessor._deprecations | diff --git a/pandas/core/arrays/categorical.py b/pandas/core/arrays/categorical.py
index 3a6662d3e3ae2..d26ff7490e714 100644
--- a/pandas/core/arrays/categorical.py
+++ b/pandas/core/arrays/categorical.py
@@ -2504,10 +2504,6 @@ class CategoricalAccessor(PandasDelegate, PandasObject, NoNewAttributesMixin):
>>> s.cat.as_unordered()
"""
- _deprecations = PandasObject._deprecations | frozenset(
- ["categorical", "index", "name"]
- )
-
def __init__(self, data):
self._validate(data)
self._parent = data.values
 | These deprecated attributes have all been removed, so ``._deprecations`` is no longer needed. | https://api.github.com/repos/pandas-dev/pandas/pulls/31675 | 2020-02-04T23:23:05Z | 2020-02-05T00:43:13Z | 2020-02-05T00:43:13Z | 2020-02-12T21:49:03Z |
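For context on the cleanup above: the supported delegated attributes of the ``.cat`` accessor are untouched by dropping the stale deprecation set; only the long-removed passthrough attributes (``categorical``, ``index``, ``name``) were covered by it.

```python
import pandas as pd

# The supported .cat accessor attributes keep working as before.
s = pd.Series(list("abbc"), dtype="category")
assert list(s.cat.categories) == ["a", "b", "c"]
assert list(s.cat.codes) == [0, 1, 1, 2]
```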
CLN: assorted cleanups in indexes/ | diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index 6a7551391f2a8..ccb4927d9b4b7 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -4107,6 +4107,11 @@ def __contains__(self, key: Any) -> bool:
bool
Whether the key search is in the index.
+ Raises
+ ------
+ TypeError
+ If the key is not hashable.
+
See Also
--------
Index.isin : Returns an ndarray of boolean dtype indicating whether the
diff --git a/pandas/core/indexes/category.py b/pandas/core/indexes/category.py
index 2cdf47ad61cec..fc6da6f75e875 100644
--- a/pandas/core/indexes/category.py
+++ b/pandas/core/indexes/category.py
@@ -160,17 +160,6 @@ class CategoricalIndex(ExtensionIndex, accessor.PandasDelegate):
_typ = "categoricalindex"
- _raw_inherit = {
- "argsort",
- "_internal_get_values",
- "tolist",
- "codes",
- "categories",
- "ordered",
- "_reverse_indexer",
- "searchsorted",
- }
-
codes: np.ndarray
categories: Index
_data: Categorical
@@ -847,18 +836,13 @@ def _concat_same_dtype(self, to_concat, name):
result.name = name
return result
- def _delegate_property_get(self, name: str, *args, **kwargs):
- """ method delegation to the ._values """
- prop = getattr(self._values, name)
- return prop # no wrapping for now
-
def _delegate_method(self, name: str, *args, **kwargs):
""" method delegation to the ._values """
method = getattr(self._values, name)
if "inplace" in kwargs:
raise ValueError("cannot use inplace with CategoricalIndex")
res = method(*args, **kwargs)
- if is_scalar(res) or name in self._raw_inherit:
+ if is_scalar(res):
return res
return CategoricalIndex(res, name=self.name)
diff --git a/pandas/core/indexes/datetimelike.py b/pandas/core/indexes/datetimelike.py
index 718cd0957cdc6..04c05fa2adc45 100644
--- a/pandas/core/indexes/datetimelike.py
+++ b/pandas/core/indexes/datetimelike.py
@@ -200,7 +200,7 @@ def sort_values(self, return_indexer=False, ascending=True):
arr = type(self._data)._simple_new(
sorted_values, dtype=self.dtype, freq=freq
)
- return self._simple_new(arr, name=self.name)
+ return type(self)._simple_new(arr, name=self.name)
@Appender(_index_shared_docs["take"] % _index_doc_kwargs)
def take(self, indices, axis=0, allow_fill=True, fill_value=None, **kwargs):
@@ -526,7 +526,7 @@ def _concat_same_dtype(self, to_concat, name):
if is_diff_evenly_spaced:
new_data._freq = self.freq
- return self._simple_new(new_data, name=name)
+ return type(self)._simple_new(new_data, name=name)
def shift(self, periods=1, freq=None):
"""
@@ -629,7 +629,7 @@ def _shallow_copy(self, values=None, **kwargs):
del attributes["freq"]
attributes.update(kwargs)
- return self._simple_new(values, **attributes)
+ return type(self)._simple_new(values, **attributes)
# --------------------------------------------------------------------
# Set Operation Methods
@@ -886,7 +886,7 @@ def _wrap_joined_index(self, joined, other):
kwargs = {}
if hasattr(self, "tz"):
kwargs["tz"] = getattr(other, "tz", None)
- return self._simple_new(joined, name, **kwargs)
+ return type(self)._simple_new(joined, name, **kwargs)
# --------------------------------------------------------------------
# List-Like Methods
diff --git a/pandas/core/indexes/datetimes.py b/pandas/core/indexes/datetimes.py
index 46c896a724dae..07290fc3cf733 100644
--- a/pandas/core/indexes/datetimes.py
+++ b/pandas/core/indexes/datetimes.py
@@ -590,6 +590,7 @@ def _partial_date_slice(
raise KeyError
# a monotonic (sorted) series can be sliced
+ # Use asi8.searchsorted to avoid re-validating
left = stamps.searchsorted(t1.value, side="left") if use_lhs else None
right = stamps.searchsorted(t2.value, side="right") if use_rhs else None
diff --git a/pandas/core/indexes/period.py b/pandas/core/indexes/period.py
index 956da07f51476..42f0a012902a3 100644
--- a/pandas/core/indexes/period.py
+++ b/pandas/core/indexes/period.py
@@ -5,7 +5,7 @@
import numpy as np
from pandas._libs import index as libindex
-from pandas._libs.tslibs import NaT, frequencies as libfrequencies, resolution
+from pandas._libs.tslibs import frequencies as libfrequencies, resolution
from pandas._libs.tslibs.parsing import parse_time_string
from pandas._libs.tslibs.period import Period
from pandas.util._decorators import Appender, cache_readonly
@@ -547,7 +547,7 @@ def get_loc(self, key, method=None, tolerance=None):
# we cannot construct the Period
raise KeyError(key)
- ordinal = key.ordinal if key is not NaT else key.value
+ ordinal = self._data._unbox_scalar(key)
try:
return self._engine.get_loc(ordinal)
except KeyError:
diff --git a/pandas/core/indexes/timedeltas.py b/pandas/core/indexes/timedeltas.py
index d4954eb67dedd..ec0414adc1376 100644
--- a/pandas/core/indexes/timedeltas.py
+++ b/pandas/core/indexes/timedeltas.py
@@ -278,12 +278,6 @@ def _maybe_cast_slice_bound(self, label, side: str, kind):
return label
- def _get_string_slice(self, key: str, use_lhs: bool = True, use_rhs: bool = True):
- # TODO: Check for non-True use_lhs/use_rhs
- assert isinstance(key, str), type(key)
- # given a key, try to figure out a location for a partial slice
- raise NotImplementedError
-
def is_type_compatible(self, typ) -> bool:
return typ == self.inferred_type or typ == "timedelta"
| https://api.github.com/repos/pandas-dev/pandas/pulls/31674 | 2020-02-04T23:14:09Z | 2020-02-05T00:42:25Z | 2020-02-05T00:42:25Z | 2020-02-05T00:45:56Z | |
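The `_delegate_method` change above reduces the wrapping rule to "scalars pass through, everything else is re-wrapped", dropping the `_raw_inherit` exception set. A minimal sketch of that delegation pattern (the `Wrapper` class and its scalar check are illustrative stand-ins, not the pandas internals):

```python
class Wrapper:
    """Delegate method calls to an underlying values object,
    re-wrapping non-scalar results in the wrapper type."""

    def __init__(self, values):
        self._values = values

    def _delegate_method(self, name, *args, **kwargs):
        method = getattr(self._values, name)
        result = method(*args, **kwargs)
        if result is None or isinstance(result, (int, float, str, bool)):
            return result  # scalar results are returned as-is
        return Wrapper(result)  # array-like results are re-wrapped


w = Wrapper([3, 1, 2])
assert w._delegate_method("count", 2) == 1            # scalar passes through
assert isinstance(w._delegate_method("copy"), Wrapper)  # list result re-wrapped
```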
CLN: misc tslibs, annotations, unused imports | diff --git a/pandas/_libs/tslibs/resolution.pyx b/pandas/_libs/tslibs/resolution.pyx
index c0b20c14e9920..1e0eb7f97ec54 100644
--- a/pandas/_libs/tslibs/resolution.pyx
+++ b/pandas/_libs/tslibs/resolution.pyx
@@ -27,7 +27,7 @@ cdef:
# ----------------------------------------------------------------------
-cpdef resolution(int64_t[:] stamps, tz=None):
+cpdef resolution(const int64_t[:] stamps, tz=None):
cdef:
Py_ssize_t i, n = len(stamps)
npy_datetimestruct dts
@@ -38,7 +38,7 @@ cpdef resolution(int64_t[:] stamps, tz=None):
return _reso_local(stamps, tz)
-cdef _reso_local(int64_t[:] stamps, object tz):
+cdef _reso_local(const int64_t[:] stamps, object tz):
cdef:
Py_ssize_t i, n = len(stamps)
int reso = RESO_DAY, curr_reso
@@ -106,7 +106,7 @@ cdef inline int _reso_stamp(npy_datetimestruct *dts):
return RESO_DAY
-def get_freq_group(freq):
+def get_freq_group(freq) -> int:
"""
Return frequency code group of given frequency str or offset.
@@ -189,7 +189,7 @@ class Resolution:
_freq_reso_map = {v: k for k, v in _reso_freq_map.items()}
@classmethod
- def get_str(cls, reso):
+ def get_str(cls, reso: int) -> str:
"""
Return resolution str against resolution code.
@@ -201,7 +201,7 @@ class Resolution:
return cls._reso_str_map.get(reso, 'day')
@classmethod
- def get_reso(cls, resostr):
+ def get_reso(cls, resostr: str) -> int:
"""
Return resolution str against resolution code.
@@ -216,7 +216,7 @@ class Resolution:
return cls._str_reso_map.get(resostr, cls.RESO_DAY)
@classmethod
- def get_freq_group(cls, resostr):
+ def get_freq_group(cls, resostr: str) -> int:
"""
Return frequency str against resolution str.
@@ -228,7 +228,7 @@ class Resolution:
return get_freq_group(cls.get_freq(resostr))
@classmethod
- def get_freq(cls, resostr):
+ def get_freq(cls, resostr: str) -> str:
"""
Return frequency str against resolution str.
@@ -240,7 +240,7 @@ class Resolution:
return cls._reso_freq_map[resostr]
@classmethod
- def get_str_from_freq(cls, freq):
+ def get_str_from_freq(cls, freq: str) -> str:
"""
Return resolution str against frequency str.
@@ -252,7 +252,7 @@ class Resolution:
return cls._freq_reso_map.get(freq, 'day')
@classmethod
- def get_reso_from_freq(cls, freq):
+ def get_reso_from_freq(cls, freq: str) -> int:
"""
Return resolution code against frequency str.
diff --git a/pandas/_libs/tslibs/timedeltas.pyx b/pandas/_libs/tslibs/timedeltas.pyx
index ad7cf6ae9307d..3742506a7f8af 100644
--- a/pandas/_libs/tslibs/timedeltas.pyx
+++ b/pandas/_libs/tslibs/timedeltas.pyx
@@ -1,5 +1,4 @@
import collections
-import textwrap
import cython
diff --git a/pandas/_libs/tslibs/timestamps.pyx b/pandas/_libs/tslibs/timestamps.pyx
index 4915671aa6512..b8c462abe35f1 100644
--- a/pandas/_libs/tslibs/timestamps.pyx
+++ b/pandas/_libs/tslibs/timestamps.pyx
@@ -1,4 +1,3 @@
-import sys
import warnings
import numpy as np
| https://api.github.com/repos/pandas-dev/pandas/pulls/31673 | 2020-02-04T23:09:40Z | 2020-02-05T00:46:52Z | 2020-02-05T00:46:52Z | 2020-02-05T00:48:50Z | |
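The `Resolution` class annotated above is essentially a pair of class-level lookup maps plus classmethod accessors, with the inverse map built from the forward one in the class body. A stripped-down sketch of the same pattern (the codes and strings here are illustrative, not the full pandas table):

```python
class Reso:
    # forward map: resolution code -> string (mirrors Resolution._reso_str_map)
    _reso_str_map = {0: "nanosecond", 3: "second", 6: "day"}
    # inverse map, built the same way pandas builds _str_reso_map
    _str_reso_map = {v: k for k, v in _reso_str_map.items()}

    @classmethod
    def get_str(cls, reso: int) -> str:
        # unknown codes fall back to "day", as in pandas
        return cls._reso_str_map.get(reso, "day")

    @classmethod
    def get_reso(cls, resostr: str) -> int:
        return cls._str_reso_map.get(resostr, 6)


assert Reso.get_str(3) == "second"
assert Reso.get_reso("second") == 3
assert Reso.get_str(99) == "day"  # fallback for unknown codes
```

The dict comprehension works inside the class body because its outermost iterable (`_reso_str_map.items()`) is evaluated in the class namespace.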
REF: move convert_scalar out of cython | diff --git a/pandas/_libs/index.pyx b/pandas/_libs/index.pyx
index b39afc57f34f6..5ea0108d87c9a 100644
--- a/pandas/_libs/index.pyx
+++ b/pandas/_libs/index.pyx
@@ -535,61 +535,6 @@ cdef class PeriodEngine(Int64Engine):
return super(PeriodEngine, self).get_indexer_non_unique(ordinal_array)
-cpdef convert_scalar(ndarray arr, object value):
- # we don't turn integers
- # into datetimes/timedeltas
-
- # we don't turn bools into int/float/complex
-
- if arr.descr.type_num == NPY_DATETIME:
- if util.is_array(value):
- pass
- elif isinstance(value, (datetime, np.datetime64, date)):
- return Timestamp(value).to_datetime64()
- elif util.is_timedelta64_object(value):
- # exclude np.timedelta64("NaT") from value != value below
- pass
- elif value is None or value != value:
- return np.datetime64("NaT", "ns")
- raise ValueError("cannot set a Timestamp with a non-timestamp "
- f"{type(value).__name__}")
-
- elif arr.descr.type_num == NPY_TIMEDELTA:
- if util.is_array(value):
- pass
- elif isinstance(value, timedelta) or util.is_timedelta64_object(value):
- value = Timedelta(value)
- if value is NaT:
- return np.timedelta64("NaT", "ns")
- return value.to_timedelta64()
- elif util.is_datetime64_object(value):
- # exclude np.datetime64("NaT") which would otherwise be picked up
- # by the `value != value check below
- pass
- elif value is None or value != value:
- return np.timedelta64("NaT", "ns")
- raise ValueError("cannot set a Timedelta with a non-timedelta "
- f"{type(value).__name__}")
-
- else:
- validate_numeric_casting(arr.dtype, value)
-
- return value
-
-
-cpdef validate_numeric_casting(dtype, object value):
- # Note: we can't annotate dtype as cnp.dtype because that cases dtype.type
- # to integer
- if issubclass(dtype.type, (np.integer, np.bool_)):
- if util.is_float_object(value) and value != value:
- raise ValueError("Cannot assign nan to integer series")
-
- if (issubclass(dtype.type, (np.integer, np.floating, np.complex)) and
- not issubclass(dtype.type, np.bool_)):
- if util.is_bool_object(value):
- raise ValueError("Cannot assign bool to float/integer series")
-
-
cdef class BaseMultiIndexCodesEngine:
"""
Base class for MultiIndexUIntEngine and MultiIndexPyIntEngine, which
diff --git a/pandas/core/dtypes/cast.py b/pandas/core/dtypes/cast.py
index 52c569793e499..0719b8ce6010b 100644
--- a/pandas/core/dtypes/cast.py
+++ b/pandas/core/dtypes/cast.py
@@ -1,11 +1,18 @@
""" routings for casting """
-from datetime import datetime, timedelta
+from datetime import date, datetime, timedelta
import numpy as np
from pandas._libs import lib, tslib, tslibs
-from pandas._libs.tslibs import NaT, OutOfBoundsDatetime, Period, iNaT
+from pandas._libs.tslibs import (
+ NaT,
+ OutOfBoundsDatetime,
+ Period,
+ Timedelta,
+ Timestamp,
+ iNaT,
+)
from pandas._libs.tslibs.timezones import tz_compare
from pandas._typing import Dtype
from pandas.util._validators import validate_bool_kwarg
@@ -1599,3 +1606,59 @@ def maybe_cast_to_integer_array(arr, dtype, copy: bool = False):
if is_integer_dtype(dtype) and (is_float_dtype(arr) or is_object_dtype(arr)):
raise ValueError("Trying to coerce float values to integers")
+
+
+def convert_scalar_for_putitemlike(scalar, dtype: np.dtype):
+ """
+ Convert datetimelike scalar if we are setting into a datetime64
+ or timedelta64 ndarray.
+
+ Parameters
+ ----------
+ scalar : scalar
+    dtype : np.dtype
+
+ Returns
+ -------
+ scalar
+ """
+ if dtype.kind == "m":
+ if isinstance(scalar, (timedelta, np.timedelta64)):
+ # We have to cast after asm8 in case we have NaT
+ return Timedelta(scalar).asm8.view("timedelta64[ns]")
+ elif scalar is None or scalar is NaT or (is_float(scalar) and np.isnan(scalar)):
+ return np.timedelta64("NaT", "ns")
+ if dtype.kind == "M":
+ if isinstance(scalar, (date, np.datetime64)):
+ # Note: we include date, not just datetime
+ return Timestamp(scalar).to_datetime64()
+ elif scalar is None or scalar is NaT or (is_float(scalar) and np.isnan(scalar)):
+ return np.datetime64("NaT", "ns")
+ else:
+ validate_numeric_casting(dtype, scalar)
+ return scalar
+
+
+def validate_numeric_casting(dtype: np.dtype, value):
+ """
+ Check that we can losslessly insert the given value into an array
+ with the given dtype.
+
+ Parameters
+ ----------
+ dtype : np.dtype
+ value : scalar
+
+ Raises
+ ------
+ ValueError
+ """
+ if issubclass(dtype.type, (np.integer, np.bool_)):
+ if is_float(value) and np.isnan(value):
+ raise ValueError("Cannot assign nan to integer series")
+
+ if issubclass(dtype.type, (np.integer, np.floating, np.complex)) and not issubclass(
+ dtype.type, np.bool_
+ ):
+ if is_bool(value):
+ raise ValueError("Cannot assign bool to float/integer series")
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 83a2a509c0743..8b3fd808957bb 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -40,7 +40,7 @@
from pandas._config import get_option
-from pandas._libs import algos as libalgos, index as libindex, lib, properties
+from pandas._libs import algos as libalgos, lib, properties
from pandas._typing import Axes, Axis, Dtype, FilePathOrBuffer, Label, Level, Renamer
from pandas.compat import PY37
from pandas.compat._optional import import_optional_dependency
@@ -69,6 +69,7 @@
maybe_infer_to_datetimelike,
maybe_upcast,
maybe_upcast_putmask,
+ validate_numeric_casting,
)
from pandas.core.dtypes.common import (
ensure_float64,
@@ -3025,7 +3026,7 @@ def _set_value(self, index, col, value, takeable: bool = False):
series = self._get_item_cache(col)
engine = self.index._engine
loc = engine.get_loc(index)
- libindex.validate_numeric_casting(series.dtype, value)
+ validate_numeric_casting(series.dtype, value)
series._values[loc] = value
# Note: trying to use series._set_value breaks tests in
diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index ccb4927d9b4b7..c13f0ae6462fc 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -18,7 +18,10 @@
from pandas.util._decorators import Appender, Substitution, cache_readonly
from pandas.core.dtypes import concat as _concat
-from pandas.core.dtypes.cast import maybe_cast_to_integer_array
+from pandas.core.dtypes.cast import (
+ maybe_cast_to_integer_array,
+ validate_numeric_casting,
+)
from pandas.core.dtypes.common import (
ensure_categorical,
ensure_int64,
@@ -4653,7 +4656,7 @@ def set_value(self, arr, key, value):
stacklevel=2,
)
loc = self._engine.get_loc(key)
- libindex.validate_numeric_casting(arr.dtype, value)
+ validate_numeric_casting(arr.dtype, value)
arr[loc] = value
_index_shared_docs[
diff --git a/pandas/core/internals/blocks.py b/pandas/core/internals/blocks.py
index cb03fbe1770b3..85a26179276f5 100644
--- a/pandas/core/internals/blocks.py
+++ b/pandas/core/internals/blocks.py
@@ -8,7 +8,6 @@
import numpy as np
from pandas._libs import NaT, Timestamp, algos as libalgos, lib, tslib, writers
-from pandas._libs.index import convert_scalar
import pandas._libs.internals as libinternals
from pandas._libs.tslibs import Timedelta, conversion
from pandas._libs.tslibs.timezones import tz_compare
@@ -16,6 +15,7 @@
from pandas.core.dtypes.cast import (
astype_nansafe,
+ convert_scalar_for_putitemlike,
find_common_type,
infer_dtype_from,
infer_dtype_from_scalar,
@@ -762,7 +762,7 @@ def replace(
# The only non-DatetimeLike class that also has a non-trivial
# try_coerce_args is ObjectBlock, but that overrides replace,
# so does not get here.
- to_replace = convert_scalar(values, to_replace)
+ to_replace = convert_scalar_for_putitemlike(to_replace, values.dtype)
mask = missing.mask_missing(values, to_replace)
if filter is not None:
@@ -841,7 +841,7 @@ def setitem(self, indexer, value):
# We only get here for non-Extension Blocks, so _try_coerce_args
# is only relevant for DatetimeBlock and TimedeltaBlock
if lib.is_scalar(value):
- value = convert_scalar(values, value)
+ value = convert_scalar_for_putitemlike(value, values.dtype)
else:
# current dtype cannot store value, coerce to common dtype
@@ -957,7 +957,7 @@ def putmask(self, mask, new, align=True, inplace=False, axis=0, transpose=False)
# We only get here for non-Extension Blocks, so _try_coerce_args
# is only relevant for DatetimeBlock and TimedeltaBlock
if lib.is_scalar(new):
- new = convert_scalar(new_values, new)
+ new = convert_scalar_for_putitemlike(new, new_values.dtype)
if transpose:
new_values = new_values.T
@@ -1200,7 +1200,7 @@ def _interpolate_with_fill(
values = self.values if inplace else self.values.copy()
# We only get here for non-ExtensionBlock
- fill_value = convert_scalar(self.values, fill_value)
+ fill_value = convert_scalar_for_putitemlike(fill_value, self.values.dtype)
values = missing.interpolate_2d(
values,
@@ -1405,7 +1405,7 @@ def where_func(cond, values, other):
raise TypeError
if lib.is_scalar(other) and isinstance(values, np.ndarray):
# convert datetime to datetime64, timedelta to timedelta64
- other = convert_scalar(values, other)
+ other = convert_scalar_for_putitemlike(other, values.dtype)
# By the time we get here, we should have all Series/Index
# args extracted to ndarray
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 2ffb22d2d272f..fb316c8883e78 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -23,13 +23,12 @@
from pandas._config import get_option
from pandas._libs import lib, properties, reshape, tslibs
-from pandas._libs.index import validate_numeric_casting
from pandas._typing import Label
from pandas.compat.numpy import function as nv
from pandas.util._decorators import Appender, Substitution
from pandas.util._validators import validate_bool_kwarg, validate_percentile
-from pandas.core.dtypes.cast import convert_dtypes
+from pandas.core.dtypes.cast import convert_dtypes, validate_numeric_casting
from pandas.core.dtypes.common import (
_is_unorderable_exception,
ensure_platform_int,
| There's nothing about this that particularly benefits from being in Cython (I think until recently this was used within index.pyx) and it's clearer in Python. Plus we get a slightly smaller/faster build. | https://api.github.com/repos/pandas-dev/pandas/pulls/31672 | 2020-02-04T23:01:56Z | 2020-02-05T02:45:09Z | 2020-02-05T02:45:09Z | 2020-02-05T02:50:39Z |
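The `validate_numeric_casting` body moved out of Cython above can be sketched in plain Python; this version keys on a numpy-style dtype *kind* character instead of a real `np.dtype`, so it is an approximation of the pandas function, not a drop-in:

```python
import math


def validate_numeric_casting(kind: str, value) -> None:
    """Reject lossy scalar insertions: NaN into integer/bool arrays,
    and bools into (non-bool) numeric arrays.

    kind: numpy-style dtype kind -- "i" integer, "b" bool, "f" float, "c" complex.
    """
    if kind in ("i", "b") and isinstance(value, float) and math.isnan(value):
        raise ValueError("Cannot assign nan to integer series")
    if kind in ("i", "f", "c") and isinstance(value, bool):
        raise ValueError("Cannot assign bool to float/integer series")


validate_numeric_casting("f", float("nan"))  # NaN into float is fine
try:
    validate_numeric_casting("i", float("nan"))
except ValueError as err:
    assert "nan" in str(err)
try:
    validate_numeric_casting("f", True)
except ValueError as err:
    assert "bool" in str(err)
```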
Backport PR #31477 on branch 1.0.x (REGR: Fix TypeError in groupby min / max of period column) | diff --git a/doc/source/whatsnew/v1.0.1.rst b/doc/source/whatsnew/v1.0.1.rst
index f9c756b2518af..20cfcfbde389c 100644
--- a/doc/source/whatsnew/v1.0.1.rst
+++ b/doc/source/whatsnew/v1.0.1.rst
@@ -20,6 +20,7 @@ Fixed regressions
- Fixed regression in ``DataFrame.__setitem__`` raising an ``AttributeError`` with a :class:`MultiIndex` and a non-monotonic indexer (:issue:`31449`)
- Fixed regression in :class:`Series` multiplication when multiplying a numeric :class:`Series` with >10000 elements with a timedelta-like scalar (:issue:`31457`)
- Fixed regression in :meth:`GroupBy.apply` if called with a function which returned a non-pandas non-scalar object (e.g. a list or numpy array) (:issue:`31441`)
+- Fixed regression in :meth:`DataFrame.groupby` whereby taking the minimum or maximum of a column with period dtype would raise a ``TypeError``. (:issue:`31471`)
- Fixed regression in :meth:`to_datetime` when parsing non-nanosecond resolution datetimes (:issue:`31491`)
- Fixed regression in :meth:`~DataFrame.to_csv` where specifying an ``na_rep`` might truncate the values written (:issue:`31447`)
- Fixed regression in :class:`Categorical` construction with ``numpy.str_`` categories (:issue:`31499`)
diff --git a/pandas/core/groupby/ops.py b/pandas/core/groupby/ops.py
index 2e95daa392976..92ce80ff1987c 100644
--- a/pandas/core/groupby/ops.py
+++ b/pandas/core/groupby/ops.py
@@ -31,6 +31,7 @@
is_extension_array_dtype,
is_integer_dtype,
is_numeric_dtype,
+ is_period_dtype,
is_sparse,
is_timedelta64_dtype,
needs_i8_conversion,
@@ -567,7 +568,12 @@ def _cython_operation(
if swapped:
result = result.swapaxes(0, axis)
- if is_datetime64tz_dtype(orig_values.dtype):
+ if is_datetime64tz_dtype(orig_values.dtype) or is_period_dtype(
+ orig_values.dtype
+ ):
+ # We need to use the constructors directly for these dtypes
+ # since numpy won't recognize them
+ # https://github.com/pandas-dev/pandas/issues/31471
result = type(orig_values)(result.astype(np.int64), dtype=orig_values.dtype)
elif is_datetimelike and kind == "aggregate":
result = result.astype(orig_values.dtype)
diff --git a/pandas/tests/groupby/aggregate/test_aggregate.py b/pandas/tests/groupby/aggregate/test_aggregate.py
index 723aec15d14bc..3716fa7f3d87f 100644
--- a/pandas/tests/groupby/aggregate/test_aggregate.py
+++ b/pandas/tests/groupby/aggregate/test_aggregate.py
@@ -685,6 +685,34 @@ def aggfunc(x):
tm.assert_frame_equal(result, expected)
+@pytest.mark.parametrize("func", ["min", "max"])
+def test_groupby_aggregate_period_column(func):
+ # GH 31471
+ groups = [1, 2]
+ periods = pd.period_range("2020", periods=2, freq="Y")
+ df = pd.DataFrame({"a": groups, "b": periods})
+
+ result = getattr(df.groupby("a")["b"], func)()
+ idx = pd.Int64Index([1, 2], name="a")
+ expected = pd.Series(periods, index=idx, name="b")
+
+ tm.assert_series_equal(result, expected)
+
+
+@pytest.mark.parametrize("func", ["min", "max"])
+def test_groupby_aggregate_period_frame(func):
+ # GH 31471
+ groups = [1, 2]
+ periods = pd.period_range("2020", periods=2, freq="Y")
+ df = pd.DataFrame({"a": groups, "b": periods})
+
+ result = getattr(df.groupby("a"), func)()
+ idx = pd.Int64Index([1, 2], name="a")
+ expected = pd.DataFrame({"b": periods}, index=idx)
+
+ tm.assert_frame_equal(result, expected)
+
+
class TestLambdaMangling:
def test_maybe_mangle_lambdas_passthrough(self):
assert _maybe_mangle_lambdas("mean") == "mean"
| Backport PR #31477: REGR: Fix TypeError in groupby min / max of period column | https://api.github.com/repos/pandas-dev/pandas/pulls/31671 | 2020-02-04T22:53:09Z | 2020-02-05T00:22:44Z | 2020-02-05T00:22:44Z | 2020-02-05T00:22:44Z |
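The regression fixed above can be exercised end-to-end; this mirrors the `test_groupby_aggregate_period_column` test added in the diff (requires a pandas version containing the fix):

```python
import pandas as pd

groups = [1, 2]
periods = pd.period_range("2020", periods=2, freq="Y")
df = pd.DataFrame({"a": groups, "b": periods})

# before the fix this raised TypeError because numpy cannot recognize
# the period dtype when reconstructing the aggregation result
result = df.groupby("a")["b"].min()
assert str(result.dtype).startswith("period")  # dtype survives the aggregation
```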
CLN: remove kwargs from signature of (Index|MultiIndex).copy | diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index e431d0bcf7e9b..38ff3f3c3b871 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -823,20 +823,22 @@ def repeat(self, repeats, axis=None):
# --------------------------------------------------------------------
# Copying Methods
- def copy(self, name=None, deep=False, dtype=None, **kwargs):
+ def copy(self, name=None, deep=False, dtype=None, names=None):
"""
Make a copy of this object. Name and dtype sets those attributes on
the new object.
Parameters
----------
- name : str, optional
+ name : Label
deep : bool, default False
- dtype : numpy dtype or pandas type
+ dtype : numpy dtype or pandas type, optional
+ names : list-like, optional
+ Kept for compatibility with MultiIndex. Should not be used.
Returns
-------
- copy : Index
+ Index
Notes
-----
@@ -848,7 +850,6 @@ def copy(self, name=None, deep=False, dtype=None, **kwargs):
else:
new_index = self._shallow_copy()
- names = kwargs.get("names")
names = self._validate_names(name=name, names=names, deep=deep)
new_index = new_index.set_names(names)
diff --git a/pandas/core/indexes/multi.py b/pandas/core/indexes/multi.py
index 708bea7d132a2..2052082e692a6 100644
--- a/pandas/core/indexes/multi.py
+++ b/pandas/core/indexes/multi.py
@@ -1000,8 +1000,8 @@ def copy(
levels=None,
codes=None,
deep=False,
+ name=None,
_set_identity=False,
- **kwargs,
):
"""
Make a copy of this object. Names, dtype, levels and codes can be
@@ -1013,10 +1013,13 @@ def copy(
dtype : numpy dtype or pandas type, optional
levels : sequence, optional
codes : sequence, optional
+ deep : bool, default False
+ name : Label
+ Kept for compatibility with 1-dimensional Index. Should not be used.
Returns
-------
- copy : MultiIndex
+ MultiIndex
Notes
-----
@@ -1024,10 +1027,7 @@ def copy(
``deep``, but if ``deep`` is passed it will attempt to deepcopy.
This could be potentially expensive on large MultiIndex objects.
"""
- name = kwargs.get("name")
names = self._validate_names(name=name, names=names, deep=deep)
- if "labels" in kwargs:
- raise TypeError("'labels' argument has been removed; use 'codes' instead")
if deep:
from copy import deepcopy
| Removes ``**kwargs`` from ``Index.copy`` and ``MultiIndex.copy``.
| https://api.github.com/repos/pandas-dev/pandas/pulls/31669 | 2020-02-04T22:52:42Z | 2020-02-09T17:30:24Z | 2020-02-09T17:30:24Z | 2020-02-12T21:48:50Z |
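The motivation for dropping `**kwargs` is that an explicit signature surfaces typo'd keyword arguments instead of silently ignoring them; a small sketch of the difference (function names are illustrative):

```python
def copy_explicit(name=None, deep=False, dtype=None, names=None):
    # unknown keywords (e.g. typos) raise TypeError immediately
    return {"name": name, "deep": deep, "dtype": dtype, "names": names}


def copy_kwargs(name=None, deep=False, dtype=None, **kwargs):
    # the old-style signature: typos like "nmaes" are silently swallowed
    return {"name": name, "deep": deep, "dtype": dtype, "names": kwargs.get("names")}


assert copy_kwargs(nmaes=["a"])["names"] is None  # typo ignored, bug hidden
try:
    copy_explicit(nmaes=["a"])
except TypeError:
    pass  # typo caught at the call site
else:
    raise AssertionError("expected TypeError")
```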
Avoid unnecessary re-opening of HDF5 files (Closes: #58248) | diff --git a/doc/source/whatsnew/v3.0.0.rst b/doc/source/whatsnew/v3.0.0.rst
index 17328e6084cb4..237001df750c8 100644
--- a/doc/source/whatsnew/v3.0.0.rst
+++ b/doc/source/whatsnew/v3.0.0.rst
@@ -331,6 +331,7 @@ Performance improvements
- Performance improvement in :meth:`RangeIndex.reindex` returning a :class:`RangeIndex` instead of a :class:`Index` when possible. (:issue:`57647`, :issue:`57752`)
- Performance improvement in :meth:`RangeIndex.take` returning a :class:`RangeIndex` instead of a :class:`Index` when possible. (:issue:`57445`, :issue:`57752`)
- Performance improvement in :func:`merge` if hash-join can be used (:issue:`57970`)
+- Performance improvement in :meth:`to_hdf`, avoiding unnecessary reopenings of the HDF5 file to speed up data addition to files with a very large number of groups. (:issue:`58248`)
- Performance improvement in ``DataFrameGroupBy.__len__`` and ``SeriesGroupBy.__len__`` (:issue:`57595`)
- Performance improvement in indexing operations for string dtypes (:issue:`56997`)
- Performance improvement in unary methods on a :class:`RangeIndex` returning a :class:`RangeIndex` instead of a :class:`Index` when possible. (:issue:`57825`)
@@ -406,7 +407,6 @@ I/O
- Bug in :meth:`DataFrame.to_string` that raised ``StopIteration`` with nested DataFrames. (:issue:`16098`)
- Bug in :meth:`read_csv` raising ``TypeError`` when ``index_col`` is specified and ``na_values`` is a dict containing the key ``None``. (:issue:`57547`)
-
Period
^^^^^^
-
diff --git a/pandas/io/pytables.py b/pandas/io/pytables.py
index 5ecf7e287ea58..3cfd740a51304 100644
--- a/pandas/io/pytables.py
+++ b/pandas/io/pytables.py
@@ -292,14 +292,14 @@ def to_hdf(
dropna=dropna,
)
- path_or_buf = stringify_path(path_or_buf)
- if isinstance(path_or_buf, str):
+ if isinstance(path_or_buf, HDFStore):
+ f(path_or_buf)
+ else:
+ path_or_buf = stringify_path(path_or_buf)
with HDFStore(
path_or_buf, mode=mode, complevel=complevel, complib=complib
) as store:
f(store)
- else:
- f(path_or_buf)
def read_hdf(
| - [X] closes #58248
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [X] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/58275 | 2024-04-16T07:57:34Z | 2024-04-16T18:49:46Z | 2024-04-16T18:49:46Z | 2024-04-16T19:06:56Z |
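The dispatch introduced above — call `f` directly on an already-open `HDFStore`, and only open (and close) the file when given a path — can be sketched without pytables; `Store` and `to_store` below are illustrative stand-ins, not the pandas API:

```python
class Store:
    """Stand-in for HDFStore; counts how often the underlying file is opened."""

    opens = 0

    def __init__(self):
        type(self).opens += 1  # opening the "file"
        self.data = {}

    def put(self, key, value):
        self.data[key] = value


def to_store(path_or_store, key, value):
    # the dispatch from the patch: reuse a live store,
    # only open (and implicitly close) when given a path
    if isinstance(path_or_store, Store):
        path_or_store.put(key, value)
    else:
        store = Store()
        store.put(key, value)


store = Store()
opened = Store.opens
for i in range(100):
    to_store(store, f"g{i}", i)
assert Store.opens == opened  # 100 writes, zero re-opens
```

With many groups, skipping the per-call open/close is where the reported speedup comes from.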
Backport PR #58268 on branch 2.2.x (CI/TST: Unxfail test_slice_locs_negative_step Pyarrow test) | diff --git a/pandas/tests/indexes/object/test_indexing.py b/pandas/tests/indexes/object/test_indexing.py
index 443cacf94d239..ebf9dac715f8d 100644
--- a/pandas/tests/indexes/object/test_indexing.py
+++ b/pandas/tests/indexes/object/test_indexing.py
@@ -7,7 +7,6 @@
NA,
is_matching_na,
)
-from pandas.compat import pa_version_under16p0
import pandas.util._test_decorators as td
import pandas as pd
@@ -201,16 +200,7 @@ class TestSliceLocs:
(pd.IndexSlice["m":"m":-1], ""), # type: ignore[misc]
],
)
- def test_slice_locs_negative_step(self, in_slice, expected, dtype, request):
- if (
- not pa_version_under16p0
- and dtype == "string[pyarrow_numpy]"
- and in_slice == slice("a", "a", -1)
- ):
- request.applymarker(
- pytest.mark.xfail(reason="https://github.com/apache/arrow/issues/40642")
- )
-
+ def test_slice_locs_negative_step(self, in_slice, expected, dtype):
index = Index(list("bcdxy"), dtype=dtype)
s_start, s_stop = index.slice_locs(in_slice.start, in_slice.stop, in_slice.step)
| Backport PR #58268: CI/TST: Unxfail test_slice_locs_negative_step Pyarrow test | https://api.github.com/repos/pandas-dev/pandas/pulls/58269 | 2024-04-15T19:48:04Z | 2024-04-15T21:01:12Z | 2024-04-15T21:01:12Z | 2024-04-15T21:01:12Z |
CI/TST: Unxfail test_slice_locs_negative_step Pyarrow test | diff --git a/pandas/tests/indexes/object/test_indexing.py b/pandas/tests/indexes/object/test_indexing.py
index 34cc8eab4d812..039836da75cd5 100644
--- a/pandas/tests/indexes/object/test_indexing.py
+++ b/pandas/tests/indexes/object/test_indexing.py
@@ -7,7 +7,6 @@
NA,
is_matching_na,
)
-from pandas.compat import pa_version_under16p0
import pandas.util._test_decorators as td
import pandas as pd
@@ -202,16 +201,7 @@ class TestSliceLocs:
(pd.IndexSlice["m":"m":-1], ""), # type: ignore[misc]
],
)
- def test_slice_locs_negative_step(self, in_slice, expected, dtype, request):
- if (
- not pa_version_under16p0
- and dtype == "string[pyarrow_numpy]"
- and in_slice == slice("a", "a", -1)
- ):
- request.applymarker(
- pytest.mark.xfail(reason="https://github.com/apache/arrow/issues/40642")
- )
-
+ def test_slice_locs_negative_step(self, in_slice, expected, dtype):
index = Index(list("bcdxy"), dtype=dtype)
s_start, s_stop = index.slice_locs(in_slice.start, in_slice.stop, in_slice.step)
| null | https://api.github.com/repos/pandas-dev/pandas/pulls/58268 | 2024-04-15T18:41:03Z | 2024-04-15T19:47:31Z | 2024-04-15T19:47:31Z | 2024-04-15T19:47:34Z |
REF: Clean up some iterator usages | diff --git a/pandas/_libs/tslibs/offsets.pyx b/pandas/_libs/tslibs/offsets.pyx
index e36abdf0ad971..107608ec9f606 100644
--- a/pandas/_libs/tslibs/offsets.pyx
+++ b/pandas/_libs/tslibs/offsets.pyx
@@ -219,8 +219,7 @@ cdef _get_calendar(weekmask, holidays, calendar):
holidays = holidays + calendar.holidays().tolist()
except AttributeError:
pass
- holidays = [_to_dt64D(dt) for dt in holidays]
- holidays = tuple(sorted(holidays))
+ holidays = tuple(sorted(_to_dt64D(dt) for dt in holidays))
kwargs = {"weekmask": weekmask}
if holidays:
@@ -419,11 +418,10 @@ cdef class BaseOffset:
if "holidays" in all_paras and not all_paras["holidays"]:
all_paras.pop("holidays")
- exclude = ["kwds", "name", "calendar"]
- attrs = [(k, v) for k, v in all_paras.items()
- if (k not in exclude) and (k[0] != "_")]
- attrs = sorted(set(attrs))
- params = tuple([str(type(self))] + attrs)
+ exclude = {"kwds", "name", "calendar"}
+ attrs = {(k, v) for k, v in all_paras.items()
+ if (k not in exclude) and (k[0] != "_")}
+ params = tuple([str(type(self))] + sorted(attrs))
return params
@property
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index cd4812c3f78ae..b65a00db7d7df 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -2301,8 +2301,8 @@ def maybe_reorder(
exclude.update(index)
if any(exclude):
- arr_exclude = [x for x in exclude if x in arr_columns]
- to_remove = [arr_columns.get_loc(col) for col in arr_exclude]
+ arr_exclude = (x for x in exclude if x in arr_columns)
+ to_remove = {arr_columns.get_loc(col) for col in arr_exclude}
arrays = [v for i, v in enumerate(arrays) if i not in to_remove]
columns = columns.drop(exclude)
@@ -3705,7 +3705,7 @@ def transpose(
nv.validate_transpose(args, {})
# construct the args
- dtypes = list(self.dtypes)
+ first_dtype = self.dtypes.iloc[0] if len(self.columns) else None
if self._can_fast_transpose:
# Note: tests pass without this, but this improves perf quite a bit.
@@ -3723,11 +3723,11 @@ def transpose(
elif (
self._is_homogeneous_type
- and dtypes
- and isinstance(dtypes[0], ExtensionDtype)
+ and first_dtype is not None
+ and isinstance(first_dtype, ExtensionDtype)
):
new_values: list
- if isinstance(dtypes[0], BaseMaskedDtype):
+ if isinstance(first_dtype, BaseMaskedDtype):
# We have masked arrays with the same dtype. We can transpose faster.
from pandas.core.arrays.masked import (
transpose_homogeneous_masked_arrays,
@@ -3736,7 +3736,7 @@ def transpose(
new_values = transpose_homogeneous_masked_arrays(
cast(Sequence[BaseMaskedArray], self._iter_column_arrays())
)
- elif isinstance(dtypes[0], ArrowDtype):
+ elif isinstance(first_dtype, ArrowDtype):
# We have arrow EAs with the same dtype. We can transpose faster.
from pandas.core.arrays.arrow.array import (
ArrowExtensionArray,
@@ -3748,10 +3748,11 @@ def transpose(
)
else:
# We have other EAs with the same dtype. We preserve dtype in transpose.
- dtyp = dtypes[0]
- arr_typ = dtyp.construct_array_type()
+ arr_typ = first_dtype.construct_array_type()
values = self.values
- new_values = [arr_typ._from_sequence(row, dtype=dtyp) for row in values]
+ new_values = [
+ arr_typ._from_sequence(row, dtype=first_dtype) for row in values
+ ]
result = type(self)._from_arrays(
new_values,
@@ -5882,7 +5883,7 @@ def set_index(
else:
arrays.append(self.index)
- to_remove: list[Hashable] = []
+ to_remove: set[Hashable] = set()
for col in keys:
if isinstance(col, MultiIndex):
arrays.extend(col._get_level_values(n) for n in range(col.nlevels))
@@ -5909,7 +5910,7 @@ def set_index(
arrays.append(frame[col])
names.append(col)
if drop:
- to_remove.append(col)
+ to_remove.add(col)
if len(arrays[-1]) != len(self):
# check newest element against length of calling frame, since
@@ -5926,7 +5927,7 @@ def set_index(
raise ValueError(f"Index has duplicate keys: {duplicates}")
# use set to handle duplicate column names gracefully in case of drop
- for c in set(to_remove):
+ for c in to_remove:
del frame[c]
# clear up memory usage
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 523ca9de201bf..9686c081b5fb3 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -2045,7 +2045,7 @@ def __setstate__(self, state) -> None:
# e.g. say fill_value needing _mgr to be
# defined
meta = set(self._internal_names + self._metadata)
- for k in list(meta):
+ for k in meta:
if k in state and k != "_flags":
v = state[k]
object.__setattr__(self, k, v)
diff --git a/pandas/core/internals/construction.py b/pandas/core/internals/construction.py
index 73b93110c9018..cea52bf8c91b2 100644
--- a/pandas/core/internals/construction.py
+++ b/pandas/core/internals/construction.py
@@ -567,7 +567,7 @@ def _extract_index(data) -> Index:
if len(data) == 0:
return default_index(0)
- raw_lengths = []
+ raw_lengths = set()
indexes: list[list[Hashable] | Index] = []
have_raw_arrays = False
@@ -583,7 +583,7 @@ def _extract_index(data) -> Index:
indexes.append(list(val.keys()))
elif is_list_like(val) and getattr(val, "ndim", 1) == 1:
have_raw_arrays = True
- raw_lengths.append(len(val))
+ raw_lengths.add(len(val))
elif isinstance(val, np.ndarray) and val.ndim > 1:
raise ValueError("Per-column arrays must each be 1-dimensional")
@@ -596,24 +596,23 @@ def _extract_index(data) -> Index:
index = union_indexes(indexes, sort=False)
if have_raw_arrays:
- lengths = list(set(raw_lengths))
- if len(lengths) > 1:
+ if len(raw_lengths) > 1:
raise ValueError("All arrays must be of the same length")
if have_dicts:
raise ValueError(
"Mixing dicts with non-Series may lead to ambiguous ordering."
)
-
+ raw_length = raw_lengths.pop()
if have_series:
- if lengths[0] != len(index):
+ if raw_length != len(index):
msg = (
- f"array length {lengths[0]} does not match index "
+ f"array length {raw_length} does not match index "
f"length {len(index)}"
)
raise ValueError(msg)
else:
- index = default_index(lengths[0])
+ index = default_index(raw_length)
return ensure_index(index)
diff --git a/pandas/core/tools/datetimes.py b/pandas/core/tools/datetimes.py
index 2aeb1aff07a54..df7a6cdb1ea52 100644
--- a/pandas/core/tools/datetimes.py
+++ b/pandas/core/tools/datetimes.py
@@ -1124,18 +1124,18 @@ def f(value):
# we require at least Ymd
required = ["year", "month", "day"]
- req = sorted(set(required) - set(unit_rev.keys()))
+ req = set(required) - set(unit_rev.keys())
if len(req):
- _required = ",".join(req)
+ _required = ",".join(sorted(req))
raise ValueError(
"to assemble mappings requires at least that "
f"[year, month, day] be specified: [{_required}] is missing"
)
# keys we don't recognize
- excess = sorted(set(unit_rev.keys()) - set(_unit_map.values()))
+ excess = set(unit_rev.keys()) - set(_unit_map.values())
if len(excess):
- _excess = ",".join(excess)
+ _excess = ",".join(sorted(excess))
raise ValueError(
f"extra keys have been passed to the datetime assemblage: [{_excess}]"
)
| null | https://api.github.com/repos/pandas-dev/pandas/pulls/58267 | 2024-04-15T18:29:27Z | 2024-04-16T18:28:30Z | 2024-04-16T18:28:30Z | 2024-04-16T18:28:33Z |
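The frame.py hunks above swap list-based bookkeeping for sets and generators (`to_remove` becomes a set, `arr_exclude` a generator). A pure-Python sketch of why that helps — this is illustrative only, not pandas internals: membership tests against a set are O(1) instead of O(n), and duplicate labels are deduplicated for free.

```python
# Illustrative only -- mirrors the to_remove list -> set change above,
# not actual pandas code.
def drop_excluded(values, labels, exclude):
    # Collect positions to drop in a set: O(1) membership checks below,
    # and repeated labels in `exclude` collapse automatically.
    to_remove = {labels.index(col) for col in exclude if col in labels}
    return [v for i, v in enumerate(values) if i not in to_remove]

print(drop_excluded([10, 20, 30], ["a", "b", "c"], ["b", "b", "z"]))  # → [10, 30]
```

The same reasoning applies to the `set_index` and `_extract_index` hunks: the code only ever needs membership and uniqueness, never ordering, so a set is the cheaper container.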
DOC: Enforce Numpy Docstring Validation for pandas.Float32Dtype and pandas.Float64Dtype | diff --git a/ci/code_checks.sh b/ci/code_checks.sh
index 01baadab67dbd..66f6bfd7195f9 100755
--- a/ci/code_checks.sh
+++ b/ci/code_checks.sh
@@ -153,8 +153,6 @@ if [[ -z "$CHECK" || "$CHECK" == "docstrings" ]]; then
-i "pandas.DatetimeTZDtype SA01" \
-i "pandas.DatetimeTZDtype.tz SA01" \
-i "pandas.DatetimeTZDtype.unit SA01" \
- -i "pandas.Float32Dtype SA01" \
- -i "pandas.Float64Dtype SA01" \
-i "pandas.Grouper PR02,SA01" \
-i "pandas.HDFStore.append PR01,SA01" \
-i "pandas.HDFStore.get SA01" \
diff --git a/pandas/core/arrays/floating.py b/pandas/core/arrays/floating.py
index 74b8cfb65cbc7..653e63e9d1e2d 100644
--- a/pandas/core/arrays/floating.py
+++ b/pandas/core/arrays/floating.py
@@ -135,6 +135,12 @@ class FloatingArray(NumericArray):
-------
None
+See Also
+--------
+CategoricalDtype : Type for categorical data with the categories and orderedness.
+IntegerDtype : An ExtensionDtype to hold a single size & kind of integer dtype.
+StringDtype : An ExtensionDtype for string data.
+
Examples
--------
For Float32Dtype:
| - [ ] xref #58067
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/58266 | 2024-04-15T18:18:59Z | 2024-04-15T19:48:32Z | 2024-04-15T19:48:32Z | 2024-04-16T03:55:37Z |
DOC: Move whatsnew 3.0 elements | diff --git a/doc/source/whatsnew/v3.0.0.rst b/doc/source/whatsnew/v3.0.0.rst
index ca0285ade466d..5493320f803f0 100644
--- a/doc/source/whatsnew/v3.0.0.rst
+++ b/doc/source/whatsnew/v3.0.0.rst
@@ -339,19 +339,6 @@ Performance improvements
Bug fixes
~~~~~~~~~
-- Fixed bug in :class:`SparseDtype` for equal comparison with na fill value. (:issue:`54770`)
-- Fixed bug in :meth:`.DataFrameGroupBy.median` where nat values gave an incorrect result. (:issue:`57926`)
-- Fixed bug in :meth:`DataFrame.cumsum` which was raising ``IndexError`` if dtype is ``timedelta64[ns]`` (:issue:`57956`)
-- Fixed bug in :meth:`DataFrame.eval` and :meth:`DataFrame.query` which caused an exception when using NumPy attributes via ``@`` notation, e.g., ``df.eval("@np.floor(a)")``. (:issue:`58041`)
-- Fixed bug in :meth:`DataFrame.join` inconsistently setting result index name (:issue:`55815`)
-- Fixed bug in :meth:`DataFrame.to_string` that raised ``StopIteration`` with nested DataFrames. (:issue:`16098`)
-- Fixed bug in :meth:`DataFrame.transform` that was returning the wrong order unless the index was monotonically increasing. (:issue:`57069`)
-- Fixed bug in :meth:`DataFrame.update` bool dtype being converted to object (:issue:`55509`)
-- Fixed bug in :meth:`DataFrameGroupBy.apply` that was returning a completely empty DataFrame when all return values of ``func`` were ``None`` instead of returning an empty DataFrame with the original columns and dtypes. (:issue:`57775`)
-- Fixed bug in :meth:`Series.diff` allowing non-integer values for the ``periods`` argument. (:issue:`56607`)
-- Fixed bug in :meth:`Series.rank` that doesn't preserve missing values for nullable integers when ``na_option='keep'``. (:issue:`56976`)
-- Fixed bug in :meth:`Series.replace` and :meth:`DataFrame.replace` inconsistently replacing matching instances when ``regex=True`` and missing values are present. (:issue:`56599`)
-- Fixed bug in :meth:`read_csv` raising ``TypeError`` when ``index_col`` is specified and ``na_values`` is a dict containing the key ``None``. (:issue:`57547`)
Categorical
^^^^^^^^^^^
@@ -363,12 +350,12 @@ Datetimelike
- Bug in :class:`Timestamp` constructor failing to raise when ``tz=None`` is explicitly specified in conjunction with timezone-aware ``tzinfo`` or data (:issue:`48688`)
- Bug in :func:`date_range` where the last valid timestamp would sometimes not be produced (:issue:`56134`)
- Bug in :func:`date_range` where using a negative frequency value would not include all points between the start and end values (:issue:`56382`)
--
+- Bug in :func:`tseries.api.guess_datetime_format` would fail to infer time format when "%Y" == "%H%M" (:issue:`57452`)
Timedelta
^^^^^^^^^
- Accuracy improvement in :meth:`Timedelta.to_pytimedelta` to round microseconds consistently for large nanosecond based Timedelta (:issue:`57841`)
--
+- Bug in :meth:`DataFrame.cumsum` which was raising ``IndexError`` if dtype is ``timedelta64[ns]`` (:issue:`57956`)
Timezones
^^^^^^^^^
@@ -382,6 +369,7 @@ Numeric
Conversion
^^^^^^^^^^
+- Bug in :meth:`DataFrame.update` bool dtype being converted to object (:issue:`55509`)
- Bug in :meth:`Series.astype` might modify read-only array inplace when casting to a string dtype (:issue:`57212`)
- Bug in :meth:`Series.reindex` not maintaining ``float32`` type when a ``reindex`` introduces a missing value (:issue:`45857`)
@@ -412,10 +400,11 @@ MultiIndex
I/O
^^^
+- Bug in :class:`DataFrame` and :class:`Series` ``repr`` of :py:class:`collections.abc.Mapping`` elements. (:issue:`57915`)
- Bug in :meth:`DataFrame.to_excel` when writing empty :class:`DataFrame` with :class:`MultiIndex` on both axes (:issue:`57696`)
-- Now all ``Mapping`` s are pretty printed correctly. Before only literal ``dict`` s were. (:issue:`57915`)
--
--
+- Bug in :meth:`DataFrame.to_string` that raised ``StopIteration`` with nested DataFrames. (:issue:`16098`)
+- Bug in :meth:`read_csv` raising ``TypeError`` when ``index_col`` is specified and ``na_values`` is a dict containing the key ``None``. (:issue:`57547`)
+
Period
^^^^^^
@@ -430,23 +419,25 @@ Plotting
Groupby/resample/rolling
^^^^^^^^^^^^^^^^^^^^^^^^
- Bug in :meth:`.DataFrameGroupBy.groups` and :meth:`.SeriesGroupby.groups` that would not respect groupby argument ``dropna`` (:issue:`55919`)
+- Bug in :meth:`.DataFrameGroupBy.median` where nat values gave an incorrect result. (:issue:`57926`)
- Bug in :meth:`.DataFrameGroupBy.quantile` when ``interpolation="nearest"`` is inconsistent with :meth:`DataFrame.quantile` (:issue:`47942`)
- Bug in :meth:`DataFrame.ewm` and :meth:`Series.ewm` when passed ``times`` and aggregation functions other than mean (:issue:`51695`)
--
+- Bug in :meth:`DataFrameGroupBy.apply` that was returning a completely empty DataFrame when all return values of ``func`` were ``None`` instead of returning an empty DataFrame with the original columns and dtypes. (:issue:`57775`)
+
Reshaping
^^^^^^^^^
--
+- Bug in :meth:`DataFrame.join` inconsistently setting result index name (:issue:`55815`)
-
Sparse
^^^^^^
--
+- Bug in :class:`SparseDtype` for equal comparison with na fill value. (:issue:`54770`)
-
ExtensionArray
^^^^^^^^^^^^^^
-- Fixed bug in :meth:`api.types.is_datetime64_any_dtype` where a custom :class:`ExtensionDtype` would return ``False`` for array-likes (:issue:`57055`)
+- Bug in :meth:`api.types.is_datetime64_any_dtype` where a custom :class:`ExtensionDtype` would return ``False`` for array-likes (:issue:`57055`)
-
Styler
@@ -456,11 +447,15 @@ Styler
Other
^^^^^
- Bug in :class:`DataFrame` when passing a ``dict`` with a NA scalar and ``columns`` that would always return ``np.nan`` (:issue:`57205`)
-- Bug in :func:`tseries.api.guess_datetime_format` would fail to infer time format when "%Y" == "%H%M" (:issue:`57452`)
- Bug in :func:`unique` on :class:`Index` not always returning :class:`Index` (:issue:`57043`)
+- Bug in :meth:`DataFrame.eval` and :meth:`DataFrame.query` which caused an exception when using NumPy attributes via ``@`` notation, e.g., ``df.eval("@np.floor(a)")``. (:issue:`58041`)
- Bug in :meth:`DataFrame.sort_index` when passing ``axis="columns"`` and ``ignore_index=True`` and ``ascending=False`` not returning a :class:`RangeIndex` columns (:issue:`57293`)
+- Bug in :meth:`DataFrame.transform` that was returning the wrong order unless the index was monotonically increasing. (:issue:`57069`)
- Bug in :meth:`DataFrame.where` where using a non-bool type array in the function would return a ``ValueError`` instead of a ``TypeError`` (:issue:`56330`)
- Bug in :meth:`Index.sort_values` when passing a key function that turns values into tuples, e.g. ``key=natsort.natsort_key``, would raise ``TypeError`` (:issue:`56081`)
+- Bug in :meth:`Series.diff` allowing non-integer values for the ``periods`` argument. (:issue:`56607`)
+- Bug in :meth:`Series.rank` that doesn't preserve missing values for nullable integers when ``na_option='keep'``. (:issue:`56976`)
+- Bug in :meth:`Series.replace` and :meth:`DataFrame.replace` inconsistently replacing matching instances when ``regex=True`` and missing values are present. (:issue:`56599`)
- Bug in Dataframe Interchange Protocol implementation was returning incorrect results for data buffers' associated dtype, for string and datetime columns (:issue:`54781`)
.. ***DO NOT USE THIS SECTION***
| null | https://api.github.com/repos/pandas-dev/pandas/pulls/58265 | 2024-04-15T17:38:54Z | 2024-04-15T18:41:50Z | 2024-04-15T18:41:50Z | 2024-04-15T18:41:53Z |
DOC: Reference Excel reader installation in the read and write tabular data tutorial | diff --git a/doc/source/getting_started/install.rst b/doc/source/getting_started/install.rst
index 3cd9e030d6b3c..cf5f15ceb8344 100644
--- a/doc/source/getting_started/install.rst
+++ b/doc/source/getting_started/install.rst
@@ -269,6 +269,8 @@ SciPy 1.10.0 computation Miscellaneous stati
xarray 2022.12.0 computation pandas-like API for N-dimensional data
========================= ================== =============== =============================================================
+.. _install.excel_dependencies:
+
Excel files
^^^^^^^^^^^
diff --git a/doc/source/getting_started/intro_tutorials/02_read_write.rst b/doc/source/getting_started/intro_tutorials/02_read_write.rst
index ae658ec6abbaf..aa032b186aeb9 100644
--- a/doc/source/getting_started/intro_tutorials/02_read_write.rst
+++ b/doc/source/getting_started/intro_tutorials/02_read_write.rst
@@ -111,6 +111,12 @@ strings (``object``).
My colleague requested the Titanic data as a spreadsheet.
+.. note::
+ If you want to use :func:`~pandas.to_excel` and :func:`~pandas.read_excel`,
+ you need to install an Excel reader as outlined in the
+ :ref:`Excel files <install.excel_dependencies>` section of the
+ installation documentation.
+
.. ipython:: python
titanic.to_excel("titanic.xlsx", sheet_name="passengers", index=False)
| Created a link to the recommended Excel reader installation section and referenced it in the `How do I read and write tabular data?` tutorial.
- [x] closes #58246
The below are not applicable because I am just updating documentation.
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/58261 | 2024-04-15T04:51:29Z | 2024-04-15T16:47:31Z | 2024-04-15T16:47:31Z | 2024-04-15T16:47:44Z |
DOC: replace deprecated frequency alias | diff --git a/pandas/core/arrays/datetimelike.py b/pandas/core/arrays/datetimelike.py
index f4f076103d8c3..8ada9d88e08bc 100644
--- a/pandas/core/arrays/datetimelike.py
+++ b/pandas/core/arrays/datetimelike.py
@@ -1787,7 +1787,7 @@ def strftime(self, date_format: str) -> npt.NDArray[np.object_]:
----------
freq : str or Offset
The frequency level to {op} the index to. Must be a fixed
- frequency like 'S' (second) not 'ME' (month end). See
+ frequency like 's' (second) not 'ME' (month end). See
:ref:`frequency aliases <timeseries.offset_aliases>` for
a list of possible `freq` values.
ambiguous : 'infer', bool-ndarray, 'NaT', default 'raise'
| Replace 'S' with 's'.
Deprecated since version 2.2.0: https://pandas.pydata.org/docs/whatsnew/v2.2.0.html#other-deprecations.
See https://github.com/pandas-dev/pandas/issues/52536
- ~~[ ] closes #xxxx (Replace xxxx with the GitHub issue number)~~
- ~~[ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature~~
- ~~[ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).~~
- ~~[ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.~~
- ~~[ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.~~
| https://api.github.com/repos/pandas-dev/pandas/pulls/58256 | 2024-04-14T14:11:16Z | 2024-04-16T17:12:19Z | 2024-04-16T17:12:19Z | 2024-04-16T17:12:29Z |
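For context on the alias change above: pandas 2.2 deprecated the uppercase time-frequency aliases in favor of lowercase spellings. A tiny hypothetical helper — not a pandas API, and the mapping below is an assumption based on the 2.2 deprecation notes — that normalizes the old aliases:

```python
# Hypothetical helper -- not part of pandas. Maps uppercase aliases
# deprecated in pandas 2.2 to the lowercase spellings that replace them.
DEPRECATED_ALIASES = {"S": "s", "T": "min", "H": "h", "L": "ms", "U": "us", "N": "ns"}

def normalize_freq(freq: str) -> str:
    # Aliases not in the table (e.g. 'ME', 'D') pass through unchanged.
    return DEPRECATED_ALIASES.get(freq, freq)

print(normalize_freq("S"))  # → s
```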
DEPR: logical ops with dtype-less sequences | diff --git a/doc/source/whatsnew/v3.0.0.rst b/doc/source/whatsnew/v3.0.0.rst
index 50643454bbcec..f709bec842c86 100644
--- a/doc/source/whatsnew/v3.0.0.rst
+++ b/doc/source/whatsnew/v3.0.0.rst
@@ -208,6 +208,7 @@ Removal of prior version deprecations/changes
- :meth:`SeriesGroupBy.agg` no longer pins the name of the group to the input passed to the provided ``func`` (:issue:`51703`)
- All arguments except ``name`` in :meth:`Index.rename` are now keyword only (:issue:`56493`)
- All arguments except the first ``path``-like argument in IO writers are now keyword only (:issue:`54229`)
+- Disallow allowing logical operations (``||``, ``&``, ``^``) between pandas objects and dtype-less sequences (e.g. ``list``, ``tuple``); wrap the objects in :class:`Series`, :class:`Index`, or ``np.array`` first instead (:issue:`52264`)
- Disallow automatic casting to object in :class:`Series` logical operations (``&``, ``^``, ``||``) between series with mismatched indexes and dtypes other than ``object`` or ``bool`` (:issue:`52538`)
- Disallow calling :meth:`Series.replace` or :meth:`DataFrame.replace` without a ``value`` and with non-dict-like ``to_replace`` (:issue:`33302`)
- Disallow constructing a :class:`arrays.SparseArray` with scalar data (:issue:`53039`)
diff --git a/pandas/core/ops/array_ops.py b/pandas/core/ops/array_ops.py
index 810e30d369729..983a3df57e369 100644
--- a/pandas/core/ops/array_ops.py
+++ b/pandas/core/ops/array_ops.py
@@ -12,7 +12,6 @@
TYPE_CHECKING,
Any,
)
-import warnings
import numpy as np
@@ -29,7 +28,6 @@
is_supported_dtype,
is_unitless,
)
-from pandas.util._exceptions import find_stack_level
from pandas.core.dtypes.cast import (
construct_1d_object_array_from_listlike,
@@ -424,15 +422,13 @@ def fill_bool(x, left=None):
right = lib.item_from_zerodim(right)
if is_list_like(right) and not hasattr(right, "dtype"):
# e.g. list, tuple
- warnings.warn(
+ raise TypeError(
+ # GH#52264
"Logical ops (and, or, xor) between Pandas objects and dtype-less "
- "sequences (e.g. list, tuple) are deprecated and will raise in a "
- "future version. Wrap the object in a Series, Index, or np.array "
+ "sequences (e.g. list, tuple) are no longer supported. "
+ "Wrap the object in a Series, Index, or np.array "
"before operating instead.",
- FutureWarning,
- stacklevel=find_stack_level(),
)
- right = construct_1d_object_array_from_listlike(right)
# NB: We assume extract_array has already been called on left and right
lvalues = ensure_wrapped_if_datetimelike(left)
diff --git a/pandas/tests/series/test_arithmetic.py b/pandas/tests/series/test_arithmetic.py
index 00f48bf3b1d78..44bf3475b85a6 100644
--- a/pandas/tests/series/test_arithmetic.py
+++ b/pandas/tests/series/test_arithmetic.py
@@ -807,9 +807,6 @@ def test_series_ops_name_retention(self, flex, box, names, all_binary_operators)
r"Logical ops \(and, or, xor\) between Pandas objects and "
"dtype-less sequences"
)
- warn = None
- if box in [list, tuple] and is_logical:
- warn = FutureWarning
right = box(right)
if flex:
@@ -818,9 +815,12 @@ def test_series_ops_name_retention(self, flex, box, names, all_binary_operators)
return
result = getattr(left, name)(right)
else:
- # GH#37374 logical ops behaving as set ops deprecated
- with tm.assert_produces_warning(warn, match=msg):
- result = op(left, right)
+ if is_logical and box in [list, tuple]:
+ with pytest.raises(TypeError, match=msg):
+ # GH#52264 logical ops with dtype-less sequences deprecated
+ op(left, right)
+ return
+ result = op(left, right)
assert isinstance(result, Series)
if box in [Index, Series]:
diff --git a/pandas/tests/series/test_logical_ops.py b/pandas/tests/series/test_logical_ops.py
index dfc3309a8e449..f59eacea3fe6c 100644
--- a/pandas/tests/series/test_logical_ops.py
+++ b/pandas/tests/series/test_logical_ops.py
@@ -86,7 +86,7 @@ def test_logical_operators_int_dtype_with_float(self):
# GH#9016: support bitwise op for integer types
s_0123 = Series(range(4), dtype="int64")
- warn_msg = (
+ err_msg = (
r"Logical ops \(and, or, xor\) between Pandas objects and "
"dtype-less sequences"
)
@@ -97,9 +97,8 @@ def test_logical_operators_int_dtype_with_float(self):
with pytest.raises(TypeError, match=msg):
s_0123 & 3.14
msg = "unsupported operand type.+for &:"
- with pytest.raises(TypeError, match=msg):
- with tm.assert_produces_warning(FutureWarning, match=warn_msg):
- s_0123 & [0.1, 4, 3.14, 2]
+ with pytest.raises(TypeError, match=err_msg):
+ s_0123 & [0.1, 4, 3.14, 2]
with pytest.raises(TypeError, match=msg):
s_0123 & np.array([0.1, 4, 3.14, 2])
with pytest.raises(TypeError, match=msg):
@@ -108,7 +107,7 @@ def test_logical_operators_int_dtype_with_float(self):
def test_logical_operators_int_dtype_with_str(self):
s_1111 = Series([1] * 4, dtype="int8")
- warn_msg = (
+ err_msg = (
r"Logical ops \(and, or, xor\) between Pandas objects and "
"dtype-less sequences"
)
@@ -116,9 +115,8 @@ def test_logical_operators_int_dtype_with_str(self):
msg = "Cannot perform 'and_' with a dtyped.+array and scalar of type"
with pytest.raises(TypeError, match=msg):
s_1111 & "a"
- with pytest.raises(TypeError, match="unsupported operand.+for &"):
- with tm.assert_produces_warning(FutureWarning, match=warn_msg):
- s_1111 & ["a", "b", "c", "d"]
+ with pytest.raises(TypeError, match=err_msg):
+ s_1111 & ["a", "b", "c", "d"]
def test_logical_operators_int_dtype_with_bool(self):
# GH#9016: support bitwise op for integer types
@@ -129,17 +127,15 @@ def test_logical_operators_int_dtype_with_bool(self):
result = s_0123 & False
tm.assert_series_equal(result, expected)
- warn_msg = (
+ msg = (
r"Logical ops \(and, or, xor\) between Pandas objects and "
"dtype-less sequences"
)
- with tm.assert_produces_warning(FutureWarning, match=warn_msg):
- result = s_0123 & [False]
- tm.assert_series_equal(result, expected)
+ with pytest.raises(TypeError, match=msg):
+ s_0123 & [False]
- with tm.assert_produces_warning(FutureWarning, match=warn_msg):
- result = s_0123 & (False,)
- tm.assert_series_equal(result, expected)
+ with pytest.raises(TypeError, match=msg):
+ s_0123 & (False,)
result = s_0123 ^ False
expected = Series([False, True, True, True])
@@ -188,9 +184,8 @@ def test_logical_ops_bool_dtype_with_ndarray(self):
)
expected = Series([True, False, False, False, False])
- with tm.assert_produces_warning(FutureWarning, match=msg):
- result = left & right
- tm.assert_series_equal(result, expected)
+ with pytest.raises(TypeError, match=msg):
+ left & right
result = left & np.array(right)
tm.assert_series_equal(result, expected)
result = left & Index(right)
@@ -199,9 +194,8 @@ def test_logical_ops_bool_dtype_with_ndarray(self):
tm.assert_series_equal(result, expected)
expected = Series([True, True, True, True, True])
- with tm.assert_produces_warning(FutureWarning, match=msg):
- result = left | right
- tm.assert_series_equal(result, expected)
+ with pytest.raises(TypeError, match=msg):
+ left | right
result = left | np.array(right)
tm.assert_series_equal(result, expected)
result = left | Index(right)
@@ -210,9 +204,8 @@ def test_logical_ops_bool_dtype_with_ndarray(self):
tm.assert_series_equal(result, expected)
expected = Series([False, True, True, True, True])
- with tm.assert_produces_warning(FutureWarning, match=msg):
- result = left ^ right
- tm.assert_series_equal(result, expected)
+ with pytest.raises(TypeError, match=msg):
+ left ^ right
result = left ^ np.array(right)
tm.assert_series_equal(result, expected)
result = left ^ Index(right)
@@ -269,9 +262,8 @@ def test_scalar_na_logical_ops_corners(self):
r"Logical ops \(and, or, xor\) between Pandas objects and "
"dtype-less sequences"
)
- with tm.assert_produces_warning(FutureWarning, match=msg):
- result = s & list(s)
- tm.assert_series_equal(result, expected)
+ with pytest.raises(TypeError, match=msg):
+ s & list(s)
def test_scalar_na_logical_ops_corners_aligns(self):
s = Series([2, 3, 4, 5, 6, 7, 8, 9, datetime(2005, 1, 1)])
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/58242 | 2024-04-12T21:20:54Z | 2024-04-14T19:13:24Z | 2024-04-14T19:13:24Z | 2024-04-15T15:28:55Z |
DEPR: object casting in Series logical ops in non-bool cases | diff --git a/doc/source/whatsnew/v3.0.0.rst b/doc/source/whatsnew/v3.0.0.rst
index e05cc87d1af14..50643454bbcec 100644
--- a/doc/source/whatsnew/v3.0.0.rst
+++ b/doc/source/whatsnew/v3.0.0.rst
@@ -208,6 +208,7 @@ Removal of prior version deprecations/changes
- :meth:`SeriesGroupBy.agg` no longer pins the name of the group to the input passed to the provided ``func`` (:issue:`51703`)
- All arguments except ``name`` in :meth:`Index.rename` are now keyword only (:issue:`56493`)
- All arguments except the first ``path``-like argument in IO writers are now keyword only (:issue:`54229`)
+- Disallow automatic casting to object in :class:`Series` logical operations (``&``, ``^``, ``||``) between series with mismatched indexes and dtypes other than ``object`` or ``bool`` (:issue:`52538`)
- Disallow calling :meth:`Series.replace` or :meth:`DataFrame.replace` without a ``value`` and with non-dict-like ``to_replace`` (:issue:`33302`)
- Disallow constructing a :class:`arrays.SparseArray` with scalar data (:issue:`53039`)
- Disallow non-standard (``np.ndarray``, :class:`Index`, :class:`ExtensionArray`, or :class:`Series`) to :func:`isin`, :func:`unique`, :func:`factorize` (:issue:`52986`)
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 0f796964eb56d..ab24b224b0957 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -5859,17 +5859,12 @@ def _align_for_op(self, right, align_asobject: bool = False):
object,
np.bool_,
):
- warnings.warn(
- "Operation between non boolean Series with different "
- "indexes will no longer return a boolean result in "
- "a future version. Cast both Series to object type "
- "to maintain the prior behavior.",
- FutureWarning,
- stacklevel=find_stack_level(),
- )
- # to keep original value's dtype for bool ops
- left = left.astype(object)
- right = right.astype(object)
+ pass
+ # GH#52538 no longer cast in these cases
+ else:
+ # to keep original value's dtype for bool ops
+ left = left.astype(object)
+ right = right.astype(object)
left, right = left.align(right)
diff --git a/pandas/tests/series/test_logical_ops.py b/pandas/tests/series/test_logical_ops.py
index b76b69289b72f..dfc3309a8e449 100644
--- a/pandas/tests/series/test_logical_ops.py
+++ b/pandas/tests/series/test_logical_ops.py
@@ -233,26 +233,22 @@ def test_logical_operators_int_dtype_with_bool_dtype_and_reindex(self):
# s_0123 will be all false now because of reindexing like s_tft
expected = Series([False] * 7, index=[0, 1, 2, 3, "a", "b", "c"])
- with tm.assert_produces_warning(FutureWarning):
- result = s_tft & s_0123
+ result = s_tft & s_0123
tm.assert_series_equal(result, expected)
- # GH 52538: Deprecate casting to object type when reindex is needed;
+ # GH#52538: no longer to object type when reindex is needed;
# matches DataFrame behavior
- expected = Series([False] * 7, index=[0, 1, 2, 3, "a", "b", "c"])
- with tm.assert_produces_warning(FutureWarning):
- result = s_0123 & s_tft
- tm.assert_series_equal(result, expected)
+ msg = r"unsupported operand type\(s\) for &: 'float' and 'bool'"
+ with pytest.raises(TypeError, match=msg):
+ s_0123 & s_tft
s_a0b1c0 = Series([1], list("b"))
- with tm.assert_produces_warning(FutureWarning):
- res = s_tft & s_a0b1c0
+ res = s_tft & s_a0b1c0
expected = s_tff.reindex(list("abc"))
tm.assert_series_equal(res, expected)
- with tm.assert_produces_warning(FutureWarning):
- res = s_tft | s_a0b1c0
+ res = s_tft | s_a0b1c0
expected = s_tft.reindex(list("abc"))
tm.assert_series_equal(res, expected)
@@ -405,27 +401,24 @@ def test_logical_ops_label_based(self, using_infer_string):
tm.assert_series_equal(result, expected)
# vs non-matching
- with tm.assert_produces_warning(FutureWarning):
- result = a & Series([1], ["z"])
+ result = a & Series([1], ["z"])
expected = Series([False, False, False, False], list("abcz"))
tm.assert_series_equal(result, expected)
- with tm.assert_produces_warning(FutureWarning):
- result = a | Series([1], ["z"])
+ result = a | Series([1], ["z"])
expected = Series([True, True, False, False], list("abcz"))
tm.assert_series_equal(result, expected)
# identity
# we would like s[s|e] == s to hold for any e, whether empty or not
- with tm.assert_produces_warning(FutureWarning):
- for e in [
- empty.copy(),
- Series([1], ["z"]),
- Series(np.nan, b.index),
- Series(np.nan, a.index),
- ]:
- result = a[a | e]
- tm.assert_series_equal(result, a[a])
+ for e in [
+ empty.copy(),
+ Series([1], ["z"]),
+ Series(np.nan, b.index),
+ Series(np.nan, a.index),
+ ]:
+ result = a[a | e]
+ tm.assert_series_equal(result, a[a])
for e in [Series(["z"])]:
warn = FutureWarning if using_infer_string else None
@@ -519,7 +512,6 @@ def test_logical_ops_df_compat(self):
tm.assert_frame_equal(s3.to_frame() | s4.to_frame(), exp_or1.to_frame())
tm.assert_frame_equal(s4.to_frame() | s3.to_frame(), exp_or.to_frame())
- @pytest.mark.xfail(reason="Will pass once #52839 deprecation is enforced")
def test_int_dtype_different_index_not_bool(self):
# GH 52500
ser1 = Series([1, 2, 3], index=[10, 11, 23], name="a")
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/58241 | 2024-04-12T21:06:29Z | 2024-04-13T00:14:38Z | 2024-04-13T00:14:38Z | 2024-04-13T14:49:50Z |
API: Expose api.typing.FrozenList for typing | diff --git a/doc/source/whatsnew/v3.0.0.rst b/doc/source/whatsnew/v3.0.0.rst
index e05cc87d1af14..6e4fef387fdfb 100644
--- a/doc/source/whatsnew/v3.0.0.rst
+++ b/doc/source/whatsnew/v3.0.0.rst
@@ -28,6 +28,7 @@ enhancement2
Other enhancements
^^^^^^^^^^^^^^^^^^
+- :class:`pandas.api.typing.FrozenList` is available for typing the outputs of :attr:`MultiIndex.names`, :attr:`MultiIndex.codes` and :attr:`MultiIndex.levels` (:issue:`58237`)
- :func:`DataFrame.to_excel` now raises an ``UserWarning`` when the character count in a cell exceeds Excel's limitation of 32767 characters (:issue:`56954`)
- :func:`read_stata` now returns ``datetime64`` resolutions better matching those natively stored in the stata format (:issue:`55642`)
- :meth:`Styler.set_tooltips` provides alternative method to storing tooltips by using title attribute of td elements. (:issue:`56981`)
diff --git a/pandas/api/typing/__init__.py b/pandas/api/typing/__init__.py
index 9b5d2cb06b523..df6392bf692a2 100644
--- a/pandas/api/typing/__init__.py
+++ b/pandas/api/typing/__init__.py
@@ -9,6 +9,7 @@
DataFrameGroupBy,
SeriesGroupBy,
)
+from pandas.core.indexes.frozen import FrozenList
from pandas.core.resample import (
DatetimeIndexResamplerGroupby,
PeriodIndexResamplerGroupby,
@@ -38,6 +39,7 @@
"ExpandingGroupby",
"ExponentialMovingWindow",
"ExponentialMovingWindowGroupby",
+ "FrozenList",
"JsonReader",
"NaTType",
"NAType",
diff --git a/pandas/tests/api/test_api.py b/pandas/tests/api/test_api.py
index e32e5a268d46d..dd4c81a462b79 100644
--- a/pandas/tests/api/test_api.py
+++ b/pandas/tests/api/test_api.py
@@ -256,6 +256,7 @@ class TestApi(Base):
"ExpandingGroupby",
"ExponentialMovingWindow",
"ExponentialMovingWindowGroupby",
+ "FrozenList",
"JsonReader",
"NaTType",
"NAType",
| Since it was decided in https://github.com/pandas-dev/pandas/pull/57788 not to remove `FrozenList` in favor of `tuple`, this PR exposes `FrozenList` in `pandas.api.typing` instead.
DOC: Enforce Numpy Docstring Validation for pandas.ExcelFile, pandas.ExcelFile.parse and pandas.ExcelWriter | diff --git a/ci/code_checks.sh b/ci/code_checks.sh
index 9c39fac13b230..70cc160cb4904 100755
--- a/ci/code_checks.sh
+++ b/ci/code_checks.sh
@@ -154,9 +154,6 @@ if [[ -z "$CHECK" || "$CHECK" == "docstrings" ]]; then
-i "pandas.DatetimeTZDtype SA01" \
-i "pandas.DatetimeTZDtype.tz SA01" \
-i "pandas.DatetimeTZDtype.unit SA01" \
- -i "pandas.ExcelFile PR01,SA01" \
- -i "pandas.ExcelFile.parse PR01,SA01" \
- -i "pandas.ExcelWriter SA01" \
-i "pandas.Float32Dtype SA01" \
-i "pandas.Float64Dtype SA01" \
-i "pandas.Grouper PR02,SA01" \
diff --git a/pandas/io/excel/_base.py b/pandas/io/excel/_base.py
index a9da95054b81a..2b35cfa044ae9 100644
--- a/pandas/io/excel/_base.py
+++ b/pandas/io/excel/_base.py
@@ -979,6 +979,12 @@ class ExcelWriter(Generic[_WorkbookT]):
.. versionadded:: 1.3.0
+ See Also
+ --------
+ read_excel : Read an Excel sheet values (xlsx) file into DataFrame.
+ read_csv : Read a comma-separated values (csv) file into DataFrame.
+ read_fwf : Read a table of fixed-width formatted lines into DataFrame.
+
Notes
-----
For compatibility with CSV writers, ExcelWriter serializes lists
@@ -1434,6 +1440,7 @@ def inspect_excel_format(
return "zip"
+@doc(storage_options=_shared_docs["storage_options"])
class ExcelFile:
"""
Class for parsing tabular Excel sheets into DataFrame objects.
@@ -1472,19 +1479,27 @@ class ExcelFile:
- Otherwise if ``path_or_buffer`` is in xlsb format,
`pyxlsb <https://pypi.org/project/pyxlsb/>`_ will be used.
- .. versionadded:: 1.3.0
+ .. versionadded:: 1.3.0
- Otherwise if `openpyxl <https://pypi.org/project/openpyxl/>`_ is installed,
then ``openpyxl`` will be used.
- Otherwise if ``xlrd >= 2.0`` is installed, a ``ValueError`` will be raised.
- .. warning::
+ .. warning::
- Please do not report issues when using ``xlrd`` to read ``.xlsx`` files.
- This is not supported, switch to using ``openpyxl`` instead.
+ Please do not report issues when using ``xlrd`` to read ``.xlsx`` files.
+ This is not supported, switch to using ``openpyxl`` instead.
+ {storage_options}
engine_kwargs : dict, optional
Arbitrary keyword arguments passed to excel engine.
+ See Also
+ --------
+ DataFrame.to_excel : Write DataFrame to an Excel file.
+ DataFrame.to_csv : Write DataFrame to a comma-separated values (csv) file.
+ read_csv : Read a comma-separated values (csv) file into DataFrame.
+ read_fwf : Read a table of fixed-width formatted lines into DataFrame.
+
Examples
--------
>>> file = pd.ExcelFile("myfile.xlsx") # doctest: +SKIP
@@ -1595,11 +1610,134 @@ def parse(
Equivalent to read_excel(ExcelFile, ...) See the read_excel
docstring for more info on accepted parameters.
+ Parameters
+ ----------
+ sheet_name : str, int, list, or None, default 0
+ Strings are used for sheet names. Integers are used in zero-indexed
+ sheet positions (chart sheets do not count as a sheet position).
+ Lists of strings/integers are used to request multiple sheets.
+ Specify ``None`` to get all worksheets.
+ header : int, list of int, default 0
+ Row (0-indexed) to use for the column labels of the parsed
+ DataFrame. If a list of integers is passed those row positions will
+ be combined into a ``MultiIndex``. Use None if there is no header.
+ names : array-like, default None
+ List of column names to use. If file contains no header row,
+ then you should explicitly pass header=None.
+ index_col : int, str, list of int, default None
+ Column (0-indexed) to use as the row labels of the DataFrame.
+ Pass None if there is no such column. If a list is passed,
+ those columns will be combined into a ``MultiIndex``. If a
+ subset of data is selected with ``usecols``, index_col
+ is based on the subset.
+
+ Missing values will be forward filled to allow roundtripping with
+ ``to_excel`` for ``merged_cells=True``. To avoid forward filling the
+ missing values use ``set_index`` after reading the data instead of
+ ``index_col``.
+ usecols : str, list-like, or callable, default None
+ * If None, then parse all columns.
+ * If str, then indicates comma separated list of Excel column letters
+ and column ranges (e.g. "A:E" or "A,C,E:F"). Ranges are inclusive of
+ both sides.
+ * If list of int, then indicates list of column numbers to be parsed
+ (0-indexed).
+ * If list of string, then indicates list of column names to be parsed.
+ * If callable, then evaluate each column name against it and parse the
+ column if the callable returns ``True``.
+
+ Returns a subset of the columns according to behavior above.
+ converters : dict, default None
+ Dict of functions for converting values in certain columns. Keys can
+ either be integers or column labels, values are functions that take one
+ input argument, the Excel cell content, and return the transformed
+ content.
+ true_values : list, default None
+ Values to consider as True.
+ false_values : list, default None
+ Values to consider as False.
+ skiprows : list-like, int, or callable, optional
+ Line numbers to skip (0-indexed) or number of lines to skip (int) at the
+ start of the file. If callable, the callable function will be evaluated
+ against the row indices, returning True if the row should be skipped and
+ False otherwise. An example of a valid callable argument would be ``lambda
+ x: x in [0, 2]``.
+ nrows : int, default None
+ Number of rows to parse.
+ na_values : scalar, str, list-like, or dict, default None
+ Additional strings to recognize as NA/NaN. If dict passed, specific
+ per-column NA values.
+ parse_dates : bool, list-like, or dict, default False
+ The behavior is as follows:
+
+ * ``bool``. If True -> try parsing the index.
+ * ``list`` of int or names. e.g. If [1, 2, 3] -> try parsing columns 1, 2, 3
+ each as a separate date column.
+ * ``list`` of lists. e.g. If [[1, 3]] -> combine columns 1 and 3 and
+ parse as a single date column.
+ * ``dict``, e.g. {{'foo' : [1, 3]}} -> parse columns 1, 3 as date and call
+ result 'foo'
+
+ If a column or index contains an unparsable date, the entire column or
+ index will be returned unaltered as an object data type. If you
+ don't want to parse some cells as date, just change their type
+ in Excel to "Text". For non-standard datetime parsing, use
+ ``pd.to_datetime`` after ``pd.read_excel``.
+
+ Note: A fast-path exists for iso8601-formatted dates.
+ date_parser : function, optional
+ Function to use for converting a sequence of string columns to an array of
+ datetime instances. The default uses ``dateutil.parser.parser`` to do the
+ conversion. Pandas will try to call `date_parser` in three different ways,
+ advancing to the next if an exception occurs: 1) Pass one or more arrays
+ (as defined by `parse_dates`) as arguments; 2) concatenate (row-wise) the
+ string values from the columns defined by `parse_dates` into a single array
+ and pass that; and 3) call `date_parser` once for each row using one or
+ more strings (corresponding to the columns defined by `parse_dates`) as
+ arguments.
+
+ .. deprecated:: 2.0.0
+ Use ``date_format`` instead, or read in as ``object`` and then apply
+ :func:`to_datetime` as-needed.
+ date_format : str or dict of column -> format, default ``None``
+ If used in conjunction with ``parse_dates``, will parse dates
+ according to this format. For anything more complex,
+ please read in as ``object`` and then apply :func:`to_datetime` as-needed.
+ thousands : str, default None
+ Thousands separator for parsing string columns to numeric. Note that
+ this parameter is only necessary for columns stored as TEXT in Excel,
+ any numeric columns will automatically be parsed, regardless of display
+ format.
+ comment : str, default None
+ Comments out remainder of line. Pass a character or characters to this
+ argument to indicate comments in the input file. Any data between the
+ comment string and the end of the current line is ignored.
+ skipfooter : int, default 0
+ Rows at the end to skip (0-indexed).
+ dtype_backend : {{'numpy_nullable', 'pyarrow'}}, default 'numpy_nullable'
+ Back-end data type applied to the resultant :class:`DataFrame`
+ (still experimental). Behaviour is as follows:
+
+ * ``"numpy_nullable"``: returns nullable-dtype-backed :class:`DataFrame`
+ (default).
+ * ``"pyarrow"``: returns pyarrow-backed nullable :class:`ArrowDtype`
+ DataFrame.
+
+ .. versionadded:: 2.0
+ **kwds : dict, optional
+ Arbitrary keyword arguments passed to excel engine.
+
Returns
-------
DataFrame or dict of DataFrames
DataFrame from the passed in Excel file.
+ See Also
+ --------
+ read_excel : Read an Excel sheet values (xlsx) file into DataFrame.
+ read_csv : Read a comma-separated values (csv) file into DataFrame.
+ read_fwf : Read a table of fixed-width formatted lines into DataFrame.
+
Examples
--------
>>> df = pd.DataFrame([[1, 2, 3], [4, 5, 6]], columns=["A", "B", "C"])
| - [ ] xref #58067
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/58235 | 2024-04-12T16:55:51Z | 2024-04-15T16:29:03Z | 2024-04-15T16:29:03Z | 2024-04-15T18:27:28Z |
ASV: benchmark for DataFrame.Update | diff --git a/asv_bench/benchmarks/frame_methods.py b/asv_bench/benchmarks/frame_methods.py
index ce31d63f0b70f..6a2ab24df26fe 100644
--- a/asv_bench/benchmarks/frame_methods.py
+++ b/asv_bench/benchmarks/frame_methods.py
@@ -862,4 +862,28 @@ def time_last_valid_index(self, dtype):
self.df.last_valid_index()
+class Update:
+ def setup(self):
+ rng = np.random.default_rng()
+ self.df = DataFrame(rng.uniform(size=(1_000_000, 10)))
+
+ idx = rng.choice(range(1_000_000), size=1_000_000, replace=False)
+ self.df_random = DataFrame(self.df, index=idx)
+
+ idx = rng.choice(range(1_000_000), size=100_000, replace=False)
+ cols = rng.choice(range(10), size=2, replace=False)
+ self.df_sample = DataFrame(
+ rng.uniform(size=(100_000, 2)), index=idx, columns=cols
+ )
+
+ def time_to_update_big_frame_small_arg(self):
+ self.df.update(self.df_sample)
+
+ def time_to_update_random_indices(self):
+ self.df_random.update(self.df_sample)
+
+ def time_to_update_small_frame_big_arg(self):
+ self.df_sample.update(self.df)
+
+
from .pandas_vb_common import setup # noqa: F401 isort:skip
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
@rhshadrach and @datapythonista
Benchmarks for the `DataFrame.update` method are implemented, as suggested by @datapythonista on PR #57637 and based on the manual tests done by @rhshadrach on #55634. By running ASV locally we measure the performance change due to the fix in #57637, as indicated below:
| Change | Main (2.2.1) [8da8b544] | PR (3.0) [93ea131f] | Ratio | Benchmark (Parameter) |
|----------|----------------------------|----------------------------------------|---------|---------------------------------------------------------|
| - | 1.13±0.02ms | 887±10μs | 0.78 | frame_methods.Update.time_to_update_random_indices |
| - | 1.19±0.02ms | 840±9μs | 0.71 | frame_methods.Update.time_to_update_small_frame_big_arg |
| - | 1.27±0.01ms | 859±9μs | 0.67 | frame_methods.Update.time_to_update_big_frame_small_arg |
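For readers unfamiliar with the operation being timed, the alignment semantics that `DataFrame.update` exercises can be sketched without pandas. This is a simplified, dependency-free model (a plain dict keyed by `(row label, column label)`), not the real implementation:

```python
# Simplified model of DataFrame.update semantics: non-missing values from
# `other` overwrite the cells of `base` at matching (row label, column label)
# positions; cells that exist only in `other` are NOT added to `base`.
def update(base, other):
    for key, value in other.items():
        if key in base and value is not None:  # pandas skips NaN similarly
            base[key] = value

base = {(0, "a"): 1.0, (1, "a"): 2.0, (1, "b"): 3.0}
other = {(1, "a"): 9.0, (2, "a"): 7.0, (1, "b"): None}
update(base, other)
print(base[(1, "a")])  # 9.0: overwritten by a matching non-missing value
```

The benchmarks above vary which side is larger and whether the row labels are shuffled, since the cost of this matching step depends on index alignment.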
Regards
| https://api.github.com/repos/pandas-dev/pandas/pulls/58228 | 2024-04-12T00:28:10Z | 2024-04-15T21:16:32Z | 2024-04-15T21:16:32Z | 2024-04-15T21:16:40Z |
REF: Use PyUnicode_AsUTF8AndSize instead of get_c_string_buf_and_size | diff --git a/pandas/_libs/hashtable_class_helper.pxi.in b/pandas/_libs/hashtable_class_helper.pxi.in
index e3a9102fec395..5c6254c6a1ec7 100644
--- a/pandas/_libs/hashtable_class_helper.pxi.in
+++ b/pandas/_libs/hashtable_class_helper.pxi.in
@@ -3,7 +3,7 @@ Template for each `dtype` helper function for hashtable
WARNING: DO NOT edit .pxi FILE directly, .pxi is generated from .pxi.in
"""
-
+from cpython.unicode cimport PyUnicode_AsUTF8
{{py:
@@ -98,7 +98,6 @@ from pandas._libs.khash cimport (
# VectorData
# ----------------------------------------------------------------------
-from pandas._libs.tslibs.util cimport get_c_string
from pandas._libs.missing cimport C_NA
@@ -998,7 +997,7 @@ cdef class StringHashTable(HashTable):
cdef:
khiter_t k
const char *v
- v = get_c_string(val)
+ v = PyUnicode_AsUTF8(val)
k = kh_get_str(self.table, v)
if k != self.table.n_buckets:
@@ -1012,7 +1011,7 @@ cdef class StringHashTable(HashTable):
int ret = 0
const char *v
- v = get_c_string(key)
+ v = PyUnicode_AsUTF8(key)
k = kh_put_str(self.table, v, &ret)
if kh_exist_str(self.table, k):
@@ -1037,7 +1036,7 @@ cdef class StringHashTable(HashTable):
raise MemoryError()
for i in range(n):
val = values[i]
- v = get_c_string(val)
+ v = PyUnicode_AsUTF8(val)
vecs[i] = v
with nogil:
@@ -1071,11 +1070,11 @@ cdef class StringHashTable(HashTable):
val = values[i]
if isinstance(val, str):
- # GH#31499 if we have a np.str_ get_c_string won't recognize
+ # GH#31499 if we have a np.str_ PyUnicode_AsUTF8 won't recognize
# it as a str, even though isinstance does.
- v = get_c_string(<str>val)
+ v = PyUnicode_AsUTF8(<str>val)
else:
- v = get_c_string(self.na_string_sentinel)
+ v = PyUnicode_AsUTF8(self.na_string_sentinel)
vecs[i] = v
with nogil:
@@ -1109,11 +1108,11 @@ cdef class StringHashTable(HashTable):
val = values[i]
if isinstance(val, str):
- # GH#31499 if we have a np.str_ get_c_string won't recognize
+ # GH#31499 if we have a np.str_ PyUnicode_AsUTF8 won't recognize
# it as a str, even though isinstance does.
- v = get_c_string(<str>val)
+ v = PyUnicode_AsUTF8(<str>val)
else:
- v = get_c_string(self.na_string_sentinel)
+ v = PyUnicode_AsUTF8(self.na_string_sentinel)
vecs[i] = v
with nogil:
@@ -1195,9 +1194,9 @@ cdef class StringHashTable(HashTable):
else:
# if ignore_na is False, we also stringify NaN/None/etc.
try:
- v = get_c_string(<str>val)
+ v = PyUnicode_AsUTF8(<str>val)
except UnicodeEncodeError:
- v = get_c_string(<str>repr(val))
+ v = PyUnicode_AsUTF8(<str>repr(val))
vecs[i] = v
# compute
diff --git a/pandas/_libs/tslibs/np_datetime.pyx b/pandas/_libs/tslibs/np_datetime.pyx
index aa01a05d0d932..61095b3f034fd 100644
--- a/pandas/_libs/tslibs/np_datetime.pyx
+++ b/pandas/_libs/tslibs/np_datetime.pyx
@@ -18,6 +18,7 @@ from cpython.object cimport (
Py_LT,
Py_NE,
)
+from cpython.unicode cimport PyUnicode_AsUTF8AndSize
from libc.stdint cimport INT64_MAX
import_datetime()
@@ -44,7 +45,6 @@ from pandas._libs.tslibs.dtypes cimport (
npy_unit_to_abbrev,
npy_unit_to_attrname,
)
-from pandas._libs.tslibs.util cimport get_c_string_buf_and_size
cdef extern from "pandas/datetime/pd_datetime.h":
@@ -341,13 +341,13 @@ cdef int string_to_dts(
const char* format_buf
FormatRequirement format_requirement
- buf = get_c_string_buf_and_size(val, &length)
+ buf = PyUnicode_AsUTF8AndSize(val, &length)
if format is None:
format_buf = b""
format_length = 0
format_requirement = INFER_FORMAT
else:
- format_buf = get_c_string_buf_and_size(format, &format_length)
+ format_buf = PyUnicode_AsUTF8AndSize(format, &format_length)
format_requirement = <FormatRequirement>exact
return parse_iso_8601_datetime(buf, length, want_exc,
dts, out_bestunit, out_local, out_tzoffset,
diff --git a/pandas/_libs/tslibs/parsing.pyx b/pandas/_libs/tslibs/parsing.pyx
index 384df1cac95eb..85ef3fd93ff09 100644
--- a/pandas/_libs/tslibs/parsing.pyx
+++ b/pandas/_libs/tslibs/parsing.pyx
@@ -19,6 +19,7 @@ from cpython.datetime cimport (
from datetime import timezone
from cpython.object cimport PyObject_Str
+from cpython.unicode cimport PyUnicode_AsUTF8AndSize
from cython cimport Py_ssize_t
from libc.string cimport strchr
@@ -74,10 +75,7 @@ import_pandas_datetime()
from pandas._libs.tslibs.strptime import array_strptime
-from pandas._libs.tslibs.util cimport (
- get_c_string_buf_and_size,
- is_array,
-)
+from pandas._libs.tslibs.util cimport is_array
cdef extern from "pandas/portable.h":
@@ -175,7 +173,7 @@ cdef datetime _parse_delimited_date(
int day = 1, month = 1, year
bint can_swap = 0
- buf = get_c_string_buf_and_size(date_string, &length)
+ buf = PyUnicode_AsUTF8AndSize(date_string, &length)
if length == 10 and _is_delimiter(buf[2]) and _is_delimiter(buf[5]):
# parsing MM?DD?YYYY and DD?MM?YYYY dates
month = _parse_2digit(buf)
@@ -251,7 +249,7 @@ cdef bint _does_string_look_like_time(str parse_string):
Py_ssize_t length
int hour = -1, minute = -1
- buf = get_c_string_buf_and_size(parse_string, &length)
+ buf = PyUnicode_AsUTF8AndSize(parse_string, &length)
if length >= 4:
if buf[1] == b":":
# h:MM format
@@ -467,7 +465,7 @@ cpdef bint _does_string_look_like_datetime(str py_string):
char first
int error = 0
- buf = get_c_string_buf_and_size(py_string, &length)
+ buf = PyUnicode_AsUTF8AndSize(py_string, &length)
if length >= 1:
first = buf[0]
if first == b"0":
@@ -521,7 +519,7 @@ cdef datetime _parse_dateabbr_string(str date_string, datetime default,
pass
if 4 <= date_len <= 7:
- buf = get_c_string_buf_and_size(date_string, &date_len)
+ buf = PyUnicode_AsUTF8AndSize(date_string, &date_len)
try:
i = date_string.index("Q", 1, 6)
if i == 1:
diff --git a/pandas/_libs/tslibs/util.pxd b/pandas/_libs/tslibs/util.pxd
index a5822e57d3fa6..f144275e0ee6a 100644
--- a/pandas/_libs/tslibs/util.pxd
+++ b/pandas/_libs/tslibs/util.pxd
@@ -1,6 +1,5 @@
from cpython.object cimport PyTypeObject
-from cpython.unicode cimport PyUnicode_AsUTF8AndSize
cdef extern from "Python.h":
@@ -155,36 +154,6 @@ cdef inline bint is_nan(object val):
return is_complex_object(val) and val != val
-cdef inline const char* get_c_string_buf_and_size(str py_string,
- Py_ssize_t *length) except NULL:
- """
- Extract internal char* buffer of unicode or bytes object `py_string` with
- getting length of this internal buffer saved in `length`.
-
- Notes
- -----
- Python object owns memory, thus returned char* must not be freed.
- `length` can be NULL if getting buffer length is not needed.
-
- Parameters
- ----------
- py_string : str
- length : Py_ssize_t*
-
- Returns
- -------
- buf : const char*
- """
- # Note PyUnicode_AsUTF8AndSize() can
- # potentially allocate memory inside in unlikely case of when underlying
- # unicode object was stored as non-utf8 and utf8 wasn't requested before.
- return PyUnicode_AsUTF8AndSize(py_string, length)
-
-
-cdef inline const char* get_c_string(str py_string) except NULL:
- return get_c_string_buf_and_size(py_string, NULL)
-
-
cdef inline bytes string_encode_locale(str py_string):
"""As opposed to PyUnicode_Encode, use current system locale to encode."""
return PyUnicode_EncodeLocale(py_string, NULL)
| cc @WillAyd
| https://api.github.com/repos/pandas-dev/pandas/pulls/58227 | 2024-04-11T23:02:04Z | 2024-04-12T00:51:06Z | 2024-04-12T00:51:06Z | 2024-04-12T00:52:55Z |
BUG: Timestamp ignoring explicit tz=None | diff --git a/doc/source/whatsnew/v3.0.0.rst b/doc/source/whatsnew/v3.0.0.rst
index e05cc87d1af14..66f6c10e36bdc 100644
--- a/doc/source/whatsnew/v3.0.0.rst
+++ b/doc/source/whatsnew/v3.0.0.rst
@@ -357,6 +357,7 @@ Categorical
Datetimelike
^^^^^^^^^^^^
+- Bug in :class:`Timestamp` constructor failing to raise when ``tz=None`` is explicitly specified in conjunction with timezone-aware ``tzinfo`` or data (:issue:`48688`)
- Bug in :func:`date_range` where the last valid timestamp would sometimes not be produced (:issue:`56134`)
- Bug in :func:`date_range` where using a negative frequency value would not include all points between the start and end values (:issue:`56382`)
-
diff --git a/pandas/_libs/tslibs/timestamps.pyx b/pandas/_libs/tslibs/timestamps.pyx
index d4cd90613ca5b..82daa6d942095 100644
--- a/pandas/_libs/tslibs/timestamps.pyx
+++ b/pandas/_libs/tslibs/timestamps.pyx
@@ -1751,7 +1751,7 @@ class Timestamp(_Timestamp):
tzinfo_type tzinfo=None,
*,
nanosecond=None,
- tz=None,
+ tz=_no_input,
unit=None,
fold=None,
):
@@ -1783,6 +1783,10 @@ class Timestamp(_Timestamp):
_date_attributes = [year, month, day, hour, minute, second,
microsecond, nanosecond]
+ explicit_tz_none = tz is None
+ if tz is _no_input:
+ tz = None
+
if tzinfo is not None:
# GH#17690 tzinfo must be a datetime.tzinfo object, ensured
# by the cython annotation.
@@ -1883,6 +1887,11 @@ class Timestamp(_Timestamp):
if ts.value == NPY_NAT:
return NaT
+ if ts.tzinfo is not None and explicit_tz_none:
+ raise ValueError(
+ "Passed data is timezone-aware, incompatible with 'tz=None'."
+ )
+
return create_timestamp_from_ts(ts.value, ts.dts, ts.tzinfo, ts.fold, ts.creso)
def _round(self, freq, mode, ambiguous="raise", nonexistent="raise"):
diff --git a/pandas/tests/indexing/test_at.py b/pandas/tests/indexing/test_at.py
index d78694018749c..217ca74bd7fbd 100644
--- a/pandas/tests/indexing/test_at.py
+++ b/pandas/tests/indexing/test_at.py
@@ -136,7 +136,11 @@ def test_at_datetime_index(self, row):
class TestAtSetItemWithExpansion:
def test_at_setitem_expansion_series_dt64tz_value(self, tz_naive_fixture):
# GH#25506
- ts = Timestamp("2017-08-05 00:00:00+0100", tz=tz_naive_fixture)
+ ts = (
+ Timestamp("2017-08-05 00:00:00+0100", tz=tz_naive_fixture)
+ if tz_naive_fixture is not None
+ else Timestamp("2017-08-05 00:00:00+0100")
+ )
result = Series(ts)
result.at[1] = ts
expected = Series([ts, ts])
diff --git a/pandas/tests/scalar/timestamp/test_constructors.py b/pandas/tests/scalar/timestamp/test_constructors.py
index bbda9d3ee7dce..4ebdea3733484 100644
--- a/pandas/tests/scalar/timestamp/test_constructors.py
+++ b/pandas/tests/scalar/timestamp/test_constructors.py
@@ -621,7 +621,6 @@ def test_constructor_with_stringoffset(self):
]
timezones = [
- (None, 0),
("UTC", 0),
(pytz.utc, 0),
("Asia/Tokyo", 9),
@@ -1013,6 +1012,18 @@ def test_timestamp_constructed_by_date_and_tz(self, tz):
assert result.hour == expected.hour
assert result == expected
+ def test_explicit_tz_none(self):
+ # GH#48688
+ msg = "Passed data is timezone-aware, incompatible with 'tz=None'"
+ with pytest.raises(ValueError, match=msg):
+ Timestamp(datetime(2022, 1, 1, tzinfo=timezone.utc), tz=None)
+
+ with pytest.raises(ValueError, match=msg):
+ Timestamp("2022-01-01 00:00:00", tzinfo=timezone.utc, tz=None)
+
+ with pytest.raises(ValueError, match=msg):
+ Timestamp("2022-01-01 00:00:00-0400", tz=None)
+
def test_constructor_ambiguous_dst():
# GH 24329
diff --git a/pandas/tests/scalar/timestamp/test_formats.py b/pandas/tests/scalar/timestamp/test_formats.py
index b4493088acb31..e1299c272e5cc 100644
--- a/pandas/tests/scalar/timestamp/test_formats.py
+++ b/pandas/tests/scalar/timestamp/test_formats.py
@@ -118,7 +118,7 @@ def test_repr(self, date, freq, tz):
def test_repr_utcoffset(self):
# This can cause the tz field to be populated, but it's redundant to
# include this information in the date-string.
- date_with_utc_offset = Timestamp("2014-03-13 00:00:00-0400", tz=None)
+ date_with_utc_offset = Timestamp("2014-03-13 00:00:00-0400")
assert "2014-03-13 00:00:00-0400" in repr(date_with_utc_offset)
assert "tzoffset" not in repr(date_with_utc_offset)
assert "UTC-04:00" in repr(date_with_utc_offset)
| - [x] closes #48688
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
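The fix in the diff above hinges on a sentinel object so the constructor can tell "`tz` was never passed" apart from an explicit `tz=None`. A dependency-free sketch of that pattern (function and argument names are illustrative, not the real `Timestamp` signature):

```python
# Sentinel pattern: a private object that no caller can accidentally pass,
# used as the default so that an explicit tz=None is distinguishable.
_no_input = object()

def make_timestamp(data_is_tz_aware, tz=_no_input):
    explicit_tz_none = tz is None  # True only when the caller wrote tz=None
    if tz is _no_input:
        tz = None  # fall back to the previous default behaviour
    if data_is_tz_aware and explicit_tz_none:
        raise ValueError(
            "Passed data is timezone-aware, incompatible with 'tz=None'."
        )
    return tz

make_timestamp(True)  # tz simply omitted: allowed, behaves as before
```

`None` cannot serve as the "not passed" default here precisely because `None` is itself a meaningful user-supplied value.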
| https://api.github.com/repos/pandas-dev/pandas/pulls/58226 | 2024-04-11T22:24:18Z | 2024-04-15T17:12:09Z | 2024-04-15T17:12:09Z | 2024-04-15T17:17:12Z |
DOC: Fix docstring error for pandas.DataFrame.axes | diff --git a/ci/code_checks.sh b/ci/code_checks.sh
index 9c39fac13b230..b76ef69efa9f2 100755
--- a/ci/code_checks.sh
+++ b/ci/code_checks.sh
@@ -83,7 +83,6 @@ if [[ -z "$CHECK" || "$CHECK" == "docstrings" ]]; then
-i "pandas.DataFrame.__iter__ SA01" \
-i "pandas.DataFrame.assign SA01" \
-i "pandas.DataFrame.at_time PR01" \
- -i "pandas.DataFrame.axes SA01" \
-i "pandas.DataFrame.bfill SA01" \
-i "pandas.DataFrame.columns SA01" \
-i "pandas.DataFrame.copy SA01" \
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 6db1811a98dd3..0b386efb5a867 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -995,6 +995,11 @@ def axes(self) -> list[Index]:
It has the row axis labels and column axis labels as the only members.
They are returned in that order.
+ See Also
+ --------
+ DataFrame.index: The index (row labels) of the DataFrame.
+ DataFrame.columns: The column labels of the DataFrame.
+
Examples
--------
>>> df = pd.DataFrame({"col1": [1, 2], "col2": [3, 4]})
| - Fixes docstring error for pandas.DataFrame.axes method (https://github.com/pandas-dev/pandas/issues/58065).
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
Closed previous PR https://github.com/pandas-dev/pandas/pull/58102 and re-opened here. | https://api.github.com/repos/pandas-dev/pandas/pulls/58223 | 2024-04-11T17:57:55Z | 2024-04-11T19:10:12Z | 2024-04-11T19:10:12Z | 2024-04-11T19:10:19Z |
Revert "CI: Pin blosc to fix pytables" | diff --git a/ci/deps/actions-310.yaml b/ci/deps/actions-310.yaml
index 7402f6495c788..ed7dfe1a3c17e 100644
--- a/ci/deps/actions-310.yaml
+++ b/ci/deps/actions-310.yaml
@@ -24,8 +24,6 @@ dependencies:
# optional dependencies
- beautifulsoup4>=4.11.2
- # https://github.com/conda-forge/pytables-feedstock/issues/97
- - c-blosc2=2.13.2
- blosc>=1.21.3
- bottleneck>=1.3.6
- fastparquet>=2023.10.0
diff --git a/ci/deps/actions-311-downstream_compat.yaml b/ci/deps/actions-311-downstream_compat.yaml
index 15ff3d6dbe54c..dd1d341c70a9b 100644
--- a/ci/deps/actions-311-downstream_compat.yaml
+++ b/ci/deps/actions-311-downstream_compat.yaml
@@ -26,8 +26,6 @@ dependencies:
# optional dependencies
- beautifulsoup4>=4.11.2
- # https://github.com/conda-forge/pytables-feedstock/issues/97
- - c-blosc2=2.13.2
- blosc>=1.21.3
- bottleneck>=1.3.6
- fastparquet>=2023.10.0
diff --git a/ci/deps/actions-311.yaml b/ci/deps/actions-311.yaml
index 586cf26da7897..388116439f944 100644
--- a/ci/deps/actions-311.yaml
+++ b/ci/deps/actions-311.yaml
@@ -24,8 +24,6 @@ dependencies:
# optional dependencies
- beautifulsoup4>=4.11.2
- # https://github.com/conda-forge/pytables-feedstock/issues/97
- - c-blosc2=2.13.2
- blosc>=1.21.3
- bottleneck>=1.3.6
- fastparquet>=2023.10.0
diff --git a/ci/deps/actions-312.yaml b/ci/deps/actions-312.yaml
index 96abc7744d871..1d9f8aa3b092a 100644
--- a/ci/deps/actions-312.yaml
+++ b/ci/deps/actions-312.yaml
@@ -24,8 +24,6 @@ dependencies:
# optional dependencies
- beautifulsoup4>=4.11.2
- # https://github.com/conda-forge/pytables-feedstock/issues/97
- - c-blosc2=2.13.2
- blosc>=1.21.3
- bottleneck>=1.3.6
- fastparquet>=2023.10.0
diff --git a/ci/deps/actions-39-minimum_versions.yaml b/ci/deps/actions-39-minimum_versions.yaml
index 4756f1046054d..b760f27a3d4d3 100644
--- a/ci/deps/actions-39-minimum_versions.yaml
+++ b/ci/deps/actions-39-minimum_versions.yaml
@@ -27,8 +27,6 @@ dependencies:
# optional dependencies
- beautifulsoup4=4.11.2
- # https://github.com/conda-forge/pytables-feedstock/issues/97
- - c-blosc2=2.13.2
- blosc=1.21.3
- bottleneck=1.3.6
- fastparquet=2023.10.0
diff --git a/ci/deps/actions-39.yaml b/ci/deps/actions-39.yaml
index 8154486ff53e1..8f235a836bb3d 100644
--- a/ci/deps/actions-39.yaml
+++ b/ci/deps/actions-39.yaml
@@ -24,8 +24,6 @@ dependencies:
# optional dependencies
- beautifulsoup4>=4.11.2
- # https://github.com/conda-forge/pytables-feedstock/issues/97
- - c-blosc2=2.13.2
- blosc>=1.21.3
- bottleneck>=1.3.6
- fastparquet>=2023.10.0
diff --git a/ci/deps/circle-310-arm64.yaml b/ci/deps/circle-310-arm64.yaml
index dc6adf175779f..ed4d139714e71 100644
--- a/ci/deps/circle-310-arm64.yaml
+++ b/ci/deps/circle-310-arm64.yaml
@@ -25,8 +25,6 @@ dependencies:
# optional dependencies
- beautifulsoup4>=4.11.2
- # https://github.com/conda-forge/pytables-feedstock/issues/97
- - c-blosc2=2.13.2
- blosc>=1.21.3
- bottleneck>=1.3.6
- fastparquet>=2023.10.0
diff --git a/environment.yml b/environment.yml
index ac4bd0568403b..186d7e1d703df 100644
--- a/environment.yml
+++ b/environment.yml
@@ -28,8 +28,6 @@ dependencies:
# optional dependencies
- beautifulsoup4>=4.11.2
- # https://github.com/conda-forge/pytables-feedstock/issues/97
- - c-blosc2=2.13.2
- blosc
- bottleneck>=1.3.6
- fastparquet>=2023.10.0
diff --git a/scripts/generate_pip_deps_from_conda.py b/scripts/generate_pip_deps_from_conda.py
index 4e4e54c5be9a9..d54d35bc0171f 100755
--- a/scripts/generate_pip_deps_from_conda.py
+++ b/scripts/generate_pip_deps_from_conda.py
@@ -23,7 +23,7 @@
import tomli as tomllib
import yaml
-EXCLUDE = {"python", "c-compiler", "cxx-compiler", "c-blosc2"}
+EXCLUDE = {"python", "c-compiler", "cxx-compiler"}
REMAP_VERSION = {"tzdata": "2022.7"}
CONDA_TO_PIP = {
"pytables": "tables",
diff --git a/scripts/validate_min_versions_in_sync.py b/scripts/validate_min_versions_in_sync.py
index 59989aadf73ae..1001b00450354 100755
--- a/scripts/validate_min_versions_in_sync.py
+++ b/scripts/validate_min_versions_in_sync.py
@@ -36,7 +36,7 @@
SETUP_PATH = pathlib.Path("pyproject.toml").resolve()
YAML_PATH = pathlib.Path("ci/deps")
ENV_PATH = pathlib.Path("environment.yml")
-EXCLUDE_DEPS = {"tzdata", "blosc", "c-blosc2", "pyqt", "pyqt5"}
+EXCLUDE_DEPS = {"tzdata", "blosc", "pyqt", "pyqt5"}
EXCLUSION_LIST = frozenset(["python=3.8[build=*_pypy]"])
# pandas package is not available
# in pre-commit environment
@@ -225,9 +225,6 @@ def get_versions_from_ci(content: list[str]) -> tuple[dict[str, str], dict[str,
seen_required = True
elif "# optional dependencies" in line:
seen_optional = True
- elif "#" in line:
- # just a comment
- continue
elif "- pip:" in line:
continue
elif seen_required and line.strip():
| Reverts pandas-dev/pandas#58209 | https://api.github.com/repos/pandas-dev/pandas/pulls/58218 | 2024-04-11T12:24:43Z | 2024-04-11T15:21:49Z | 2024-04-11T15:21:49Z | 2024-04-11T15:22:15Z |
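The revert above restores `EXCLUDE = {"python", "c-compiler", "cxx-compiler"}` in `generate_pip_deps_from_conda.py`. A minimal hedged sketch of the kind of name-based filtering such an exclude set enables — the regex and helper here are illustrative, not the script's actual code:

```python
import re

# Hypothetical sketch: split each conda spec at its first version
# operator to get the package name, then drop names in EXCLUDE so they
# never reach the generated pip requirements.
EXCLUDE = {"python", "c-compiler", "cxx-compiler"}

def keep_for_pip(specs):
    kept = []
    for spec in specs:
        name = re.split(r"[<>=]", spec, maxsplit=1)[0].strip()
        if name not in EXCLUDE:
            kept.append(spec)
    return kept

print(keep_for_pip(["python", "numpy>=1.22.4", "c-compiler", "blosc>=1.21.3"]))
# → ['numpy>=1.22.4', 'blosc>=1.21.3']
```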
DOC: update DatetimeTZDtype unit limitation | diff --git a/pandas/core/dtypes/dtypes.py b/pandas/core/dtypes/dtypes.py
index 2d8e490f02d52..98e689528744e 100644
--- a/pandas/core/dtypes/dtypes.py
+++ b/pandas/core/dtypes/dtypes.py
@@ -698,8 +698,8 @@ class DatetimeTZDtype(PandasExtensionDtype):
Parameters
----------
unit : str, default "ns"
- The precision of the datetime data. Currently limited
- to ``"ns"``.
+ The precision of the datetime data. Valid options are
+ ``"s"``, ``"ms"``, ``"us"``, ``"ns"``.
tz : str, int, or datetime.tzinfo
The timezone.
| - [x] closes #58212
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/58213 | 2024-04-10T17:44:15Z | 2024-04-12T18:07:59Z | 2024-04-12T18:07:59Z | 2024-04-12T18:52:50Z |
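The docstring fix above documents that `DatetimeTZDtype` accepts `"s"`, `"ms"`, `"us"`, and `"ns"` units. A quick hedged check of that claim — this assumes pandas 2.0 or later, where non-nanosecond resolutions are supported:

```python
import pandas as pd

# Each of the four documented resolutions should construct a valid dtype
# (non-"ns" units require pandas 2.0+).
for unit in ["s", "ms", "us", "ns"]:
    dtype = pd.DatetimeTZDtype(unit=unit, tz="UTC")
    print(dtype)
```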
Backport PR #58209: CI: Pin blosc to fix pytables | diff --git a/ci/deps/actions-310.yaml b/ci/deps/actions-310.yaml
index a3e44e6373145..ea2336ae78f81 100644
--- a/ci/deps/actions-310.yaml
+++ b/ci/deps/actions-310.yaml
@@ -24,6 +24,8 @@ dependencies:
# optional dependencies
- beautifulsoup4>=4.11.2
+ # https://github.com/conda-forge/pytables-feedstock/issues/97
+ - c-blosc2=2.13.2
- blosc>=1.21.3
- bottleneck>=1.3.6
- fastparquet>=2022.12.0
diff --git a/ci/deps/actions-311-downstream_compat.yaml b/ci/deps/actions-311-downstream_compat.yaml
index d6bf9ec7843de..8f84a53b58610 100644
--- a/ci/deps/actions-311-downstream_compat.yaml
+++ b/ci/deps/actions-311-downstream_compat.yaml
@@ -26,6 +26,8 @@ dependencies:
# optional dependencies
- beautifulsoup4>=4.11.2
+ # https://github.com/conda-forge/pytables-feedstock/issues/97
+ - c-blosc2=2.13.2
- blosc>=1.21.3
- bottleneck>=1.3.6
- fastparquet>=2022.12.0
diff --git a/ci/deps/actions-311.yaml b/ci/deps/actions-311.yaml
index 95cd1a4d46ef4..51a246ce73a11 100644
--- a/ci/deps/actions-311.yaml
+++ b/ci/deps/actions-311.yaml
@@ -24,6 +24,8 @@ dependencies:
# optional dependencies
- beautifulsoup4>=4.11.2
+ # https://github.com/conda-forge/pytables-feedstock/issues/97
+ - c-blosc2=2.13.2
- blosc>=1.21.3
- bottleneck>=1.3.6
- fastparquet>=2022.12.0
diff --git a/ci/deps/actions-312.yaml b/ci/deps/actions-312.yaml
index a442ed6feeb5d..7d2b9c39d2fe3 100644
--- a/ci/deps/actions-312.yaml
+++ b/ci/deps/actions-312.yaml
@@ -24,6 +24,8 @@ dependencies:
# optional dependencies
- beautifulsoup4>=4.11.2
+ # https://github.com/conda-forge/pytables-feedstock/issues/97
+ - c-blosc2=2.13.2
- blosc>=1.21.3
- bottleneck>=1.3.6
- fastparquet>=2022.12.0
diff --git a/ci/deps/actions-39-minimum_versions.yaml b/ci/deps/actions-39-minimum_versions.yaml
index 7067048c4434d..cedf4fb9dc867 100644
--- a/ci/deps/actions-39-minimum_versions.yaml
+++ b/ci/deps/actions-39-minimum_versions.yaml
@@ -27,6 +27,8 @@ dependencies:
# optional dependencies
- beautifulsoup4=4.11.2
+ # https://github.com/conda-forge/pytables-feedstock/issues/97
+ - c-blosc2=2.13.2
- blosc=1.21.3
- bottleneck=1.3.6
- fastparquet=2022.12.0
diff --git a/ci/deps/actions-39.yaml b/ci/deps/actions-39.yaml
index b162a78e7f115..85f2a74e849ee 100644
--- a/ci/deps/actions-39.yaml
+++ b/ci/deps/actions-39.yaml
@@ -24,6 +24,8 @@ dependencies:
# optional dependencies
- beautifulsoup4>=4.11.2
+ # https://github.com/conda-forge/pytables-feedstock/issues/97
+ - c-blosc2=2.13.2
- blosc>=1.21.3
- bottleneck>=1.3.6
- fastparquet>=2022.12.0
diff --git a/ci/deps/circle-310-arm64.yaml b/ci/deps/circle-310-arm64.yaml
index a19ffd485262d..c018ad94e7f30 100644
--- a/ci/deps/circle-310-arm64.yaml
+++ b/ci/deps/circle-310-arm64.yaml
@@ -25,6 +25,8 @@ dependencies:
# optional dependencies
- beautifulsoup4>=4.11.2
+ # https://github.com/conda-forge/pytables-feedstock/issues/97
+ - c-blosc2=2.13.2
- blosc>=1.21.3
- bottleneck>=1.3.6
- fastparquet>=2022.12.0
diff --git a/environment.yml b/environment.yml
index 58eb69ad1f070..7f2db06d4d50e 100644
--- a/environment.yml
+++ b/environment.yml
@@ -27,6 +27,8 @@ dependencies:
# optional dependencies
- beautifulsoup4>=4.11.2
+ # https://github.com/conda-forge/pytables-feedstock/issues/97
+ - c-blosc2=2.13.2
- blosc
- bottleneck>=1.3.6
- fastparquet>=2022.12.0
diff --git a/scripts/generate_pip_deps_from_conda.py b/scripts/generate_pip_deps_from_conda.py
index 5fcf09cd073fe..bf38d2fa419d1 100755
--- a/scripts/generate_pip_deps_from_conda.py
+++ b/scripts/generate_pip_deps_from_conda.py
@@ -23,7 +23,7 @@
import tomli as tomllib
import yaml
-EXCLUDE = {"python", "c-compiler", "cxx-compiler"}
+EXCLUDE = {"python", "c-compiler", "cxx-compiler", "c-blosc2"}
REMAP_VERSION = {"tzdata": "2022.7"}
CONDA_TO_PIP = {
"pytables": "tables",
diff --git a/scripts/validate_min_versions_in_sync.py b/scripts/validate_min_versions_in_sync.py
index 7dd3e96e6ec18..62a92cdd10ebc 100755
--- a/scripts/validate_min_versions_in_sync.py
+++ b/scripts/validate_min_versions_in_sync.py
@@ -36,7 +36,7 @@
SETUP_PATH = pathlib.Path("pyproject.toml").resolve()
YAML_PATH = pathlib.Path("ci/deps")
ENV_PATH = pathlib.Path("environment.yml")
-EXCLUDE_DEPS = {"tzdata", "blosc", "pandas-gbq", "pyqt", "pyqt5"}
+EXCLUDE_DEPS = {"tzdata", "blosc", "c-blosc2", "pandas-gbq", "pyqt", "pyqt5"}
EXCLUSION_LIST = frozenset(["python=3.8[build=*_pypy]"])
# pandas package is not available
# in pre-commit environment
@@ -225,6 +225,9 @@ def get_versions_from_ci(content: list[str]) -> tuple[dict[str, str], dict[str,
seen_required = True
elif "# optional dependencies" in line:
seen_optional = True
+ elif "#" in line:
+ # just a comment
+ continue
elif "- pip:" in line:
continue
elif seen_required and line.strip():
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/58211 | 2024-04-10T15:38:55Z | 2024-04-10T16:42:52Z | 2024-04-10T16:42:52Z | 2024-04-10T16:42:53Z |
CI: Pin blosc to fix pytables | diff --git a/ci/deps/actions-310.yaml b/ci/deps/actions-310.yaml
index ed7dfe1a3c17e..7402f6495c788 100644
--- a/ci/deps/actions-310.yaml
+++ b/ci/deps/actions-310.yaml
@@ -24,6 +24,8 @@ dependencies:
# optional dependencies
- beautifulsoup4>=4.11.2
+ # https://github.com/conda-forge/pytables-feedstock/issues/97
+ - c-blosc2=2.13.2
- blosc>=1.21.3
- bottleneck>=1.3.6
- fastparquet>=2023.10.0
diff --git a/ci/deps/actions-311-downstream_compat.yaml b/ci/deps/actions-311-downstream_compat.yaml
index dd1d341c70a9b..15ff3d6dbe54c 100644
--- a/ci/deps/actions-311-downstream_compat.yaml
+++ b/ci/deps/actions-311-downstream_compat.yaml
@@ -26,6 +26,8 @@ dependencies:
# optional dependencies
- beautifulsoup4>=4.11.2
+ # https://github.com/conda-forge/pytables-feedstock/issues/97
+ - c-blosc2=2.13.2
- blosc>=1.21.3
- bottleneck>=1.3.6
- fastparquet>=2023.10.0
diff --git a/ci/deps/actions-311.yaml b/ci/deps/actions-311.yaml
index 388116439f944..586cf26da7897 100644
--- a/ci/deps/actions-311.yaml
+++ b/ci/deps/actions-311.yaml
@@ -24,6 +24,8 @@ dependencies:
# optional dependencies
- beautifulsoup4>=4.11.2
+ # https://github.com/conda-forge/pytables-feedstock/issues/97
+ - c-blosc2=2.13.2
- blosc>=1.21.3
- bottleneck>=1.3.6
- fastparquet>=2023.10.0
diff --git a/ci/deps/actions-312.yaml b/ci/deps/actions-312.yaml
index 1d9f8aa3b092a..96abc7744d871 100644
--- a/ci/deps/actions-312.yaml
+++ b/ci/deps/actions-312.yaml
@@ -24,6 +24,8 @@ dependencies:
# optional dependencies
- beautifulsoup4>=4.11.2
+ # https://github.com/conda-forge/pytables-feedstock/issues/97
+ - c-blosc2=2.13.2
- blosc>=1.21.3
- bottleneck>=1.3.6
- fastparquet>=2023.10.0
diff --git a/ci/deps/actions-39-minimum_versions.yaml b/ci/deps/actions-39-minimum_versions.yaml
index b760f27a3d4d3..4756f1046054d 100644
--- a/ci/deps/actions-39-minimum_versions.yaml
+++ b/ci/deps/actions-39-minimum_versions.yaml
@@ -27,6 +27,8 @@ dependencies:
# optional dependencies
- beautifulsoup4=4.11.2
+ # https://github.com/conda-forge/pytables-feedstock/issues/97
+ - c-blosc2=2.13.2
- blosc=1.21.3
- bottleneck=1.3.6
- fastparquet=2023.10.0
diff --git a/ci/deps/actions-39.yaml b/ci/deps/actions-39.yaml
index 8f235a836bb3d..8154486ff53e1 100644
--- a/ci/deps/actions-39.yaml
+++ b/ci/deps/actions-39.yaml
@@ -24,6 +24,8 @@ dependencies:
# optional dependencies
- beautifulsoup4>=4.11.2
+ # https://github.com/conda-forge/pytables-feedstock/issues/97
+ - c-blosc2=2.13.2
- blosc>=1.21.3
- bottleneck>=1.3.6
- fastparquet>=2023.10.0
diff --git a/ci/deps/circle-310-arm64.yaml b/ci/deps/circle-310-arm64.yaml
index ed4d139714e71..dc6adf175779f 100644
--- a/ci/deps/circle-310-arm64.yaml
+++ b/ci/deps/circle-310-arm64.yaml
@@ -25,6 +25,8 @@ dependencies:
# optional dependencies
- beautifulsoup4>=4.11.2
+ # https://github.com/conda-forge/pytables-feedstock/issues/97
+ - c-blosc2=2.13.2
- blosc>=1.21.3
- bottleneck>=1.3.6
- fastparquet>=2023.10.0
diff --git a/environment.yml b/environment.yml
index 186d7e1d703df..ac4bd0568403b 100644
--- a/environment.yml
+++ b/environment.yml
@@ -28,6 +28,8 @@ dependencies:
# optional dependencies
- beautifulsoup4>=4.11.2
+ # https://github.com/conda-forge/pytables-feedstock/issues/97
+ - c-blosc2=2.13.2
- blosc
- bottleneck>=1.3.6
- fastparquet>=2023.10.0
diff --git a/scripts/generate_pip_deps_from_conda.py b/scripts/generate_pip_deps_from_conda.py
index d54d35bc0171f..4e4e54c5be9a9 100755
--- a/scripts/generate_pip_deps_from_conda.py
+++ b/scripts/generate_pip_deps_from_conda.py
@@ -23,7 +23,7 @@
import tomli as tomllib
import yaml
-EXCLUDE = {"python", "c-compiler", "cxx-compiler"}
+EXCLUDE = {"python", "c-compiler", "cxx-compiler", "c-blosc2"}
REMAP_VERSION = {"tzdata": "2022.7"}
CONDA_TO_PIP = {
"pytables": "tables",
diff --git a/scripts/validate_min_versions_in_sync.py b/scripts/validate_min_versions_in_sync.py
index 1001b00450354..59989aadf73ae 100755
--- a/scripts/validate_min_versions_in_sync.py
+++ b/scripts/validate_min_versions_in_sync.py
@@ -36,7 +36,7 @@
SETUP_PATH = pathlib.Path("pyproject.toml").resolve()
YAML_PATH = pathlib.Path("ci/deps")
ENV_PATH = pathlib.Path("environment.yml")
-EXCLUDE_DEPS = {"tzdata", "blosc", "pyqt", "pyqt5"}
+EXCLUDE_DEPS = {"tzdata", "blosc", "c-blosc2", "pyqt", "pyqt5"}
EXCLUSION_LIST = frozenset(["python=3.8[build=*_pypy]"])
# pandas package is not available
# in pre-commit environment
@@ -225,6 +225,9 @@ def get_versions_from_ci(content: list[str]) -> tuple[dict[str, str], dict[str,
seen_required = True
elif "# optional dependencies" in line:
seen_optional = True
+ elif "#" in line:
+ # just a comment
+ continue
elif "- pip:" in line:
continue
elif seen_required and line.strip():
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/58209 | 2024-04-10T14:29:32Z | 2024-04-10T15:36:29Z | 2024-04-10T15:36:29Z | 2024-04-11T07:59:40Z |
Backport PR #58202: DOC/TST: Document numpy 2.0 support and add tests… | diff --git a/doc/source/whatsnew/v2.2.2.rst b/doc/source/whatsnew/v2.2.2.rst
index 0dac3660c76b2..856a31a5cf305 100644
--- a/doc/source/whatsnew/v2.2.2.rst
+++ b/doc/source/whatsnew/v2.2.2.rst
@@ -9,6 +9,21 @@ including other versions of pandas.
{{ header }}
.. ---------------------------------------------------------------------------
+
+.. _whatsnew_220.np2_compat:
+
+Pandas 2.2.2 is now compatible with numpy 2.0
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Pandas 2.2.2 is the first version of pandas that is generally compatible with the upcoming
+numpy 2.0 release, and wheels for pandas 2.2.2 will work with both numpy 1.x and 2.x.
+
+One major caveat is that arrays created with numpy 2.0's new ``StringDtype`` will convert
+to ``object`` dtyped arrays upon :class:`Series`/:class:`DataFrame` creation.
+Full support for numpy 2.0's StringDtype is expected to land in pandas 3.0.
+
+As usual please report any bugs discovered to our `issue tracker <https://github.com/pandas-dev/pandas/issues/new/choose>`_
+
.. _whatsnew_222.regressions:
Fixed regressions
diff --git a/pandas/tests/frame/test_constructors.py b/pandas/tests/frame/test_constructors.py
index acd0675fd43ec..cae2f6e81d384 100644
--- a/pandas/tests/frame/test_constructors.py
+++ b/pandas/tests/frame/test_constructors.py
@@ -24,6 +24,7 @@
from pandas._config import using_pyarrow_string_dtype
from pandas._libs import lib
+from pandas.compat.numpy import np_version_gt2
from pandas.errors import IntCastingNaNError
import pandas.util._test_decorators as td
@@ -3118,6 +3119,24 @@ def test_columns_indexes_raise_on_sets(self):
with pytest.raises(ValueError, match="columns cannot be a set"):
DataFrame(data, columns={"a", "b", "c"})
+ # TODO: make this not cast to object in pandas 3.0
+ @pytest.mark.skipif(
+ not np_version_gt2, reason="StringDType only available in numpy 2 and above"
+ )
+ @pytest.mark.parametrize(
+ "data",
+ [
+ {"a": ["a", "b", "c"], "b": [1.0, 2.0, 3.0], "c": ["d", "e", "f"]},
+ ],
+ )
+ def test_np_string_array_object_cast(self, data):
+ from numpy.dtypes import StringDType
+
+ data["a"] = np.array(data["a"], dtype=StringDType())
+ res = DataFrame(data)
+ assert res["a"].dtype == np.object_
+ assert (res["a"] == data["a"]).all()
+
def get1(obj): # TODO: make a helper in tm?
if isinstance(obj, Series):
diff --git a/pandas/tests/series/test_constructors.py b/pandas/tests/series/test_constructors.py
index 4d3839553a0af..387be8398e4b2 100644
--- a/pandas/tests/series/test_constructors.py
+++ b/pandas/tests/series/test_constructors.py
@@ -2191,6 +2191,25 @@ def test_series_constructor_infer_multiindex(self, container, data):
multi = Series(data, index=indexes)
assert isinstance(multi.index, MultiIndex)
+ # TODO: make this not cast to object in pandas 3.0
+ @pytest.mark.skipif(
+ not np_version_gt2, reason="StringDType only available in numpy 2 and above"
+ )
+ @pytest.mark.parametrize(
+ "data",
+ [
+ ["a", "b", "c"],
+ ["a", "b", np.nan],
+ ],
+ )
+ def test_np_string_array_object_cast(self, data):
+ from numpy.dtypes import StringDType
+
+ arr = np.array(data, dtype=StringDType())
+ res = Series(arr)
+ assert res.dtype == np.object_
+ assert (res == data).all()
+
class TestSeriesConstructorInternals:
def test_constructor_no_pandas_array(self, using_array_manager):
| … for string array
- [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/58208 | 2024-04-10T12:07:10Z | 2024-04-10T13:01:08Z | 2024-04-10T13:01:08Z | 2024-04-10T13:01:09Z |
Backport PR #58203 on branch 2.2.x (DOC: Add release date/contributors for 2.2.2) | diff --git a/doc/source/whatsnew/v2.2.1.rst b/doc/source/whatsnew/v2.2.1.rst
index 310dd921e44f6..4db0069ec4b95 100644
--- a/doc/source/whatsnew/v2.2.1.rst
+++ b/doc/source/whatsnew/v2.2.1.rst
@@ -87,4 +87,4 @@ Other
Contributors
~~~~~~~~~~~~
-.. contributors:: v2.2.0..v2.2.1|HEAD
+.. contributors:: v2.2.0..v2.2.1
diff --git a/doc/source/whatsnew/v2.2.2.rst b/doc/source/whatsnew/v2.2.2.rst
index 0dac3660c76b2..589a868c850d3 100644
--- a/doc/source/whatsnew/v2.2.2.rst
+++ b/doc/source/whatsnew/v2.2.2.rst
@@ -1,6 +1,6 @@
.. _whatsnew_222:
-What's new in 2.2.2 (April XX, 2024)
+What's new in 2.2.2 (April 10, 2024)
---------------------------------------
These are the changes in pandas 2.2.2. See :ref:`release` for a full changelog
@@ -40,3 +40,5 @@ Other
Contributors
~~~~~~~~~~~~
+
+.. contributors:: v2.2.1..v2.2.2|HEAD
| Backport PR #58203: DOC: Add release date/contributors for 2.2.2 | https://api.github.com/repos/pandas-dev/pandas/pulls/58206 | 2024-04-10T00:16:06Z | 2024-04-10T12:06:47Z | 2024-04-10T12:06:47Z | 2024-04-10T12:06:47Z |
GH: PDEP vote issue template | diff --git a/.github/ISSUE_TEMPLATE/pdep_vote.yaml b/.github/ISSUE_TEMPLATE/pdep_vote.yaml
new file mode 100644
index 0000000000000..6dcbd76eb0f74
--- /dev/null
+++ b/.github/ISSUE_TEMPLATE/pdep_vote.yaml
@@ -0,0 +1,74 @@
+name: PDEP Vote
+description: Call for a vote on a PDEP
+title: "VOTE: "
+labels: [Vote]
+
+body:
+ - type: markdown
+ attributes:
+ value: >
+ As per [PDEP-1](https://pandas.pydata.org/pdeps/0001-purpose-and-guidelines.html), the following issue template should be used when a
+ maintainer has opened a PDEP discussion and is ready to call for a vote.
+ - type: checkboxes
+ attributes:
+ label: Locked issue
+ options:
+ - label: >
+ I locked this voting issue so that only voting members are able to cast their votes or
+ comment on this issue.
+ required: true
+ - type: input
+ id: PDEP-name
+ attributes:
+ label: PDEP number and title
+ placeholder: >
+ PDEP-1: Purpose and guidelines
+ validations:
+ required: true
+ - type: input
+ id: PDEP-link
+ attributes:
+ label: Pull request with discussion
+ description: e.g. https://github.com/pandas-dev/pandas/pull/47444
+ validations:
+ required: true
+ - type: input
+ id: PDEP-rendered-link
+ attributes:
+ label: Rendered PDEP for easy reading
+ description: e.g. https://github.com/pandas-dev/pandas/pull/47444/files?short_path=7c449e6#diff-7c449e698132205b235c501f7e47ebba38da4d2b7f9492c98f16745dba787041
+ validations:
+ required: true
+ - type: input
+ id: PDEP-number-of-discussion-participants
+ attributes:
+ label: Discussion participants
+ description: >
+ You may find it useful to list or total the number of participating members in the
+ PDEP discussion PR. This would be the maximum possible disapprove votes.
+ placeholder: >
+ 14 voting members participated in the PR discussion thus far.
+ - type: input
+ id: PDEP-vote-end
+ attributes:
+ label: Voting will close in 15 days.
+ description: The voting period end date. ('Voting will close in 15 days.' will be automatically written)
+ - type: markdown
+ attributes:
+ value: ---
+ - type: textarea
+ id: Vote
+ attributes:
+ label: Vote
+ value: |
+ Cast your vote in a comment below.
+ * +1: approve.
+ * 0: abstain.
+ * Reason: A one sentence reason is required.
+ * -1: disapprove
+ * Reason: A one sentence reason is required.
+ A disapprove vote requires prior participation in the linked discussion PR.
+
+ @pandas-dev/pandas-core
+ validations:
+ required: true
diff --git a/web/pandas/pdeps/0001-purpose-and-guidelines.md b/web/pandas/pdeps/0001-purpose-and-guidelines.md
index 49a3bc4c871cd..bb15b8f997b11 100644
--- a/web/pandas/pdeps/0001-purpose-and-guidelines.md
+++ b/web/pandas/pdeps/0001-purpose-and-guidelines.md
@@ -79,8 +79,8 @@ Next is described the workflow that PDEPs can follow.
#### Submitting a PDEP
-Proposing a PDEP is done by creating a PR adding a new file to `web/pdeps/`.
-The file is a markdown file, you can use `web/pdeps/0001.md` as a reference
+Proposing a PDEP is done by creating a PR adding a new file to `web/pandas/pdeps/`.
+The file is a markdown file, you can use `web/pandas/pdeps/0001-purpose-and-guidelines.md` as a reference
for the expected format.
The initial status of a PDEP will be `Status: Draft`. This will be changed to
| Continuation of https://github.com/pandas-dev/pandas/pull/54469
Feel free to preview this by opening an issue in my fork | https://api.github.com/repos/pandas-dev/pandas/pulls/58204 | 2024-04-09T23:02:01Z | 2024-04-16T17:17:02Z | 2024-04-16T17:17:02Z | 2024-04-16T17:17:12Z |
DOC: Add release date/contributors for 2.2.2 | diff --git a/doc/source/whatsnew/v2.2.1.rst b/doc/source/whatsnew/v2.2.1.rst
index 310dd921e44f6..4db0069ec4b95 100644
--- a/doc/source/whatsnew/v2.2.1.rst
+++ b/doc/source/whatsnew/v2.2.1.rst
@@ -87,4 +87,4 @@ Other
Contributors
~~~~~~~~~~~~
-.. contributors:: v2.2.0..v2.2.1|HEAD
+.. contributors:: v2.2.0..v2.2.1
diff --git a/doc/source/whatsnew/v2.2.2.rst b/doc/source/whatsnew/v2.2.2.rst
index 0dac3660c76b2..589a868c850d3 100644
--- a/doc/source/whatsnew/v2.2.2.rst
+++ b/doc/source/whatsnew/v2.2.2.rst
@@ -1,6 +1,6 @@
.. _whatsnew_222:
-What's new in 2.2.2 (April XX, 2024)
+What's new in 2.2.2 (April 10, 2024)
---------------------------------------
These are the changes in pandas 2.2.2. See :ref:`release` for a full changelog
@@ -40,3 +40,5 @@ Other
Contributors
~~~~~~~~~~~~
+
+.. contributors:: v2.2.1..v2.2.2|HEAD
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/58203 | 2024-04-09T22:40:02Z | 2024-04-10T00:15:38Z | 2024-04-10T00:15:38Z | 2024-04-10T00:15:39Z |
DOC/TST: Document numpy 2.0 support and add tests for string array | diff --git a/doc/source/whatsnew/v2.2.2.rst b/doc/source/whatsnew/v2.2.2.rst
index 0dac3660c76b2..856a31a5cf305 100644
--- a/doc/source/whatsnew/v2.2.2.rst
+++ b/doc/source/whatsnew/v2.2.2.rst
@@ -9,6 +9,21 @@ including other versions of pandas.
{{ header }}
.. ---------------------------------------------------------------------------
+
+.. _whatsnew_220.np2_compat:
+
+Pandas 2.2.2 is now compatible with numpy 2.0
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Pandas 2.2.2 is the first version of pandas that is generally compatible with the upcoming
+numpy 2.0 release, and wheels for pandas 2.2.2 will work with both numpy 1.x and 2.x.
+
+One major caveat is that arrays created with numpy 2.0's new ``StringDtype`` will convert
+to ``object`` dtyped arrays upon :class:`Series`/:class:`DataFrame` creation.
+Full support for numpy 2.0's StringDtype is expected to land in pandas 3.0.
+
+As usual please report any bugs discovered to our `issue tracker <https://github.com/pandas-dev/pandas/issues/new/choose>`_
+
.. _whatsnew_222.regressions:
Fixed regressions
diff --git a/pandas/tests/frame/test_constructors.py b/pandas/tests/frame/test_constructors.py
index 12d8269b640fc..53476c2f7ce38 100644
--- a/pandas/tests/frame/test_constructors.py
+++ b/pandas/tests/frame/test_constructors.py
@@ -24,6 +24,7 @@
from pandas._config import using_pyarrow_string_dtype
from pandas._libs import lib
+from pandas.compat.numpy import np_version_gt2
from pandas.errors import IntCastingNaNError
from pandas.core.dtypes.common import is_integer_dtype
@@ -3052,6 +3053,24 @@ def test_from_dict_with_columns_na_scalar(self):
expected = DataFrame({"a": Series([pd.NaT, pd.NaT])})
tm.assert_frame_equal(result, expected)
+ # TODO: make this not cast to object in pandas 3.0
+ @pytest.mark.skipif(
+ not np_version_gt2, reason="StringDType only available in numpy 2 and above"
+ )
+ @pytest.mark.parametrize(
+ "data",
+ [
+ {"a": ["a", "b", "c"], "b": [1.0, 2.0, 3.0], "c": ["d", "e", "f"]},
+ ],
+ )
+ def test_np_string_array_object_cast(self, data):
+ from numpy.dtypes import StringDType
+
+ data["a"] = np.array(data["a"], dtype=StringDType())
+ res = DataFrame(data)
+ assert res["a"].dtype == np.object_
+ assert (res["a"] == data["a"]).all()
+
def get1(obj): # TODO: make a helper in tm?
if isinstance(obj, Series):
diff --git a/pandas/tests/series/test_constructors.py b/pandas/tests/series/test_constructors.py
index 97faba532e94a..3f9d5bbe806bb 100644
--- a/pandas/tests/series/test_constructors.py
+++ b/pandas/tests/series/test_constructors.py
@@ -2176,6 +2176,25 @@ def test_series_constructor_infer_multiindex(self, container, data):
multi = Series(data, index=indexes)
assert isinstance(multi.index, MultiIndex)
+ # TODO: make this not cast to object in pandas 3.0
+ @pytest.mark.skipif(
+ not np_version_gt2, reason="StringDType only available in numpy 2 and above"
+ )
+ @pytest.mark.parametrize(
+ "data",
+ [
+ ["a", "b", "c"],
+ ["a", "b", np.nan],
+ ],
+ )
+ def test_np_string_array_object_cast(self, data):
+ from numpy.dtypes import StringDType
+
+ arr = np.array(data, dtype=StringDType())
+ res = Series(arr)
+ assert res.dtype == np.object_
+ assert (res == data).all()
+
class TestSeriesConstructorInternals:
def test_constructor_no_pandas_array(self):
| - [ ] closes #58104 (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/58202 | 2024-04-09T22:34:15Z | 2024-04-10T11:48:25Z | 2024-04-10T11:48:24Z | 2024-04-10T12:32:01Z |
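The tests added above assert that arrays built with numpy 2's `StringDType` are cast to `object` dtype on `Series`/`DataFrame` construction. Since numpy 2 may not be available, here is a hedged sketch of the equivalent object-dtype result using a plain object array; with `StringDType` input the constructor documented above produces the same `object` fallback:

```python
import numpy as np
import pandas as pd

# Sketch of the documented fallback: string data pandas does not map to
# a native dtype ends up in an object-dtyped Series.
arr = np.array(["a", "b", "c"], dtype=object)
s = pd.Series(arr)
assert s.dtype == np.object_
assert (s == ["a", "b", "c"]).all()
```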
CLN: Remove unused code | diff --git a/pandas/core/groupby/grouper.py b/pandas/core/groupby/grouper.py
index 239d78b3b8b7a..124730e1b5ca9 100644
--- a/pandas/core/groupby/grouper.py
+++ b/pandas/core/groupby/grouper.py
@@ -263,7 +263,6 @@ def __init__(
self.sort = sort
self.dropna = dropna
- self._grouper_deprecated = None
self._indexer_deprecated: npt.NDArray[np.intp] | None = None
self.binner = None
self._grouper = None
@@ -292,10 +291,6 @@ def _get_grouper(
validate=validate,
dropna=self.dropna,
)
- # Without setting this, subsequent lookups to .groups raise
- # error: Incompatible types in assignment (expression has type "BaseGrouper",
- # variable has type "None")
- self._grouper_deprecated = grouper # type: ignore[assignment]
return grouper, obj
| This looks like a `mypy` error only, which passes now. | https://api.github.com/repos/pandas-dev/pandas/pulls/58201 | 2024-04-09T21:55:22Z | 2024-04-09T22:53:31Z | 2024-04-09T22:53:31Z | 2024-04-09T22:54:15Z |
CLN: Make iterators lazier | diff --git a/pandas/core/apply.py b/pandas/core/apply.py
index e8df24850f7a8..832beeddcef3c 100644
--- a/pandas/core/apply.py
+++ b/pandas/core/apply.py
@@ -1710,9 +1710,9 @@ def normalize_keyword_aggregation(
# TODO: aggspec type: typing.Dict[str, List[AggScalar]]
aggspec = defaultdict(list)
order = []
- columns, pairs = list(zip(*kwargs.items()))
+ columns = tuple(kwargs.keys())
- for column, aggfunc in pairs:
+ for column, aggfunc in kwargs.values():
aggspec[column].append(aggfunc)
order.append((column, com.get_callable_name(aggfunc) or aggfunc))
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 97cf86d45812d..76df5c82e6239 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -6168,12 +6168,13 @@ class max type
names = self.index._get_default_index_names(names, default)
if isinstance(self.index, MultiIndex):
- to_insert = zip(self.index.levels, self.index.codes)
+ to_insert = zip(reversed(self.index.levels), reversed(self.index.codes))
else:
to_insert = ((self.index, None),)
multi_col = isinstance(self.columns, MultiIndex)
- for i, (lev, lab) in reversed(list(enumerate(to_insert))):
+ for j, (lev, lab) in enumerate(to_insert, start=1):
+ i = self.index.nlevels - j
if level is not None and i not in level:
continue
name = names[i]
diff --git a/pandas/core/groupby/ops.py b/pandas/core/groupby/ops.py
index 8585ae3828247..0d88882c9b7ef 100644
--- a/pandas/core/groupby/ops.py
+++ b/pandas/core/groupby/ops.py
@@ -706,7 +706,7 @@ def groups(self) -> dict[Hashable, Index]:
return self.groupings[0].groups
result_index, ids = self.result_index_and_ids
values = result_index._values
- categories = Categorical(ids, categories=np.arange(len(result_index)))
+ categories = Categorical(ids, categories=range(len(result_index)))
result = {
# mypy is not aware that group has to be an integer
values[group]: self.axis.take(axis_ilocs) # type: ignore[call-overload]
diff --git a/pandas/core/indexing.py b/pandas/core/indexing.py
index a834d3e54d30b..c9b502add21e0 100644
--- a/pandas/core/indexing.py
+++ b/pandas/core/indexing.py
@@ -899,7 +899,7 @@ def __setitem__(self, key, value) -> None:
check_dict_or_set_indexers(key)
if isinstance(key, tuple):
- key = tuple(list(x) if is_iterator(x) else x for x in key)
+ key = (list(x) if is_iterator(x) else x for x in key)
key = tuple(com.apply_if_callable(x, self.obj) for x in key)
else:
maybe_callable = com.apply_if_callable(key, self.obj)
@@ -1177,7 +1177,7 @@ def _check_deprecated_callable_usage(self, key: Any, maybe_callable: T) -> T:
def __getitem__(self, key):
check_dict_or_set_indexers(key)
if type(key) is tuple:
- key = tuple(list(x) if is_iterator(x) else x for x in key)
+ key = (list(x) if is_iterator(x) else x for x in key)
key = tuple(com.apply_if_callable(x, self.obj) for x in key)
if self._is_scalar_access(key):
return self.obj._get_value(*key, takeable=self._takeable)
diff --git a/pandas/core/sorting.py b/pandas/core/sorting.py
index 002efec1d8b52..4fba243f73536 100644
--- a/pandas/core/sorting.py
+++ b/pandas/core/sorting.py
@@ -172,8 +172,6 @@ def maybe_lift(lab, size: int) -> tuple[np.ndarray, int]:
for i, (lab, size) in enumerate(zip(labels, shape)):
labels[i], lshape[i] = maybe_lift(lab, size)
- labels = list(labels)
-
# Iteratively process all the labels in chunks sized so less
# than lib.i8max unique int ids will be required for each chunk
while True:
| null | https://api.github.com/repos/pandas-dev/pandas/pulls/58200 | 2024-04-09T17:30:18Z | 2024-04-09T18:45:50Z | 2024-04-09T18:45:50Z | 2024-04-09T21:51:10Z |
Fix ASV with virtualenv | diff --git a/asv_bench/asv.conf.json b/asv_bench/asv.conf.json
index e02ff26ba14e9..30c692115eab1 100644
--- a/asv_bench/asv.conf.json
+++ b/asv_bench/asv.conf.json
@@ -41,6 +41,7 @@
// pip (with all the conda available packages installed first,
// followed by the pip installed packages).
"matrix": {
+ "pip+build": [],
"Cython": ["3.0"],
"matplotlib": [],
"sqlalchemy": [],
| The default remains conda, but this is needed if you ever change the default locally. Otherwise you get:

```
STDERR -------->
/home/willayd/clones/pandas/asv_bench/env/7e437c4d615a12609229592196e74215/bin/python: No module named build
```
| https://api.github.com/repos/pandas-dev/pandas/pulls/58199 | 2024-04-09T16:33:56Z | 2024-04-09T17:16:16Z | 2024-04-09T17:16:16Z | 2024-04-09T17:16:23Z
Remove unused tslib function | diff --git a/pandas/_libs/tslib.pyx b/pandas/_libs/tslib.pyx
index ee8ed762fdb6e..aecf9f2e46bd4 100644
--- a/pandas/_libs/tslib.pyx
+++ b/pandas/_libs/tslib.pyx
@@ -70,7 +70,6 @@ from pandas._libs.tslibs.conversion cimport (
from pandas._libs.tslibs.dtypes cimport npy_unit_to_abbrev
from pandas._libs.tslibs.nattype cimport (
NPY_NAT,
- c_NaT as NaT,
c_nat_strings as nat_strings,
)
from pandas._libs.tslibs.timestamps cimport _Timestamp
@@ -346,39 +345,6 @@ def array_with_unit_to_datetime(
return result, tz
-cdef _array_with_unit_to_datetime_object_fallback(ndarray[object] values, str unit):
- cdef:
- Py_ssize_t i, n = len(values)
- ndarray[object] oresult
- tzinfo tz = None
-
- # TODO: fix subtle differences between this and no-unit code
- oresult = cnp.PyArray_EMPTY(values.ndim, values.shape, cnp.NPY_OBJECT, 0)
- for i in range(n):
- val = values[i]
-
- if checknull_with_nat_and_na(val):
- oresult[i] = <object>NaT
- elif is_integer_object(val) or is_float_object(val):
-
- if val != val or val == NPY_NAT:
- oresult[i] = <object>NaT
- else:
- try:
- oresult[i] = Timestamp(val, unit=unit)
- except OutOfBoundsDatetime:
- oresult[i] = val
-
- elif isinstance(val, str):
- if len(val) == 0 or val in nat_strings:
- oresult[i] = <object>NaT
-
- else:
- oresult[i] = val
-
- return oresult, tz
-
-
@cython.wraparound(False)
@cython.boundscheck(False)
def first_non_null(values: ndarray) -> int:
| Flagged during compilation:

```
[22/27] Compiling C object pandas/_libs/tslib.cpython-310-x86_64-linux-gnu.so.p/meson-generated_pandas__libs_tslib.pyx.c.o
pandas/_libs/tslib.cpython-310-x86_64-linux-gnu.so.p/pandas/_libs/tslib.pyx.c:27000:18: warning: '__pyx_f_6pandas_5_libs_5tslib__array_with_unit_to_datetime_object_fallback' defined but not used [-Wunused-function]
```
| https://api.github.com/repos/pandas-dev/pandas/pulls/58198 | 2024-04-09T16:28:14Z | 2024-04-09T17:15:43Z | 2024-04-09T17:15:43Z | 2024-04-09T17:15:50Z |
Add low-level create_dataframe_from_blocks helper function | diff --git a/pandas/api/internals.py b/pandas/api/internals.py
new file mode 100644
index 0000000000000..03d8992a87575
--- /dev/null
+++ b/pandas/api/internals.py
@@ -0,0 +1,62 @@
+import numpy as np
+
+from pandas._typing import ArrayLike
+
+from pandas import (
+ DataFrame,
+ Index,
+)
+from pandas.core.internals.api import _make_block
+from pandas.core.internals.managers import BlockManager as _BlockManager
+
+
+def create_dataframe_from_blocks(
+ blocks: list[tuple[ArrayLike, np.ndarray]], index: Index, columns: Index
+) -> DataFrame:
+ """
+ Low-level function to create a DataFrame from arrays as they are
+ representing the block structure of the resulting DataFrame.
+
+ Attention: this is an advanced, low-level function that should only be
+ used if you know that the below-mentioned assumptions are guaranteed.
+ If passing data that do not follow those assumptions, subsequent
+ subsequent operations on the resulting DataFrame might lead to strange
+ errors.
+ For almost all use cases, you should use the standard pd.DataFrame(..)
+ constructor instead. If you are planning to use this function, let us
+ know by opening an issue at https://github.com/pandas-dev/pandas/issues.
+
+ Assumptions:
+
+ - The block arrays are either a 2D numpy array or a pandas ExtensionArray
+ - In case of a numpy array, it is assumed to already be in the expected
+ shape for Blocks (2D, (cols, rows), i.e. transposed compared to the
+ DataFrame columns).
+ - All arrays are taken as is (no type inference) and expected to have the
+ correct size.
+ - The placement arrays have the correct length (equalling the number of
+ columns that its equivalent block array represents), and all placement
+ arrays together form a complete set of 0 to n_columns - 1.
+
+ Parameters
+ ----------
+ blocks : list of tuples of (block_array, block_placement)
+ This should be a list of tuples existing of (block_array, block_placement),
+ where:
+
+ - block_array is a 2D numpy array or a 1D ExtensionArray, following the
+ requirements listed above.
+ - block_placement is a 1D integer numpy array
+ index : Index
+ The Index object for the `index` of the resulting DataFrame.
+ columns : Index
+ The Index object for the `columns` of the resulting DataFrame.
+
+ Returns
+ -------
+ DataFrame
+ """
+ block_objs = [_make_block(*block) for block in blocks]
+ axes = [columns, index]
+ mgr = _BlockManager(block_objs, axes)
+ return DataFrame._from_mgr(mgr, mgr.axes)
diff --git a/pandas/core/internals/api.py b/pandas/core/internals/api.py
index d6e1e8b38dfe3..ef25d7ed5ae9e 100644
--- a/pandas/core/internals/api.py
+++ b/pandas/core/internals/api.py
@@ -18,10 +18,14 @@
from pandas.core.dtypes.common import pandas_dtype
from pandas.core.dtypes.dtypes import (
DatetimeTZDtype,
+ ExtensionDtype,
PeriodDtype,
)
-from pandas.core.arrays import DatetimeArray
+from pandas.core.arrays import (
+ DatetimeArray,
+ TimedeltaArray,
+)
from pandas.core.construction import extract_array
from pandas.core.internals.blocks import (
check_ndim,
@@ -32,11 +36,43 @@
)
if TYPE_CHECKING:
- from pandas._typing import Dtype
+ from pandas._typing import (
+ ArrayLike,
+ Dtype,
+ )
from pandas.core.internals.blocks import Block
+def _make_block(values: ArrayLike, placement: np.ndarray) -> Block:
+ """
+ This is an analogue to blocks.new_block(_2d) that ensures:
+ 1) correct dimension for EAs that support 2D (`ensure_block_shape`), and
+ 2) correct EA class for datetime64/timedelta64 (`maybe_coerce_values`).
+
+ The input `values` is assumed to be either numpy array or ExtensionArray:
+ - In case of a numpy array, it is assumed to already be in the expected
+ shape for Blocks (2D, (cols, rows)).
+ - In case of an ExtensionArray the input can be 1D, also for EAs that are
+ internally stored as 2D.
+
+ For the rest no preprocessing or validation is done, except for those dtypes
+ that are internally stored as EAs but have an exact numpy equivalent (and at
+ the moment use that numpy dtype), i.e. datetime64/timedelta64.
+ """
+ dtype = values.dtype
+ klass = get_block_type(dtype)
+ placement_obj = BlockPlacement(placement)
+
+ if (isinstance(dtype, ExtensionDtype) and dtype._supports_2d) or isinstance(
+ values, (DatetimeArray, TimedeltaArray)
+ ):
+ values = ensure_block_shape(values, ndim=2)
+
+ values = maybe_coerce_values(values)
+ return klass(values, ndim=2, placement=placement_obj)
+
+
def make_block(
values, placement, klass=None, ndim=None, dtype: Dtype | None = None
) -> Block:
diff --git a/pandas/tests/api/test_api.py b/pandas/tests/api/test_api.py
index e32e5a268d46d..5ee9932fbd051 100644
--- a/pandas/tests/api/test_api.py
+++ b/pandas/tests/api/test_api.py
@@ -248,6 +248,7 @@ class TestApi(Base):
"indexers",
"interchange",
"typing",
+ "internals",
]
allowed_typing = [
"DataFrameGroupBy",
diff --git a/pandas/tests/internals/test_api.py b/pandas/tests/internals/test_api.py
index 5bff1b7be3080..7ab8988521fdf 100644
--- a/pandas/tests/internals/test_api.py
+++ b/pandas/tests/internals/test_api.py
@@ -3,10 +3,14 @@
in core.internals
"""
+import datetime
+
+import numpy as np
import pytest
import pandas as pd
import pandas._testing as tm
+from pandas.api.internals import create_dataframe_from_blocks
from pandas.core import internals
from pandas.core.internals import api
@@ -71,3 +75,91 @@ def test_create_block_manager_from_blocks_deprecated():
)
with tm.assert_produces_warning(DeprecationWarning, match=msg):
internals.create_block_manager_from_blocks
+
+
+def test_create_dataframe_from_blocks(float_frame):
+ block = float_frame._mgr.blocks[0]
+ index = float_frame.index.copy()
+ columns = float_frame.columns.copy()
+
+ result = create_dataframe_from_blocks(
+ [(block.values, block.mgr_locs.as_array)], index=index, columns=columns
+ )
+ tm.assert_frame_equal(result, float_frame)
+
+
+def test_create_dataframe_from_blocks_types():
+ df = pd.DataFrame(
+ {
+ "int": list(range(1, 4)),
+ "uint": np.arange(3, 6).astype("uint8"),
+ "float": [2.0, np.nan, 3.0],
+ "bool": np.array([True, False, True]),
+ "boolean": pd.array([True, False, None], dtype="boolean"),
+ "string": list("abc"),
+ "datetime": pd.date_range("20130101", periods=3),
+ "datetimetz": pd.date_range("20130101", periods=3).tz_localize(
+ "Europe/Brussels"
+ ),
+ "timedelta": pd.timedelta_range("1 day", periods=3),
+ "period": pd.period_range("2012-01-01", periods=3, freq="D"),
+ "categorical": pd.Categorical(["a", "b", "a"]),
+ "interval": pd.IntervalIndex.from_tuples([(0, 1), (1, 2), (3, 4)]),
+ }
+ )
+
+ result = create_dataframe_from_blocks(
+ [(block.values, block.mgr_locs.as_array) for block in df._mgr.blocks],
+ index=df.index,
+ columns=df.columns,
+ )
+ tm.assert_frame_equal(result, df)
+
+
+def test_create_dataframe_from_blocks_datetimelike():
+ # extension dtypes that have an exact matching numpy dtype can also be
+ # be passed as a numpy array
+ index, columns = pd.RangeIndex(3), pd.Index(["a", "b", "c", "d"])
+
+ block_array1 = np.arange(
+ datetime.datetime(2020, 1, 1),
+ datetime.datetime(2020, 1, 7),
+ step=datetime.timedelta(1),
+ ).reshape((2, 3))
+ block_array2 = np.arange(
+ datetime.timedelta(1), datetime.timedelta(7), step=datetime.timedelta(1)
+ ).reshape((2, 3))
+ result = create_dataframe_from_blocks(
+ [(block_array1, np.array([0, 2])), (block_array2, np.array([1, 3]))],
+ index=index,
+ columns=columns,
+ )
+ expected = pd.DataFrame(
+ {
+ "a": pd.date_range("2020-01-01", periods=3, unit="us"),
+ "b": pd.timedelta_range("1 days", periods=3, unit="us"),
+ "c": pd.date_range("2020-01-04", periods=3, unit="us"),
+ "d": pd.timedelta_range("4 days", periods=3, unit="us"),
+ }
+ )
+ tm.assert_frame_equal(result, expected)
+
+
+@pytest.mark.parametrize(
+ "array",
+ [
+ pd.date_range("2020-01-01", periods=3),
+ pd.date_range("2020-01-01", periods=3, tz="UTC"),
+ pd.period_range("2012-01-01", periods=3, freq="D"),
+ pd.timedelta_range("1 day", periods=3),
+ ],
+)
+def test_create_dataframe_from_blocks_1dEA(array):
+ # ExtensionArrays can be passed as 1D even if stored under the hood as 2D
+ df = pd.DataFrame({"a": array})
+
+ block = df._mgr.blocks[0]
+ result = create_dataframe_from_blocks(
+ [(block.values[0], block.mgr_locs.as_array)], index=df.index, columns=df.columns
+ )
+ tm.assert_frame_equal(result, df)
diff --git a/scripts/validate_unwanted_patterns.py b/scripts/validate_unwanted_patterns.py
index a732d3f83a40a..ba3123a07df4b 100755
--- a/scripts/validate_unwanted_patterns.py
+++ b/scripts/validate_unwanted_patterns.py
@@ -54,6 +54,7 @@
# TODO(4.0): GH#55043 - remove upon removal of CoW option
"_get_option",
"_fill_limit_area_1d",
+ "_make_block",
}
| See my explanation at https://github.com/pandas-dev/pandas/issues/56815/
- [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/58197 | 2024-04-09T15:40:41Z | 2024-04-15T17:37:16Z | 2024-04-15T17:37:16Z | 2024-04-15T19:21:52Z |
TST: Add tests for reading Stata dta files saved in big-endian format | diff --git a/pandas/tests/io/data/stata/stata-compat-be-105.dta b/pandas/tests/io/data/stata/stata-compat-be-105.dta
new file mode 100644
index 0000000000000..af75548c840d4
Binary files /dev/null and b/pandas/tests/io/data/stata/stata-compat-be-105.dta differ
diff --git a/pandas/tests/io/data/stata/stata-compat-be-108.dta b/pandas/tests/io/data/stata/stata-compat-be-108.dta
new file mode 100644
index 0000000000000..e3e5d85fca4ad
Binary files /dev/null and b/pandas/tests/io/data/stata/stata-compat-be-108.dta differ
diff --git a/pandas/tests/io/data/stata/stata-compat-be-111.dta b/pandas/tests/io/data/stata/stata-compat-be-111.dta
new file mode 100644
index 0000000000000..197decdcf0c2d
Binary files /dev/null and b/pandas/tests/io/data/stata/stata-compat-be-111.dta differ
diff --git a/pandas/tests/io/data/stata/stata-compat-be-113.dta b/pandas/tests/io/data/stata/stata-compat-be-113.dta
new file mode 100644
index 0000000000000..c69c32106114f
Binary files /dev/null and b/pandas/tests/io/data/stata/stata-compat-be-113.dta differ
diff --git a/pandas/tests/io/data/stata/stata-compat-be-114.dta b/pandas/tests/io/data/stata/stata-compat-be-114.dta
new file mode 100644
index 0000000000000..222bdb2b62784
Binary files /dev/null and b/pandas/tests/io/data/stata/stata-compat-be-114.dta differ
diff --git a/pandas/tests/io/data/stata/stata-compat-be-118.dta b/pandas/tests/io/data/stata/stata-compat-be-118.dta
new file mode 100644
index 0000000000000..0a5df1b321c2d
Binary files /dev/null and b/pandas/tests/io/data/stata/stata-compat-be-118.dta differ
diff --git a/pandas/tests/io/test_stata.py b/pandas/tests/io/test_stata.py
index 1bd71768d226e..65975bcc46f8e 100644
--- a/pandas/tests/io/test_stata.py
+++ b/pandas/tests/io/test_stata.py
@@ -1958,6 +1958,15 @@ def test_backward_compat(version, datapath):
tm.assert_frame_equal(old_dta, expected, check_dtype=False)
+@pytest.mark.parametrize("version", [105, 108, 111, 113, 114, 118])
+def test_bigendian(version, datapath):
+ ref = datapath("io", "data", "stata", f"stata-compat-{version}.dta")
+ big = datapath("io", "data", "stata", f"stata-compat-be-{version}.dta")
+ expected = read_stata(ref)
+ big_dta = read_stata(big)
+ tm.assert_frame_equal(big_dta, expected)
+
+
def test_direct_read(datapath, monkeypatch):
file_path = datapath("io", "data", "stata", "stata-compat-118.dta")
| - [x] closes #58194
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/58195 | 2024-04-09T13:48:34Z | 2024-04-09T21:53:28Z | 2024-04-09T21:53:28Z | 2024-04-09T21:53:34Z |
TST: Match dta format used for Stata test files with that implied by their filenames | diff --git a/pandas/tests/io/data/stata/stata10_115.dta b/pandas/tests/io/data/stata/stata10_115.dta
index b917dde5ad47d..bdca3b9b340c1 100644
Binary files a/pandas/tests/io/data/stata/stata10_115.dta and b/pandas/tests/io/data/stata/stata10_115.dta differ
diff --git a/pandas/tests/io/data/stata/stata4_114.dta b/pandas/tests/io/data/stata/stata4_114.dta
index c5d7de8b42295..f58cdb215332e 100644
Binary files a/pandas/tests/io/data/stata/stata4_114.dta and b/pandas/tests/io/data/stata/stata4_114.dta differ
diff --git a/pandas/tests/io/data/stata/stata9_115.dta b/pandas/tests/io/data/stata/stata9_115.dta
index 5ad6cd6a2c8ff..1b5c0042bebbe 100644
Binary files a/pandas/tests/io/data/stata/stata9_115.dta and b/pandas/tests/io/data/stata/stata9_115.dta differ
| - [x] closes #57693
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
A few of the Stata test data files were not saved in the format implied by their file names. As these are binary files they are harder to compare, so I have included Stata output detailing their contents below:
[dtatests_original.txt](https://github.com/pandas-dev/pandas/files/14918526/dtatests_original.txt)
[dtatests_updated.txt](https://github.com/pandas-dev/pandas/files/14918527/dtatests_updated.txt)
The Stata tests already include files in the formats that these were actually saved as, so converting them shouldn't lose any coverage.
After making the change I ran the following Stata script in the test directory, confirming that all the files now match their expected version:
```
local files : dir "." files "*.dta"
foreach file in `files' {
dtaversion `file'
}
```
```
(file "s4_educ1.dta" is .dta-format 105 from Stata 5)
(file "stata-compat-105.dta" is .dta-format 105 from Stata 5)
(file "stata-compat-108.dta" is .dta-format 108 from Stata 6)
(file "stata-compat-111.dta" is .dta-format 111 from Stata 7)
(file "stata-compat-113.dta" is .dta-format 113 from Stata 8 or 9)
(file "stata-compat-114.dta" is .dta-format 114 from Stata 10 or 11)
(file "stata-compat-118.dta" is .dta-format 118 from Stata 14, 15, 16, 17, or 18)
(file "stata-dta-partially-labeled.dta" is .dta-format 118 from Stata 14, 15, 16, 17, or 18)
(file "stata10_115.dta" is .dta-format 115 from Stata 12)
(file "stata10_117.dta" is .dta-format 117 from Stata 13)
(file "stata11_115.dta" is .dta-format 115 from Stata 12)
(file "stata11_117.dta" is .dta-format 117 from Stata 13)
(file "stata12_117.dta" is .dta-format 117 from Stata 13)
(file "stata13_dates.dta" is .dta-format 117 from Stata 13)
(file "stata14_118.dta" is .dta-format 118 from Stata 14, 15, 16, 17, or 18)
(file "stata15.dta" is .dta-format 118 from Stata 14, 15, 16, 17, or 18)
(file "stata16_118.dta" is .dta-format 118 from Stata 14, 15, 16, 17, or 18)
(file "stata1_114.dta" is .dta-format 114 from Stata 10 or 11)
(file "stata1_117.dta" is .dta-format 117 from Stata 13)
(file "stata1_encoding.dta" is .dta-format 114 from Stata 10 or 11)
(file "stata1_encoding_118.dta" is .dta-format 118 from Stata 14, 15, 16, 17, or 18)
(file "stata2_113.dta" is .dta-format 113 from Stata 8 or 9)
(file "stata2_114.dta" is .dta-format 114 from Stata 10 or 11)
(file "stata2_115.dta" is .dta-format 115 from Stata 12)
(file "stata2_117.dta" is .dta-format 117 from Stata 13)
(file "stata3_113.dta" is .dta-format 113 from Stata 8 or 9)
(file "stata3_114.dta" is .dta-format 114 from Stata 10 or 11)
(file "stata3_115.dta" is .dta-format 115 from Stata 12)
(file "stata3_117.dta" is .dta-format 117 from Stata 13)
(file "stata4_113.dta" is .dta-format 113 from Stata 8 or 9)
(file "stata4_114.dta" is .dta-format 114 from Stata 10 or 11)
(file "stata4_115.dta" is .dta-format 115 from Stata 12)
(file "stata4_117.dta" is .dta-format 117 from Stata 13)
(file "stata5_113.dta" is .dta-format 113 from Stata 8 or 9)
(file "stata5_114.dta" is .dta-format 114 from Stata 10 or 11)
(file "stata5_115.dta" is .dta-format 115 from Stata 12)
(file "stata5_117.dta" is .dta-format 117 from Stata 13)
(file "stata6_113.dta" is .dta-format 113 from Stata 8 or 9)
(file "stata6_114.dta" is .dta-format 114 from Stata 10 or 11)
(file "stata6_115.dta" is .dta-format 115 from Stata 12)
(file "stata6_117.dta" is .dta-format 117 from Stata 13)
(file "stata7_111.dta" is .dta-format 111 from Stata 7)
(file "stata7_115.dta" is .dta-format 115 from Stata 12)
(file "stata7_117.dta" is .dta-format 117 from Stata 13)
(file "stata8_113.dta" is .dta-format 113 from Stata 8 or 9)
(file "stata8_115.dta" is .dta-format 115 from Stata 12)
(file "stata8_117.dta" is .dta-format 117 from Stata 13)
(file "stata9_115.dta" is .dta-format 115 from Stata 12)
(file "stata9_117.dta" is .dta-format 117 from Stata 13)
```
| https://api.github.com/repos/pandas-dev/pandas/pulls/58192 | 2024-04-09T12:46:24Z | 2024-04-09T16:43:19Z | 2024-04-09T16:43:19Z | 2024-04-09T16:43:26Z |
TST: catch exception for div/truediv operations between Timedelta and pd.NA | diff --git a/pandas/tests/scalar/timedelta/test_timedelta.py b/pandas/tests/scalar/timedelta/test_timedelta.py
index 73b2da0f7dd50..01e7ba52e58aa 100644
--- a/pandas/tests/scalar/timedelta/test_timedelta.py
+++ b/pandas/tests/scalar/timedelta/test_timedelta.py
@@ -11,6 +11,7 @@
import pytest
from pandas._libs import lib
+from pandas._libs.missing import NA
from pandas._libs.tslibs import (
NaT,
iNaT,
@@ -138,6 +139,19 @@ def test_truediv_numeric(self, td):
assert res._value == td._value / 2
assert res._creso == td._creso
+ def test_truediv_na_type_not_supported(self, td):
+ msg_td_floordiv_na = (
+ r"unsupported operand type\(s\) for /: 'Timedelta' and 'NAType'"
+ )
+ with pytest.raises(TypeError, match=msg_td_floordiv_na):
+ td / NA
+
+ msg_na_floordiv_td = (
+ r"unsupported operand type\(s\) for /: 'NAType' and 'Timedelta'"
+ )
+ with pytest.raises(TypeError, match=msg_na_floordiv_td):
+ NA / td
+
def test_floordiv_timedeltalike(self, td):
assert td // td == 1
assert (2.5 * td) // td == 2
@@ -182,6 +196,19 @@ def test_floordiv_numeric(self, td):
assert res._value == td._value // 2
assert res._creso == td._creso
+ def test_floordiv_na_type_not_supported(self, td):
+ msg_td_floordiv_na = (
+ r"unsupported operand type\(s\) for //: 'Timedelta' and 'NAType'"
+ )
+ with pytest.raises(TypeError, match=msg_td_floordiv_na):
+ td // NA
+
+ msg_na_floordiv_td = (
+ r"unsupported operand type\(s\) for //: 'NAType' and 'Timedelta'"
+ )
+ with pytest.raises(TypeError, match=msg_na_floordiv_td):
+ NA // td
+
def test_addsub_mismatched_reso(self, td):
# need to cast to since td is out of bounds for ns, so
# so we would raise OverflowError without casting
| - [x] closes #54315
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [Not applicable] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [Not applicable] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
This PR adds tests to catch the ``TypeError`` raised by div and truediv operations between ``Timedelta`` and ``pd.NA``. | https://api.github.com/repos/pandas-dev/pandas/pulls/58188 | 2024-04-08T20:33:15Z | 2024-04-15T17:13:21Z | 2024-04-15T17:13:21Z | 2024-04-16T20:34:17Z
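A small sketch of the behavior these tests pin down (assuming a pandas version where division between `Timedelta` and `pd.NA` is unsupported, as in the issue above; the helper name is illustrative):

```python
import pandas as pd

td = pd.Timedelta("1 day")

def raises_typeerror(op):
    """Return True if calling op() raises TypeError."""
    try:
        op()
    except TypeError:
        return True
    return False

# Neither operand order is supported for / or //: each side returns
# NotImplemented for the other's type, so Python raises TypeError.
assert raises_typeerror(lambda: td / pd.NA)
assert raises_typeerror(lambda: pd.NA / td)
assert raises_typeerror(lambda: td // pd.NA)
assert raises_typeerror(lambda: pd.NA // td)
```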
Backport PR #58181 on branch 2.2.x (CI: correct error msg in test_view_index) | diff --git a/pandas/tests/indexes/test_base.py b/pandas/tests/indexes/test_base.py
index 1fa48f98942c2..b7204d7af1cbb 100644
--- a/pandas/tests/indexes/test_base.py
+++ b/pandas/tests/indexes/test_base.py
@@ -358,7 +358,10 @@ def test_view_with_args_object_array_raises(self, index):
with pytest.raises(NotImplementedError, match="i8"):
index.view("i8")
else:
- msg = "Cannot change data-type for object array"
+ msg = (
+ "Cannot change data-type for array of references|"
+ "Cannot change data-type for object array|"
+ )
with pytest.raises(TypeError, match=msg):
index.view("i8")
| Backport PR #58181: CI: correct error msg in test_view_index
| https://api.github.com/repos/pandas-dev/pandas/pulls/58187 | 2024-04-08T20:19:29Z | 2024-04-08T21:34:15Z | 2024-04-08T21:34:15Z | 2024-04-08T21:34:23Z |
CLN: union_indexes | diff --git a/pandas/_libs/lib.pyi b/pandas/_libs/lib.pyi
index b39d32d069619..daaaacee3487d 100644
--- a/pandas/_libs/lib.pyi
+++ b/pandas/_libs/lib.pyi
@@ -67,7 +67,6 @@ def fast_multiget(
default=...,
) -> ArrayLike: ...
def fast_unique_multiple_list_gen(gen: Generator, sort: bool = ...) -> list: ...
-def fast_unique_multiple_list(lists: list, sort: bool | None = ...) -> list: ...
@overload
def map_infer(
arr: np.ndarray,
diff --git a/pandas/_libs/lib.pyx b/pandas/_libs/lib.pyx
index a2205454a5a46..7aa1cb715521e 100644
--- a/pandas/_libs/lib.pyx
+++ b/pandas/_libs/lib.pyx
@@ -312,34 +312,6 @@ def item_from_zerodim(val: object) -> object:
return val
-@cython.wraparound(False)
-@cython.boundscheck(False)
-def fast_unique_multiple_list(lists: list, sort: bool | None = True) -> list:
- cdef:
- list buf
- Py_ssize_t k = len(lists)
- Py_ssize_t i, j, n
- list uniques = []
- dict table = {}
- object val, stub = 0
-
- for i in range(k):
- buf = lists[i]
- n = len(buf)
- for j in range(n):
- val = buf[j]
- if val not in table:
- table[val] = stub
- uniques.append(val)
- if sort:
- try:
- uniques.sort()
- except TypeError:
- pass
-
- return uniques
-
-
@cython.wraparound(False)
@cython.boundscheck(False)
def fast_unique_multiple_list_gen(object gen, bint sort=True) -> list:
@@ -361,15 +333,15 @@ def fast_unique_multiple_list_gen(object gen, bint sort=True) -> list:
list buf
Py_ssize_t j, n
list uniques = []
- dict table = {}
- object val, stub = 0
+ set table = set()
+ object val
for buf in gen:
n = len(buf)
for j in range(n):
val = buf[j]
if val not in table:
- table[val] = stub
+ table.add(val)
uniques.append(val)
if sort:
try:
diff --git a/pandas/core/indexes/api.py b/pandas/core/indexes/api.py
index 9b05eb42c6d6e..c5e3f3a50e10d 100644
--- a/pandas/core/indexes/api.py
+++ b/pandas/core/indexes/api.py
@@ -209,60 +209,6 @@ def union_indexes(indexes, sort: bool | None = True) -> Index:
indexes, kind = _sanitize_and_check(indexes)
- def _unique_indices(inds, dtype) -> Index:
- """
- Concatenate indices and remove duplicates.
-
- Parameters
- ----------
- inds : list of Index or list objects
- dtype : dtype to set for the resulting Index
-
- Returns
- -------
- Index
- """
- if all(isinstance(ind, Index) for ind in inds):
- inds = [ind.astype(dtype, copy=False) for ind in inds]
- result = inds[0].unique()
- other = inds[1].append(inds[2:])
- diff = other[result.get_indexer_for(other) == -1]
- if len(diff):
- result = result.append(diff.unique())
- if sort:
- result = result.sort_values()
- return result
-
- def conv(i):
- if isinstance(i, Index):
- i = i.tolist()
- return i
-
- return Index(
- lib.fast_unique_multiple_list([conv(i) for i in inds], sort=sort),
- dtype=dtype,
- )
-
- def _find_common_index_dtype(inds):
- """
- Finds a common type for the indexes to pass through to resulting index.
-
- Parameters
- ----------
- inds: list of Index or list objects
-
- Returns
- -------
- The common type or None if no indexes were given
- """
- dtypes = [idx.dtype for idx in indexes if isinstance(idx, Index)]
- if dtypes:
- dtype = find_common_type(dtypes)
- else:
- dtype = None
-
- return dtype
-
if kind == "special":
result = indexes[0]
@@ -294,18 +240,36 @@ def _find_common_index_dtype(inds):
return result
elif kind == "array":
- dtype = _find_common_index_dtype(indexes)
- index = indexes[0]
- if not all(index.equals(other) for other in indexes[1:]):
- index = _unique_indices(indexes, dtype)
+ if not all_indexes_same(indexes):
+ dtype = find_common_type([idx.dtype for idx in indexes])
+ inds = [ind.astype(dtype, copy=False) for ind in indexes]
+ index = inds[0].unique()
+ other = inds[1].append(inds[2:])
+ diff = other[index.get_indexer_for(other) == -1]
+ if len(diff):
+ index = index.append(diff.unique())
+ if sort:
+ index = index.sort_values()
+ else:
+ index = indexes[0]
name = get_unanimous_names(*indexes)[0]
if name != index.name:
index = index.rename(name)
return index
- else: # kind='list'
- dtype = _find_common_index_dtype(indexes)
- return _unique_indices(indexes, dtype)
+ elif kind == "list":
+ dtypes = [idx.dtype for idx in indexes if isinstance(idx, Index)]
+ if dtypes:
+ dtype = find_common_type(dtypes)
+ else:
+ dtype = None
+ all_lists = (idx.tolist() if isinstance(idx, Index) else idx for idx in indexes)
+ return Index(
+ lib.fast_unique_multiple_list_gen(all_lists, sort=bool(sort)),
+ dtype=dtype,
+ )
+ else:
+ raise ValueError(f"{kind=} must be 'special', 'array' or 'list'.")
def _sanitize_and_check(indexes):
@@ -329,14 +293,14 @@ def _sanitize_and_check(indexes):
sanitized_indexes : list of Index or list objects
type : {'list', 'array', 'special'}
"""
- kinds = list({type(index) for index in indexes})
+ kinds = {type(index) for index in indexes}
if list in kinds:
if len(kinds) > 1:
indexes = [
Index(list(x)) if not isinstance(x, Index) else x for x in indexes
]
- kinds.remove(list)
+ kinds -= {list}
else:
return indexes, "list"
| - Just used sets where appropriate
- Added some typing
- Moved some helper functions into the branches where they are used | https://api.github.com/repos/pandas-dev/pandas/pulls/58183 | 2024-04-08T18:57:52Z | 2024-04-11T15:39:27Z | 2024-04-11T15:39:27Z | 2024-04-11T15:39:30Z
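A pure-Python sketch (hypothetical name; the real helper is Cython) of the order-preserving unique pass that `fast_unique_multiple_list_gen` performs after swapping the dict-with-dummy-value for a set:

```python
def unique_from_lists(gen, sort=True):
    """Collect unique values from an iterable of lists, preserving first-seen order."""
    seen = set()
    uniques = []
    for buf in gen:
        for val in buf:
            if val not in seen:
                seen.add(val)
                uniques.append(val)
    if sort:
        try:
            uniques.sort()
        except TypeError:
            # Mixed, non-comparable types: fall back to insertion order.
            pass
    return uniques
```

For example, `unique_from_lists(iter([[3, 1], [1, 2]]))` gives `[1, 2, 3]`, and `[3, 1, 2]` with `sort=False`.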
CI: correct error msg in `test_view_index` | diff --git a/pandas/tests/indexes/numeric/test_numeric.py b/pandas/tests/indexes/numeric/test_numeric.py
index 088fcfcd7d75f..676d33d2b0f81 100644
--- a/pandas/tests/indexes/numeric/test_numeric.py
+++ b/pandas/tests/indexes/numeric/test_numeric.py
@@ -312,7 +312,10 @@ def test_cant_or_shouldnt_cast(self, dtype):
def test_view_index(self, simple_index):
index = simple_index
- msg = "Cannot change data-type for object array"
+ msg = (
+ "Cannot change data-type for array of references.|"
+ "Cannot change data-type for object array.|"
+ )
with pytest.raises(TypeError, match=msg):
index.view(Index)
diff --git a/pandas/tests/indexes/ranges/test_range.py b/pandas/tests/indexes/ranges/test_range.py
index 8d41efa586411..727edb7ae30ad 100644
--- a/pandas/tests/indexes/ranges/test_range.py
+++ b/pandas/tests/indexes/ranges/test_range.py
@@ -375,7 +375,10 @@ def test_cant_or_shouldnt_cast(self, start, stop, step):
def test_view_index(self, simple_index):
index = simple_index
- msg = "Cannot change data-type for object array"
+ msg = (
+ "Cannot change data-type for array of references.|"
+ "Cannot change data-type for object array.|"
+ )
with pytest.raises(TypeError, match=msg):
index.view(Index)
diff --git a/pandas/tests/indexes/test_base.py b/pandas/tests/indexes/test_base.py
index 997276ef544f7..484f647c7a8f9 100644
--- a/pandas/tests/indexes/test_base.py
+++ b/pandas/tests/indexes/test_base.py
@@ -358,7 +358,10 @@ def test_view_with_args_object_array_raises(self, index):
with pytest.raises(NotImplementedError, match="i8"):
index.view("i8")
else:
- msg = "Cannot change data-type for object array"
+ msg = (
+ "Cannot change data-type for array of references.|"
+ "Cannot change data-type for object array.|"
+ )
with pytest.raises(TypeError, match=msg):
index.view("i8")
diff --git a/pandas/tests/indexes/test_datetimelike.py b/pandas/tests/indexes/test_datetimelike.py
index 0ad5888a44392..e45d11e6286e2 100644
--- a/pandas/tests/indexes/test_datetimelike.py
+++ b/pandas/tests/indexes/test_datetimelike.py
@@ -88,7 +88,10 @@ def test_view(self, simple_index):
result = type(simple_index)(idx)
tm.assert_index_equal(result, idx)
- msg = "Cannot change data-type for object array"
+ msg = (
+ "Cannot change data-type for array of references.|"
+ "Cannot change data-type for object array.|"
+ )
with pytest.raises(TypeError, match=msg):
idx.view(type(simple_index))
diff --git a/pandas/tests/indexes/test_old_base.py b/pandas/tests/indexes/test_old_base.py
index 871e7cdda4102..9b4470021cc1d 100644
--- a/pandas/tests/indexes/test_old_base.py
+++ b/pandas/tests/indexes/test_old_base.py
@@ -880,7 +880,10 @@ def test_view(self, simple_index):
idx_view = idx.view(dtype)
tm.assert_index_equal(idx, index_cls(idx_view, name="Foo"), exact=True)
- msg = "Cannot change data-type for object array"
+ msg = (
+ "Cannot change data-type for array of references.|"
+ "Cannot change data-type for object array.|"
+ )
with pytest.raises(TypeError, match=msg):
# GH#55709
idx.view(index_cls)
CI failed with `AssertionError: Regex pattern did not match.`
Replaced error msg `'Cannot change data-type for object array'` with `'Cannot change data-type for array of references.'` | https://api.github.com/repos/pandas-dev/pandas/pulls/58181 | 2024-04-08T16:02:43Z | 2024-04-08T18:23:00Z | 2024-04-08T18:23:00Z | 2024-04-08T21:34:31Z |
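For context on the patterns in the diff above: `pytest.raises(match=...)` performs an `re.search`, so the `|`-joined string accepts either NumPy error message (a sketch; note that `.` is a regex metacharacter here, and the trailing `|` in the diff's pattern even matches the empty string):

```python
import re

# Either NumPy wording should satisfy the alternation used in the tests.
pattern = (
    "Cannot change data-type for array of references.|"
    "Cannot change data-type for object array."
)
old = re.search(pattern, "Cannot change data-type for object array.")
new = re.search(pattern, "Cannot change data-type for array of references.")
```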
Cleanup json_normalize documentation | diff --git a/pandas/io/json/_normalize.py b/pandas/io/json/_normalize.py
index ef717dd9b7ef8..7d3eefae39679 100644
--- a/pandas/io/json/_normalize.py
+++ b/pandas/io/json/_normalize.py
@@ -289,10 +289,10 @@ def json_normalize(
meta : list of paths (str or list of str), default None
Fields to use as metadata for each record in resulting table.
meta_prefix : str, default None
- If True, prefix records with dotted (?) path, e.g. foo.bar.field if
+ If True, prefix records with dotted path, e.g. foo.bar.field if
meta is ['foo', 'bar'].
record_prefix : str, default None
- If True, prefix records with dotted (?) path, e.g. foo.bar.field if
+ If True, prefix records with dotted path, e.g. foo.bar.field if
path to records is ['foo', 'bar'].
errors : {'raise', 'ignore'}, default 'raise'
Configures error handling.
| null | https://api.github.com/repos/pandas-dev/pandas/pulls/58180 | 2024-04-08T14:09:37Z | 2024-04-08T16:43:15Z | 2024-04-08T16:43:15Z | 2024-04-08T17:28:37Z |
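A small sketch of the `record_prefix` / `meta_prefix` behavior the corrected docstring describes: both prepend a prefix to the dotted column path.

```python
import pandas as pd

data = [
    {"id": 1, "info": {"name": "a"}, "records": [{"x": 1}, {"x": 2}]},
    {"id": 2, "info": {"name": "b"}, "records": [{"x": 3}]},
]
result = pd.json_normalize(
    data,
    record_path="records",
    meta=["id", ["info", "name"]],
    record_prefix="rec.",   # record columns become e.g. "rec.x"
    meta_prefix="m.",       # meta paths become e.g. "m.info.name"
)
cols = result.columns.tolist()
```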
TST: Remove deprecation check from test_pyarrow_engine | diff --git a/pandas/tests/io/parser/test_unsupported.py b/pandas/tests/io/parser/test_unsupported.py
index 8d4c28bd61fa1..44a55cf3be240 100644
--- a/pandas/tests/io/parser/test_unsupported.py
+++ b/pandas/tests/io/parser/test_unsupported.py
@@ -155,12 +155,8 @@ def test_pyarrow_engine(self):
kwargs[default] = "warn"
warn = None
- depr_msg = None
+ depr_msg = "The 'delim_whitespace' keyword in pd.read_csv is deprecated"
if "delim_whitespace" in kwargs:
- depr_msg = "The 'delim_whitespace' keyword in pd.read_csv is deprecated"
- warn = FutureWarning
- if "verbose" in kwargs:
- depr_msg = "The 'verbose' keyword in pd.read_csv is deprecated"
warn = FutureWarning
with pytest.raises(ValueError, match=msg):
 | - [x] ref https://github.com/pandas-dev/pandas/issues/57966#issuecomment-2041619807
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/58175 | 2024-04-07T21:56:08Z | 2024-04-09T16:46:13Z | 2024-04-09T16:46:13Z | 2024-04-09T16:46:20Z |
ENH: add numeric_only to Dataframe.cum* methods | diff --git a/doc/source/whatsnew/v3.0.0.rst b/doc/source/whatsnew/v3.0.0.rst
index 19b448a1871c2..d189c4f4bf248 100644
--- a/doc/source/whatsnew/v3.0.0.rst
+++ b/doc/source/whatsnew/v3.0.0.rst
@@ -35,7 +35,7 @@ Other enhancements
- Support passing a :class:`Series` input to :func:`json_normalize` that retains the :class:`Series` :class:`Index` (:issue:`51452`)
- Users can globally disable any ``PerformanceWarning`` by setting the option ``mode.performance_warnings`` to ``False`` (:issue:`56920`)
- :meth:`Styler.format_index_names` can now be used to format the index and column names (:issue:`48936` and :issue:`47489`)
--
+- :meth:`DataFrame.cummin`, :meth:`DataFrame.cummax`, :meth:`DataFrame.cumprod` and :meth:`DataFrame.cumsum` methods now have a ``numeric_only`` parameter (:issue:`53072`)
.. ---------------------------------------------------------------------------
.. _whatsnew_300.notable_bug_fixes:
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 66a68755a2a09..d6f4e5ea25fc9 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -12029,20 +12029,52 @@ def kurt(
product = prod
@doc(make_doc("cummin", ndim=2))
- def cummin(self, axis: Axis = 0, skipna: bool = True, *args, **kwargs) -> Self:
- return NDFrame.cummin(self, axis, skipna, *args, **kwargs)
+ def cummin(
+ self,
+ axis: Axis = 0,
+ skipna: bool = True,
+ numeric_only: bool = False,
+ *args,
+ **kwargs,
+ ) -> Self:
+ data = self._get_numeric_data() if numeric_only else self
+ return NDFrame.cummin(data, axis, skipna, *args, **kwargs)
@doc(make_doc("cummax", ndim=2))
- def cummax(self, axis: Axis = 0, skipna: bool = True, *args, **kwargs) -> Self:
- return NDFrame.cummax(self, axis, skipna, *args, **kwargs)
+ def cummax(
+ self,
+ axis: Axis = 0,
+ skipna: bool = True,
+ numeric_only: bool = False,
+ *args,
+ **kwargs,
+ ) -> Self:
+ data = self._get_numeric_data() if numeric_only else self
+ return NDFrame.cummax(data, axis, skipna, *args, **kwargs)
@doc(make_doc("cumsum", ndim=2))
- def cumsum(self, axis: Axis = 0, skipna: bool = True, *args, **kwargs) -> Self:
- return NDFrame.cumsum(self, axis, skipna, *args, **kwargs)
+ def cumsum(
+ self,
+ axis: Axis = 0,
+ skipna: bool = True,
+ numeric_only: bool = False,
+ *args,
+ **kwargs,
+ ) -> Self:
+ data = self._get_numeric_data() if numeric_only else self
+ return NDFrame.cumsum(data, axis, skipna, *args, **kwargs)
@doc(make_doc("cumprod", 2))
- def cumprod(self, axis: Axis = 0, skipna: bool = True, *args, **kwargs) -> Self:
- return NDFrame.cumprod(self, axis, skipna, *args, **kwargs)
+ def cumprod(
+ self,
+ axis: Axis = 0,
+ skipna: bool = True,
+ numeric_only: bool = False,
+ *args,
+ **kwargs,
+ ) -> Self:
+ data = self._get_numeric_data() if numeric_only else self
+ return NDFrame.cumprod(data, axis, skipna, *args, **kwargs)
def nunique(self, axis: Axis = 0, dropna: bool = True) -> Series:
"""
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 8af9503a3691d..e1c1b21249362 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -11925,7 +11925,45 @@ def last_valid_index(self) -> Hashable:
DataFrame.any : Return True if one (or more) elements are True.
"""
-_cnum_doc = """
+_cnum_pd_doc = """
+Return cumulative {desc} over a DataFrame or Series axis.
+
+Returns a DataFrame or Series of the same size containing the cumulative
+{desc}.
+
+Parameters
+----------
+axis : {{0 or 'index', 1 or 'columns'}}, default 0
+ The index or the name of the axis. 0 is equivalent to None or 'index'.
+ For `Series` this parameter is unused and defaults to 0.
+skipna : bool, default True
+ Exclude NA/null values. If an entire row/column is NA, the result
+ will be NA.
+numeric_only : bool, default False
+ Include only float, int, boolean columns.
+*args, **kwargs
+ Additional keywords have no effect but might be accepted for
+ compatibility with NumPy.
+
+Returns
+-------
+{name1} or {name2}
+ Return cumulative {desc} of {name1} or {name2}.
+
+See Also
+--------
+core.window.expanding.Expanding.{accum_func_name} : Similar functionality
+ but ignores ``NaN`` values.
+{name2}.{accum_func_name} : Return the {desc} over
+ {name2} axis.
+{name2}.cummax : Return cumulative maximum over {name2} axis.
+{name2}.cummin : Return cumulative minimum over {name2} axis.
+{name2}.cumsum : Return cumulative sum over {name2} axis.
+{name2}.cumprod : Return cumulative product over {name2} axis.
+
+{examples}"""
+
+_cnum_series_doc = """
Return cumulative {desc} over a DataFrame or Series axis.
Returns a DataFrame or Series of the same size containing the cumulative
@@ -12716,28 +12754,44 @@ def make_doc(name: str, ndim: int) -> str:
kwargs = {"min_count": ""}
elif name == "cumsum":
- base_doc = _cnum_doc
+ if ndim == 1:
+ base_doc = _cnum_series_doc
+ else:
+ base_doc = _cnum_pd_doc
+
desc = "sum"
see_also = ""
examples = _cumsum_examples
kwargs = {"accum_func_name": "sum"}
elif name == "cumprod":
- base_doc = _cnum_doc
+ if ndim == 1:
+ base_doc = _cnum_series_doc
+ else:
+ base_doc = _cnum_pd_doc
+
desc = "product"
see_also = ""
examples = _cumprod_examples
kwargs = {"accum_func_name": "prod"}
elif name == "cummin":
- base_doc = _cnum_doc
+ if ndim == 1:
+ base_doc = _cnum_series_doc
+ else:
+ base_doc = _cnum_pd_doc
+
desc = "minimum"
see_also = ""
examples = _cummin_examples
kwargs = {"accum_func_name": "min"}
elif name == "cummax":
- base_doc = _cnum_doc
+ if ndim == 1:
+ base_doc = _cnum_series_doc
+ else:
+ base_doc = _cnum_pd_doc
+
desc = "maximum"
see_also = ""
examples = _cummax_examples
diff --git a/pandas/tests/frame/test_cumulative.py b/pandas/tests/frame/test_cumulative.py
index d7aad680d389e..ab217e1b1332a 100644
--- a/pandas/tests/frame/test_cumulative.py
+++ b/pandas/tests/frame/test_cumulative.py
@@ -7,10 +7,12 @@
"""
import numpy as np
+import pytest
from pandas import (
DataFrame,
Series,
+ Timestamp,
)
import pandas._testing as tm
@@ -81,3 +83,25 @@ def test_cumsum_preserve_dtypes(self):
}
)
tm.assert_frame_equal(result, expected)
+
+ @pytest.mark.parametrize("method", ["cumsum", "cumprod", "cummin", "cummax"])
+ @pytest.mark.parametrize("axis", [0, 1])
+ def test_numeric_only_flag(self, method, axis):
+ df = DataFrame(
+ {
+ "int": [1, 2, 3],
+ "bool": [True, False, False],
+ "string": ["a", "b", "c"],
+ "float": [1.0, 3.5, 4.0],
+ "datetime": [
+ Timestamp(2018, 1, 1),
+ Timestamp(2019, 1, 1),
+ Timestamp(2020, 1, 1),
+ ],
+ }
+ )
+ df_numeric_only = df.drop(["string", "datetime"], axis=1)
+
+ result = getattr(df, method)(axis=axis, numeric_only=True)
+ expected = getattr(df_numeric_only, method)(axis)
+ tm.assert_frame_equal(result, expected)
diff --git a/pandas/tests/groupby/test_api.py b/pandas/tests/groupby/test_api.py
index b5fdf058d1ab0..d2cfa530e7c65 100644
--- a/pandas/tests/groupby/test_api.py
+++ b/pandas/tests/groupby/test_api.py
@@ -183,10 +183,9 @@ def test_frame_consistency(groupby_func):
elif groupby_func in ("bfill", "ffill"):
exclude_expected = {"inplace", "axis", "limit_area"}
elif groupby_func in ("cummax", "cummin"):
- exclude_expected = {"skipna", "args"}
- exclude_result = {"numeric_only"}
+ exclude_expected = {"axis", "skipna", "args"}
elif groupby_func in ("cumprod", "cumsum"):
- exclude_expected = {"skipna"}
+ exclude_expected = {"axis", "skipna", "numeric_only"}
elif groupby_func in ("pct_change",):
exclude_expected = {"kwargs"}
elif groupby_func in ("rank",):
| - [x] closes #53072
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/58172 | 2024-04-06T15:46:25Z | 2024-04-11T15:20:49Z | 2024-04-11T15:20:49Z | 2024-04-11T15:24:01Z |
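Per the diff above, `numeric_only=True` simply drops non-numeric columns before delegating to `NDFrame`; on pandas versions without the new keyword, the equivalent result can be obtained by selecting numeric columns first (a sketch mirroring the `_get_numeric_data()` call):

```python
import pandas as pd

df = pd.DataFrame(
    {"int": [1, 2, 3], "float": [1.0, 3.5, 4.0], "string": ["a", "b", "c"]}
)
# Equivalent of df.cumsum(numeric_only=True): accumulate only numeric columns.
result = df.select_dtypes(include="number").cumsum()
```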
REF: de-duplicate tzinfo-awareness mismatch checks | diff --git a/pandas/_libs/tslib.pyx b/pandas/_libs/tslib.pyx
index effd7f586f266..ee8ed762fdb6e 100644
--- a/pandas/_libs/tslib.pyx
+++ b/pandas/_libs/tslib.pyx
@@ -74,7 +74,6 @@ from pandas._libs.tslibs.nattype cimport (
c_nat_strings as nat_strings,
)
from pandas._libs.tslibs.timestamps cimport _Timestamp
-from pandas._libs.tslibs.timezones cimport tz_compare
from pandas._libs.tslibs import (
Resolution,
@@ -452,13 +451,9 @@ cpdef array_to_datetime(
ndarray[int64_t] iresult
npy_datetimestruct dts
bint utc_convert = bool(utc)
- bint seen_datetime_offset = False
bint is_raise = errors == "raise"
bint is_coerce = errors == "coerce"
- bint is_same_offsets
_TSObject tsobj
- float tz_offset
- set out_tzoffset_vals = set()
tzinfo tz, tz_out = None
cnp.flatiter it = cnp.PyArray_IterNew(values)
NPY_DATETIMEUNIT item_reso
@@ -568,12 +563,12 @@ cpdef array_to_datetime(
# dateutil timezone objects cannot be hashed, so
# store the UTC offsets in seconds instead
nsecs = tz.utcoffset(None).total_seconds()
- out_tzoffset_vals.add(nsecs)
- seen_datetime_offset = True
+ state.out_tzoffset_vals.add(nsecs)
+ state.found_aware_str = True
else:
# Add a marker for naive string, to track if we are
# parsing mixed naive and aware strings
- out_tzoffset_vals.add("naive")
+ state.out_tzoffset_vals.add("naive")
state.found_naive_str = True
else:
@@ -588,41 +583,7 @@ cpdef array_to_datetime(
raise
return values, None
- if seen_datetime_offset and not utc_convert:
- # GH#17697, GH#57275
- # 1) If all the offsets are equal, return one offset for
- # the parsed dates to (maybe) pass to DatetimeIndex
- # 2) If the offsets are different, then do not force the parsing
- # and raise a ValueError: "cannot parse datetimes with
- # mixed time zones unless `utc=True`" instead
- is_same_offsets = len(out_tzoffset_vals) == 1
- if not is_same_offsets:
- raise ValueError(
- "Mixed timezones detected. Pass utc=True in to_datetime "
- "or tz='UTC' in DatetimeIndex to convert to a common timezone."
- )
- elif state.found_naive or state.found_other:
- # e.g. test_to_datetime_mixed_awareness_mixed_types
- raise ValueError("Cannot mix tz-aware with tz-naive values")
- elif tz_out is not None:
- # GH#55693
- tz_offset = out_tzoffset_vals.pop()
- tz_out2 = timezone(timedelta(seconds=tz_offset))
- if not tz_compare(tz_out, tz_out2):
- # e.g. test_to_datetime_mixed_tzs_mixed_types
- raise ValueError(
- "Mixed timezones detected. Pass utc=True in to_datetime "
- "or tz='UTC' in DatetimeIndex to convert to a common timezone."
- )
- # e.g. test_to_datetime_mixed_types_matching_tzs
- else:
- tz_offset = out_tzoffset_vals.pop()
- tz_out = timezone(timedelta(seconds=tz_offset))
- elif not utc_convert:
- if tz_out and (state.found_other or state.found_naive_str):
- # found_other indicates a tz-naive int, float, dt64, or date
- # e.g. test_to_datetime_mixed_awareness_mixed_types
- raise ValueError("Cannot mix tz-aware with tz-naive values")
+ tz_out = state.check_for_mixed_inputs(tz_out, utc)
if infer_reso:
if state.creso_ever_changed:
diff --git a/pandas/_libs/tslibs/strptime.pxd b/pandas/_libs/tslibs/strptime.pxd
index dd8936f080b31..d2eae910a87b5 100644
--- a/pandas/_libs/tslibs/strptime.pxd
+++ b/pandas/_libs/tslibs/strptime.pxd
@@ -18,9 +18,12 @@ cdef class DatetimeParseState:
bint found_tz
bint found_naive
bint found_naive_str
+ bint found_aware_str
bint found_other
bint creso_ever_changed
NPY_DATETIMEUNIT creso
+ set out_tzoffset_vals
cdef tzinfo process_datetime(self, datetime dt, tzinfo tz, bint utc_convert)
cdef bint update_creso(self, NPY_DATETIMEUNIT item_reso) noexcept
+ cdef tzinfo check_for_mixed_inputs(self, tzinfo tz_out, bint utc)
diff --git a/pandas/_libs/tslibs/strptime.pyx b/pandas/_libs/tslibs/strptime.pyx
index 5c9f1c770ea7f..d6c3285d89c59 100644
--- a/pandas/_libs/tslibs/strptime.pyx
+++ b/pandas/_libs/tslibs/strptime.pyx
@@ -252,8 +252,11 @@ cdef class DatetimeParseState:
# found_naive_str refers to a string that was parsed to a timezone-naive
# datetime.
self.found_naive_str = False
+ self.found_aware_str = False
self.found_other = False
+ self.out_tzoffset_vals = set()
+
self.creso = creso
self.creso_ever_changed = False
@@ -292,6 +295,58 @@ cdef class DatetimeParseState:
"tz-naive values")
return tz
+ cdef tzinfo check_for_mixed_inputs(
+ self,
+ tzinfo tz_out,
+ bint utc,
+ ):
+ cdef:
+ bint is_same_offsets
+ float tz_offset
+
+ if self.found_aware_str and not utc:
+ # GH#17697, GH#57275
+ # 1) If all the offsets are equal, return one offset for
+ # the parsed dates to (maybe) pass to DatetimeIndex
+ # 2) If the offsets are different, then do not force the parsing
+ # and raise a ValueError: "cannot parse datetimes with
+ # mixed time zones unless `utc=True`" instead
+ is_same_offsets = len(self.out_tzoffset_vals) == 1
+ if not is_same_offsets or (self.found_naive or self.found_other):
+ # e.g. test_to_datetime_mixed_awareness_mixed_types (array_to_datetime)
+ raise ValueError(
+ "Mixed timezones detected. Pass utc=True in to_datetime "
+ "or tz='UTC' in DatetimeIndex to convert to a common timezone."
+ )
+ elif tz_out is not None:
+ # GH#55693
+ tz_offset = self.out_tzoffset_vals.pop()
+ tz_out2 = timezone(timedelta(seconds=tz_offset))
+ if not tz_compare(tz_out, tz_out2):
+ # e.g. (array_strptime)
+ # test_to_datetime_mixed_offsets_with_utc_false_removed
+ # e.g. test_to_datetime_mixed_tzs_mixed_types (array_to_datetime)
+ raise ValueError(
+ "Mixed timezones detected. Pass utc=True in to_datetime "
+ "or tz='UTC' in DatetimeIndex to convert to a common timezone."
+ )
+ # e.g. (array_strptime)
+ # test_guess_datetime_format_with_parseable_formats
+ # e.g. test_to_datetime_mixed_types_matching_tzs (array_to_datetime)
+ else:
+ # e.g. test_to_datetime_iso8601_with_timezone_valid (array_strptime)
+ tz_offset = self.out_tzoffset_vals.pop()
+ tz_out = timezone(timedelta(seconds=tz_offset))
+ elif not utc:
+ if tz_out and (self.found_other or self.found_naive_str):
+ # found_other indicates a tz-naive int, float, dt64, or date
+ # e.g. test_to_datetime_mixed_awareness_mixed_types (array_to_datetime)
+ raise ValueError(
+ "Mixed timezones detected. Pass utc=True in to_datetime "
+ "or tz='UTC' in DatetimeIndex to convert to a common timezone."
+ )
+ return tz_out
+
def array_strptime(
ndarray[object] values,
@@ -319,11 +374,8 @@ def array_strptime(
npy_datetimestruct dts
int64_t[::1] iresult
object val
- bint seen_datetime_offset = False
bint is_raise = errors=="raise"
bint is_coerce = errors=="coerce"
- bint is_same_offsets
- set out_tzoffset_vals = set()
tzinfo tz, tz_out = None
bint iso_format = format_is_iso(fmt)
NPY_DATETIMEUNIT out_bestunit, item_reso
@@ -418,15 +470,15 @@ def array_strptime(
) from err
if out_local == 1:
nsecs = out_tzoffset * 60
- out_tzoffset_vals.add(nsecs)
- seen_datetime_offset = True
+ state.out_tzoffset_vals.add(nsecs)
+ state.found_aware_str = True
tz = timezone(timedelta(minutes=out_tzoffset))
value = tz_localize_to_utc_single(
value, tz, ambiguous="raise", nonexistent=None, creso=creso
)
else:
tz = None
- out_tzoffset_vals.add("naive")
+ state.out_tzoffset_vals.add("naive")
state.found_naive_str = True
iresult[i] = value
continue
@@ -475,12 +527,12 @@ def array_strptime(
elif creso == NPY_DATETIMEUNIT.NPY_FR_ms:
nsecs = nsecs // 10**3
- out_tzoffset_vals.add(nsecs)
- seen_datetime_offset = True
+ state.out_tzoffset_vals.add(nsecs)
+ state.found_aware_str = True
else:
state.found_naive_str = True
tz = None
- out_tzoffset_vals.add("naive")
+ state.out_tzoffset_vals.add("naive")
except ValueError as ex:
ex.args = (
@@ -499,35 +551,7 @@ def array_strptime(
raise
return values, None
- if seen_datetime_offset and not utc:
- is_same_offsets = len(out_tzoffset_vals) == 1
- if not is_same_offsets or (state.found_naive or state.found_other):
- raise ValueError(
- "Mixed timezones detected. Pass utc=True in to_datetime "
- "or tz='UTC' in DatetimeIndex to convert to a common timezone."
- )
- elif tz_out is not None:
- # GH#55693
- tz_offset = out_tzoffset_vals.pop()
- tz_out2 = timezone(timedelta(seconds=tz_offset))
- if not tz_compare(tz_out, tz_out2):
- # e.g. test_to_datetime_mixed_offsets_with_utc_false_removed
- raise ValueError(
- "Mixed timezones detected. Pass utc=True in to_datetime "
- "or tz='UTC' in DatetimeIndex to convert to a common timezone."
- )
- # e.g. test_guess_datetime_format_with_parseable_formats
- else:
- # e.g. test_to_datetime_iso8601_with_timezone_valid
- tz_offset = out_tzoffset_vals.pop()
- tz_out = timezone(timedelta(seconds=tz_offset))
- elif not utc:
- if tz_out and (state.found_other or state.found_naive_str):
- # found_other indicates a tz-naive int, float, dt64, or date
- raise ValueError(
- "Mixed timezones detected. Pass utc=True in to_datetime "
- "or tz='UTC' in DatetimeIndex to convert to a common timezone."
- )
+ tz_out = state.check_for_mixed_inputs(tz_out, utc)
if infer_reso:
if state.creso_ever_changed:
diff --git a/pandas/tests/tools/test_to_datetime.py b/pandas/tests/tools/test_to_datetime.py
index 7992b48a4b0cc..b59dd194cac27 100644
--- a/pandas/tests/tools/test_to_datetime.py
+++ b/pandas/tests/tools/test_to_datetime.py
@@ -3545,19 +3545,27 @@ def test_to_datetime_mixed_awareness_mixed_types(aware_val, naive_val, naive_fir
# issued in _array_to_datetime_object
both_strs = isinstance(aware_val, str) and isinstance(naive_val, str)
has_numeric = isinstance(naive_val, (int, float))
+ both_datetime = isinstance(naive_val, datetime) and isinstance(aware_val, datetime)
+
+ mixed_msg = (
+ "Mixed timezones detected. Pass utc=True in to_datetime or tz='UTC' "
+ "in DatetimeIndex to convert to a common timezone"
+ )
first_non_null = next(x for x in vec if x != "")
# if first_non_null is a not a string, _guess_datetime_format_for_array
# doesn't guess a format so we don't go through array_strptime
if not isinstance(first_non_null, str):
# that case goes through array_strptime which has different behavior
- msg = "Cannot mix tz-aware with tz-naive values"
+ msg = mixed_msg
if naive_first and isinstance(aware_val, Timestamp):
if isinstance(naive_val, Timestamp):
msg = "Tz-aware datetime.datetime cannot be converted to datetime64"
with pytest.raises(ValueError, match=msg):
to_datetime(vec)
else:
+ if not naive_first and both_datetime:
+ msg = "Cannot mix tz-aware with tz-naive values"
with pytest.raises(ValueError, match=msg):
to_datetime(vec)
@@ -3586,7 +3594,7 @@ def test_to_datetime_mixed_awareness_mixed_types(aware_val, naive_val, naive_fir
to_datetime(vec, utc=True)
else:
- msg = "Mixed timezones detected. Pass utc=True in to_datetime"
+ msg = mixed_msg
with pytest.raises(ValueError, match=msg):
to_datetime(vec)
@@ -3594,13 +3602,13 @@ def test_to_datetime_mixed_awareness_mixed_types(aware_val, naive_val, naive_fir
to_datetime(vec, utc=True)
if both_strs:
- msg = "Mixed timezones detected. Pass utc=True in to_datetime"
+ msg = mixed_msg
with pytest.raises(ValueError, match=msg):
to_datetime(vec, format="mixed")
with pytest.raises(ValueError, match=msg):
DatetimeIndex(vec)
else:
- msg = "Cannot mix tz-aware with tz-naive values"
+ msg = mixed_msg
if naive_first and isinstance(aware_val, Timestamp):
if isinstance(naive_val, Timestamp):
msg = "Tz-aware datetime.datetime cannot be converted to datetime64"
@@ -3609,6 +3617,8 @@ def test_to_datetime_mixed_awareness_mixed_types(aware_val, naive_val, naive_fir
with pytest.raises(ValueError, match=msg):
DatetimeIndex(vec)
else:
+ if not naive_first and both_datetime:
+ msg = "Cannot mix tz-aware with tz-naive values"
with pytest.raises(ValueError, match=msg):
to_datetime(vec, format="mixed")
with pytest.raises(ValueError, match=msg):
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/58171 | 2024-04-06T15:44:29Z | 2024-04-08T16:44:54Z | 2024-04-08T16:44:54Z | 2024-04-08T17:02:17Z |
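The error paths consolidated into `check_for_mixed_inputs` above fire for inputs with mixed UTC offsets; passing `utc=True` converts everything to a common timezone instead (a sketch of the user-facing behavior):

```python
import pandas as pd

# Two strings with different fixed offsets. Without utc=True, recent pandas
# raises "Mixed timezones detected. Pass utc=True ..."; with utc=True the
# values are localized to UTC.
vals = ["2020-01-01 00:00:00+01:00", "2020-01-01 00:00:00+02:00"]
result = pd.to_datetime(vals, utc=True)
```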
No longer produce test.tar/test.zip after running test suite | diff --git a/pandas/tests/frame/methods/test_to_csv.py b/pandas/tests/frame/methods/test_to_csv.py
index f87fa4137d62d..66a35c6f486a4 100644
--- a/pandas/tests/frame/methods/test_to_csv.py
+++ b/pandas/tests/frame/methods/test_to_csv.py
@@ -1406,19 +1406,21 @@ def test_to_csv_categorical_and_interval(self):
expected = tm.convert_rows_list_to_csv_str(expected_rows)
assert result == expected
- def test_to_csv_warn_when_zip_tar_and_append_mode(self):
+ def test_to_csv_warn_when_zip_tar_and_append_mode(self, tmp_path):
# GH57875
df = DataFrame({"a": [1, 2, 3]})
msg = (
"zip and tar do not support mode 'a' properly. This combination will "
"result in multiple files with same name being added to the archive"
)
+ zip_path = tmp_path / "test.zip"
+ tar_path = tmp_path / "test.tar"
with tm.assert_produces_warning(
RuntimeWarning, match=msg, raise_on_extra_warnings=False
):
- df.to_csv("test.zip", mode="a")
+ df.to_csv(zip_path, mode="a")
with tm.assert_produces_warning(
RuntimeWarning, match=msg, raise_on_extra_warnings=False
):
- df.to_csv("test.tar", mode="a")
+ df.to_csv(tar_path, mode="a")
| - [x] closes #58162 (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/58168 | 2024-04-05T21:45:59Z | 2024-04-05T23:11:11Z | 2024-04-05T23:11:11Z | 2024-04-05T23:11:18Z |
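The fixture change above relies on pytest's `tmp_path` so the archives land in a per-test temporary directory; outside pytest, the same idea can be sketched with the stdlib:

```python
import tempfile
from pathlib import Path

import pandas as pd

# Write the archive into a throwaway directory instead of the repo root,
# so no test.zip/test.tar artifacts remain after the run.
with tempfile.TemporaryDirectory() as tmp:
    zip_path = Path(tmp) / "test.zip"
    pd.DataFrame({"a": [1, 2, 3]}).to_csv(zip_path, mode="w")
    created = zip_path.exists()
# The directory (and the archive) are gone once the context manager exits.
```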
DEPR: enforce Sparse deprecations | diff --git a/doc/source/whatsnew/v3.0.0.rst b/doc/source/whatsnew/v3.0.0.rst
index 983768e0f67da..190b515981098 100644
--- a/doc/source/whatsnew/v3.0.0.rst
+++ b/doc/source/whatsnew/v3.0.0.rst
@@ -208,6 +208,7 @@ Removal of prior version deprecations/changes
- All arguments except ``name`` in :meth:`Index.rename` are now keyword only (:issue:`56493`)
- All arguments except the first ``path``-like argument in IO writers are now keyword only (:issue:`54229`)
- Disallow calling :meth:`Series.replace` or :meth:`DataFrame.replace` without a ``value`` and with non-dict-like ``to_replace`` (:issue:`33302`)
+- Disallow constructing a :class:`arrays.SparseArray` with scalar data (:issue:`53039`)
- Disallow non-standard (``np.ndarray``, :class:`Index`, :class:`ExtensionArray`, or :class:`Series`) to :func:`isin`, :func:`unique`, :func:`factorize` (:issue:`52986`)
- Disallow passing a pandas type to :meth:`Index.view` (:issue:`55709`)
- Disallow units other than "s", "ms", "us", "ns" for datetime64 and timedelta64 dtypes in :func:`array` (:issue:`53817`)
@@ -216,6 +217,7 @@ Removal of prior version deprecations/changes
- Removed deprecated "method" and "limit" keywords from :meth:`Series.replace` and :meth:`DataFrame.replace` (:issue:`53492`)
- Removed extension test classes ``BaseNoReduceTests``, ``BaseNumericReduceTests``, ``BaseBooleanReduceTests`` (:issue:`54663`)
- Removed the "closed" and "normalize" keywords in :meth:`DatetimeIndex.__new__` (:issue:`52628`)
+- Require :meth:`SparseDtype.fill_value` to be a valid value for the :meth:`SparseDtype.subtype` (:issue:`53043`)
- Stopped performing dtype inference with in :meth:`Index.insert` with object-dtype index; this often affects the index/columns that result when setting new entries into an empty :class:`Series` or :class:`DataFrame` (:issue:`51363`)
- Removed the "closed" and "unit" keywords in :meth:`TimedeltaIndex.__new__` (:issue:`52628`, :issue:`55499`)
- All arguments in :meth:`Index.sort_values` are now keyword only (:issue:`56493`)
diff --git a/pandas/core/arrays/sparse/array.py b/pandas/core/arrays/sparse/array.py
index 2a96423017bb7..134702099371d 100644
--- a/pandas/core/arrays/sparse/array.py
+++ b/pandas/core/arrays/sparse/array.py
@@ -40,7 +40,6 @@
from pandas.core.dtypes.astype import astype_array
from pandas.core.dtypes.cast import (
- construct_1d_arraylike_from_scalar,
find_common_type,
maybe_box_datetimelike,
)
@@ -399,19 +398,10 @@ def __init__(
dtype = dtype.subtype
if is_scalar(data):
- warnings.warn(
- f"Constructing {type(self).__name__} with scalar data is deprecated "
- "and will raise in a future version. Pass a sequence instead.",
- FutureWarning,
- stacklevel=find_stack_level(),
+ raise TypeError(
+ f"Cannot construct {type(self).__name__} from scalar data. "
+ "Pass a sequence instead."
)
- if sparse_index is None:
- npoints = 1
- else:
- npoints = sparse_index.length
-
- data = construct_1d_arraylike_from_scalar(data, npoints, dtype=None)
- dtype = data.dtype
if dtype is not None:
dtype = pandas_dtype(dtype)
diff --git a/pandas/core/dtypes/dtypes.py b/pandas/core/dtypes/dtypes.py
index f94d32a3b8547..2d8e490f02d52 100644
--- a/pandas/core/dtypes/dtypes.py
+++ b/pandas/core/dtypes/dtypes.py
@@ -1762,24 +1762,18 @@ def _check_fill_value(self) -> None:
val = self._fill_value
if isna(val):
if not is_valid_na_for_dtype(val, self.subtype):
- warnings.warn(
- "Allowing arbitrary scalar fill_value in SparseDtype is "
- "deprecated. In a future version, the fill_value must be "
- "a valid value for the SparseDtype.subtype.",
- FutureWarning,
- stacklevel=find_stack_level(),
+ raise ValueError(
+ # GH#53043
+ "fill_value must be a valid value for the SparseDtype.subtype"
)
else:
dummy = np.empty(0, dtype=self.subtype)
dummy = ensure_wrapped_if_datetimelike(dummy)
if not can_hold_element(dummy, val):
- warnings.warn(
- "Allowing arbitrary scalar fill_value in SparseDtype is "
- "deprecated. In a future version, the fill_value must be "
- "a valid value for the SparseDtype.subtype.",
- FutureWarning,
- stacklevel=find_stack_level(),
+ raise ValueError(
+ # GH#53043
+ "fill_value must be a valid value for the SparseDtype.subtype"
)
@property
diff --git a/pandas/tests/arrays/sparse/test_array.py b/pandas/tests/arrays/sparse/test_array.py
index 883d6ea3959ff..c35e8204f3437 100644
--- a/pandas/tests/arrays/sparse/test_array.py
+++ b/pandas/tests/arrays/sparse/test_array.py
@@ -52,10 +52,11 @@ def test_set_fill_value(self):
arr.fill_value = 2
assert arr.fill_value == 2
- msg = "Allowing arbitrary scalar fill_value in SparseDtype is deprecated"
- with tm.assert_produces_warning(FutureWarning, match=msg):
+ msg = "fill_value must be a valid value for the SparseDtype.subtype"
+ with pytest.raises(ValueError, match=msg):
+ # GH#53043
arr.fill_value = 3.1
- assert arr.fill_value == 3.1
+ assert arr.fill_value == 2
arr.fill_value = np.nan
assert np.isnan(arr.fill_value)
@@ -64,8 +65,9 @@ def test_set_fill_value(self):
arr.fill_value = True
assert arr.fill_value is True
- with tm.assert_produces_warning(FutureWarning, match=msg):
+ with pytest.raises(ValueError, match=msg):
arr.fill_value = 0
+ assert arr.fill_value is True
arr.fill_value = np.nan
assert np.isnan(arr.fill_value)
diff --git a/pandas/tests/arrays/sparse/test_constructors.py b/pandas/tests/arrays/sparse/test_constructors.py
index 2831c8abdaf13..012ff1da0d431 100644
--- a/pandas/tests/arrays/sparse/test_constructors.py
+++ b/pandas/tests/arrays/sparse/test_constructors.py
@@ -144,20 +144,12 @@ def test_constructor_spindex_dtype(self):
@pytest.mark.parametrize("sparse_index", [None, IntIndex(1, [0])])
def test_constructor_spindex_dtype_scalar(self, sparse_index):
# scalar input
- msg = "Constructing SparseArray with scalar data is deprecated"
- with tm.assert_produces_warning(FutureWarning, match=msg):
- arr = SparseArray(data=1, sparse_index=sparse_index, dtype=None)
- exp = SparseArray([1], dtype=None)
- tm.assert_sp_array_equal(arr, exp)
- assert arr.dtype == SparseDtype(np.int64)
- assert arr.fill_value == 0
+ msg = "Cannot construct SparseArray from scalar data. Pass a sequence instead"
+ with pytest.raises(TypeError, match=msg):
+ SparseArray(data=1, sparse_index=sparse_index, dtype=None)
- with tm.assert_produces_warning(FutureWarning, match=msg):
- arr = SparseArray(data=1, sparse_index=IntIndex(1, [0]), dtype=None)
- exp = SparseArray([1], dtype=None)
- tm.assert_sp_array_equal(arr, exp)
- assert arr.dtype == SparseDtype(np.int64)
- assert arr.fill_value == 0
+ with pytest.raises(TypeError, match=msg):
+ SparseArray(data=1, sparse_index=IntIndex(1, [0]), dtype=None)
def test_constructor_spindex_dtype_scalar_broadcasts(self):
arr = SparseArray(
diff --git a/pandas/tests/arrays/sparse/test_dtype.py b/pandas/tests/arrays/sparse/test_dtype.py
index 6f0d41333f2fd..1819744d9a9ae 100644
--- a/pandas/tests/arrays/sparse/test_dtype.py
+++ b/pandas/tests/arrays/sparse/test_dtype.py
@@ -84,7 +84,6 @@ def test_nans_not_equal():
(SparseDtype("float64"), SparseDtype("float32")),
(SparseDtype("float64"), SparseDtype("float64", 0)),
(SparseDtype("float64"), SparseDtype("datetime64[ns]", np.nan)),
- (SparseDtype(int, pd.NaT), SparseDtype(float, pd.NaT)),
(SparseDtype("float64"), np.dtype("float64")),
]
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/58167 | 2024-04-05T20:26:39Z | 2024-04-05T21:40:31Z | 2024-04-05T21:40:30Z | 2024-04-06T00:13:37Z |
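The SparseDtype change in the record above turns a deprecation warning into a hard `ValueError`, and the updated tests check that a failed assignment leaves the previous `fill_value` untouched. A minimal stdlib-only sketch of that validate-then-raise setter pattern (the `SparseLike` class and `_can_hold` helper are illustrative stand-ins, not the pandas implementation):

```python
# Sketch of the enforcement pattern from GH#53043: validate on assignment,
# raise instead of warning, and leave prior state intact on failure.
# `SparseLike` / `_can_hold` are hypothetical names, not pandas API.
import math


class SparseLike:
    def __init__(self, subtype: type, fill_value):
        self.subtype = subtype
        self._fill_value = None
        self.fill_value = fill_value  # route through the validating setter

    def _can_hold(self, value) -> bool:
        # For an integer subtype, only values that round-trip through int()
        # are acceptable (so 3.1 is rejected, but 2.0 and True are fine).
        if self.subtype is int:
            if isinstance(value, int):
                return True
            return (
                isinstance(value, float)
                and not math.isnan(value)
                and value == int(value)
            )
        return True

    @property
    def fill_value(self):
        return self._fill_value

    @fill_value.setter
    def fill_value(self, value):
        if not self._can_hold(value):
            # Raising here (rather than warning and assigning anyway) is what
            # lets the tests above assert the old fill_value survives.
            raise ValueError(
                "fill_value must be a valid value for the SparseDtype.subtype"
            )
        self._fill_value = value


arr = SparseLike(int, 0)
arr.fill_value = 2
try:
    arr.fill_value = 3.1  # invalid for an int subtype
except ValueError:
    pass
assert arr.fill_value == 2  # failed assignment did not clobber the old value
```

This mirrors the updated test expectations (`arr.fill_value = 3.1` now raises, and `arr.fill_value == 2` still holds afterwards); the real pandas check additionally handles NA values and datetime-like subtypes.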
TST: Use temp_file fixture over ensure_clean | diff --git a/pandas/tests/io/test_stata.py b/pandas/tests/io/test_stata.py
index 0cc8018ea6213..1bd71768d226e 100644
--- a/pandas/tests/io/test_stata.py
+++ b/pandas/tests/io/test_stata.py
@@ -63,16 +63,16 @@ def read_csv(self, file):
return read_csv(file, parse_dates=True)
@pytest.mark.parametrize("version", [114, 117, 118, 119, None])
- def test_read_empty_dta(self, version):
+ def test_read_empty_dta(self, version, temp_file):
empty_ds = DataFrame(columns=["unit"])
# GH 7369, make sure can read a 0-obs dta file
- with tm.ensure_clean() as path:
- empty_ds.to_stata(path, write_index=False, version=version)
- empty_ds2 = read_stata(path)
- tm.assert_frame_equal(empty_ds, empty_ds2)
+ path = temp_file
+ empty_ds.to_stata(path, write_index=False, version=version)
+ empty_ds2 = read_stata(path)
+ tm.assert_frame_equal(empty_ds, empty_ds2)
@pytest.mark.parametrize("version", [114, 117, 118, 119, None])
- def test_read_empty_dta_with_dtypes(self, version):
+ def test_read_empty_dta_with_dtypes(self, version, temp_file):
# GH 46240
# Fixing above bug revealed that types are not correctly preserved when
# writing empty DataFrames
@@ -91,9 +91,9 @@ def test_read_empty_dta_with_dtypes(self, version):
}
)
# GH 7369, make sure can read a 0-obs dta file
- with tm.ensure_clean() as path:
- empty_df_typed.to_stata(path, write_index=False, version=version)
- empty_reread = read_stata(path)
+ path = temp_file
+ empty_df_typed.to_stata(path, write_index=False, version=version)
+ empty_reread = read_stata(path)
expected = empty_df_typed
# No uint# support. Downcast since values in range for int#
@@ -108,12 +108,12 @@ def test_read_empty_dta_with_dtypes(self, version):
tm.assert_series_equal(expected.dtypes, empty_reread.dtypes)
@pytest.mark.parametrize("version", [114, 117, 118, 119, None])
- def test_read_index_col_none(self, version):
+ def test_read_index_col_none(self, version, temp_file):
df = DataFrame({"a": range(5), "b": ["b1", "b2", "b3", "b4", "b5"]})
# GH 7369, make sure can read a 0-obs dta file
- with tm.ensure_clean() as path:
- df.to_stata(path, write_index=False, version=version)
- read_df = read_stata(path)
+ path = temp_file
+ df.to_stata(path, write_index=False, version=version)
+ read_df = read_stata(path)
assert isinstance(read_df.index, pd.RangeIndex)
expected = df
@@ -324,39 +324,39 @@ def test_read_dta18(self, datapath):
assert rdr.data_label == "This is a Ünicode data label"
- def test_read_write_dta5(self):
+ def test_read_write_dta5(self, temp_file):
original = DataFrame(
[(np.nan, np.nan, np.nan, np.nan, np.nan)],
columns=["float_miss", "double_miss", "byte_miss", "int_miss", "long_miss"],
)
original.index.name = "index"
- with tm.ensure_clean() as path:
- original.to_stata(path, convert_dates=None)
- written_and_read_again = self.read_dta(path)
+ path = temp_file
+ original.to_stata(path, convert_dates=None)
+ written_and_read_again = self.read_dta(path)
expected = original
expected.index = expected.index.astype(np.int32)
tm.assert_frame_equal(written_and_read_again.set_index("index"), expected)
- def test_write_dta6(self, datapath):
+ def test_write_dta6(self, datapath, temp_file):
original = self.read_csv(datapath("io", "data", "stata", "stata3.csv"))
original.index.name = "index"
original.index = original.index.astype(np.int32)
original["year"] = original["year"].astype(np.int32)
original["quarter"] = original["quarter"].astype(np.int32)
- with tm.ensure_clean() as path:
- original.to_stata(path, convert_dates=None)
- written_and_read_again = self.read_dta(path)
- tm.assert_frame_equal(
- written_and_read_again.set_index("index"),
- original,
- check_index_type=False,
- )
+ path = temp_file
+ original.to_stata(path, convert_dates=None)
+ written_and_read_again = self.read_dta(path)
+ tm.assert_frame_equal(
+ written_and_read_again.set_index("index"),
+ original,
+ check_index_type=False,
+ )
@pytest.mark.parametrize("version", [114, 117, 118, 119, None])
- def test_read_write_dta10(self, version):
+ def test_read_write_dta10(self, version, temp_file):
original = DataFrame(
data=[["string", "object", 1, 1.1, np.datetime64("2003-12-25")]],
columns=["string", "object", "integer", "floating", "datetime"],
@@ -366,9 +366,9 @@ def test_read_write_dta10(self, version):
original.index = original.index.astype(np.int32)
original["integer"] = original["integer"].astype(np.int32)
- with tm.ensure_clean() as path:
- original.to_stata(path, convert_dates={"datetime": "tc"}, version=version)
- written_and_read_again = self.read_dta(path)
+ path = temp_file
+ original.to_stata(path, convert_dates={"datetime": "tc"}, version=version)
+ written_and_read_again = self.read_dta(path)
expected = original[:]
# "tc" convert_dates means we store in ms
@@ -379,14 +379,14 @@ def test_read_write_dta10(self, version):
expected,
)
- def test_stata_doc_examples(self):
- with tm.ensure_clean() as path:
- df = DataFrame(
- np.random.default_rng(2).standard_normal((10, 2)), columns=list("AB")
- )
- df.to_stata(path)
+ def test_stata_doc_examples(self, temp_file):
+ path = temp_file
+ df = DataFrame(
+ np.random.default_rng(2).standard_normal((10, 2)), columns=list("AB")
+ )
+ df.to_stata(path)
- def test_write_preserves_original(self):
+ def test_write_preserves_original(self, temp_file):
# 9795
df = DataFrame(
@@ -394,12 +394,12 @@ def test_write_preserves_original(self):
)
df.loc[2, "a":"c"] = np.nan
df_copy = df.copy()
- with tm.ensure_clean() as path:
- df.to_stata(path, write_index=False)
+ path = temp_file
+ df.to_stata(path, write_index=False)
tm.assert_frame_equal(df, df_copy)
@pytest.mark.parametrize("version", [114, 117, 118, 119, None])
- def test_encoding(self, version, datapath):
+ def test_encoding(self, version, datapath, temp_file):
# GH 4626, proper encoding handling
raw = read_stata(datapath("io", "data", "stata", "stata1_encoding.dta"))
encoded = read_stata(datapath("io", "data", "stata", "stata1_encoding.dta"))
@@ -409,12 +409,12 @@ def test_encoding(self, version, datapath):
assert result == expected
assert isinstance(result, str)
- with tm.ensure_clean() as path:
- encoded.to_stata(path, write_index=False, version=version)
- reread_encoded = read_stata(path)
- tm.assert_frame_equal(encoded, reread_encoded)
+ path = temp_file
+ encoded.to_stata(path, write_index=False, version=version)
+ reread_encoded = read_stata(path)
+ tm.assert_frame_equal(encoded, reread_encoded)
- def test_read_write_dta11(self):
+ def test_read_write_dta11(self, temp_file):
original = DataFrame(
[(1, 2, 3, 4)],
columns=[
@@ -431,18 +431,18 @@ def test_read_write_dta11(self):
formatted.index.name = "index"
formatted = formatted.astype(np.int32)
- with tm.ensure_clean() as path:
- with tm.assert_produces_warning(InvalidColumnName):
- original.to_stata(path, convert_dates=None)
+ path = temp_file
+ with tm.assert_produces_warning(InvalidColumnName):
+ original.to_stata(path, convert_dates=None)
- written_and_read_again = self.read_dta(path)
+ written_and_read_again = self.read_dta(path)
expected = formatted
expected.index = expected.index.astype(np.int32)
tm.assert_frame_equal(written_and_read_again.set_index("index"), expected)
@pytest.mark.parametrize("version", [114, 117, 118, 119, None])
- def test_read_write_dta12(self, version):
+ def test_read_write_dta12(self, version, temp_file):
original = DataFrame(
[(1, 2, 3, 4, 5, 6)],
columns=[
@@ -468,18 +468,18 @@ def test_read_write_dta12(self, version):
formatted.index.name = "index"
formatted = formatted.astype(np.int32)
- with tm.ensure_clean() as path:
- with tm.assert_produces_warning(InvalidColumnName):
- original.to_stata(path, convert_dates=None, version=version)
- # should get a warning for that format.
+ path = temp_file
+ with tm.assert_produces_warning(InvalidColumnName):
+ original.to_stata(path, convert_dates=None, version=version)
+ # should get a warning for that format.
- written_and_read_again = self.read_dta(path)
+ written_and_read_again = self.read_dta(path)
expected = formatted
expected.index = expected.index.astype(np.int32)
tm.assert_frame_equal(written_and_read_again.set_index("index"), expected)
- def test_read_write_dta13(self):
+ def test_read_write_dta13(self, temp_file):
s1 = Series(2**9, dtype=np.int16)
s2 = Series(2**17, dtype=np.int32)
s3 = Series(2**33, dtype=np.int64)
@@ -489,9 +489,9 @@ def test_read_write_dta13(self):
formatted = original
formatted["int64"] = formatted["int64"].astype(np.float64)
- with tm.ensure_clean() as path:
- original.to_stata(path)
- written_and_read_again = self.read_dta(path)
+ path = temp_file
+ original.to_stata(path)
+ written_and_read_again = self.read_dta(path)
expected = formatted
expected.index = expected.index.astype(np.int32)
@@ -501,16 +501,18 @@ def test_read_write_dta13(self):
@pytest.mark.parametrize(
"file", ["stata5_113", "stata5_114", "stata5_115", "stata5_117"]
)
- def test_read_write_reread_dta14(self, file, parsed_114, version, datapath):
+ def test_read_write_reread_dta14(
+ self, file, parsed_114, version, datapath, temp_file
+ ):
file = datapath("io", "data", "stata", f"{file}.dta")
parsed = self.read_dta(file)
parsed.index.name = "index"
tm.assert_frame_equal(parsed_114, parsed)
- with tm.ensure_clean() as path:
- parsed_114.to_stata(path, convert_dates={"date_td": "td"}, version=version)
- written_and_read_again = self.read_dta(path)
+ path = temp_file
+ parsed_114.to_stata(path, convert_dates={"date_td": "td"}, version=version)
+ written_and_read_again = self.read_dta(path)
expected = parsed_114.copy()
tm.assert_frame_equal(written_and_read_again.set_index("index"), expected)
@@ -536,38 +538,38 @@ def test_read_write_reread_dta15(self, file, datapath):
tm.assert_frame_equal(expected, parsed)
@pytest.mark.parametrize("version", [114, 117, 118, 119, None])
- def test_timestamp_and_label(self, version):
+ def test_timestamp_and_label(self, version, temp_file):
original = DataFrame([(1,)], columns=["variable"])
time_stamp = datetime(2000, 2, 29, 14, 21)
data_label = "This is a data file."
- with tm.ensure_clean() as path:
- original.to_stata(
- path, time_stamp=time_stamp, data_label=data_label, version=version
- )
+ path = temp_file
+ original.to_stata(
+ path, time_stamp=time_stamp, data_label=data_label, version=version
+ )
- with StataReader(path) as reader:
- assert reader.time_stamp == "29 Feb 2000 14:21"
- assert reader.data_label == data_label
+ with StataReader(path) as reader:
+ assert reader.time_stamp == "29 Feb 2000 14:21"
+ assert reader.data_label == data_label
@pytest.mark.parametrize("version", [114, 117, 118, 119, None])
- def test_invalid_timestamp(self, version):
+ def test_invalid_timestamp(self, version, temp_file):
original = DataFrame([(1,)], columns=["variable"])
time_stamp = "01 Jan 2000, 00:00:00"
- with tm.ensure_clean() as path:
- msg = "time_stamp should be datetime type"
- with pytest.raises(ValueError, match=msg):
- original.to_stata(path, time_stamp=time_stamp, version=version)
- assert not os.path.isfile(path)
+ path = temp_file
+ msg = "time_stamp should be datetime type"
+ with pytest.raises(ValueError, match=msg):
+ original.to_stata(path, time_stamp=time_stamp, version=version)
+ assert not os.path.isfile(path)
- def test_numeric_column_names(self):
+ def test_numeric_column_names(self, temp_file):
original = DataFrame(np.reshape(np.arange(25.0), (5, 5)))
original.index.name = "index"
- with tm.ensure_clean() as path:
- # should get a warning for that format.
- with tm.assert_produces_warning(InvalidColumnName):
- original.to_stata(path)
+ path = temp_file
+ # should get a warning for that format.
+ with tm.assert_produces_warning(InvalidColumnName):
+ original.to_stata(path)
- written_and_read_again = self.read_dta(path)
+ written_and_read_again = self.read_dta(path)
written_and_read_again = written_and_read_again.set_index("index")
columns = list(written_and_read_again.columns)
@@ -578,7 +580,7 @@ def test_numeric_column_names(self):
tm.assert_frame_equal(expected, written_and_read_again)
@pytest.mark.parametrize("version", [114, 117, 118, 119, None])
- def test_nan_to_missing_value(self, version):
+ def test_nan_to_missing_value(self, version, temp_file):
s1 = Series(np.arange(4.0), dtype=np.float32)
s2 = Series(np.arange(4.0), dtype=np.float64)
s1[::2] = np.nan
@@ -586,48 +588,48 @@ def test_nan_to_missing_value(self, version):
original = DataFrame({"s1": s1, "s2": s2})
original.index.name = "index"
- with tm.ensure_clean() as path:
- original.to_stata(path, version=version)
- written_and_read_again = self.read_dta(path)
+ path = temp_file
+ original.to_stata(path, version=version)
+ written_and_read_again = self.read_dta(path)
written_and_read_again = written_and_read_again.set_index("index")
expected = original
tm.assert_frame_equal(written_and_read_again, expected)
- def test_no_index(self):
+ def test_no_index(self, temp_file):
columns = ["x", "y"]
original = DataFrame(np.reshape(np.arange(10.0), (5, 2)), columns=columns)
original.index.name = "index_not_written"
- with tm.ensure_clean() as path:
- original.to_stata(path, write_index=False)
- written_and_read_again = self.read_dta(path)
- with pytest.raises(KeyError, match=original.index.name):
- written_and_read_again["index_not_written"]
+ path = temp_file
+ original.to_stata(path, write_index=False)
+ written_and_read_again = self.read_dta(path)
+ with pytest.raises(KeyError, match=original.index.name):
+ written_and_read_again["index_not_written"]
- def test_string_no_dates(self):
+ def test_string_no_dates(self, temp_file):
s1 = Series(["a", "A longer string"])
s2 = Series([1.0, 2.0], dtype=np.float64)
original = DataFrame({"s1": s1, "s2": s2})
original.index.name = "index"
- with tm.ensure_clean() as path:
- original.to_stata(path)
- written_and_read_again = self.read_dta(path)
+ path = temp_file
+ original.to_stata(path)
+ written_and_read_again = self.read_dta(path)
expected = original
tm.assert_frame_equal(written_and_read_again.set_index("index"), expected)
- def test_large_value_conversion(self):
+ def test_large_value_conversion(self, temp_file):
s0 = Series([1, 99], dtype=np.int8)
s1 = Series([1, 127], dtype=np.int8)
s2 = Series([1, 2**15 - 1], dtype=np.int16)
s3 = Series([1, 2**63 - 1], dtype=np.int64)
original = DataFrame({"s0": s0, "s1": s1, "s2": s2, "s3": s3})
original.index.name = "index"
- with tm.ensure_clean() as path:
- with tm.assert_produces_warning(PossiblePrecisionLoss):
- original.to_stata(path)
+ path = temp_file
+ with tm.assert_produces_warning(PossiblePrecisionLoss):
+ original.to_stata(path)
- written_and_read_again = self.read_dta(path)
+ written_and_read_again = self.read_dta(path)
modified = original
modified["s1"] = Series(modified["s1"], dtype=np.int16)
@@ -635,14 +637,14 @@ def test_large_value_conversion(self):
modified["s3"] = Series(modified["s3"], dtype=np.float64)
tm.assert_frame_equal(written_and_read_again.set_index("index"), modified)
- def test_dates_invalid_column(self):
+ def test_dates_invalid_column(self, temp_file):
original = DataFrame([datetime(2006, 11, 19, 23, 13, 20)])
original.index.name = "index"
- with tm.ensure_clean() as path:
- with tm.assert_produces_warning(InvalidColumnName):
- original.to_stata(path, convert_dates={0: "tc"})
+ path = temp_file
+ with tm.assert_produces_warning(InvalidColumnName):
+ original.to_stata(path, convert_dates={0: "tc"})
- written_and_read_again = self.read_dta(path)
+ written_and_read_again = self.read_dta(path)
expected = original.copy()
expected.columns = ["_0"]
@@ -673,7 +675,7 @@ def test_value_labels_old_format(self, datapath):
with StataReader(dpath) as reader:
assert reader.value_labels() == {}
- def test_date_export_formats(self):
+ def test_date_export_formats(self, temp_file):
columns = ["tc", "td", "tw", "tm", "tq", "th", "ty"]
conversions = {c: c for c in columns}
data = [datetime(2006, 11, 20, 23, 13, 20)] * len(columns)
@@ -697,13 +699,13 @@ def test_date_export_formats(self):
)
expected["tc"] = expected["tc"].astype("M8[ms]")
- with tm.ensure_clean() as path:
- original.to_stata(path, convert_dates=conversions)
- written_and_read_again = self.read_dta(path)
+ path = temp_file
+ original.to_stata(path, convert_dates=conversions)
+ written_and_read_again = self.read_dta(path)
tm.assert_frame_equal(written_and_read_again.set_index("index"), expected)
- def test_write_missing_strings(self):
+ def test_write_missing_strings(self, temp_file):
original = DataFrame([["1"], [None]], columns=["foo"])
expected = DataFrame(
@@ -712,15 +714,15 @@ def test_write_missing_strings(self):
columns=["foo"],
)
- with tm.ensure_clean() as path:
- original.to_stata(path)
- written_and_read_again = self.read_dta(path)
+ path = temp_file
+ original.to_stata(path)
+ written_and_read_again = self.read_dta(path)
tm.assert_frame_equal(written_and_read_again.set_index("index"), expected)
@pytest.mark.parametrize("version", [114, 117, 118, 119, None])
@pytest.mark.parametrize("byteorder", [">", "<"])
- def test_bool_uint(self, byteorder, version):
+ def test_bool_uint(self, byteorder, version, temp_file):
s0 = Series([0, 1, True], dtype=np.bool_)
s1 = Series([0, 1, 100], dtype=np.uint8)
s2 = Series([0, 1, 255], dtype=np.uint8)
@@ -734,9 +736,9 @@ def test_bool_uint(self, byteorder, version):
)
original.index.name = "index"
- with tm.ensure_clean() as path:
- original.to_stata(path, byteorder=byteorder, version=version)
- written_and_read_again = self.read_dta(path)
+ path = temp_file
+ original.to_stata(path, byteorder=byteorder, version=version)
+ written_and_read_again = self.read_dta(path)
written_and_read_again = written_and_read_again.set_index("index")
@@ -768,7 +770,7 @@ def test_variable_labels(self, datapath):
assert k in keys
assert v in labels
- def test_minimal_size_col(self):
+ def test_minimal_size_col(self, temp_file):
str_lens = (1, 100, 244)
s = {}
for str_len in str_lens:
@@ -776,16 +778,16 @@ def test_minimal_size_col(self):
["a" * str_len, "b" * str_len, "c" * str_len]
)
original = DataFrame(s)
- with tm.ensure_clean() as path:
- original.to_stata(path, write_index=False)
+ path = temp_file
+ original.to_stata(path, write_index=False)
- with StataReader(path) as sr:
- sr._ensure_open() # The `_*list` variables are initialized here
- for variable, fmt, typ in zip(sr._varlist, sr._fmtlist, sr._typlist):
- assert int(variable[1:]) == int(fmt[1:-1])
- assert int(variable[1:]) == typ
+ with StataReader(path) as sr:
+ sr._ensure_open() # The `_*list` variables are initialized here
+ for variable, fmt, typ in zip(sr._varlist, sr._fmtlist, sr._typlist):
+ assert int(variable[1:]) == int(fmt[1:-1])
+ assert int(variable[1:]) == typ
- def test_excessively_long_string(self):
+ def test_excessively_long_string(self, temp_file):
str_lens = (1, 244, 500)
s = {}
for str_len in str_lens:
@@ -800,16 +802,16 @@ def test_excessively_long_string(self):
r"the newer \(Stata 13 and later\) format\."
)
with pytest.raises(ValueError, match=msg):
- with tm.ensure_clean() as path:
- original.to_stata(path)
+ path = temp_file
+ original.to_stata(path)
- def test_missing_value_generator(self):
+ def test_missing_value_generator(self, temp_file):
types = ("b", "h", "l")
df = DataFrame([[0.0]], columns=["float_"])
- with tm.ensure_clean() as path:
- df.to_stata(path)
- with StataReader(path) as rdr:
- valid_range = rdr.VALID_RANGE
+ path = temp_file
+ df.to_stata(path)
+ with StataReader(path) as rdr:
+ valid_range = rdr.VALID_RANGE
expected_values = ["." + chr(97 + i) for i in range(26)]
expected_values.insert(0, ".")
for t in types:
@@ -850,7 +852,7 @@ def test_missing_value_conversion(self, file, datapath):
)
tm.assert_frame_equal(parsed, expected)
- def test_big_dates(self, datapath):
+ def test_big_dates(self, datapath, temp_file):
yr = [1960, 2000, 9999, 100, 2262, 1677]
mo = [1, 1, 12, 1, 4, 9]
dd = [1, 1, 31, 1, 22, 23]
@@ -906,10 +908,10 @@ def test_big_dates(self, datapath):
date_conversion = {c: c[-2:] for c in columns}
# {c : c[-2:] for c in columns}
- with tm.ensure_clean() as path:
- expected.index.name = "index"
- expected.to_stata(path, convert_dates=date_conversion)
- written_and_read_again = self.read_dta(path)
+ path = temp_file
+ expected.index.name = "index"
+ expected.to_stata(path, convert_dates=date_conversion)
+ written_and_read_again = self.read_dta(path)
tm.assert_frame_equal(
written_and_read_again.set_index("index"),
@@ -994,7 +996,7 @@ def test_drop_column(self, datapath):
@pytest.mark.filterwarnings(
"ignore:\\nStata value:pandas.io.stata.ValueLabelTypeMismatch"
)
- def test_categorical_writing(self, version):
+ def test_categorical_writing(self, version, temp_file):
original = DataFrame.from_records(
[
["one", "ten", "one", "one", "one", 1],
@@ -1017,9 +1019,9 @@ def test_categorical_writing(self, version):
"unlabeled",
],
)
- with tm.ensure_clean() as path:
- original.astype("category").to_stata(path, version=version)
- written_and_read_again = self.read_dta(path)
+ path = temp_file
+ original.astype("category").to_stata(path, version=version)
+ written_and_read_again = self.read_dta(path)
res = written_and_read_again.set_index("index")
@@ -1042,7 +1044,7 @@ def test_categorical_writing(self, version):
tm.assert_frame_equal(res, expected)
- def test_categorical_warnings_and_errors(self):
+ def test_categorical_warnings_and_errors(self, temp_file):
# Warning for non-string labels
# Error for labels too long
original = DataFrame.from_records(
@@ -1051,13 +1053,13 @@ def test_categorical_warnings_and_errors(self):
)
original = original.astype("category")
- with tm.ensure_clean() as path:
- msg = (
- "Stata value labels for a single variable must have "
- r"a combined length less than 32,000 characters\."
- )
- with pytest.raises(ValueError, match=msg):
- original.to_stata(path)
+ path = temp_file
+ msg = (
+ "Stata value labels for a single variable must have "
+ r"a combined length less than 32,000 characters\."
+ )
+ with pytest.raises(ValueError, match=msg):
+ original.to_stata(path)
original = DataFrame.from_records(
[["a"], ["b"], ["c"], ["d"], [1]], columns=["Too_long"]
@@ -1068,7 +1070,7 @@ def test_categorical_warnings_and_errors(self):
# should get a warning for mixed content
@pytest.mark.parametrize("version", [114, 117, 118, 119, None])
- def test_categorical_with_stata_missing_values(self, version):
+ def test_categorical_with_stata_missing_values(self, version, temp_file):
values = [["a" + str(i)] for i in range(120)]
values.append([np.nan])
original = DataFrame.from_records(values, columns=["many_labels"])
@@ -1076,9 +1078,9 @@ def test_categorical_with_stata_missing_values(self, version):
[original[col].astype("category") for col in original], axis=1
)
original.index.name = "index"
- with tm.ensure_clean() as path:
- original.to_stata(path, version=version)
- written_and_read_again = self.read_dta(path)
+ path = temp_file
+ original.to_stata(path, version=version)
+ written_and_read_again = self.read_dta(path)
res = written_and_read_again.set_index("index")
@@ -1313,54 +1315,50 @@ def test_read_chunks_columns(self, datapath):
pos += chunksize
@pytest.mark.parametrize("version", [114, 117, 118, 119, None])
- def test_write_variable_labels(self, version, mixed_frame):
+ def test_write_variable_labels(self, version, mixed_frame, temp_file):
# GH 13631, add support for writing variable labels
mixed_frame.index.name = "index"
variable_labels = {"a": "City Rank", "b": "City Exponent", "c": "City"}
- with tm.ensure_clean() as path:
- mixed_frame.to_stata(path, variable_labels=variable_labels, version=version)
- with StataReader(path) as sr:
- read_labels = sr.variable_labels()
- expected_labels = {
- "index": "",
- "a": "City Rank",
- "b": "City Exponent",
- "c": "City",
- }
- assert read_labels == expected_labels
+ path = temp_file
+ mixed_frame.to_stata(path, variable_labels=variable_labels, version=version)
+ with StataReader(path) as sr:
+ read_labels = sr.variable_labels()
+ expected_labels = {
+ "index": "",
+ "a": "City Rank",
+ "b": "City Exponent",
+ "c": "City",
+ }
+ assert read_labels == expected_labels
variable_labels["index"] = "The Index"
- with tm.ensure_clean() as path:
- mixed_frame.to_stata(path, variable_labels=variable_labels, version=version)
- with StataReader(path) as sr:
- read_labels = sr.variable_labels()
- assert read_labels == variable_labels
+ path = temp_file
+ mixed_frame.to_stata(path, variable_labels=variable_labels, version=version)
+ with StataReader(path) as sr:
+ read_labels = sr.variable_labels()
+ assert read_labels == variable_labels
@pytest.mark.parametrize("version", [114, 117, 118, 119, None])
- def test_invalid_variable_labels(self, version, mixed_frame):
+ def test_invalid_variable_labels(self, version, mixed_frame, temp_file):
mixed_frame.index.name = "index"
variable_labels = {"a": "very long" * 10, "b": "City Exponent", "c": "City"}
- with tm.ensure_clean() as path:
- msg = "Variable labels must be 80 characters or fewer"
- with pytest.raises(ValueError, match=msg):
- mixed_frame.to_stata(
- path, variable_labels=variable_labels, version=version
- )
+ path = temp_file
+ msg = "Variable labels must be 80 characters or fewer"
+ with pytest.raises(ValueError, match=msg):
+ mixed_frame.to_stata(path, variable_labels=variable_labels, version=version)
@pytest.mark.parametrize("version", [114, 117])
- def test_invalid_variable_label_encoding(self, version, mixed_frame):
+ def test_invalid_variable_label_encoding(self, version, mixed_frame, temp_file):
mixed_frame.index.name = "index"
variable_labels = {"a": "very long" * 10, "b": "City Exponent", "c": "City"}
variable_labels["a"] = "invalid character Œ"
- with tm.ensure_clean() as path:
- with pytest.raises(
- ValueError, match="Variable labels must contain only characters"
- ):
- mixed_frame.to_stata(
- path, variable_labels=variable_labels, version=version
- )
+ path = temp_file
+ with pytest.raises(
+ ValueError, match="Variable labels must contain only characters"
+ ):
+ mixed_frame.to_stata(path, variable_labels=variable_labels, version=version)
- def test_write_variable_label_errors(self, mixed_frame):
+ def test_write_variable_label_errors(self, mixed_frame, temp_file):
values = ["\u03a1", "\u0391", "\u039d", "\u0394", "\u0391", "\u03a3"]
variable_labels_utf8 = {
@@ -1374,8 +1372,8 @@ def test_write_variable_label_errors(self, mixed_frame):
"encoded in Latin-1"
)
with pytest.raises(ValueError, match=msg):
- with tm.ensure_clean() as path:
- mixed_frame.to_stata(path, variable_labels=variable_labels_utf8)
+ path = temp_file
+ mixed_frame.to_stata(path, variable_labels=variable_labels_utf8)
variable_labels_long = {
"a": "City Rank",
@@ -1387,10 +1385,10 @@ def test_write_variable_label_errors(self, mixed_frame):
msg = "Variable labels must be 80 characters or fewer"
with pytest.raises(ValueError, match=msg):
- with tm.ensure_clean() as path:
- mixed_frame.to_stata(path, variable_labels=variable_labels_long)
+ path = temp_file
+ mixed_frame.to_stata(path, variable_labels=variable_labels_long)
- def test_default_date_conversion(self):
+ def test_default_date_conversion(self, temp_file):
# GH 12259
dates = [
dt.datetime(1999, 12, 31, 12, 12, 12, 12000),
@@ -1409,29 +1407,29 @@ def test_default_date_conversion(self):
# "tc" for convert_dates below stores with "ms" resolution
expected["dates"] = expected["dates"].astype("M8[ms]")
- with tm.ensure_clean() as path:
- original.to_stata(path, write_index=False)
- reread = read_stata(path, convert_dates=True)
- tm.assert_frame_equal(expected, reread)
+ path = temp_file
+ original.to_stata(path, write_index=False)
+ reread = read_stata(path, convert_dates=True)
+ tm.assert_frame_equal(expected, reread)
- original.to_stata(path, write_index=False, convert_dates={"dates": "tc"})
- direct = read_stata(path, convert_dates=True)
- tm.assert_frame_equal(reread, direct)
+ original.to_stata(path, write_index=False, convert_dates={"dates": "tc"})
+ direct = read_stata(path, convert_dates=True)
+ tm.assert_frame_equal(reread, direct)
- dates_idx = original.columns.tolist().index("dates")
- original.to_stata(path, write_index=False, convert_dates={dates_idx: "tc"})
- direct = read_stata(path, convert_dates=True)
- tm.assert_frame_equal(reread, direct)
+ dates_idx = original.columns.tolist().index("dates")
+ original.to_stata(path, write_index=False, convert_dates={dates_idx: "tc"})
+ direct = read_stata(path, convert_dates=True)
+ tm.assert_frame_equal(reread, direct)
- def test_unsupported_type(self):
+ def test_unsupported_type(self, temp_file):
original = DataFrame({"a": [1 + 2j, 2 + 4j]})
msg = "Data type complex128 not supported"
with pytest.raises(NotImplementedError, match=msg):
- with tm.ensure_clean() as path:
- original.to_stata(path)
+ path = temp_file
+ original.to_stata(path)
- def test_unsupported_datetype(self):
+ def test_unsupported_datetype(self, temp_file):
dates = [
dt.datetime(1999, 12, 31, 12, 12, 12, 12000),
dt.datetime(2012, 12, 21, 12, 21, 12, 21000),
@@ -1447,8 +1445,8 @@ def test_unsupported_datetype(self):
msg = "Format %tC not implemented"
with pytest.raises(NotImplementedError, match=msg):
- with tm.ensure_clean() as path:
- original.to_stata(path, convert_dates={"dates": "tC"})
+ path = temp_file
+ original.to_stata(path, convert_dates={"dates": "tC"})
dates = pd.date_range("1-1-1990", periods=3, tz="Asia/Hong_Kong")
original = DataFrame(
@@ -1459,8 +1457,8 @@ def test_unsupported_datetype(self):
}
)
with pytest.raises(NotImplementedError, match="Data type datetime64"):
- with tm.ensure_clean() as path:
- original.to_stata(path)
+ path = temp_file
+ original.to_stata(path)
def test_repeated_column_labels(self, datapath):
# GH 13923, 25772
@@ -1496,7 +1494,7 @@ def test_stata_111(self, datapath):
original = original[["y", "x", "w", "z"]]
tm.assert_frame_equal(original, df)
- def test_out_of_range_double(self):
+ def test_out_of_range_double(self, temp_file):
# GH 14618
df = DataFrame(
{
@@ -1509,10 +1507,10 @@ def test_out_of_range_double(self):
r"supported by Stata \(.+\)"
)
with pytest.raises(ValueError, match=msg):
- with tm.ensure_clean() as path:
- df.to_stata(path)
+ path = temp_file
+ df.to_stata(path)
- def test_out_of_range_float(self):
+ def test_out_of_range_float(self, temp_file):
original = DataFrame(
{
"ColumnOk": [
@@ -1531,16 +1529,16 @@ def test_out_of_range_float(self):
for col in original:
original[col] = original[col].astype(np.float32)
- with tm.ensure_clean() as path:
- original.to_stata(path)
- reread = read_stata(path)
+ path = temp_file
+ original.to_stata(path)
+ reread = read_stata(path)
original["ColumnTooBig"] = original["ColumnTooBig"].astype(np.float64)
expected = original
tm.assert_frame_equal(reread.set_index("index"), expected)
@pytest.mark.parametrize("infval", [np.inf, -np.inf])
- def test_inf(self, infval):
+ def test_inf(self, infval, temp_file):
# GH 45350
df = DataFrame({"WithoutInf": [0.0, 1.0], "WithInf": [2.0, infval]})
msg = (
@@ -1548,8 +1546,8 @@ def test_inf(self, infval):
"which is outside the range supported by Stata."
)
with pytest.raises(ValueError, match=msg):
- with tm.ensure_clean() as path:
- df.to_stata(path)
+ path = temp_file
+ df.to_stata(path)
def test_path_pathlib(self):
df = DataFrame(
@@ -1563,19 +1561,19 @@ def test_path_pathlib(self):
tm.assert_frame_equal(df, result)
@pytest.mark.parametrize("write_index", [True, False])
- def test_value_labels_iterator(self, write_index):
+ def test_value_labels_iterator(self, write_index, temp_file):
# GH 16923
d = {"A": ["B", "E", "C", "A", "E"]}
df = DataFrame(data=d)
df["A"] = df["A"].astype("category")
- with tm.ensure_clean() as path:
- df.to_stata(path, write_index=write_index)
+ path = temp_file
+ df.to_stata(path, write_index=write_index)
- with read_stata(path, iterator=True) as dta_iter:
- value_labels = dta_iter.value_labels()
+ with read_stata(path, iterator=True) as dta_iter:
+ value_labels = dta_iter.value_labels()
assert value_labels == {"A": {0: "A", 1: "B", 2: "C", 3: "E"}}
- def test_set_index(self):
+ def test_set_index(self, temp_file):
# GH 17328
df = DataFrame(
1.1 * np.arange(120).reshape((30, 4)),
@@ -1583,9 +1581,9 @@ def test_set_index(self):
index=pd.Index([f"i-{i}" for i in range(30)], dtype=object),
)
df.index.name = "index"
- with tm.ensure_clean() as path:
- df.to_stata(path)
- reread = read_stata(path, index_col="index")
+ path = temp_file
+ df.to_stata(path)
+ reread = read_stata(path, index_col="index")
tm.assert_frame_equal(df, reread)
@pytest.mark.parametrize(
@@ -1608,7 +1606,7 @@ def test_date_parsing_ignores_format_details(self, column, datapath):
formatted = df.loc[0, column + "_fmt"]
assert unformatted == formatted
- def test_writer_117(self):
+ def test_writer_117(self, temp_file):
original = DataFrame(
data=[
[
@@ -1662,14 +1660,14 @@ def test_writer_117(self):
original["float32"] = Series(original["float32"], dtype=np.float32)
original.index.name = "index"
copy = original.copy()
- with tm.ensure_clean() as path:
- original.to_stata(
- path,
- convert_dates={"datetime": "tc"},
- convert_strl=["forced_strl"],
- version=117,
- )
- written_and_read_again = self.read_dta(path)
+ path = temp_file
+ original.to_stata(
+ path,
+ convert_dates={"datetime": "tc"},
+ convert_strl=["forced_strl"],
+ version=117,
+ )
+ written_and_read_again = self.read_dta(path)
expected = original[:]
# "tc" for convert_dates means we store with "ms" resolution
@@ -1681,7 +1679,7 @@ def test_writer_117(self):
)
tm.assert_frame_equal(original, copy)
- def test_convert_strl_name_swap(self):
+ def test_convert_strl_name_swap(self, temp_file):
original = DataFrame(
[["a" * 3000, "A", "apple"], ["b" * 1000, "B", "banana"]],
columns=["long1" * 10, "long", 1],
@@ -1689,14 +1687,14 @@ def test_convert_strl_name_swap(self):
original.index.name = "index"
with tm.assert_produces_warning(InvalidColumnName):
- with tm.ensure_clean() as path:
- original.to_stata(path, convert_strl=["long", 1], version=117)
- reread = self.read_dta(path)
- reread = reread.set_index("index")
- reread.columns = original.columns
- tm.assert_frame_equal(reread, original, check_index_type=False)
-
- def test_invalid_date_conversion(self):
+ path = temp_file
+ original.to_stata(path, convert_strl=["long", 1], version=117)
+ reread = self.read_dta(path)
+ reread = reread.set_index("index")
+ reread.columns = original.columns
+ tm.assert_frame_equal(reread, original, check_index_type=False)
+
+ def test_invalid_date_conversion(self, temp_file):
# GH 12259
dates = [
dt.datetime(1999, 12, 31, 12, 12, 12, 12000),
@@ -1711,13 +1709,13 @@ def test_invalid_date_conversion(self):
}
)
- with tm.ensure_clean() as path:
- msg = "convert_dates key must be a column or an integer"
- with pytest.raises(ValueError, match=msg):
- original.to_stata(path, convert_dates={"wrong_name": "tc"})
+ path = temp_file
+ msg = "convert_dates key must be a column or an integer"
+ with pytest.raises(ValueError, match=msg):
+ original.to_stata(path, convert_dates={"wrong_name": "tc"})
@pytest.mark.parametrize("version", [114, 117, 118, 119, None])
- def test_nonfile_writing(self, version):
+ def test_nonfile_writing(self, version, temp_file):
# GH 21041
bio = io.BytesIO()
df = DataFrame(
@@ -1726,15 +1724,15 @@ def test_nonfile_writing(self, version):
index=pd.Index([f"i-{i}" for i in range(30)], dtype=object),
)
df.index.name = "index"
- with tm.ensure_clean() as path:
- df.to_stata(bio, version=version)
- bio.seek(0)
- with open(path, "wb") as dta:
- dta.write(bio.read())
- reread = read_stata(path, index_col="index")
+ path = temp_file
+ df.to_stata(bio, version=version)
+ bio.seek(0)
+ with open(path, "wb") as dta:
+ dta.write(bio.read())
+ reread = read_stata(path, index_col="index")
tm.assert_frame_equal(df, reread)
- def test_gzip_writing(self):
+ def test_gzip_writing(self, temp_file):
# writing version 117 requires seek and cannot be used with gzip
df = DataFrame(
1.1 * np.arange(120).reshape((30, 4)),
@@ -1742,11 +1740,11 @@ def test_gzip_writing(self):
index=pd.Index([f"i-{i}" for i in range(30)], dtype=object),
)
df.index.name = "index"
- with tm.ensure_clean() as path:
- with gzip.GzipFile(path, "wb") as gz:
- df.to_stata(gz, version=114)
- with gzip.GzipFile(path, "rb") as gz:
- reread = read_stata(gz, index_col="index")
+ path = temp_file
+ with gzip.GzipFile(path, "wb") as gz:
+ df.to_stata(gz, version=114)
+ with gzip.GzipFile(path, "rb") as gz:
+ reread = read_stata(gz, index_col="index")
tm.assert_frame_equal(df, reread)
def test_unicode_dta_118(self, datapath):
@@ -1766,70 +1764,65 @@ def test_unicode_dta_118(self, datapath):
tm.assert_frame_equal(unicode_df, expected)
- def test_mixed_string_strl(self):
+ def test_mixed_string_strl(self, temp_file):
# GH 23633
output = [{"mixed": "string" * 500, "number": 0}, {"mixed": None, "number": 1}]
output = DataFrame(output)
output.number = output.number.astype("int32")
- with tm.ensure_clean() as path:
- output.to_stata(path, write_index=False, version=117)
- reread = read_stata(path)
- expected = output.fillna("")
- tm.assert_frame_equal(reread, expected)
+ path = temp_file
+ output.to_stata(path, write_index=False, version=117)
+ reread = read_stata(path)
+ expected = output.fillna("")
+ tm.assert_frame_equal(reread, expected)
- # Check strl supports all None (null)
- output["mixed"] = None
- output.to_stata(
- path, write_index=False, convert_strl=["mixed"], version=117
- )
- reread = read_stata(path)
- expected = output.fillna("")
- tm.assert_frame_equal(reread, expected)
+ # Check strl supports all None (null)
+ output["mixed"] = None
+ output.to_stata(path, write_index=False, convert_strl=["mixed"], version=117)
+ reread = read_stata(path)
+ expected = output.fillna("")
+ tm.assert_frame_equal(reread, expected)
@pytest.mark.parametrize("version", [114, 117, 118, 119, None])
- def test_all_none_exception(self, version):
+ def test_all_none_exception(self, version, temp_file):
output = [{"none": "none", "number": 0}, {"none": None, "number": 1}]
output = DataFrame(output)
output["none"] = None
- with tm.ensure_clean() as path:
- with pytest.raises(ValueError, match="Column `none` cannot be exported"):
- output.to_stata(path, version=version)
+ with pytest.raises(ValueError, match="Column `none` cannot be exported"):
+ output.to_stata(temp_file, version=version)
@pytest.mark.parametrize("version", [114, 117, 118, 119, None])
- def test_invalid_file_not_written(self, version):
+ def test_invalid_file_not_written(self, version, temp_file):
content = "Here is one __�__ Another one __·__ Another one __½__"
df = DataFrame([content], columns=["invalid"])
- with tm.ensure_clean() as path:
- msg1 = (
- r"'latin-1' codec can't encode character '\\ufffd' "
- r"in position 14: ordinal not in range\(256\)"
- )
- msg2 = (
- "'ascii' codec can't decode byte 0xef in position 14: "
- r"ordinal not in range\(128\)"
- )
- with pytest.raises(UnicodeEncodeError, match=f"{msg1}|{msg2}"):
- df.to_stata(path)
+ msg1 = (
+ r"'latin-1' codec can't encode character '\\ufffd' "
+ r"in position 14: ordinal not in range\(256\)"
+ )
+ msg2 = (
+ "'ascii' codec can't decode byte 0xef in position 14: "
+ r"ordinal not in range\(128\)"
+ )
+ with pytest.raises(UnicodeEncodeError, match=f"{msg1}|{msg2}"):
+ df.to_stata(temp_file)
- def test_strl_latin1(self):
+ def test_strl_latin1(self, temp_file):
# GH 23573, correct GSO data to reflect correct size
output = DataFrame(
[["pandas"] * 2, ["þâÑÐŧ"] * 2], columns=["var_str", "var_strl"]
)
- with tm.ensure_clean() as path:
- output.to_stata(path, version=117, convert_strl=["var_strl"])
- with open(path, "rb") as reread:
- content = reread.read()
- expected = "þâÑÐŧ"
- assert expected.encode("latin-1") in content
- assert expected.encode("utf-8") in content
- gsos = content.split(b"strls")[1][1:-2]
- for gso in gsos.split(b"GSO")[1:]:
- val = gso.split(b"\x00")[-2]
- size = gso[gso.find(b"\x82") + 1]
- assert len(val) == size - 1
+ output.to_stata(temp_file, version=117, convert_strl=["var_strl"])
+ with open(temp_file, "rb") as reread:
+ content = reread.read()
+ expected = "þâÑÐŧ"
+ assert expected.encode("latin-1") in content
+ assert expected.encode("utf-8") in content
+ gsos = content.split(b"strls")[1][1:-2]
+ for gso in gsos.split(b"GSO")[1:]:
+ val = gso.split(b"\x00")[-2]
+ size = gso[gso.find(b"\x82") + 1]
+ assert len(val) == size - 1
def test_encoding_latin1_118(self, datapath):
# GH 25960
@@ -1864,7 +1857,7 @@ def test_stata_119(self, datapath):
assert reader._nvar == 32999
@pytest.mark.parametrize("version", [118, 119, None])
- def test_utf8_writer(self, version):
+ def test_utf8_writer(self, version, temp_file):
cat = pd.Categorical(["a", "β", "ĉ"], ordered=True)
data = DataFrame(
[
@@ -1885,48 +1878,45 @@ def test_utf8_writer(self, version):
data_label = "ᴅaᵀa-label"
value_labels = {"β": {1: "label", 2: "æøå", 3: "ŋot valid latin-1"}}
data["β"] = data["β"].astype(np.int32)
- with tm.ensure_clean() as path:
- writer = StataWriterUTF8(
- path,
- data,
- data_label=data_label,
- convert_strl=["strls"],
- variable_labels=variable_labels,
- write_index=False,
- version=version,
- value_labels=value_labels,
- )
- writer.write_file()
- reread_encoded = read_stata(path)
- # Missing is intentionally converted to empty strl
- data["strls"] = data["strls"].fillna("")
- # Variable with value labels is reread as categorical
- data["β"] = (
- data["β"].replace(value_labels["β"]).astype("category").cat.as_ordered()
- )
- tm.assert_frame_equal(data, reread_encoded)
- with StataReader(path) as reader:
- assert reader.data_label == data_label
- assert reader.variable_labels() == variable_labels
+ writer = StataWriterUTF8(
+ temp_file,
+ data,
+ data_label=data_label,
+ convert_strl=["strls"],
+ variable_labels=variable_labels,
+ write_index=False,
+ version=version,
+ value_labels=value_labels,
+ )
+ writer.write_file()
+ reread_encoded = read_stata(temp_file)
+ # Missing is intentionally converted to empty strl
+ data["strls"] = data["strls"].fillna("")
+ # Variable with value labels is reread as categorical
+ data["β"] = (
+ data["β"].replace(value_labels["β"]).astype("category").cat.as_ordered()
+ )
+ tm.assert_frame_equal(data, reread_encoded)
+ with StataReader(temp_file) as reader:
+ assert reader.data_label == data_label
+ assert reader.variable_labels() == variable_labels
- data.to_stata(path, version=version, write_index=False)
- reread_to_stata = read_stata(path)
- tm.assert_frame_equal(data, reread_to_stata)
+ data.to_stata(temp_file, version=version, write_index=False)
+ reread_to_stata = read_stata(temp_file)
+ tm.assert_frame_equal(data, reread_to_stata)
- def test_writer_118_exceptions(self):
+ def test_writer_118_exceptions(self, temp_file):
df = DataFrame(np.zeros((1, 33000), dtype=np.int8))
- with tm.ensure_clean() as path:
- with pytest.raises(ValueError, match="version must be either 118 or 119."):
- StataWriterUTF8(path, df, version=117)
- with tm.ensure_clean() as path:
- with pytest.raises(ValueError, match="You must use version 119"):
- StataWriterUTF8(path, df, version=118)
+ with pytest.raises(ValueError, match="version must be either 118 or 119."):
+ StataWriterUTF8(temp_file, df, version=117)
+ with pytest.raises(ValueError, match="You must use version 119"):
+ StataWriterUTF8(temp_file, df, version=118)
@pytest.mark.parametrize(
"dtype_backend",
["numpy_nullable", pytest.param("pyarrow", marks=td.skip_if_no("pyarrow"))],
)
- def test_read_write_ea_dtypes(self, dtype_backend):
+ def test_read_write_ea_dtypes(self, dtype_backend, temp_file):
df = DataFrame(
{
"a": [1, 2, None],
@@ -1940,9 +1930,8 @@ def test_read_write_ea_dtypes(self, dtype_backend):
df = df.convert_dtypes(dtype_backend=dtype_backend)
df.to_stata("test_stata.dta", version=118)
- with tm.ensure_clean() as path:
- df.to_stata(path)
- written_and_read_again = self.read_dta(path)
+ df.to_stata(temp_file)
+ written_and_read_again = self.read_dta(temp_file)
expected = DataFrame(
{
@@ -1995,7 +1984,9 @@ def test_direct_read(datapath, monkeypatch):
@pytest.mark.parametrize("version", [114, 117, 118, 119, None])
@pytest.mark.parametrize("use_dict", [True, False])
@pytest.mark.parametrize("infer", [True, False])
-def test_compression(compression, version, use_dict, infer, compression_to_extension):
+def test_compression(
+ compression, version, use_dict, infer, compression_to_extension, tmp_path
+):
file_name = "dta_inferred_compression.dta"
if compression:
if use_dict:
@@ -2013,31 +2004,32 @@ def test_compression(compression, version, use_dict, infer, compression_to_exten
np.random.default_rng(2).standard_normal((10, 2)), columns=list("AB")
)
df.index.name = "index"
- with tm.ensure_clean(file_name) as path:
- df.to_stata(path, version=version, compression=compression_arg)
- if compression == "gzip":
- with gzip.open(path, "rb") as comp:
- fp = io.BytesIO(comp.read())
- elif compression == "zip":
- with zipfile.ZipFile(path, "r") as comp:
- fp = io.BytesIO(comp.read(comp.filelist[0]))
- elif compression == "tar":
- with tarfile.open(path) as tar:
- fp = io.BytesIO(tar.extractfile(tar.getnames()[0]).read())
- elif compression == "bz2":
- with bz2.open(path, "rb") as comp:
- fp = io.BytesIO(comp.read())
- elif compression == "zstd":
- zstd = pytest.importorskip("zstandard")
- with zstd.open(path, "rb") as comp:
- fp = io.BytesIO(comp.read())
- elif compression == "xz":
- lzma = pytest.importorskip("lzma")
- with lzma.open(path, "rb") as comp:
- fp = io.BytesIO(comp.read())
- elif compression is None:
- fp = path
- reread = read_stata(fp, index_col="index")
+ path = tmp_path / file_name
+ path.touch()
+ df.to_stata(path, version=version, compression=compression_arg)
+ if compression == "gzip":
+ with gzip.open(path, "rb") as comp:
+ fp = io.BytesIO(comp.read())
+ elif compression == "zip":
+ with zipfile.ZipFile(path, "r") as comp:
+ fp = io.BytesIO(comp.read(comp.filelist[0]))
+ elif compression == "tar":
+ with tarfile.open(path) as tar:
+ fp = io.BytesIO(tar.extractfile(tar.getnames()[0]).read())
+ elif compression == "bz2":
+ with bz2.open(path, "rb") as comp:
+ fp = io.BytesIO(comp.read())
+ elif compression == "zstd":
+ zstd = pytest.importorskip("zstandard")
+ with zstd.open(path, "rb") as comp:
+ fp = io.BytesIO(comp.read())
+ elif compression == "xz":
+ lzma = pytest.importorskip("lzma")
+ with lzma.open(path, "rb") as comp:
+ fp = io.BytesIO(comp.read())
+ elif compression is None:
+ fp = path
+ reread = read_stata(fp, index_col="index")
expected = df
tm.assert_frame_equal(reread, expected)
@@ -2045,47 +2037,47 @@ def test_compression(compression, version, use_dict, infer, compression_to_exten
@pytest.mark.parametrize("method", ["zip", "infer"])
@pytest.mark.parametrize("file_ext", [None, "dta", "zip"])
-def test_compression_dict(method, file_ext):
+def test_compression_dict(method, file_ext, tmp_path):
file_name = f"test.{file_ext}"
archive_name = "test.dta"
df = DataFrame(
np.random.default_rng(2).standard_normal((10, 2)), columns=list("AB")
)
df.index.name = "index"
- with tm.ensure_clean(file_name) as path:
- compression = {"method": method, "archive_name": archive_name}
- df.to_stata(path, compression=compression)
- if method == "zip" or file_ext == "zip":
- with zipfile.ZipFile(path, "r") as zp:
- assert len(zp.filelist) == 1
- assert zp.filelist[0].filename == archive_name
- fp = io.BytesIO(zp.read(zp.filelist[0]))
- else:
- fp = path
- reread = read_stata(fp, index_col="index")
+ compression = {"method": method, "archive_name": archive_name}
+ path = tmp_path / file_name
+ path.touch()
+ df.to_stata(path, compression=compression)
+ if method == "zip" or file_ext == "zip":
+ with zipfile.ZipFile(path, "r") as zp:
+ assert len(zp.filelist) == 1
+ assert zp.filelist[0].filename == archive_name
+ fp = io.BytesIO(zp.read(zp.filelist[0]))
+ else:
+ fp = path
+ reread = read_stata(fp, index_col="index")
expected = df
tm.assert_frame_equal(reread, expected)
@pytest.mark.parametrize("version", [114, 117, 118, 119, None])
-def test_chunked_categorical(version):
+def test_chunked_categorical(version, temp_file):
df = DataFrame({"cats": Series(["a", "b", "a", "b", "c"], dtype="category")})
df.index.name = "index"
expected = df.copy()
- with tm.ensure_clean() as path:
- df.to_stata(path, version=version)
- with StataReader(path, chunksize=2, order_categoricals=False) as reader:
- for i, block in enumerate(reader):
- block = block.set_index("index")
- assert "cats" in block
- tm.assert_series_equal(
- block.cats,
- expected.cats.iloc[2 * i : 2 * (i + 1)],
- check_index_type=len(block) > 1,
- )
+ df.to_stata(temp_file, version=version)
+ with StataReader(temp_file, chunksize=2, order_categoricals=False) as reader:
+ for i, block in enumerate(reader):
+ block = block.set_index("index")
+ assert "cats" in block
+ tm.assert_series_equal(
+ block.cats,
+ expected.cats.iloc[2 * i : 2 * (i + 1)],
+ check_index_type=len(block) > 1,
+ )
def test_chunked_categorical_partial(datapath):
@@ -2115,38 +2107,36 @@ def test_iterator_errors(datapath, chunksize):
pass
-def test_iterator_value_labels():
+def test_iterator_value_labels(temp_file):
# GH 31544
values = ["c_label", "b_label"] + ["a_label"] * 500
df = DataFrame({f"col{k}": pd.Categorical(values, ordered=True) for k in range(2)})
- with tm.ensure_clean() as path:
- df.to_stata(path, write_index=False)
- expected = pd.Index(["a_label", "b_label", "c_label"], dtype="object")
- with read_stata(path, chunksize=100) as reader:
- for j, chunk in enumerate(reader):
- for i in range(2):
- tm.assert_index_equal(chunk.dtypes.iloc[i].categories, expected)
- tm.assert_frame_equal(chunk, df.iloc[j * 100 : (j + 1) * 100])
+ df.to_stata(temp_file, write_index=False)
+ expected = pd.Index(["a_label", "b_label", "c_label"], dtype="object")
+ with read_stata(temp_file, chunksize=100) as reader:
+ for j, chunk in enumerate(reader):
+ for i in range(2):
+ tm.assert_index_equal(chunk.dtypes.iloc[i].categories, expected)
+ tm.assert_frame_equal(chunk, df.iloc[j * 100 : (j + 1) * 100])
-def test_precision_loss():
+def test_precision_loss(temp_file):
df = DataFrame(
[[sum(2**i for i in range(60)), sum(2**i for i in range(52))]],
columns=["big", "little"],
)
- with tm.ensure_clean() as path:
- with tm.assert_produces_warning(
- PossiblePrecisionLoss, match="Column converted from int64 to float64"
- ):
- df.to_stata(path, write_index=False)
- reread = read_stata(path)
- expected_dt = Series([np.float64, np.float64], index=["big", "little"])
- tm.assert_series_equal(reread.dtypes, expected_dt)
- assert reread.loc[0, "little"] == df.loc[0, "little"]
- assert reread.loc[0, "big"] == float(df.loc[0, "big"])
+ with tm.assert_produces_warning(
+ PossiblePrecisionLoss, match="Column converted from int64 to float64"
+ ):
+ df.to_stata(temp_file, write_index=False)
+ reread = read_stata(temp_file)
+ expected_dt = Series([np.float64, np.float64], index=["big", "little"])
+ tm.assert_series_equal(reread.dtypes, expected_dt)
+ assert reread.loc[0, "little"] == df.loc[0, "little"]
+ assert reread.loc[0, "big"] == float(df.loc[0, "big"])
-def test_compression_roundtrip(compression):
+def test_compression_roundtrip(compression, temp_file):
df = DataFrame(
[[0.123456, 0.234567, 0.567567], [12.32112, 123123.2, 321321.2]],
index=["A", "B"],
@@ -2154,22 +2144,21 @@ def test_compression_roundtrip(compression):
)
df.index.name = "index"
- with tm.ensure_clean() as path:
- df.to_stata(path, compression=compression)
- reread = read_stata(path, compression=compression, index_col="index")
- tm.assert_frame_equal(df, reread)
+ df.to_stata(temp_file, compression=compression)
+ reread = read_stata(temp_file, compression=compression, index_col="index")
+ tm.assert_frame_equal(df, reread)
- # explicitly ensure file was compressed.
- with tm.decompress_file(path, compression) as fh:
- contents = io.BytesIO(fh.read())
- reread = read_stata(contents, index_col="index")
- tm.assert_frame_equal(df, reread)
+ # explicitly ensure file was compressed.
+ with tm.decompress_file(temp_file, compression) as fh:
+ contents = io.BytesIO(fh.read())
+ reread = read_stata(contents, index_col="index")
+ tm.assert_frame_equal(df, reread)
@pytest.mark.parametrize("to_infer", [True, False])
@pytest.mark.parametrize("read_infer", [True, False])
def test_stata_compression(
- compression_only, read_infer, to_infer, compression_to_extension
+ compression_only, read_infer, to_infer, compression_to_extension, tmp_path
):
compression = compression_only
@@ -2186,13 +2175,14 @@ def test_stata_compression(
to_compression = "infer" if to_infer else compression
read_compression = "infer" if read_infer else compression
- with tm.ensure_clean(filename) as path:
- df.to_stata(path, compression=to_compression)
- result = read_stata(path, compression=read_compression, index_col="index")
- tm.assert_frame_equal(result, df)
+ path = tmp_path / filename
+ path.touch()
+ df.to_stata(path, compression=to_compression)
+ result = read_stata(path, compression=read_compression, index_col="index")
+ tm.assert_frame_equal(result, df)
-def test_non_categorical_value_labels():
+def test_non_categorical_value_labels(temp_file):
data = DataFrame(
{
"fully_labelled": [1, 2, 3, 3, 1],
@@ -2202,35 +2192,35 @@ def test_non_categorical_value_labels():
}
)
- with tm.ensure_clean() as path:
- value_labels = {
- "fully_labelled": {1: "one", 2: "two", 3: "three"},
- "partially_labelled": {1.0: "one", 2.0: "two"},
- }
- expected = {**value_labels, "Z": {0: "j", 1: "k", 2: "l"}}
+ path = temp_file
+ value_labels = {
+ "fully_labelled": {1: "one", 2: "two", 3: "three"},
+ "partially_labelled": {1.0: "one", 2.0: "two"},
+ }
+ expected = {**value_labels, "Z": {0: "j", 1: "k", 2: "l"}}
- writer = StataWriter(path, data, value_labels=value_labels)
- writer.write_file()
+ writer = StataWriter(path, data, value_labels=value_labels)
+ writer.write_file()
- with StataReader(path) as reader:
- reader_value_labels = reader.value_labels()
- assert reader_value_labels == expected
+ with StataReader(path) as reader:
+ reader_value_labels = reader.value_labels()
+ assert reader_value_labels == expected
- msg = "Can't create value labels for notY, it wasn't found in the dataset."
- value_labels = {"notY": {7: "label1", 8: "label2"}}
- with pytest.raises(KeyError, match=msg):
- StataWriter(path, data, value_labels=value_labels)
+ msg = "Can't create value labels for notY, it wasn't found in the dataset."
+ value_labels = {"notY": {7: "label1", 8: "label2"}}
+ with pytest.raises(KeyError, match=msg):
+ StataWriter(path, data, value_labels=value_labels)
- msg = (
- "Can't create value labels for Z, value labels "
- "can only be applied to numeric columns."
- )
- value_labels = {"Z": {1: "a", 2: "k", 3: "j", 4: "i"}}
- with pytest.raises(ValueError, match=msg):
- StataWriter(path, data, value_labels=value_labels)
+ msg = (
+ "Can't create value labels for Z, value labels "
+ "can only be applied to numeric columns."
+ )
+ value_labels = {"Z": {1: "a", 2: "k", 3: "j", 4: "i"}}
+ with pytest.raises(ValueError, match=msg):
+ StataWriter(path, data, value_labels=value_labels)
-def test_non_categorical_value_label_name_conversion():
+def test_non_categorical_value_label_name_conversion(temp_file):
# Check conversion of invalid variable names
data = DataFrame(
{
@@ -2258,16 +2248,15 @@ def test_non_categorical_value_label_name_conversion():
"_1__2_": {3: "three"},
}
- with tm.ensure_clean() as path:
- with tm.assert_produces_warning(InvalidColumnName):
- data.to_stata(path, value_labels=value_labels)
+ with tm.assert_produces_warning(InvalidColumnName):
+ data.to_stata(temp_file, value_labels=value_labels)
- with StataReader(path) as reader:
- reader_value_labels = reader.value_labels()
- assert reader_value_labels == expected
+ with StataReader(temp_file) as reader:
+ reader_value_labels = reader.value_labels()
+ assert reader_value_labels == expected
-def test_non_categorical_value_label_convert_categoricals_error():
+def test_non_categorical_value_label_convert_categoricals_error(temp_file):
# Mapping more than one value to the same label is valid for Stata
# labels, but can't be read with convert_categoricals=True
value_labels = {
@@ -2280,17 +2269,16 @@ def test_non_categorical_value_label_convert_categoricals_error():
}
)
- with tm.ensure_clean() as path:
- data.to_stata(path, value_labels=value_labels)
+ data.to_stata(temp_file, value_labels=value_labels)
- with StataReader(path, convert_categoricals=False) as reader:
- reader_value_labels = reader.value_labels()
- assert reader_value_labels == value_labels
+ with StataReader(temp_file, convert_categoricals=False) as reader:
+ reader_value_labels = reader.value_labels()
+ assert reader_value_labels == value_labels
- col = "repeated_labels"
- repeats = "-" * 80 + "\n" + "\n".join(["More than ten"])
+ col = "repeated_labels"
+ repeats = "-" * 80 + "\n" + "\n".join(["More than ten"])
- msg = f"""
+ msg = f"""
Value labels for column {col} are not unique. These cannot be converted to
pandas categoricals.
@@ -2301,8 +2289,8 @@ def test_non_categorical_value_label_convert_categoricals_error():
The repeated labels are:
{repeats}
"""
- with pytest.raises(ValueError, match=msg):
- read_stata(path, convert_categoricals=True)
+ with pytest.raises(ValueError, match=msg):
+ read_stata(temp_file, convert_categoricals=True)
@pytest.mark.parametrize("version", [114, 117, 118, 119, None])
@@ -2320,7 +2308,7 @@ def test_non_categorical_value_label_convert_categoricals_error():
pd.UInt64Dtype,
],
)
-def test_nullable_support(dtype, version):
+def test_nullable_support(dtype, version, temp_file):
df = DataFrame(
{
"a": Series([1.0, 2.0, 3.0]),
@@ -2339,27 +2327,26 @@ def test_nullable_support(dtype, version):
smv = StataMissingValue(value)
expected_b = Series([1, smv, smv], dtype=object, name="b")
expected_c = Series(["a", "b", ""], name="c")
- with tm.ensure_clean() as path:
- df.to_stata(path, write_index=False, version=version)
- reread = read_stata(path, convert_missing=True)
- tm.assert_series_equal(df.a, reread.a)
- tm.assert_series_equal(reread.b, expected_b)
- tm.assert_series_equal(reread.c, expected_c)
+ df.to_stata(temp_file, write_index=False, version=version)
+ reread = read_stata(temp_file, convert_missing=True)
+ tm.assert_series_equal(df.a, reread.a)
+ tm.assert_series_equal(reread.b, expected_b)
+ tm.assert_series_equal(reread.c, expected_c)
-def test_empty_frame():
+def test_empty_frame(temp_file):
# GH 46240
# create an empty DataFrame with int64 and float64 dtypes
df = DataFrame(data={"a": range(3), "b": [1.0, 2.0, 3.0]}).head(0)
- with tm.ensure_clean() as path:
- df.to_stata(path, write_index=False, version=117)
- # Read entire dataframe
- df2 = read_stata(path)
- assert "b" in df2
- # Dtypes don't match since no support for int32
- dtypes = Series({"a": np.dtype("int32"), "b": np.dtype("float64")})
- tm.assert_series_equal(df2.dtypes, dtypes)
- # read one column of empty .dta file
- df3 = read_stata(path, columns=["a"])
- assert "b" not in df3
- tm.assert_series_equal(df3.dtypes, dtypes.loc[["a"]])
+ path = temp_file
+ df.to_stata(path, write_index=False, version=117)
+ # Read entire dataframe
+ df2 = read_stata(path)
+ assert "b" in df2
+ # Dtypes don't match since no support for int32
+ dtypes = Series({"a": np.dtype("int32"), "b": np.dtype("float64")})
+ tm.assert_series_equal(df2.dtypes, dtypes)
+ # read one column of empty .dta file
+ df3 = read_stata(path, columns=["a"])
+ assert "b" not in df3
+ tm.assert_series_equal(df3.dtypes, dtypes.loc[["a"]])
diff --git a/pandas/tests/series/methods/test_to_csv.py b/pandas/tests/series/methods/test_to_csv.py
index e292861012c8f..f7dec02ab0e5b 100644
--- a/pandas/tests/series/methods/test_to_csv.py
+++ b/pandas/tests/series/methods/test_to_csv.py
@@ -24,58 +24,55 @@ def read_csv(self, path, **kwargs):
return out
- def test_from_csv(self, datetime_series, string_series):
+ def test_from_csv(self, datetime_series, string_series, temp_file):
# freq doesn't round-trip
datetime_series.index = datetime_series.index._with_freq(None)
- with tm.ensure_clean() as path:
- datetime_series.to_csv(path, header=False)
- ts = self.read_csv(path, parse_dates=True)
- tm.assert_series_equal(datetime_series, ts, check_names=False)
+ path = temp_file
+ datetime_series.to_csv(path, header=False)
+ ts = self.read_csv(path, parse_dates=True)
+ tm.assert_series_equal(datetime_series, ts, check_names=False)
- assert ts.name is None
- assert ts.index.name is None
+ assert ts.name is None
+ assert ts.index.name is None
- # see gh-10483
- datetime_series.to_csv(path, header=True)
- ts_h = self.read_csv(path, header=0)
- assert ts_h.name == "ts"
+ # see gh-10483
+ datetime_series.to_csv(path, header=True)
+ ts_h = self.read_csv(path, header=0)
+ assert ts_h.name == "ts"
- string_series.to_csv(path, header=False)
- series = self.read_csv(path)
- tm.assert_series_equal(string_series, series, check_names=False)
+ string_series.to_csv(path, header=False)
+ series = self.read_csv(path)
+ tm.assert_series_equal(string_series, series, check_names=False)
- assert series.name is None
- assert series.index.name is None
+ assert series.name is None
+ assert series.index.name is None
- string_series.to_csv(path, header=True)
- series_h = self.read_csv(path, header=0)
- assert series_h.name == "series"
+ string_series.to_csv(path, header=True)
+ series_h = self.read_csv(path, header=0)
+ assert series_h.name == "series"
- with open(path, "w", encoding="utf-8") as outfile:
- outfile.write("1998-01-01|1.0\n1999-01-01|2.0")
+ with open(path, "w", encoding="utf-8") as outfile:
+ outfile.write("1998-01-01|1.0\n1999-01-01|2.0")
- series = self.read_csv(path, sep="|", parse_dates=True)
- check_series = Series(
- {datetime(1998, 1, 1): 1.0, datetime(1999, 1, 1): 2.0}
- )
- tm.assert_series_equal(check_series, series)
+ series = self.read_csv(path, sep="|", parse_dates=True)
+ check_series = Series({datetime(1998, 1, 1): 1.0, datetime(1999, 1, 1): 2.0})
+ tm.assert_series_equal(check_series, series)
- series = self.read_csv(path, sep="|", parse_dates=False)
- check_series = Series({"1998-01-01": 1.0, "1999-01-01": 2.0})
- tm.assert_series_equal(check_series, series)
+ series = self.read_csv(path, sep="|", parse_dates=False)
+ check_series = Series({"1998-01-01": 1.0, "1999-01-01": 2.0})
+ tm.assert_series_equal(check_series, series)
- def test_to_csv(self, datetime_series):
- with tm.ensure_clean() as path:
- datetime_series.to_csv(path, header=False)
+ def test_to_csv(self, datetime_series, temp_file):
+ datetime_series.to_csv(temp_file, header=False)
- with open(path, newline=None, encoding="utf-8") as f:
- lines = f.readlines()
- assert lines[1] != "\n"
+ with open(temp_file, newline=None, encoding="utf-8") as f:
+ lines = f.readlines()
+ assert lines[1] != "\n"
- datetime_series.to_csv(path, index=False, header=False)
- arr = np.loadtxt(path)
- tm.assert_almost_equal(arr, datetime_series.values)
+ datetime_series.to_csv(temp_file, index=False, header=False)
+ arr = np.loadtxt(temp_file)
+ tm.assert_almost_equal(arr, datetime_series.values)
def test_to_csv_unicode_index(self):
buf = StringIO()
@@ -87,14 +84,13 @@ def test_to_csv_unicode_index(self):
s2 = self.read_csv(buf, index_col=0, encoding="UTF-8")
tm.assert_series_equal(s, s2)
- def test_to_csv_float_format(self):
- with tm.ensure_clean() as filename:
- ser = Series([0.123456, 0.234567, 0.567567])
- ser.to_csv(filename, float_format="%.2f", header=False)
+ def test_to_csv_float_format(self, temp_file):
+ ser = Series([0.123456, 0.234567, 0.567567])
+ ser.to_csv(temp_file, float_format="%.2f", header=False)
- rs = self.read_csv(filename)
- xp = Series([0.12, 0.23, 0.57])
- tm.assert_series_equal(rs, xp)
+ rs = self.read_csv(temp_file)
+ xp = Series([0.12, 0.23, 0.57])
+ tm.assert_series_equal(rs, xp)
def test_to_csv_list_entries(self):
s = Series(["jack and jill", "jesse and frank"])
@@ -128,50 +124,49 @@ def test_to_csv_path_is_none(self):
),
],
)
- def test_to_csv_compression(self, s, encoding, compression):
- with tm.ensure_clean() as filename:
- s.to_csv(filename, compression=compression, encoding=encoding, header=True)
- # test the round trip - to_csv -> read_csv
- result = pd.read_csv(
- filename,
- compression=compression,
- encoding=encoding,
- index_col=0,
- ).squeeze("columns")
- tm.assert_series_equal(s, result)
-
- # test the round trip using file handle - to_csv -> read_csv
- with get_handle(
- filename, "w", compression=compression, encoding=encoding
- ) as handles:
- s.to_csv(handles.handle, encoding=encoding, header=True)
-
- result = pd.read_csv(
- filename,
- compression=compression,
- encoding=encoding,
- index_col=0,
- ).squeeze("columns")
- tm.assert_series_equal(s, result)
-
- # explicitly ensure file was compressed
- with tm.decompress_file(filename, compression) as fh:
- text = fh.read().decode(encoding or "utf8")
- assert s.name in text
-
- with tm.decompress_file(filename, compression) as fh:
- tm.assert_series_equal(
- s,
- pd.read_csv(fh, index_col=0, encoding=encoding).squeeze("columns"),
- )
-
- def test_to_csv_interval_index(self, using_infer_string):
+ def test_to_csv_compression(self, s, encoding, compression, temp_file):
+ filename = temp_file
+ s.to_csv(filename, compression=compression, encoding=encoding, header=True)
+ # test the round trip - to_csv -> read_csv
+ result = pd.read_csv(
+ filename,
+ compression=compression,
+ encoding=encoding,
+ index_col=0,
+ ).squeeze("columns")
+ tm.assert_series_equal(s, result)
+
+ # test the round trip using file handle - to_csv -> read_csv
+ with get_handle(
+ filename, "w", compression=compression, encoding=encoding
+ ) as handles:
+ s.to_csv(handles.handle, encoding=encoding, header=True)
+
+ result = pd.read_csv(
+ filename,
+ compression=compression,
+ encoding=encoding,
+ index_col=0,
+ ).squeeze("columns")
+ tm.assert_series_equal(s, result)
+
+ # explicitly ensure file was compressed
+ with tm.decompress_file(filename, compression) as fh:
+ text = fh.read().decode(encoding or "utf8")
+ assert s.name in text
+
+ with tm.decompress_file(filename, compression) as fh:
+ tm.assert_series_equal(
+ s,
+ pd.read_csv(fh, index_col=0, encoding=encoding).squeeze("columns"),
+ )
+
+ def test_to_csv_interval_index(self, using_infer_string, temp_file):
# GH 28210
s = Series(["foo", "bar", "baz"], index=pd.interval_range(0, 3))
- with tm.ensure_clean("__tmp_to_csv_interval_index__.csv") as path:
- s.to_csv(path, header=False)
- result = self.read_csv(path, index_col=0)
+ s.to_csv(temp_file, header=False)
+ result = self.read_csv(temp_file, index_col=0)
# can't roundtrip intervalindex via read_csv so check string repr (GH 23595)
expected = s
| null | https://api.github.com/repos/pandas-dev/pandas/pulls/58166 | 2024-04-05T20:14:49Z | 2024-04-09T13:11:36Z | 2024-04-09T13:11:36Z | 2024-04-09T16:32:00Z |
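The patch above swaps `tm.ensure_clean` context managers for a `temp_file` pytest fixture. What such a fixture hands each test is simply a fresh path inside a per-test temporary directory; that behaviour can be sketched with the standard library alone (the file name and sample data here are illustrative, not pandas' actual conftest code):

```python
import tempfile
from pathlib import Path

# Stand-in for what a pytest ``temp_file`` fixture provides: a fresh
# file path inside a temporary directory that is removed afterwards.
with tempfile.TemporaryDirectory() as tmpdir:
    temp_file = Path(tmpdir) / "temp.csv"
    # mirrors the test above that writes '|'-separated dates by hand
    temp_file.write_text("1998-01-01|1.0\n1999-01-01|2.0")
    lines = temp_file.read_text().splitlines()

assert lines == ["1998-01-01|1.0", "1999-01-01|2.0"]
```

In pytest itself the equivalent is typically a `@pytest.fixture` that returns `tmp_path / "temp.csv"`, with cleanup handled by the built-in `tmp_path` machinery.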
STY: Add flake8-slots and flake8-raise rules | diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml
index b8726e058a52b..8eec5d5515239 100644
--- a/.pre-commit-config.yaml
+++ b/.pre-commit-config.yaml
@@ -30,12 +30,6 @@ repos:
files: ^pandas
exclude: ^pandas/tests
args: [--select, "ANN001,ANN2", --fix-only, --exit-non-zero-on-fix]
- - id: ruff
- name: ruff-use-pd_array-in-core
- alias: ruff-use-pd_array-in-core
- files: ^pandas/core/
- exclude: ^pandas/core/api\.py$
- args: [--select, "ICN001", --exit-non-zero-on-fix]
- id: ruff-format
exclude: ^scripts
- repo: https://github.com/jendrikseipp/vulture
diff --git a/pandas/core/api.py b/pandas/core/api.py
index 3d2e855831c05..c8a4e9d8a23b2 100644
--- a/pandas/core/api.py
+++ b/pandas/core/api.py
@@ -41,7 +41,7 @@
UInt64Dtype,
)
from pandas.core.arrays.string_ import StringDtype
-from pandas.core.construction import array
+from pandas.core.construction import array # noqa: ICN001
from pandas.core.flags import Flags
from pandas.core.groupby import (
Grouper,
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 99462917599e1..8af9503a3691d 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -6729,7 +6729,7 @@ def _pad_or_backfill(
if axis == 1:
if not self._mgr.is_single_block and inplace:
- raise NotImplementedError()
+ raise NotImplementedError
# e.g. test_align_fill_method
result = self.T._pad_or_backfill(
method=method, limit=limit, limit_area=limit_area
diff --git a/pandas/core/indexing.py b/pandas/core/indexing.py
index 982e305b7e471..a834d3e54d30b 100644
--- a/pandas/core/indexing.py
+++ b/pandas/core/indexing.py
@@ -1191,13 +1191,13 @@ def __getitem__(self, key):
return self._getitem_axis(maybe_callable, axis=axis)
def _is_scalar_access(self, key: tuple):
- raise NotImplementedError()
+ raise NotImplementedError
def _getitem_tuple(self, tup: tuple):
raise AbstractMethodError(self)
def _getitem_axis(self, key, axis: AxisInt):
- raise NotImplementedError()
+ raise NotImplementedError
def _has_valid_setitem_indexer(self, indexer) -> bool:
raise AbstractMethodError(self)
diff --git a/pandas/core/interchange/buffer.py b/pandas/core/interchange/buffer.py
index 5d24325e67f62..62bf396256f2a 100644
--- a/pandas/core/interchange/buffer.py
+++ b/pandas/core/interchange/buffer.py
@@ -114,7 +114,7 @@ def __dlpack__(self) -> Any:
"""
Represent this structure as DLPack interface.
"""
- raise NotImplementedError()
+ raise NotImplementedError
def __dlpack_device__(self) -> tuple[DlpackDeviceType, int | None]:
"""
diff --git a/pandas/io/stata.py b/pandas/io/stata.py
index 3ec077806d6c4..50ee2d52ee51d 100644
--- a/pandas/io/stata.py
+++ b/pandas/io/stata.py
@@ -1376,7 +1376,7 @@ def _get_time_stamp(self) -> str:
elif self._format_version > 104:
return self._decode(self._path_or_buf.read(18))
else:
- raise ValueError()
+ raise ValueError
def _get_seek_variable_labels(self) -> int:
if self._format_version == 117:
@@ -1388,7 +1388,7 @@ def _get_seek_variable_labels(self) -> int:
elif self._format_version >= 118:
return self._read_int64() + 17
else:
- raise ValueError()
+ raise ValueError
def _read_old_header(self, first_char: bytes) -> None:
self._format_version = int(first_char[0])
diff --git a/pandas/tests/config/test_localization.py b/pandas/tests/config/test_localization.py
index 844f67cd2d0ea..b9a0a44bf8c89 100644
--- a/pandas/tests/config/test_localization.py
+++ b/pandas/tests/config/test_localization.py
@@ -92,7 +92,7 @@ def test_can_set_locale_invalid_get(monkeypatch):
# but a subsequent getlocale() raises a ValueError.
def mock_get_locale():
- raise ValueError()
+ raise ValueError
with monkeypatch.context() as m:
m.setattr(locale, "getlocale", mock_get_locale)
diff --git a/pandas/tests/frame/indexing/test_setitem.py b/pandas/tests/frame/indexing/test_setitem.py
index 658fafd3ea2cc..3f98f49cd1877 100644
--- a/pandas/tests/frame/indexing/test_setitem.py
+++ b/pandas/tests/frame/indexing/test_setitem.py
@@ -41,7 +41,7 @@ class TestDataFrameSetItem:
def test_setitem_str_subclass(self):
# GH#37366
class mystring(str):
- pass
+ __slots__ = ()
data = ["2020-10-22 01:21:00+00:00"]
index = DatetimeIndex(data)
diff --git a/pyproject.toml b/pyproject.toml
index c9180cf04e7f5..0a0a3e8b484f0 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -234,6 +234,12 @@ select = [
"G",
# flake8-future-annotations
"FA",
+ # unconventional-import-alias
+ "ICN001",
+ # flake8-slots
+ "SLOT",
+ # flake8-raise
+ "RSE"
]
ignore = [
| Also moves `ICN001` to the main ruff linter | https://api.github.com/repos/pandas-dev/pandas/pulls/58161 | 2024-04-05T17:48:52Z | 2024-04-08T16:45:42Z | 2024-04-08T16:45:41Z | 2024-04-08T16:45:45Z |
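The two rule families enabled in `pyproject.toml` above can be illustrated in a few lines; the class names here are made up for the example:

```python
# flake8-slots (SLOT) asks subclasses of str/tuple to declare __slots__;
# without it every instance also allocates a per-instance __dict__.
class PlainString(str):
    pass

class SlottedString(str):
    __slots__ = ()  # what the test_setitem_str_subclass change adds

assert hasattr(PlainString("x"), "__dict__")
assert not hasattr(SlottedString("x"), "__dict__")

# flake8-raise (RSE102) flags redundant parentheses when an exception
# is raised with no arguments: ``raise NotImplementedError`` suffices.
try:
    raise NotImplementedError
except NotImplementedError as exc:
    assert exc.args == ()
```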
DEPR: PandasArray alias | diff --git a/doc/redirects.csv b/doc/redirects.csv
index e71a031bd67fd..c11e4e242f128 100644
--- a/doc/redirects.csv
+++ b/doc/redirects.csv
@@ -1422,7 +1422,6 @@ reference/api/pandas.Series.transpose,pandas.Series.T
reference/api/pandas.Index.transpose,pandas.Index.T
reference/api/pandas.Index.notnull,pandas.Index.notna
reference/api/pandas.Index.tolist,pandas.Index.to_list
-reference/api/pandas.arrays.PandasArray,pandas.arrays.NumpyExtensionArray
reference/api/pandas.core.groupby.DataFrameGroupBy.backfill,pandas.core.groupby.DataFrameGroupBy.bfill
reference/api/pandas.core.groupby.GroupBy.backfill,pandas.core.groupby.DataFrameGroupBy.bfill
reference/api/pandas.core.resample.Resampler.backfill,pandas.core.resample.Resampler.bfill
diff --git a/doc/source/whatsnew/v3.0.0.rst b/doc/source/whatsnew/v3.0.0.rst
index d2d5707f32bf3..6e6d80eb5c138 100644
--- a/doc/source/whatsnew/v3.0.0.rst
+++ b/doc/source/whatsnew/v3.0.0.rst
@@ -212,6 +212,7 @@ Removal of prior version deprecations/changes
- Disallow passing a pandas type to :meth:`Index.view` (:issue:`55709`)
- Disallow units other than "s", "ms", "us", "ns" for datetime64 and timedelta64 dtypes in :func:`array` (:issue:`53817`)
- Removed "freq" keyword from :class:`PeriodArray` constructor, use "dtype" instead (:issue:`52462`)
+- Removed alias :class:`arrays.PandasArray` for :class:`arrays.NumpyExtensionArray` (:issue:`53694`)
- Removed deprecated "method" and "limit" keywords from :meth:`Series.replace` and :meth:`DataFrame.replace` (:issue:`53492`)
- Removed extension test classes ``BaseNoReduceTests``, ``BaseNumericReduceTests``, ``BaseBooleanReduceTests`` (:issue:`54663`)
- Removed the "closed" and "normalize" keywords in :meth:`DatetimeIndex.__new__` (:issue:`52628`)
diff --git a/pandas/arrays/__init__.py b/pandas/arrays/__init__.py
index bcf295fd6b490..b5c1c98da1c78 100644
--- a/pandas/arrays/__init__.py
+++ b/pandas/arrays/__init__.py
@@ -35,20 +35,3 @@
"StringArray",
"TimedeltaArray",
]
-
-
-def __getattr__(name: str) -> type[NumpyExtensionArray]:
- if name == "PandasArray":
- # GH#53694
- import warnings
-
- from pandas.util._exceptions import find_stack_level
-
- warnings.warn(
- "PandasArray has been renamed NumpyExtensionArray. Use that "
- "instead. This alias will be removed in a future version.",
- FutureWarning,
- stacklevel=find_stack_level(),
- )
- return NumpyExtensionArray
- raise AttributeError(f"module 'pandas.arrays' has no attribute '{name}'")
diff --git a/pandas/tests/api/test_api.py b/pandas/tests/api/test_api.py
index 82c5c305b574c..e32e5a268d46d 100644
--- a/pandas/tests/api/test_api.py
+++ b/pandas/tests/api/test_api.py
@@ -395,13 +395,5 @@ def test_util_in_top_level(self):
pd.util.foo
-def test_pandas_array_alias():
- msg = "PandasArray has been renamed NumpyExtensionArray"
- with tm.assert_produces_warning(FutureWarning, match=msg):
- res = pd.arrays.PandasArray
-
- assert res is pd.arrays.NumpyExtensionArray
-
-
def test_set_module():
assert pd.DataFrame.__module__ == "pandas"
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/58158 | 2024-04-05T15:58:26Z | 2024-04-05T17:01:56Z | 2024-04-05T17:01:55Z | 2024-04-05T18:02:17Z |
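The deleted alias relied on the module-level `__getattr__` hook from PEP 562. A self-contained sketch of that deprecation pattern, using made-up module and class names (`legacy_pkg`, `OldArray`, `NewArray`), not pandas' actual ones:

```python
import sys
import types
import warnings

class NewArray:  # the surviving class
    pass

def _module_getattr(name):
    # PEP 562: called only when normal module attribute lookup fails
    if name == "OldArray":
        warnings.warn(
            "OldArray has been renamed NewArray. Use that instead.",
            FutureWarning,
            stacklevel=2,
        )
        return NewArray
    raise AttributeError(f"module 'legacy_pkg' has no attribute {name!r}")

mod = types.ModuleType("legacy_pkg")
mod.NewArray = NewArray
mod.__getattr__ = _module_getattr
sys.modules["legacy_pkg"] = mod

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    import legacy_pkg

    assert legacy_pkg.OldArray is NewArray      # alias still resolves...
    assert caught[0].category is FutureWarning  # ...but with a warning
```

Removing the alias then amounts to deleting the `__getattr__` hook, so the attribute access raises `AttributeError` outright.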
DEPR: Categorical fastpath | diff --git a/doc/source/whatsnew/v3.0.0.rst b/doc/source/whatsnew/v3.0.0.rst
index 983768e0f67da..bb1e0b33c2ab8 100644
--- a/doc/source/whatsnew/v3.0.0.rst
+++ b/doc/source/whatsnew/v3.0.0.rst
@@ -212,6 +212,7 @@ Removal of prior version deprecations/changes
- Disallow passing a pandas type to :meth:`Index.view` (:issue:`55709`)
- Disallow units other than "s", "ms", "us", "ns" for datetime64 and timedelta64 dtypes in :func:`array` (:issue:`53817`)
- Removed "freq" keyword from :class:`PeriodArray` constructor, use "dtype" instead (:issue:`52462`)
+- Removed 'fastpath' keyword in :class:`Categorical` constructor (:issue:`20110`)
- Removed alias :class:`arrays.PandasArray` for :class:`arrays.NumpyExtensionArray` (:issue:`53694`)
- Removed deprecated "method" and "limit" keywords from :meth:`Series.replace` and :meth:`DataFrame.replace` (:issue:`53492`)
- Removed extension test classes ``BaseNoReduceTests``, ``BaseNumericReduceTests``, ``BaseBooleanReduceTests`` (:issue:`54663`)
diff --git a/pandas/core/arrays/categorical.py b/pandas/core/arrays/categorical.py
index 416331a260e9f..3af4c528ceae8 100644
--- a/pandas/core/arrays/categorical.py
+++ b/pandas/core/arrays/categorical.py
@@ -276,9 +276,6 @@ class Categorical(NDArrayBackedExtensionArray, PandasObject, ObjectStringArrayMi
provided).
dtype : CategoricalDtype
An instance of ``CategoricalDtype`` to use for this categorical.
- fastpath : bool
- The 'fastpath' keyword in Categorical is deprecated and will be
- removed in a future version. Use Categorical.from_codes instead.
copy : bool, default True
Whether to copy if the codes are unchanged.
@@ -391,20 +388,8 @@ def __init__(
categories=None,
ordered=None,
dtype: Dtype | None = None,
- fastpath: bool | lib.NoDefault = lib.no_default,
copy: bool = True,
) -> None:
- if fastpath is not lib.no_default:
- # GH#20110
- warnings.warn(
- "The 'fastpath' keyword in Categorical is deprecated and will "
- "be removed in a future version. Use Categorical.from_codes instead",
- DeprecationWarning,
- stacklevel=find_stack_level(),
- )
- else:
- fastpath = False
-
dtype = CategoricalDtype._from_values_or_dtype(
values, categories, ordered, dtype
)
@@ -412,12 +397,6 @@ def __init__(
# we may have dtype.categories be None, and we need to
# infer categories in a factorization step further below
- if fastpath:
- codes = coerce_indexer_dtype(values, dtype.categories)
- dtype = CategoricalDtype(ordered=False).update_dtype(dtype)
- super().__init__(codes, dtype)
- return
-
if not is_list_like(values):
# GH#38433
raise TypeError("Categorical input must be list-like")
diff --git a/pandas/tests/arrays/categorical/test_constructors.py b/pandas/tests/arrays/categorical/test_constructors.py
index 857b14e2a2558..1069a9e5aaa90 100644
--- a/pandas/tests/arrays/categorical/test_constructors.py
+++ b/pandas/tests/arrays/categorical/test_constructors.py
@@ -35,13 +35,6 @@
class TestCategoricalConstructors:
- def test_fastpath_deprecated(self):
- codes = np.array([1, 2, 3])
- dtype = CategoricalDtype(categories=["a", "b", "c", "d"], ordered=False)
- msg = "The 'fastpath' keyword in Categorical is deprecated"
- with tm.assert_produces_warning(DeprecationWarning, match=msg):
- Categorical(codes, dtype=dtype, fastpath=True)
-
def test_categorical_from_cat_and_dtype_str_preserve_ordered(self):
# GH#49309 we should preserve orderedness in `res`
cat = Categorical([3, 1], categories=[3, 2, 1], ordered=True)
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/58157 | 2024-04-05T15:56:26Z | 2024-04-05T19:45:43Z | 2024-04-05T19:45:43Z | 2024-04-06T00:13:28Z |
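As the updated docstring says, the supported replacement for the removed fastpath is `Categorical.from_codes`. A short sketch mirroring the codes and dtype from the deleted `test_fastpath_deprecated` test (assumes pandas is importable):

```python
import pandas as pd

# Same codes/dtype as the removed test, but built through the public
# from_codes constructor instead of Categorical(..., fastpath=True).
dtype = pd.CategoricalDtype(categories=["a", "b", "c", "d"], ordered=False)
cat = pd.Categorical.from_codes([1, 2, 3], dtype=dtype)

assert list(cat) == ["b", "c", "d"]
assert list(cat.categories) == ["a", "b", "c", "d"]
assert cat.codes.tolist() == [1, 2, 3]
```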
ENH: Add support for reading value labels from 108-format and prior Stata dta files | diff --git a/doc/source/whatsnew/v3.0.0.rst b/doc/source/whatsnew/v3.0.0.rst
index 19b448a1871c2..0f71b52120a47 100644
--- a/doc/source/whatsnew/v3.0.0.rst
+++ b/doc/source/whatsnew/v3.0.0.rst
@@ -33,6 +33,7 @@ Other enhancements
- :meth:`Styler.set_tooltips` provides alternative method to storing tooltips by using title attribute of td elements. (:issue:`56981`)
- Allow dictionaries to be passed to :meth:`pandas.Series.str.replace` via ``pat`` parameter (:issue:`51748`)
- Support passing a :class:`Series` input to :func:`json_normalize` that retains the :class:`Series` :class:`Index` (:issue:`51452`)
+- Support reading value labels from Stata 108-format (Stata 6) and earlier files (:issue:`58154`)
- Users can globally disable any ``PerformanceWarning`` by setting the option ``mode.performance_warnings`` to ``False`` (:issue:`56920`)
- :meth:`Styler.format_index_names` can now be used to format the index and column names (:issue:`48936` and :issue:`47489`)
-
diff --git a/pandas/io/stata.py b/pandas/io/stata.py
index 50ee2d52ee51d..47d879c022ee6 100644
--- a/pandas/io/stata.py
+++ b/pandas/io/stata.py
@@ -1122,6 +1122,7 @@ def __init__(
# State variables for the file
self._close_file: Callable[[], None] | None = None
self._column_selector_set = False
+ self._value_label_dict: dict[str, dict[int, str]] = {}
self._value_labels_read = False
self._dtype: np.dtype | None = None
self._lines_read = 0
@@ -1502,17 +1503,8 @@ def _decode(self, s: bytes) -> str:
)
return s.decode("latin-1")
- def _read_value_labels(self) -> None:
- self._ensure_open()
- if self._value_labels_read:
- # Don't read twice
- return
- if self._format_version <= 108:
- # Value labels are not supported in version 108 and earlier.
- self._value_labels_read = True
- self._value_label_dict: dict[str, dict[float, str]] = {}
- return
-
+ def _read_new_value_labels(self) -> None:
+ """Reads value labels with variable length strings (108 and later format)"""
if self._format_version >= 117:
self._path_or_buf.seek(self._seek_value_labels)
else:
@@ -1520,9 +1512,6 @@ def _read_value_labels(self) -> None:
offset = self._nobs * self._dtype.itemsize
self._path_or_buf.seek(self._data_location + offset)
- self._value_labels_read = True
- self._value_label_dict = {}
-
while True:
if self._format_version >= 117:
if self._path_or_buf.read(5) == b"</val": # <lbl>
@@ -1530,8 +1519,10 @@ def _read_value_labels(self) -> None:
slength = self._path_or_buf.read(4)
if not slength:
- break # end of value label table (format < 117)
- if self._format_version <= 117:
+ break # end of value label table (format < 117), or end-of-file
+ if self._format_version == 108:
+ labname = self._decode(self._path_or_buf.read(9))
+ elif self._format_version <= 117:
labname = self._decode(self._path_or_buf.read(33))
else:
labname = self._decode(self._path_or_buf.read(129))
@@ -1555,8 +1546,45 @@ def _read_value_labels(self) -> None:
self._value_label_dict[labname][val[i]] = self._decode(
txt[off[i] : end]
)
+
if self._format_version >= 117:
self._path_or_buf.read(6) # </lbl>
+
+ def _read_old_value_labels(self) -> None:
+ """Reads value labels with fixed-length strings (105 and earlier format)"""
+ assert self._dtype is not None
+ offset = self._nobs * self._dtype.itemsize
+ self._path_or_buf.seek(self._data_location + offset)
+
+ while True:
+ if not self._path_or_buf.read(2):
+ # end-of-file may have been reached, if so stop here
+ break
+
+ # otherwise back up and read again, taking byteorder into account
+ self._path_or_buf.seek(-2, os.SEEK_CUR)
+ n = self._read_uint16()
+ labname = self._decode(self._path_or_buf.read(9))
+ self._path_or_buf.read(1) # padding
+ codes = np.frombuffer(
+ self._path_or_buf.read(2 * n), dtype=f"{self._byteorder}i2", count=n
+ )
+ self._value_label_dict[labname] = {}
+ for i in range(n):
+ self._value_label_dict[labname][codes[i]] = self._decode(
+ self._path_or_buf.read(8)
+ )
+
+ def _read_value_labels(self) -> None:
+ self._ensure_open()
+ if self._value_labels_read:
+ # Don't read twice
+ return
+
+ if self._format_version >= 108:
+ self._read_new_value_labels()
+ else:
+ self._read_old_value_labels()
self._value_labels_read = True
def _read_strls(self) -> None:
@@ -1729,7 +1757,7 @@ def read(
i, _stata_elapsed_date_to_datetime_vec(data.iloc[:, i], fmt)
)
- if convert_categoricals and self._format_version > 108:
+ if convert_categoricals:
data = self._do_convert_categoricals(
data, self._value_label_dict, self._lbllist, order_categoricals
)
@@ -1845,7 +1873,7 @@ def _do_select_columns(self, data: DataFrame, columns: Sequence[str]) -> DataFra
def _do_convert_categoricals(
self,
data: DataFrame,
- value_label_dict: dict[str, dict[float, str]],
+ value_label_dict: dict[str, dict[int, str]],
lbllist: Sequence[str],
order_categoricals: bool,
) -> DataFrame:
@@ -1983,7 +2011,7 @@ def variable_labels(self) -> dict[str, str]:
self._ensure_open()
return dict(zip(self._varlist, self._variable_labels))
- def value_labels(self) -> dict[str, dict[float, str]]:
+ def value_labels(self) -> dict[str, dict[int, str]]:
"""
Return a nested dict associating each variable name to its value and label.
diff --git a/pandas/tests/io/data/stata/stata4_105.dta b/pandas/tests/io/data/stata/stata4_105.dta
new file mode 100644
index 0000000000000..f804c315b344b
Binary files /dev/null and b/pandas/tests/io/data/stata/stata4_105.dta differ
diff --git a/pandas/tests/io/data/stata/stata4_108.dta b/pandas/tests/io/data/stata/stata4_108.dta
new file mode 100644
index 0000000000000..e78c24b319e47
Binary files /dev/null and b/pandas/tests/io/data/stata/stata4_108.dta differ
diff --git a/pandas/tests/io/data/stata/stata4_111.dta b/pandas/tests/io/data/stata/stata4_111.dta
new file mode 100644
index 0000000000000..b69034174fcfe
Binary files /dev/null and b/pandas/tests/io/data/stata/stata4_111.dta differ
diff --git a/pandas/tests/io/test_stata.py b/pandas/tests/io/test_stata.py
index 0cc8018ea6213..a58655d91a417 100644
--- a/pandas/tests/io/test_stata.py
+++ b/pandas/tests/io/test_stata.py
@@ -225,7 +225,7 @@ def test_read_dta3(self, file, datapath):
tm.assert_frame_equal(parsed, expected)
@pytest.mark.parametrize(
- "file", ["stata4_113", "stata4_114", "stata4_115", "stata4_117"]
+ "file", ["stata4_111", "stata4_113", "stata4_114", "stata4_115", "stata4_117"]
)
def test_read_dta4(self, file, datapath):
file = datapath("io", "data", "stata", f"{file}.dta")
@@ -270,6 +270,52 @@ def test_read_dta4(self, file, datapath):
# stata doesn't save .category metadata
tm.assert_frame_equal(parsed, expected)
+ @pytest.mark.parametrize("file", ["stata4_105", "stata4_108"])
+ def test_readold_dta4(self, file, datapath):
+ # This test is the same as test_read_dta4 above except that the columns
+ # had to be renamed to match the restrictions in older file format
+ file = datapath("io", "data", "stata", f"{file}.dta")
+ parsed = self.read_dta(file)
+
+ expected = DataFrame.from_records(
+ [
+ ["one", "ten", "one", "one", "one"],
+ ["two", "nine", "two", "two", "two"],
+ ["three", "eight", "three", "three", "three"],
+ ["four", "seven", 4, "four", "four"],
+ ["five", "six", 5, np.nan, "five"],
+ ["six", "five", 6, np.nan, "six"],
+ ["seven", "four", 7, np.nan, "seven"],
+ ["eight", "three", 8, np.nan, "eight"],
+ ["nine", "two", 9, np.nan, "nine"],
+ ["ten", "one", "ten", np.nan, "ten"],
+ ],
+ columns=[
+ "fulllab",
+ "fulllab2",
+ "incmplab",
+ "misslab",
+ "floatlab",
+ ],
+ )
+
+ # these are all categoricals
+ for col in expected:
+ orig = expected[col].copy()
+
+ categories = np.asarray(expected["fulllab"][orig.notna()])
+ if col == "incmplab":
+ categories = orig
+
+ cat = orig.astype("category")._values
+ cat = cat.set_categories(categories, ordered=True)
+ cat.categories.rename(None, inplace=True)
+
+ expected[col] = cat
+
+ # stata doesn't save .category metadata
+ tm.assert_frame_equal(parsed, expected)
+
# File containing strls
def test_read_dta12(self, datapath):
parsed_117 = self.read_dta(datapath("io", "data", "stata", "stata12_117.dta"))
| - [x] closes #58154
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
This change extends support for reading value labels to Stata 108-format (Stata 6) and earlier dta files. | https://api.github.com/repos/pandas-dev/pandas/pulls/58155 | 2024-04-05T13:23:31Z | 2024-04-09T16:55:40Z | 2024-04-09T16:55:40Z | 2024-04-09T16:55:47Z |
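The `_read_old_value_labels` layout added above (uint16 count, 9-byte label name, a pad byte, `n` int16 codes, then `n` 8-byte fixed-width strings) can be exercised on a hand-built record. The bytes below are fabricated little-endian test data, not from a real .dta file; the actual reader consults the file's byteorder, since older Stata files may also be big-endian:

```python
import struct

# Build one 105-format style value-label record by hand.
record = (
    struct.pack("<H", 2)          # n = number of labels
    + b"yesno".ljust(9, b"\x00")  # 9-byte label name
    + b"\x00"                     # padding byte
    + struct.pack("<2h", 0, 1)    # n int16 codes
    + b"no".ljust(8, b"\x00")     # n 8-byte fixed-width label strings
    + b"yes".ljust(8, b"\x00")
)

n = struct.unpack_from("<H", record, 0)[0]
labname = record[2:11].split(b"\x00", 1)[0].decode("latin-1")
codes = struct.unpack_from(f"<{n}h", record, 12)
start = 12 + 2 * n
value_labels = {
    codes[i]: record[start + 8 * i : start + 8 * i + 8]
    .split(b"\x00", 1)[0]
    .decode("latin-1")
    for i in range(n)
}

assert labname == "yesno"
assert value_labels == {0: "no", 1: "yes"}
```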
BUG: Fix Index.sort_values with natsort_key raises | diff --git a/doc/source/whatsnew/v3.0.0.rst b/doc/source/whatsnew/v3.0.0.rst
index d2d5707f32bf3..46ecb8ac9db5d 100644
--- a/doc/source/whatsnew/v3.0.0.rst
+++ b/doc/source/whatsnew/v3.0.0.rst
@@ -448,6 +448,7 @@ Other
- Bug in :func:`unique` on :class:`Index` not always returning :class:`Index` (:issue:`57043`)
- Bug in :meth:`DataFrame.sort_index` when passing ``axis="columns"`` and ``ignore_index=True`` and ``ascending=False`` not returning a :class:`RangeIndex` columns (:issue:`57293`)
- Bug in :meth:`DataFrame.where` where using a non-bool type array in the function would return a ``ValueError`` instead of a ``TypeError`` (:issue:`56330`)
+- Bug in :meth:`Index.sort_values` when passing a key function that turns values into tuples, e.g. ``key=natsort.natsort_key``, would raise ``TypeError`` (:issue:`56081`)
- Bug in Dataframe Interchange Protocol implementation was returning incorrect results for data buffers' associated dtype, for string and datetime columns (:issue:`54781`)
.. ***DO NOT USE THIS SECTION***
diff --git a/pandas/core/sorting.py b/pandas/core/sorting.py
index 493e856c6dcc6..002efec1d8b52 100644
--- a/pandas/core/sorting.py
+++ b/pandas/core/sorting.py
@@ -577,7 +577,7 @@ def ensure_key_mapped(
if isinstance(
values, Index
): # convert to a new Index subclass, not necessarily the same
- result = Index(result)
+ result = Index(result, tupleize_cols=False)
else:
# try to revert to original type otherwise
type_of_values = type(values)
diff --git a/pandas/tests/indexes/test_common.py b/pandas/tests/indexes/test_common.py
index a2dee61295c74..732f7cc624f86 100644
--- a/pandas/tests/indexes/test_common.py
+++ b/pandas/tests/indexes/test_common.py
@@ -479,6 +479,17 @@ def test_sort_values_with_missing(index_with_missing, na_position, request):
tm.assert_index_equal(result, expected)
+def test_sort_values_natsort_key():
+ # GH#56081
+ def split_convert(s):
+ return tuple(int(x) for x in s.split("."))
+
+ idx = pd.Index(["1.9", "2.0", "1.11", "1.10"])
+ expected = pd.Index(["1.9", "1.10", "1.11", "2.0"])
+ result = idx.sort_values(key=lambda x: tuple(map(split_convert, x)))
+ tm.assert_index_equal(result, expected)
+
+
def test_ndarray_compat_properties(index):
if isinstance(index, PeriodIndex) and not IS64:
pytest.skip("Overflow")
| - [x] closes #56081 (Replace xxxx with the GitHub issue number)
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/58148 | 2024-04-04T12:09:08Z | 2024-04-05T16:50:15Z | 2024-04-05T16:50:15Z | 2024-04-06T04:16:14Z |
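The regression came from `ensure_key_mapped` re-wrapping the key's output in `Index(...)`, which tupleized an array of tuples into a `MultiIndex`; passing `tupleize_cols=False` keeps it a flat object Index. The ordering the new test checks is plain natural sorting, shown here with the standard library only:

```python
def split_convert(s):
    # "1.10" -> (1, 10): compare numerically, component by component
    return tuple(int(x) for x in s.split("."))

values = ["1.9", "2.0", "1.11", "1.10"]

# lexicographic order puts "1.10" before "1.9"
assert sorted(values) == ["1.10", "1.11", "1.9", "2.0"]
# the tuple-valued key yields the natural / version-style order
assert sorted(values, key=split_convert) == ["1.9", "1.10", "1.11", "2.0"]
```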
Backport PR #58138 on branch 2.2.x (BLD: Fix nightlies not building) | diff --git a/.github/workflows/wheels.yml b/.github/workflows/wheels.yml
index 470c044d2e99e..b9bfc766fb45c 100644
--- a/.github/workflows/wheels.yml
+++ b/.github/workflows/wheels.yml
@@ -139,8 +139,7 @@ jobs:
shell: bash -el {0}
run: echo "sdist_name=$(cd ./dist && ls -d */)" >> "$GITHUB_ENV"
- - name: Build normal wheels
- if: ${{ (env.IS_SCHEDULE_DISPATCH != 'true' || env.IS_PUSH == 'true') }}
+ - name: Build wheels
uses: pypa/cibuildwheel@v2.17.0
with:
package-dir: ./dist/${{ startsWith(matrix.buildplat[1], 'macosx') && env.sdist_name || needs.build_sdist.outputs.sdist_file }}
| Backport PR #58138: BLD: Fix nightlies not building | https://api.github.com/repos/pandas-dev/pandas/pulls/58140 | 2024-04-03T19:30:17Z | 2024-04-03T22:02:48Z | 2024-04-03T22:02:48Z | 2024-04-03T22:02:48Z |
BLD: Fix nightlies not building | diff --git a/.github/workflows/wheels.yml b/.github/workflows/wheels.yml
index d6cfd3b0ad257..4bd9068e91b67 100644
--- a/.github/workflows/wheels.yml
+++ b/.github/workflows/wheels.yml
@@ -139,8 +139,7 @@ jobs:
shell: bash -el {0}
run: echo "sdist_name=$(cd ./dist && ls -d */)" >> "$GITHUB_ENV"
- - name: Build normal wheels
- if: ${{ (env.IS_SCHEDULE_DISPATCH != 'true' || env.IS_PUSH == 'true') }}
+ - name: Build wheels
uses: pypa/cibuildwheel@v2.17.0
with:
package-dir: ./dist/${{ startsWith(matrix.buildplat[1], 'macosx') && env.sdist_name || needs.build_sdist.outputs.sdist_file }}
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/58138 | 2024-04-03T17:46:00Z | 2024-04-03T19:29:37Z | 2024-04-03T19:29:37Z | 2024-04-03T19:30:43Z |
Backport PR #58100 on branch 2.2.x (MNT: fix compatibility with beautifulsoup4 4.13.0b2) | diff --git a/pandas/io/html.py b/pandas/io/html.py
index 26e71c9546ffd..4eeeb1b655f8a 100644
--- a/pandas/io/html.py
+++ b/pandas/io/html.py
@@ -591,14 +591,8 @@ class _BeautifulSoupHtml5LibFrameParser(_HtmlFrameParser):
:class:`pandas.io.html._HtmlFrameParser`.
"""
- def __init__(self, *args, **kwargs) -> None:
- super().__init__(*args, **kwargs)
- from bs4 import SoupStrainer
-
- self._strainer = SoupStrainer("table")
-
def _parse_tables(self, document, match, attrs):
- element_name = self._strainer.name
+ element_name = "table"
tables = document.find_all(element_name, attrs=attrs)
if not tables:
raise ValueError("No tables found")
| Backport PR #58100: MNT: fix compatibility with beautifulsoup4 4.13.0b2 | https://api.github.com/repos/pandas-dev/pandas/pulls/58137 | 2024-04-03T17:28:29Z | 2024-04-03T19:29:53Z | 2024-04-03T19:29:53Z | 2024-04-03T19:29:53Z |
DOC: Add return value for Index.slice_locs when label not found #32680 | diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index 73e564f95cf65..feb58a4806047 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -6453,6 +6453,10 @@ def slice_locs(self, start=None, end=None, step=None) -> tuple[int, int]:
>>> idx = pd.Index(list("abcd"))
>>> idx.slice_locs(start="b", end="c")
(1, 3)
+
+ >>> idx = pd.Index(list("bcde"))
+ >>> idx.slice_locs(start="a", end="c")
+ (0, 2)
"""
inc = step is None or step >= 0
| - [ ] Add return value for Index.slice_locs when label not found #32680
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/58135 | 2024-04-03T16:38:26Z | 2024-04-03T21:36:34Z | 2024-04-03T21:36:34Z | 2024-04-03T21:36:41Z |
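Since this PR only documents existing behavior, the new doctest can be checked against any recent pandas; a quick sketch, with values taken from the example in the diff:

```python
import pandas as pd

# "a" is not in the index, so slice_locs falls back to the nearest
# positional bound (0); "c" is found at position 1, so end is 1 + 1 = 2.
idx = pd.Index(list("bcde"))
print(idx.slice_locs(start="a", end="c"))  # (0, 2)
```

The original example from the docstring (`pd.Index(list("abcd")).slice_locs(start="b", end="c")`) still returns `(1, 3)`, so the two together show the found and not-found cases.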
Backport PR #58126: BLD: Build wheels with numpy 2.0rc1 | diff --git a/pyproject.toml b/pyproject.toml
index c225ed80dcb10..b2764b137a1f8 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -6,12 +6,9 @@ requires = [
"meson==1.2.1",
"wheel",
"Cython==3.0.5", # Note: sync with setup.py, environment.yml and asv.conf.json
- # Any NumPy version should be fine for compiling. Users are unlikely
- # to get a NumPy<1.25 so the result will be compatible with all relevant
- # NumPy versions (if not it is presumably compatible with their version).
- # Pin <2.0 for releases until tested against an RC. But explicitly allow
- # testing the `.dev0` nightlies (which require the extra index).
- "numpy>1.22.4,<=2.0.0.dev0",
+ # Force numpy higher than 2.0rc1, so that built wheels are compatible
+ # with both numpy 1 and 2
+ "numpy>=2.0.0rc1",
"versioneer[toml]"
]
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/58127 | 2024-04-03T02:04:55Z | 2024-04-03T02:57:16Z | 2024-04-03T02:57:16Z | 2024-04-03T02:57:16Z |
BLD: Build wheels with numpy 2.0rc1 | diff --git a/pyproject.toml b/pyproject.toml
index 259d003de92d5..c9180cf04e7f5 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -6,12 +6,9 @@ requires = [
"meson==1.2.1",
"wheel",
"Cython~=3.0.5", # Note: sync with setup.py, environment.yml and asv.conf.json
- # Any NumPy version should be fine for compiling. Users are unlikely
- # to get a NumPy<1.25 so the result will be compatible with all relevant
- # NumPy versions (if not it is presumably compatible with their version).
- # Pin <2.0 for releases until tested against an RC. But explicitly allow
- # testing the `.dev0` nightlies (which require the extra index).
- "numpy>1.22.4,<=2.0.0.dev0",
+ # Force numpy higher than 2.0rc1, so that built wheels are compatible
+ # with both numpy 1 and 2
+ "numpy>=2.0.0rc1",
"versioneer[toml]"
]
@@ -152,9 +149,6 @@ setup = ['--vsenv'] # For Windows
skip = "cp36-* cp37-* cp38-* pp* *_i686 *_ppc64le *_s390x"
build-verbosity = "3"
environment = {LDFLAGS="-Wl,--strip-all"}
-# TODO: remove this once numpy 2.0 proper releases
-# and specify numpy 2.0 as a dependency in [build-system] requires in pyproject.toml
-before-build = "pip install numpy==2.0.0rc1"
test-requires = "hypothesis>=6.46.1 pytest>=7.3.2 pytest-xdist>=2.2.0"
test-command = """
PANDAS_CI='1' python -c 'import pandas as pd; \
@@ -163,9 +157,7 @@ test-command = """
"""
[tool.cibuildwheel.windows]
-# TODO: remove this once numpy 2.0 proper releases
-# and specify numpy 2.0 as a dependency in [build-system] requires in pyproject.toml
-before-build = "pip install delvewheel numpy==2.0.0rc1"
+before-build = "pip install delvewheel"
repair-wheel-command = "delvewheel repair -w {dest_dir} {wheel}"
[[tool.cibuildwheel.overrides]]
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/58126 | 2024-04-02T23:15:29Z | 2024-04-03T00:23:49Z | 2024-04-03T00:23:49Z | 2024-04-03T02:04:15Z |
DEPR: keyword-only arguments for DataFrame/Series statistical methods | diff --git a/doc/source/user_guide/basics.rst b/doc/source/user_guide/basics.rst
index eed3fc149263a..92799359a61d2 100644
--- a/doc/source/user_guide/basics.rst
+++ b/doc/source/user_guide/basics.rst
@@ -476,15 +476,15 @@ For example:
.. ipython:: python
df
- df.mean(0)
- df.mean(1)
+ df.mean(axis=0)
+ df.mean(axis=1)
All such methods have a ``skipna`` option signaling whether to exclude missing
data (``True`` by default):
.. ipython:: python
- df.sum(0, skipna=False)
+ df.sum(axis=0, skipna=False)
df.sum(axis=1, skipna=True)
Combined with the broadcasting / arithmetic behavior, one can describe various
@@ -495,8 +495,8 @@ standard deviation of 1), very concisely:
ts_stand = (df - df.mean()) / df.std()
ts_stand.std()
- xs_stand = df.sub(df.mean(1), axis=0).div(df.std(1), axis=0)
- xs_stand.std(1)
+ xs_stand = df.sub(df.mean(axis=1), axis=0).div(df.std(axis=1), axis=0)
+ xs_stand.std(axis=1)
Note that methods like :meth:`~DataFrame.cumsum` and :meth:`~DataFrame.cumprod`
preserve the location of ``NaN`` values. This is somewhat different from
diff --git a/doc/source/user_guide/indexing.rst b/doc/source/user_guide/indexing.rst
index fd843ca68a60b..0da87e1d31fec 100644
--- a/doc/source/user_guide/indexing.rst
+++ b/doc/source/user_guide/indexing.rst
@@ -952,7 +952,7 @@ To select a row where each column meets its own criterion:
values = {'ids': ['a', 'b'], 'ids2': ['a', 'c'], 'vals': [1, 3]}
- row_mask = df.isin(values).all(1)
+ row_mask = df.isin(values).all(axis=1)
df[row_mask]
diff --git a/doc/source/whatsnew/v3.0.0.rst b/doc/source/whatsnew/v3.0.0.rst
index 4debd41de213f..e4711f6f215cd 100644
--- a/doc/source/whatsnew/v3.0.0.rst
+++ b/doc/source/whatsnew/v3.0.0.rst
@@ -190,6 +190,7 @@ Other Deprecations
- Deprecated :meth:`Timestamp.utcfromtimestamp`, use ``Timestamp.fromtimestamp(ts, "UTC")`` instead (:issue:`56680`)
- Deprecated :meth:`Timestamp.utcnow`, use ``Timestamp.now("UTC")`` instead (:issue:`56680`)
+- Deprecated allowing non-keyword arguments in :meth:`DataFrame.all`, :meth:`DataFrame.min`, :meth:`DataFrame.max`, :meth:`DataFrame.sum`, :meth:`DataFrame.prod`, :meth:`DataFrame.mean`, :meth:`DataFrame.median`, :meth:`DataFrame.sem`, :meth:`DataFrame.var`, :meth:`DataFrame.std`, :meth:`DataFrame.skew`, :meth:`DataFrame.kurt`, :meth:`Series.all`, :meth:`Series.min`, :meth:`Series.max`, :meth:`Series.sum`, :meth:`Series.prod`, :meth:`Series.mean`, :meth:`Series.median`, :meth:`Series.sem`, :meth:`Series.var`, :meth:`Series.std`, :meth:`Series.skew`, and :meth:`Series.kurt`. (:issue:`57087`)
- Deprecated allowing non-keyword arguments in :meth:`Series.to_markdown` except ``buf``. (:issue:`57280`)
- Deprecated allowing non-keyword arguments in :meth:`Series.to_string` except ``buf``. (:issue:`57280`)
-
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 66a68755a2a09..97cf86d45812d 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -64,6 +64,7 @@
from pandas.util._decorators import (
Appender,
Substitution,
+ deprecate_nonkeyword_arguments,
doc,
set_module,
)
@@ -11543,6 +11544,7 @@ def all(
**kwargs,
) -> Series | bool: ...
+ @deprecate_nonkeyword_arguments(version="3.0", allowed_args=["self"], name="all")
@doc(make_doc("all", ndim=1))
def all(
self,
@@ -11589,6 +11591,7 @@ def min(
**kwargs,
) -> Series | Any: ...
+ @deprecate_nonkeyword_arguments(version="3.0", allowed_args=["self"], name="min")
@doc(make_doc("min", ndim=2))
def min(
self,
@@ -11635,6 +11638,7 @@ def max(
**kwargs,
) -> Series | Any: ...
+ @deprecate_nonkeyword_arguments(version="3.0", allowed_args=["self"], name="max")
@doc(make_doc("max", ndim=2))
def max(
self,
@@ -11650,6 +11654,7 @@ def max(
result = result.__finalize__(self, method="max")
return result
+ @deprecate_nonkeyword_arguments(version="3.0", allowed_args=["self"], name="sum")
@doc(make_doc("sum", ndim=2))
def sum(
self,
@@ -11670,6 +11675,7 @@ def sum(
result = result.__finalize__(self, method="sum")
return result
+ @deprecate_nonkeyword_arguments(version="3.0", allowed_args=["self"], name="prod")
@doc(make_doc("prod", ndim=2))
def prod(
self,
@@ -11721,6 +11727,7 @@ def mean(
**kwargs,
) -> Series | Any: ...
+ @deprecate_nonkeyword_arguments(version="3.0", allowed_args=["self"], name="mean")
@doc(make_doc("mean", ndim=2))
def mean(
self,
@@ -11767,6 +11774,7 @@ def median(
**kwargs,
) -> Series | Any: ...
+ @deprecate_nonkeyword_arguments(version="3.0", allowed_args=["self"], name="median")
@doc(make_doc("median", ndim=2))
def median(
self,
@@ -11816,6 +11824,7 @@ def sem(
**kwargs,
) -> Series | Any: ...
+ @deprecate_nonkeyword_arguments(version="3.0", allowed_args=["self"], name="sem")
@doc(make_doc("sem", ndim=2))
def sem(
self,
@@ -11866,6 +11875,7 @@ def var(
**kwargs,
) -> Series | Any: ...
+ @deprecate_nonkeyword_arguments(version="3.0", allowed_args=["self"], name="var")
@doc(make_doc("var", ndim=2))
def var(
self,
@@ -11916,6 +11926,7 @@ def std(
**kwargs,
) -> Series | Any: ...
+ @deprecate_nonkeyword_arguments(version="3.0", allowed_args=["self"], name="std")
@doc(make_doc("std", ndim=2))
def std(
self,
@@ -11963,6 +11974,7 @@ def skew(
**kwargs,
) -> Series | Any: ...
+ @deprecate_nonkeyword_arguments(version="3.0", allowed_args=["self"], name="skew")
@doc(make_doc("skew", ndim=2))
def skew(
self,
@@ -12009,6 +12021,7 @@ def kurt(
**kwargs,
) -> Series | Any: ...
+ @deprecate_nonkeyword_arguments(version="3.0", allowed_args=["self"], name="kurt")
@doc(make_doc("kurt", ndim=2))
def kurt(
self,
diff --git a/pandas/core/series.py b/pandas/core/series.py
index ee496a355f6ca..d6f3c5990820d 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -6189,6 +6189,7 @@ def any( # type: ignore[override]
filter_type="bool",
)
+ @deprecate_nonkeyword_arguments(version="3.0", allowed_args=["self"], name="all")
@Appender(make_doc("all", ndim=1))
def all(
self,
@@ -6208,6 +6209,7 @@ def all(
filter_type="bool",
)
+ @deprecate_nonkeyword_arguments(version="3.0", allowed_args=["self"], name="min")
@doc(make_doc("min", ndim=1))
def min(
self,
@@ -6220,6 +6222,7 @@ def min(
self, axis=axis, skipna=skipna, numeric_only=numeric_only, **kwargs
)
+ @deprecate_nonkeyword_arguments(version="3.0", allowed_args=["self"], name="max")
@doc(make_doc("max", ndim=1))
def max(
self,
@@ -6232,6 +6235,7 @@ def max(
self, axis=axis, skipna=skipna, numeric_only=numeric_only, **kwargs
)
+ @deprecate_nonkeyword_arguments(version="3.0", allowed_args=["self"], name="sum")
@doc(make_doc("sum", ndim=1))
def sum(
self,
@@ -6250,6 +6254,7 @@ def sum(
**kwargs,
)
+ @deprecate_nonkeyword_arguments(version="3.0", allowed_args=["self"], name="prod")
@doc(make_doc("prod", ndim=1))
def prod(
self,
@@ -6268,6 +6273,7 @@ def prod(
**kwargs,
)
+ @deprecate_nonkeyword_arguments(version="3.0", allowed_args=["self"], name="mean")
@doc(make_doc("mean", ndim=1))
def mean(
self,
@@ -6280,6 +6286,7 @@ def mean(
self, axis=axis, skipna=skipna, numeric_only=numeric_only, **kwargs
)
+ @deprecate_nonkeyword_arguments(version="3.0", allowed_args=["self"], name="median")
@doc(make_doc("median", ndim=1))
def median(
self,
@@ -6292,6 +6299,7 @@ def median(
self, axis=axis, skipna=skipna, numeric_only=numeric_only, **kwargs
)
+ @deprecate_nonkeyword_arguments(version="3.0", allowed_args=["self"], name="sem")
@doc(make_doc("sem", ndim=1))
def sem(
self,
@@ -6310,6 +6318,7 @@ def sem(
**kwargs,
)
+ @deprecate_nonkeyword_arguments(version="3.0", allowed_args=["self"], name="var")
@doc(make_doc("var", ndim=1))
def var(
self,
@@ -6328,6 +6337,7 @@ def var(
**kwargs,
)
+ @deprecate_nonkeyword_arguments(version="3.0", allowed_args=["self"], name="std")
@doc(make_doc("std", ndim=1))
def std(
self,
@@ -6346,6 +6356,7 @@ def std(
**kwargs,
)
+ @deprecate_nonkeyword_arguments(version="3.0", allowed_args=["self"], name="skew")
@doc(make_doc("skew", ndim=1))
def skew(
self,
@@ -6358,6 +6369,7 @@ def skew(
self, axis=axis, skipna=skipna, numeric_only=numeric_only, **kwargs
)
+ @deprecate_nonkeyword_arguments(version="3.0", allowed_args=["self"], name="kurt")
@doc(make_doc("kurt", ndim=1))
def kurt(
self,
diff --git a/pandas/tests/apply/test_frame_apply.py b/pandas/tests/apply/test_frame_apply.py
index 9f3fee686a056..ba551fef3a42d 100644
--- a/pandas/tests/apply/test_frame_apply.py
+++ b/pandas/tests/apply/test_frame_apply.py
@@ -402,7 +402,7 @@ def test_apply_yield_list(float_frame):
def test_apply_reduce_Series(float_frame):
float_frame.iloc[::2, float_frame.columns.get_loc("A")] = np.nan
- expected = float_frame.mean(1)
+ expected = float_frame.mean(axis=1)
result = float_frame.apply(np.mean, axis=1)
tm.assert_series_equal(result, expected)
diff --git a/pandas/tests/apply/test_str.py b/pandas/tests/apply/test_str.py
index e9192dae66a46..50cf0f0ed3e84 100644
--- a/pandas/tests/apply/test_str.py
+++ b/pandas/tests/apply/test_str.py
@@ -19,27 +19,18 @@
@pytest.mark.parametrize("func", ["sum", "mean", "min", "max", "std"])
@pytest.mark.parametrize(
- "args,kwds",
+ "kwds",
[
- pytest.param([], {}, id="no_args_or_kwds"),
- pytest.param([1], {}, id="axis_from_args"),
- pytest.param([], {"axis": 1}, id="axis_from_kwds"),
- pytest.param([], {"numeric_only": True}, id="optional_kwds"),
- pytest.param([1, True], {"numeric_only": True}, id="args_and_kwds"),
+ pytest.param({}, id="no_kwds"),
+ pytest.param({"axis": 1}, id="on_axis"),
+ pytest.param({"numeric_only": True}, id="func_kwds"),
+ pytest.param({"axis": 1, "numeric_only": True}, id="axis_and_func_kwds"),
],
)
@pytest.mark.parametrize("how", ["agg", "apply"])
-def test_apply_with_string_funcs(request, float_frame, func, args, kwds, how):
- if len(args) > 1 and how == "agg":
- request.applymarker(
- pytest.mark.xfail(
- raises=TypeError,
- reason="agg/apply signature mismatch - agg passes 2nd "
- "argument to func",
- )
- )
- result = getattr(float_frame, how)(func, *args, **kwds)
- expected = getattr(float_frame, func)(*args, **kwds)
+def test_apply_with_string_funcs(request, float_frame, func, kwds, how):
+ result = getattr(float_frame, how)(func, **kwds)
+ expected = getattr(float_frame, func)(**kwds)
tm.assert_series_equal(result, expected)
diff --git a/pandas/tests/frame/methods/test_asof.py b/pandas/tests/frame/methods/test_asof.py
index 029aa3a5b8f05..c510ef78d03aa 100644
--- a/pandas/tests/frame/methods/test_asof.py
+++ b/pandas/tests/frame/methods/test_asof.py
@@ -36,18 +36,18 @@ def test_basic(self, date_range_frame):
dates = date_range("1/1/1990", periods=N * 3, freq="25s")
result = df.asof(dates)
- assert result.notna().all(1).all()
+ assert result.notna().all(axis=1).all()
lb = df.index[14]
ub = df.index[30]
dates = list(dates)
result = df.asof(dates)
- assert result.notna().all(1).all()
+ assert result.notna().all(axis=1).all()
mask = (result.index >= lb) & (result.index < ub)
rs = result[mask]
- assert (rs == 14).all(1).all()
+ assert (rs == 14).all(axis=1).all()
def test_subset(self, date_range_frame):
N = 10
diff --git a/pandas/tests/frame/methods/test_fillna.py b/pandas/tests/frame/methods/test_fillna.py
index 81f66cfd48b0a..b8f67138889cc 100644
--- a/pandas/tests/frame/methods/test_fillna.py
+++ b/pandas/tests/frame/methods/test_fillna.py
@@ -466,7 +466,7 @@ def test_fillna_dict_series(self):
# disable this for now
with pytest.raises(NotImplementedError, match="column by column"):
- df.fillna(df.max(1), axis=1)
+ df.fillna(df.max(axis=1), axis=1)
def test_fillna_dataframe(self):
# GH#8377
diff --git a/pandas/tests/frame/test_reductions.py b/pandas/tests/frame/test_reductions.py
index bb79072d389db..82cdd34e59f11 100644
--- a/pandas/tests/frame/test_reductions.py
+++ b/pandas/tests/frame/test_reductions.py
@@ -773,7 +773,7 @@ def test_operators_timedelta64(self):
tm.assert_series_equal(result, expected)
# works when only those columns are selected
- result = mixed[["A", "B"]].min(1)
+ result = mixed[["A", "B"]].min(axis=1)
expected = Series([timedelta(days=-1)] * 3)
tm.assert_series_equal(result, expected)
@@ -832,8 +832,8 @@ def test_std_datetime64_with_nat(self, values, skipna, request, unit):
def test_sum_corner(self):
empty_frame = DataFrame()
- axis0 = empty_frame.sum(0)
- axis1 = empty_frame.sum(1)
+ axis0 = empty_frame.sum(axis=0)
+ axis1 = empty_frame.sum(axis=1)
assert isinstance(axis0, Series)
assert isinstance(axis1, Series)
assert len(axis0) == 0
@@ -967,8 +967,8 @@ def test_sum_object(self, float_frame):
def test_sum_bool(self, float_frame):
# ensure this works, bug report
bools = np.isnan(float_frame)
- bools.sum(1)
- bools.sum(0)
+ bools.sum(axis=1)
+ bools.sum(axis=0)
def test_sum_mixed_datetime(self):
# GH#30886
@@ -990,7 +990,7 @@ def test_mean_corner(self, float_frame, float_string_frame):
# take mean of boolean column
float_frame["bool"] = float_frame["A"] > 0
- means = float_frame.mean(0)
+ means = float_frame.mean(axis=0)
assert means["bool"] == float_frame["bool"].values.mean()
def test_mean_datetimelike(self):
@@ -1043,13 +1043,13 @@ def test_mean_extensionarray_numeric_only_true(self):
def test_stats_mixed_type(self, float_string_frame):
with pytest.raises(TypeError, match="could not convert"):
- float_string_frame.std(1)
+ float_string_frame.std(axis=1)
with pytest.raises(TypeError, match="could not convert"):
- float_string_frame.var(1)
+ float_string_frame.var(axis=1)
with pytest.raises(TypeError, match="unsupported operand type"):
- float_string_frame.mean(1)
+ float_string_frame.mean(axis=1)
with pytest.raises(TypeError, match="could not convert"):
- float_string_frame.skew(1)
+ float_string_frame.skew(axis=1)
def test_sum_bools(self):
df = DataFrame(index=range(1), columns=range(10))
@@ -1331,11 +1331,11 @@ def test_any_all_extra(self):
result = df[["A", "B"]].any(axis=1, bool_only=True)
tm.assert_series_equal(result, expected)
- result = df.all(1)
+ result = df.all(axis=1)
expected = Series([True, False, False], index=["a", "b", "c"])
tm.assert_series_equal(result, expected)
- result = df.all(1, bool_only=True)
+ result = df.all(axis=1, bool_only=True)
tm.assert_series_equal(result, expected)
# Axis is None
diff --git a/pandas/tests/reductions/test_reductions.py b/pandas/tests/reductions/test_reductions.py
index 73ac5d0d8f62e..ba283e807a219 100644
--- a/pandas/tests/reductions/test_reductions.py
+++ b/pandas/tests/reductions/test_reductions.py
@@ -665,7 +665,7 @@ def test_empty(self, method, unit, use_bottleneck, dtype):
# GH#844 (changed in GH#9422)
df = DataFrame(np.empty((10, 0)), dtype=dtype)
- assert (getattr(df, method)(1) == unit).all()
+ assert (getattr(df, method)(axis=1) == unit).all()
s = Series([1], dtype=dtype)
result = getattr(s, method)(min_count=2)
diff --git a/pandas/tests/series/test_api.py b/pandas/tests/series/test_api.py
index 7b45a267a4572..a63ffbbd3a5a1 100644
--- a/pandas/tests/series/test_api.py
+++ b/pandas/tests/series/test_api.py
@@ -107,7 +107,7 @@ def test_contains(self, datetime_series):
def test_axis_alias(self):
s = Series([1, 2, np.nan])
tm.assert_series_equal(s.dropna(axis="rows"), s.dropna(axis="index"))
- assert s.dropna().sum("rows") == 3
+ assert s.dropna().sum(axis="rows") == 3
assert s._get_axis_number("rows") == 0
assert s._get_axis_name("rows") == "index"
diff --git a/pandas/tests/test_multilevel.py b/pandas/tests/test_multilevel.py
index 4e2af9fef377b..97e0fa93c90ef 100644
--- a/pandas/tests/test_multilevel.py
+++ b/pandas/tests/test_multilevel.py
@@ -135,7 +135,7 @@ def test_multilevel_consolidate(self):
df = DataFrame(
np.random.default_rng(2).standard_normal((4, 4)), index=index, columns=index
)
- df["Totals", ""] = df.sum(1)
+ df["Totals", ""] = df.sum(axis=1)
df = df._consolidate()
def test_level_with_tuples(self):
diff --git a/scripts/tests/test_validate_docstrings.py b/scripts/tests/test_validate_docstrings.py
index d2e92bb971888..3bffd1f1987aa 100644
--- a/scripts/tests/test_validate_docstrings.py
+++ b/scripts/tests/test_validate_docstrings.py
@@ -36,7 +36,7 @@ def redundant_import(self, paramx=None, paramy=None) -> None:
>>> import pandas as pd
>>> df = pd.DataFrame(np.ones((3, 3)),
... columns=('a', 'b', 'c'))
- >>> df.all(1)
+ >>> df.all(axis=1)
0 True
1 True
2 True
| - [x] closes #57087 (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/58122 | 2024-04-02T19:19:01Z | 2024-04-09T16:58:28Z | 2024-04-09T16:58:28Z | 2024-04-09T16:58:36Z |
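The documentation changes above simply swap positional `axis` for the keyword form; a minimal sketch of the recommended spelling (the toy frame is an assumption for illustration, not from the PR):

```python
import pandas as pd

df = pd.DataFrame({"a": [1.0, 2.0], "b": [3.0, 5.0]})

# After this deprecation, axis should be passed by keyword,
# not positionally as df.mean(0) / df.mean(1):
col_means = df.mean(axis=0)  # per-column means: a -> 1.5, b -> 4.0
row_means = df.mean(axis=1)  # per-row means: 2.0, 3.5
```

The positional forms keep working until the deprecation is enforced; they just emit a `FutureWarning` under this change.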
PERF: concat([Series, DataFrame], axis=0) returns RangeIndex columns when possible | diff --git a/doc/source/whatsnew/v3.0.0.rst b/doc/source/whatsnew/v3.0.0.rst
index 78ee359397df5..d2d5707f32bf3 100644
--- a/doc/source/whatsnew/v3.0.0.rst
+++ b/doc/source/whatsnew/v3.0.0.rst
@@ -298,6 +298,7 @@ Performance improvements
- :attr:`Categorical.categories` returns a :class:`RangeIndex` columns instead of an :class:`Index` if the constructed ``values`` was a ``range``. (:issue:`57787`)
- :class:`DataFrame` returns a :class:`RangeIndex` columns when possible when ``data`` is a ``dict`` (:issue:`57943`)
- :class:`Series` returns a :class:`RangeIndex` index when possible when ``data`` is a ``dict`` (:issue:`58118`)
+- :func:`concat` returns a :class:`RangeIndex` column when possible when ``objs`` contains :class:`Series` and :class:`DataFrame` and ``axis=0`` (:issue:`58119`)
- :func:`concat` returns a :class:`RangeIndex` level in the :class:`MultiIndex` result when ``keys`` is a ``range`` or :class:`RangeIndex` (:issue:`57542`)
- :meth:`RangeIndex.append` returns a :class:`RangeIndex` instead of a :class:`Index` when appending values that could continue the :class:`RangeIndex` (:issue:`57467`)
- :meth:`Series.str.extract` returns a :class:`RangeIndex` columns instead of an :class:`Index` column when possible (:issue:`57542`)
diff --git a/pandas/core/reshape/concat.py b/pandas/core/reshape/concat.py
index 0868f711093d6..d17e5b475ae57 100644
--- a/pandas/core/reshape/concat.py
+++ b/pandas/core/reshape/concat.py
@@ -518,8 +518,11 @@ def _sanitize_mixed_ndim(
# to have unique names
name = current_column
current_column += 1
-
- obj = sample._constructor({name: obj}, copy=False)
+ obj = sample._constructor(obj, copy=False)
+ if isinstance(obj, ABCDataFrame):
+ obj.columns = range(name, name + 1, 1)
+ else:
+ obj = sample._constructor({name: obj}, copy=False)
new_objs.append(obj)
diff --git a/pandas/tests/reshape/concat/test_concat.py b/pandas/tests/reshape/concat/test_concat.py
index b986aa8182219..92e756756547d 100644
--- a/pandas/tests/reshape/concat/test_concat.py
+++ b/pandas/tests/reshape/concat/test_concat.py
@@ -912,3 +912,11 @@ def test_concat_none_with_timezone_timestamp():
result = concat([df1, df2], ignore_index=True)
expected = DataFrame({"A": [None, pd.Timestamp("1990-12-20 00:00:00+00:00")]})
tm.assert_frame_equal(result, expected)
+
+
+def test_concat_with_series_and_frame_returns_rangeindex_columns():
+ ser = Series([0])
+ df = DataFrame([1, 2])
+ result = concat([ser, df])
+ expected = DataFrame([0, 1, 2], index=[0, 0, 1])
+ tm.assert_frame_equal(result, expected, check_column_type=True)
| Discovered in #57441 | https://api.github.com/repos/pandas-dev/pandas/pulls/58119 | 2024-04-02T18:13:03Z | 2024-04-03T20:14:07Z | 2024-04-03T20:14:07Z | 2024-04-03T20:15:11Z |
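The test added in the diff exercises behavior that can mostly be reproduced on any recent pandas — a Series concatenated with a DataFrame along `axis=0` lands in column `0` (the `RangeIndex` column *type* is the part this PR changes):

```python
import pandas as pd

ser = pd.Series([0])
df = pd.DataFrame([1, 2])

# The unnamed Series is promoted to a one-column frame named 0,
# matching the DataFrame's existing column 0, so both stack into it.
result = pd.concat([ser, df])
print(result[0].tolist())     # [0, 1, 2]
print(result.index.tolist())  # [0, 0, 1]
```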
PERF: Series(dict) returns RangeIndex when possible | diff --git a/doc/source/whatsnew/v3.0.0.rst b/doc/source/whatsnew/v3.0.0.rst
index 4debd41de213f..78ee359397df5 100644
--- a/doc/source/whatsnew/v3.0.0.rst
+++ b/doc/source/whatsnew/v3.0.0.rst
@@ -297,6 +297,7 @@ Performance improvements
~~~~~~~~~~~~~~~~~~~~~~~~
- :attr:`Categorical.categories` returns a :class:`RangeIndex` columns instead of an :class:`Index` if the constructed ``values`` was a ``range``. (:issue:`57787`)
- :class:`DataFrame` returns a :class:`RangeIndex` columns when possible when ``data`` is a ``dict`` (:issue:`57943`)
+- :class:`Series` returns a :class:`RangeIndex` index when possible when ``data`` is a ``dict`` (:issue:`58118`)
- :func:`concat` returns a :class:`RangeIndex` level in the :class:`MultiIndex` result when ``keys`` is a ``range`` or :class:`RangeIndex` (:issue:`57542`)
- :meth:`RangeIndex.append` returns a :class:`RangeIndex` instead of a :class:`Index` when appending values that could continue the :class:`RangeIndex` (:issue:`57467`)
- :meth:`Series.str.extract` returns a :class:`RangeIndex` columns instead of an :class:`Index` column when possible (:issue:`57542`)
diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index a4b58445289ad..73e564f95cf65 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -7144,7 +7144,10 @@ def maybe_sequence_to_range(sequence) -> Any | range:
return sequence
if len(sequence) == 0:
return range(0)
- np_sequence = np.asarray(sequence, dtype=np.int64)
+ try:
+ np_sequence = np.asarray(sequence, dtype=np.int64)
+ except OverflowError:
+ return sequence
diff = np_sequence[1] - np_sequence[0]
if diff == 0:
return sequence
diff --git a/pandas/core/series.py b/pandas/core/series.py
index ee496a355f6ca..967aea15fff60 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -132,6 +132,7 @@
PeriodIndex,
default_index,
ensure_index,
+ maybe_sequence_to_range,
)
import pandas.core.indexes.base as ibase
from pandas.core.indexes.multi import maybe_droplevels
@@ -538,8 +539,6 @@ def _init_dict(
_data : BlockManager for the new Series
index : index for the new Series
"""
- keys: Index | tuple
-
# Looking for NaN in dict doesn't work ({np.nan : 1}[float('nan')]
# raises KeyError), so we iterate the entire dict, and align
if data:
@@ -547,7 +546,7 @@ def _init_dict(
# using generators in effects the performance.
# Below is the new way of extracting the keys and values
- keys = tuple(data.keys())
+ keys = maybe_sequence_to_range(tuple(data.keys()))
values = list(data.values()) # Generating list of values- faster way
elif index is not None:
# fastpath for Series(data=None). Just use broadcasting a scalar
diff --git a/pandas/tests/series/test_constructors.py b/pandas/tests/series/test_constructors.py
index 68737e86f0c6a..97faba532e94a 100644
--- a/pandas/tests/series/test_constructors.py
+++ b/pandas/tests/series/test_constructors.py
@@ -2251,3 +2251,9 @@ def test_series_with_complex_nan(input_list):
result = Series(ser.array)
assert ser.dtype == "complex128"
tm.assert_series_equal(ser, result)
+
+
+def test_dict_keys_rangeindex():
+ result = Series({0: 1, 1: 2})
+ expected = Series([1, 2], index=RangeIndex(2))
+ tm.assert_series_equal(result, expected, check_index_type=True)
| Discovered in https://github.com/pandas-dev/pandas/pull/57441 | https://api.github.com/repos/pandas-dev/pandas/pulls/58118 | 2024-04-02T17:24:14Z | 2024-04-03T18:42:13Z | 2024-04-03T18:42:13Z | 2024-04-03T18:59:18Z |
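A standalone sketch of the range-detection logic the diff hardens with the `OverflowError` guard (the function name mirrors the pandas-internal helper, but this is an illustration, not the library code — the guard here is deliberately broader than the one-exception catch in the diff):

```python
import numpy as np

def maybe_sequence_to_range(sequence):
    # Return an equivalent range when the sequence is evenly spaced
    # integers; otherwise fall back to the original sequence.
    if len(sequence) == 0:
        return range(0)
    try:
        arr = np.asarray(sequence, dtype=np.int64)
    except (OverflowError, ValueError, TypeError):
        # e.g. ints beyond int64 (the case this PR fixes), or non-numeric keys
        return sequence
    if len(arr) == 1:
        return range(arr[0], arr[0] + 1)
    step = arr[1] - arr[0]
    if step == 0:
        return sequence
    if (np.diff(arr) == step).all():
        return range(arr[0], arr[-1] + step, step)
    return sequence
```

With this in hand, `Series._init_dict` can pass `tuple(data.keys())` through the helper, so `Series({0: 1, 1: 2})` gets a cheap `range(0, 2)`-backed index instead of a materialized integer `Index`.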
Add note for name field in groupby | diff --git a/doc/source/user_guide/groupby.rst b/doc/source/user_guide/groupby.rst
index 7f957a8b16787..8c222aff52fd7 100644
--- a/doc/source/user_guide/groupby.rst
+++ b/doc/source/user_guide/groupby.rst
@@ -416,6 +416,12 @@ You can also include the grouping columns if you want to operate on them.
grouped[["A", "B"]].sum()
+.. note::
+
+ The ``groupby`` operation in Pandas drops the ``name`` field of the columns Index object
+ after the operation. This change ensures consistency in syntax between different
+ column selection methods within groupby operations.
+
.. _groupby.iterating-label:
Iterating through groups
| - [x] closes #58024
- [N/A] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [N/A] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [N/A] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/58114 | 2024-04-02T02:53:32Z | 2024-04-02T18:18:58Z | 2024-04-02T18:18:58Z | 2024-04-02T19:25:27Z |
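A small illustration of the note added above (the toy frame is an assumption; whether `result.columns.name` survives can vary across pandas versions — the note describes it being dropped):

```python
import pandas as pd

df = pd.DataFrame({"A": [1, 1, 2], "B": [1, 2, 3]})
df.columns.name = "cols"

# Column selection on the grouped object, then aggregation.
result = df.groupby("A")[["B"]].sum()
# The aggregated values are unaffected; per the note, the ``name``
# field of the result's columns Index may come back as None.
print(result["B"].tolist())  # [3, 3]
```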
BUG: Remove "Mean of empty slice" warning in nanmedian | diff --git a/pandas/core/nanops.py b/pandas/core/nanops.py
index a124e8679ae8e..2bc5f031ce69d 100644
--- a/pandas/core/nanops.py
+++ b/pandas/core/nanops.py
@@ -745,6 +745,10 @@ def nanmedian(values, *, axis: AxisInt | None = None, skipna: bool = True, mask=
>>> s = pd.Series([1, np.nan, 2, 2])
>>> nanops.nanmedian(s.values)
2.0
+
+ >>> s = pd.Series([np.nan, np.nan, np.nan])
+ >>> nanops.nanmedian(s.values)
+ nan
"""
# for floats without mask, the data already uses NaN as missing value
# indicator, and `mask` will be calculated from that below -> in those
@@ -763,6 +767,7 @@ def get_median(x, _mask=None):
warnings.filterwarnings(
"ignore", "All-NaN slice encountered", RuntimeWarning
)
+ warnings.filterwarnings("ignore", "Mean of empty slice", RuntimeWarning)
res = np.nanmedian(x[_mask])
return res
This PR suppresses a warning when calculating the `median` of a `pd.Series` that is full of `NA`s. The warning just lets you know that `numpy` got an empty array, but it correctly returns `np.nan`.
The warning is issued as follows:
1. When the series has `NA`s and you try to calculate the `median`, `nanops.nanmedian` is called.
2. When the array is full of NAs, the `_mask` in `nanops.nanmedian.get_median` is all `False`, so `np.nanmedian(x[_mask])` gets an empty array.
3. `np.nanmedian` calls `numpy.lib.nanfunctions._nanmedian` which has a short-circuit path when the array is empty [[ref](https://github.com/numpy/numpy/blob/1c8b03bf2c87f081eea211a5061e423285c548af/numpy/lib/_nanfunctions_impl.py#L1215-L1216)].
4. This path calls `np.nanmean`. This creates an empty mask [[ref](https://github.com/numpy/numpy/blob/1c8b03bf2c87f081eea211a5061e423285c548af/numpy/lib/_nanfunctions_impl.py#L1033)] because the array is empty.
5. Then, the sum of the negated mask is used as the denominator of the `mean`, which causes the result to be `np.nan` (correct!).
6. The method issues a warning because the array was empty.
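The chain above can be reproduced with NumPy alone, without pandas (a minimal sketch; the exact warning text belongs to NumPy and may vary between versions):

```python
import warnings

import numpy as np

# All-NaN input: the mask computed in get_median is all False,
# so the array passed to np.nanmedian is empty (step 2 above).
x = np.array([np.nan, np.nan, np.nan])
mask = ~np.isnan(x)  # all False

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    res = np.nanmedian(x[mask])  # empty array

# The result is correct; only the warning is noise.
print(res)  # nan
print(any("Mean of empty slice" in str(w.message) for w in caught))
```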
Another way to solve this would be to change `get_median` to explicitly return `np.nan` before the call to `np.nanmedian`, but that would involve tampering with the condition in the if, which has no comments so I'm unsure if changing it makes sense.
```python
def get_median(x, _mask=None):
if _mask is None:
_mask = notna(x)
else:
_mask = ~_mask
all_na = _mask.all()
if (not skipna and not all_na) or all_na:
return np.nan
with warnings.catch_warnings():
# Suppress RuntimeWarning about All-NaN slice
warnings.filterwarnings(
"ignore", "All-NaN slice encountered", RuntimeWarning
)
res = np.nanmedian(x[_mask])
return res
```
which simplifies to `if not skipna or all_na: return np.nan`, I think.
I haven't added tests, ensured that all of the checks below pass, or added comments, because I first want to confirm that this is the preferred solution. Otherwise, I can edit `get_median` instead. Let me know!
---
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/58107 | 2024-04-01T18:21:54Z | 2024-04-01T22:40:49Z | 2024-04-01T22:40:49Z | 2024-04-01T22:40:55Z |
Backport PR #58087 on branch 2.2.x (BLD: Build wheels using numpy 2.0rc1) | diff --git a/.circleci/config.yml b/.circleci/config.yml
index ea93575ac9430..6f134c9a7a7bd 100644
--- a/.circleci/config.yml
+++ b/.circleci/config.yml
@@ -72,10 +72,6 @@ jobs:
no_output_timeout: 30m # Sometimes the tests won't generate any output, make sure the job doesn't get killed by that
command: |
pip3 install cibuildwheel==2.15.0
- # When this is a nightly wheel build, allow picking up NumPy 2.0 dev wheels:
- if [[ "$IS_SCHEDULE_DISPATCH" == "true" || "$IS_PUSH" != 'true' ]]; then
- export CIBW_ENVIRONMENT="PIP_EXTRA_INDEX_URL=https://pypi.anaconda.org/scientific-python-nightly-wheels/simple"
- fi
cibuildwheel --prerelease-pythons --output-dir wheelhouse
environment:
diff --git a/.github/workflows/wheels.yml b/.github/workflows/wheels.yml
index 470c044d2e99e..d6cfd3b0ad257 100644
--- a/.github/workflows/wheels.yml
+++ b/.github/workflows/wheels.yml
@@ -148,18 +148,6 @@ jobs:
CIBW_PRERELEASE_PYTHONS: True
CIBW_BUILD: ${{ matrix.python[0] }}-${{ matrix.buildplat[1] }}
- - name: Build nightly wheels (with NumPy pre-release)
- if: ${{ (env.IS_SCHEDULE_DISPATCH == 'true' && env.IS_PUSH != 'true') }}
- uses: pypa/cibuildwheel@v2.17.0
- with:
- package-dir: ./dist/${{ startsWith(matrix.buildplat[1], 'macosx') && env.sdist_name || needs.build_sdist.outputs.sdist_file }}
- env:
- # The nightly wheels should be build witht he NumPy 2.0 pre-releases
- # which requires the additional URL.
- CIBW_ENVIRONMENT: PIP_EXTRA_INDEX_URL=https://pypi.anaconda.org/scientific-python-nightly-wheels/simple
- CIBW_PRERELEASE_PYTHONS: True
- CIBW_BUILD: ${{ matrix.python[0] }}-${{ matrix.buildplat[1] }}
-
- name: Set up Python
uses: mamba-org/setup-micromamba@v1
with:
diff --git a/pyproject.toml b/pyproject.toml
index c225ed80dcb10..375ff72250f26 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -156,6 +156,9 @@ setup = ['--vsenv'] # For Windows
skip = "cp36-* cp37-* cp38-* pp* *_i686 *_ppc64le *_s390x"
build-verbosity = "3"
environment = {LDFLAGS="-Wl,--strip-all"}
+# TODO: remove this once numpy 2.0 proper releases
+# and specify numpy 2.0 as a dependency in [build-system] requires in pyproject.toml
+before-build = "pip install numpy==2.0.0rc1"
test-requires = "hypothesis>=6.46.1 pytest>=7.3.2 pytest-xdist>=2.2.0"
test-command = """
PANDAS_CI='1' python -c 'import pandas as pd; \
@@ -164,7 +167,9 @@ test-command = """
"""
[tool.cibuildwheel.windows]
-before-build = "pip install delvewheel"
+# TODO: remove this once numpy 2.0 proper releases
+# and specify numpy 2.0 as a dependency in [build-system] requires in pyproject.toml
+before-build = "pip install delvewheel numpy==2.0.0rc1"
repair-wheel-command = "delvewheel repair -w {dest_dir} {wheel}"
[[tool.cibuildwheel.overrides]]
| Backport PR #58087: BLD: Build wheels using numpy 2.0rc1 | https://api.github.com/repos/pandas-dev/pandas/pulls/58105 | 2024-04-01T17:37:14Z | 2024-04-09T22:40:19Z | 2024-04-09T22:40:19Z | 2024-04-09T22:40:20Z |
[pre-commit.ci] pre-commit autoupdate | diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml
index 41f1c4c6892a3..b8726e058a52b 100644
--- a/.pre-commit-config.yaml
+++ b/.pre-commit-config.yaml
@@ -19,7 +19,7 @@ ci:
skip: [pylint, pyright, mypy]
repos:
- repo: https://github.com/astral-sh/ruff-pre-commit
- rev: v0.3.1
+ rev: v0.3.4
hooks:
- id: ruff
args: [--exit-non-zero-on-fix]
@@ -39,7 +39,7 @@ repos:
- id: ruff-format
exclude: ^scripts
- repo: https://github.com/jendrikseipp/vulture
- rev: 'v2.10'
+ rev: 'v2.11'
hooks:
- id: vulture
entry: python scripts/run_vulture.py
@@ -93,11 +93,11 @@ repos:
args: [--disable=all, --enable=redefined-outer-name]
stages: [manual]
- repo: https://github.com/PyCQA/isort
- rev: 5.12.0
+ rev: 5.13.2
hooks:
- id: isort
- repo: https://github.com/asottile/pyupgrade
- rev: v3.15.0
+ rev: v3.15.2
hooks:
- id: pyupgrade
args: [--py39-plus]
@@ -116,7 +116,7 @@ repos:
hooks:
- id: sphinx-lint
- repo: https://github.com/pre-commit/mirrors-clang-format
- rev: v17.0.6
+ rev: v18.1.2
hooks:
- id: clang-format
files: ^pandas/_libs/src|^pandas/_libs/include
| <!--pre-commit.ci start-->
updates:
- [github.com/astral-sh/ruff-pre-commit: v0.3.1 → v0.3.4](https://github.com/astral-sh/ruff-pre-commit/compare/v0.3.1...v0.3.4)
- [github.com/jendrikseipp/vulture: v2.10 → v2.11](https://github.com/jendrikseipp/vulture/compare/v2.10...v2.11)
- [github.com/pylint-dev/pylint: v3.0.1 → v3.1.0](https://github.com/pylint-dev/pylint/compare/v3.0.1...v3.1.0)
- [github.com/PyCQA/isort: 5.12.0 → 5.13.2](https://github.com/PyCQA/isort/compare/5.12.0...5.13.2)
- [github.com/asottile/pyupgrade: v3.15.0 → v3.15.2](https://github.com/asottile/pyupgrade/compare/v3.15.0...v3.15.2)
- [github.com/pre-commit/mirrors-clang-format: v17.0.6 → v18.1.2](https://github.com/pre-commit/mirrors-clang-format/compare/v17.0.6...v18.1.2)
<!--pre-commit.ci end--> | https://api.github.com/repos/pandas-dev/pandas/pulls/58103 | 2024-04-01T16:29:30Z | 2024-04-01T20:45:20Z | 2024-04-01T20:45:20Z | 2024-04-01T20:45:23Z |
DOC: fix docstring for DataFrame.to_period, pandas.DataFrame.to_timestamp, pandas.DataFrame.tz_convert #58065 | diff --git a/ci/code_checks.sh b/ci/code_checks.sh
index f4ea0a1e2bc28..49f93a8589ac4 100755
--- a/ci/code_checks.sh
+++ b/ci/code_checks.sh
@@ -121,9 +121,6 @@ if [[ -z "$CHECK" || "$CHECK" == "docstrings" ]]; then
-i "pandas.DataFrame.to_feather SA01" \
-i "pandas.DataFrame.to_markdown SA01" \
-i "pandas.DataFrame.to_parquet RT03" \
- -i "pandas.DataFrame.to_period SA01" \
- -i "pandas.DataFrame.to_timestamp SA01" \
- -i "pandas.DataFrame.tz_convert SA01" \
-i "pandas.DataFrame.var PR01,RT03,SA01" \
-i "pandas.DataFrame.where RT03" \
-i "pandas.DatetimeIndex.ceil SA01" \
@@ -470,11 +467,8 @@ if [[ -z "$CHECK" || "$CHECK" == "docstrings" ]]; then
-i "pandas.Series.to_frame SA01" \
-i "pandas.Series.to_list RT03" \
-i "pandas.Series.to_markdown SA01" \
- -i "pandas.Series.to_period SA01" \
-i "pandas.Series.to_string SA01" \
- -i "pandas.Series.to_timestamp RT03,SA01" \
-i "pandas.Series.truediv PR07" \
- -i "pandas.Series.tz_convert SA01" \
-i "pandas.Series.update PR07,SA01" \
-i "pandas.Series.var PR01,RT03,SA01" \
-i "pandas.Series.where RT03" \
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 4715164a4208a..66a68755a2a09 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -12482,7 +12482,9 @@ def to_timestamp(
copy: bool | lib.NoDefault = lib.no_default,
) -> DataFrame:
"""
- Cast to DatetimeIndex of timestamps, at *beginning* of period.
+ Cast PeriodIndex to DatetimeIndex of timestamps, at *beginning* of period.
+
+ This can be changed to the *end* of the period, by specifying `how="e"`.
Parameters
----------
@@ -12512,8 +12514,13 @@ def to_timestamp(
Returns
-------
- DataFrame
- The DataFrame has a DatetimeIndex.
+ DataFrame with DatetimeIndex
+ DataFrame with the PeriodIndex cast to DatetimeIndex.
+
+ See Also
+ --------
+ DataFrame.to_period: Inverse method to cast DatetimeIndex to PeriodIndex.
+ Series.to_timestamp: Equivalent method for Series.
Examples
--------
@@ -12569,7 +12576,8 @@ def to_period(
Convert DataFrame from DatetimeIndex to PeriodIndex.
Convert DataFrame from DatetimeIndex to PeriodIndex with desired
- frequency (inferred from index if not passed).
+ frequency (inferred from index if not passed). Either index of columns can be
+ converted, depending on `axis` argument.
Parameters
----------
@@ -12597,7 +12605,12 @@ def to_period(
Returns
-------
DataFrame
- The DataFrame has a PeriodIndex.
+ The DataFrame with the converted PeriodIndex.
+
+ See Also
+ --------
+ Series.to_period: Equivalent method for Series.
+ Series.dt.to_period: Convert DateTime column values.
Examples
--------
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index bfee4ced2b3d8..13fa28003f775 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -10423,6 +10423,11 @@ def tz_convert(
TypeError
If the axis is tz-naive.
+ See Also
+ --------
+ DataFrame.tz_localize: Localize tz-naive index of DataFrame to target time zone.
+ Series.tz_localize: Localize tz-naive index of Series to target time zone.
+
Examples
--------
Change to another time zone:
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 843788273a6ef..ba04013749637 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -5648,6 +5648,8 @@ def to_timestamp(
"""
Cast to DatetimeIndex of Timestamps, at *beginning* of period.
+ This can be changed to the *end* of the period, by specifying `how="e"`.
+
Parameters
----------
freq : str, default frequency of PeriodIndex
@@ -5675,6 +5677,12 @@ def to_timestamp(
Returns
-------
Series with DatetimeIndex
+ Series with the PeriodIndex cast to DatetimeIndex.
+
+ See Also
+ --------
+ Series.to_period: Inverse method to cast DatetimeIndex to PeriodIndex.
+ DataFrame.to_timestamp: Equivalent method for DataFrame.
Examples
--------
@@ -5748,6 +5756,11 @@ def to_period(
Series
Series with index converted to PeriodIndex.
+ See Also
+ --------
+ DataFrame.to_period: Equivalent method for DataFrame.
+ Series.dt.to_period: Convert DateTime column values.
+
Examples
--------
>>> idx = pd.DatetimeIndex(["2023", "2024", "2025"])
| Fixes docstring errors for the DataFrame and Series methods `pandas.DataFrame.to_period`, `pandas.DataFrame.to_timestamp`, and `pandas.DataFrame.tz_convert`. (issue #58065)
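The behavior these docstrings describe can be sketched as a round trip (assuming a recent pandas; `how="e"` is the documented way to anchor at the end of the period):

```python
import pandas as pd

# A Series indexed by monthly periods.
s = pd.Series([1, 2, 3], index=pd.period_range("2023-01", periods=3, freq="M"))

# to_timestamp() anchors each timestamp at the *beginning* of its period...
start = s.to_timestamp()
print(start.index[0])  # 2023-01-01 00:00:00

# ...while how="e" anchors at the *end* of the period.
end = s.to_timestamp(how="e")
print(end.index[0].day)  # 31

# to_period() is the inverse operation.
back = start.to_period(freq="M")
print(back.index.equals(s.index))  # True
```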
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/58101 | 2024-04-01T13:39:31Z | 2024-04-01T18:54:45Z | 2024-04-01T18:54:44Z | 2024-04-01T18:54:51Z |
MNT: fix compatibility with beautifulsoup4 4.13.0b2 | diff --git a/pandas/io/html.py b/pandas/io/html.py
index b4f6a5508726b..42f5266e7649b 100644
--- a/pandas/io/html.py
+++ b/pandas/io/html.py
@@ -584,14 +584,8 @@ class _BeautifulSoupHtml5LibFrameParser(_HtmlFrameParser):
:class:`pandas.io.html._HtmlFrameParser`.
"""
- def __init__(self, *args, **kwargs) -> None:
- super().__init__(*args, **kwargs)
- from bs4 import SoupStrainer
-
- self._strainer = SoupStrainer("table")
-
def _parse_tables(self, document, match, attrs):
- element_name = self._strainer.name
+ element_name = "table"
tables = document.find_all(element_name, attrs=attrs)
if not tables:
raise ValueError("No tables found")
| Closes #58086
- [x] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/58100 | 2024-04-01T12:26:18Z | 2024-04-01T17:08:54Z | 2024-04-01T17:08:53Z | 2024-04-03T19:55:22Z |
DOC: Fix docstring errors for pandas.DataFrame.unstack, pandas.DataFrame.value_counts and pandas.DataFrame.tz_localize | diff --git a/ci/code_checks.sh b/ci/code_checks.sh
index 0c4e6641444f1..a9f69ee4f6c57 100755
--- a/ci/code_checks.sh
+++ b/ci/code_checks.sh
@@ -127,9 +127,6 @@ if [[ -z "$CHECK" || "$CHECK" == "docstrings" ]]; then
-i "pandas.DataFrame.to_period SA01" \
-i "pandas.DataFrame.to_timestamp SA01" \
-i "pandas.DataFrame.tz_convert SA01" \
- -i "pandas.DataFrame.tz_localize SA01" \
- -i "pandas.DataFrame.unstack RT03" \
- -i "pandas.DataFrame.value_counts RT03" \
-i "pandas.DataFrame.var PR01,RT03,SA01" \
-i "pandas.DataFrame.where RT03" \
-i "pandas.DatetimeIndex.ceil SA01" \
@@ -226,7 +223,6 @@ if [[ -z "$CHECK" || "$CHECK" == "docstrings" ]]; then
-i "pandas.Index.to_list RT03" \
-i "pandas.Index.union PR07,RT03,SA01" \
-i "pandas.Index.unique RT03" \
- -i "pandas.Index.value_counts RT03" \
-i "pandas.Index.view GL08" \
-i "pandas.Int16Dtype SA01" \
-i "pandas.Int32Dtype SA01" \
@@ -482,10 +478,7 @@ if [[ -z "$CHECK" || "$CHECK" == "docstrings" ]]; then
-i "pandas.Series.to_timestamp RT03,SA01" \
-i "pandas.Series.truediv PR07" \
-i "pandas.Series.tz_convert SA01" \
- -i "pandas.Series.tz_localize SA01" \
- -i "pandas.Series.unstack SA01" \
-i "pandas.Series.update PR07,SA01" \
- -i "pandas.Series.value_counts RT03" \
-i "pandas.Series.var PR01,RT03,SA01" \
-i "pandas.Series.where RT03" \
-i "pandas.SparseDtype SA01" \
diff --git a/pandas/core/base.py b/pandas/core/base.py
index 263265701691b..f923106e967b7 100644
--- a/pandas/core/base.py
+++ b/pandas/core/base.py
@@ -924,6 +924,7 @@ def value_counts(
Returns
-------
Series
+ Series containing counts of unique values.
See Also
--------
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 50a93994dc76b..abb8d13342c9d 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -7162,7 +7162,7 @@ def value_counts(
dropna: bool = True,
) -> Series:
"""
- Return a Series containing the frequency of each distinct row in the Dataframe.
+ Return a Series containing the frequency of each distinct row in the DataFrame.
Parameters
----------
@@ -7175,13 +7175,14 @@ def value_counts(
ascending : bool, default False
Sort in ascending order.
dropna : bool, default True
- Don't include counts of rows that contain NA values.
+ Do not include counts of rows that contain NA values.
.. versionadded:: 1.3.0
Returns
-------
Series
+ Series containing the frequency of each distinct row in the DataFrame.
See Also
--------
@@ -7192,8 +7193,8 @@ def value_counts(
The returned Series will have a MultiIndex with one level per input
column but an Index (non-multi) for a single label. By default, rows
that contain any NA values are omitted from the result. By default,
- the resulting Series will be in descending order so that the first
- element is the most frequently-occurring row.
+ the resulting Series will be sorted by frequencies in descending order so that
+ the first element is the most frequently-occurring row.
Examples
--------
@@ -9658,6 +9659,8 @@ def unstack(
Returns
-------
Series or DataFrame
+ If index is a MultiIndex: DataFrame with pivoted index labels as new
+ inner-most level column labels, else Series.
See Also
--------
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 858d2ba82a969..ee1ce2f817b6c 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -10485,10 +10485,10 @@ def tz_localize(
nonexistent: TimeNonexistent = "raise",
) -> Self:
"""
- Localize tz-naive index of a Series or DataFrame to target time zone.
+ Localize time zone naive index of a Series or DataFrame to target time zone.
This operation localizes the Index. To localize the values in a
- timezone-naive Series, use :meth:`Series.dt.tz_localize`.
+ time zone naive Series, use :meth:`Series.dt.tz_localize`.
Parameters
----------
@@ -10548,13 +10548,19 @@ def tz_localize(
Returns
-------
{klass}
- Same type as the input.
+ Same type as the input, with time zone naive or aware index, depending on
+ ``tz``.
Raises
------
TypeError
If the TimeSeries is tz-aware and tz is not None.
+ See Also
+ --------
+ Series.dt.tz_localize: Localize the values in a time zone naive Series.
+ Timestamp.tz_localize: Localize the Timestamp to a timezone.
+
Examples
--------
Localize local times:
diff --git a/pandas/core/series.py b/pandas/core/series.py
index b0dc05fce7913..843788273a6ef 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -4257,6 +4257,10 @@ def unstack(
DataFrame
Unstacked Series.
+ See Also
+ --------
+ DataFrame.unstack : Pivot the MultiIndex of a DataFrame.
+
Notes
-----
Reference :ref:`the user guide <reshaping.stacking>` for more examples.
| - Fixes docstring errors for the following methods (https://github.com/pandas-dev/pandas/issues/58065):
`pandas.DataFrame.unstack`, `pandas.DataFrame.value_counts` and `pandas.DataFrame.tz_localize`
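The semantics documented in the docstring changes can be illustrated briefly (a sketch, assuming a recent pandas):

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 1, 2], "b": ["x", "x", "y"]})

# DataFrame.value_counts returns a Series whose index is a MultiIndex
# (one level per column), sorted by frequency in descending order.
counts = df.value_counts()
print(counts.iloc[0])        # 2 -- the most frequent row, (1, "x")
print(counts.index.nlevels)  # 2

# Series.unstack pivots the inner index level into columns, giving back
# a DataFrame.
wide = counts.unstack()
print(type(wide).__name__)   # DataFrame
```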
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/58097 | 2024-03-31T23:12:27Z | 2024-04-01T17:11:43Z | 2024-04-01T17:11:43Z | 2024-04-01T17:11:51Z |
Remove 'Not implemented for Series' from _num_doc in generic.py | diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 858d2ba82a969..b5cdafeb57134 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -11712,7 +11712,7 @@ def last_valid_index(self) -> Hashable:
skipna : bool, default True
Exclude NA/null values when computing the result.
numeric_only : bool, default False
- Include only float, int, boolean columns. Not implemented for Series.
+ Include only float, int, boolean columns.
{min_count}\
**kwargs
| Documentation fix in `generic.py`
line 11715: removed 'Not implemented for Series' from `_num_doc`, given that `numeric_only` is now implemented for Series in all of the methods `min`, `max`, `mean`, `median`, `skew`, and `kurt`
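A quick check of the claim (a sketch; `numeric_only` is accepted by Series reductions in recent pandas):

```python
import pandas as pd

s = pd.Series([1.0, 2.0, 3.0])

# numeric_only is accepted (if trivial) on Series reductions, so the
# "Not implemented for Series" note was stale.
print(s.mean(numeric_only=True))    # 2.0
print(s.median(numeric_only=True))  # 2.0
```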
- [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/58096 | 2024-03-31T22:27:34Z | 2024-04-01T17:19:57Z | 2024-04-01T17:19:57Z | 2024-04-01T17:20:04Z |
DOC: read_sql_query docstring contains wrong examples #58094 | diff --git a/pandas/io/sql.py b/pandas/io/sql.py
index aa9d0d88ae69a..8c4c4bac884e5 100644
--- a/pandas/io/sql.py
+++ b/pandas/io/sql.py
@@ -473,8 +473,9 @@ def read_sql_query(
--------
>>> from sqlalchemy import create_engine # doctest: +SKIP
>>> engine = create_engine("sqlite:///database.db") # doctest: +SKIP
+ >>> sql_query = "SELECT int_column FROM test_data" # doctest: +SKIP
>>> with engine.connect() as conn, conn.begin(): # doctest: +SKIP
- ... data = pd.read_sql_table("data", conn) # doctest: +SKIP
+ ... data = pd.read_sql_query(sql_query, conn) # doctest: +SKIP
"""
check_dtype_backend(dtype_backend)
| - [x] closes #58094
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
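For reference, the corrected call in the diff can also be exercised without SQLAlchemy, using a plain sqlite3 connection (a sketch reusing the table and column names from the docstring example):

```python
import sqlite3

import pandas as pd

# Throwaway in-memory table matching the names used in the docstring
# example ("test_data" / "int_column").
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE test_data (int_column INTEGER)")
conn.executemany("INSERT INTO test_data VALUES (?)", [(1,), (2,)])

# read_sql_query takes a SQL *query*; the old example mistakenly called
# read_sql_table, which takes a table name and requires SQLAlchemy.
data = pd.read_sql_query("SELECT int_column FROM test_data", conn)
print(data["int_column"].tolist())  # [1, 2]
conn.close()
```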
| https://api.github.com/repos/pandas-dev/pandas/pulls/58095 | 2024-03-31T20:07:44Z | 2024-04-02T01:41:32Z | 2024-04-02T01:41:32Z | 2024-04-02T01:41:40Z |
Revert "BLD: Pin numpy on 2.2.x" | diff --git a/pyproject.toml b/pyproject.toml
index b2764b137a1f8..778146bbcd909 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -27,9 +27,9 @@ authors = [
license = {file = 'LICENSE'}
requires-python = '>=3.9'
dependencies = [
- "numpy>=1.22.4,<2; python_version<'3.11'",
- "numpy>=1.23.2,<2; python_version=='3.11'",
- "numpy>=1.26.0,<2; python_version>='3.12'",
+ "numpy>=1.22.4; python_version<'3.11'",
+ "numpy>=1.23.2; python_version=='3.11'",
+ "numpy>=1.26.0; python_version>='3.12'",
"python-dateutil>=2.8.2",
"pytz>=2020.1",
"tzdata>=2022.7"
| Reverts pandas-dev/pandas#56812
Waiting for #58087 to be merged and backported | https://api.github.com/repos/pandas-dev/pandas/pulls/58093 | 2024-03-31T15:06:52Z | 2024-04-03T17:28:27Z | 2024-04-03T17:28:27Z | 2024-04-03T17:28:30Z |
DOC: Changed ndim to 1, modified doc as suggested. #57088 | diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 50a93994dc76b..932344ae91eab 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -11494,7 +11494,7 @@ def any(
**kwargs,
) -> Series | bool: ...
- @doc(make_doc("any", ndim=2))
+ @doc(make_doc("any", ndim=1))
def any(
self,
*,
@@ -11540,7 +11540,7 @@ def all(
**kwargs,
) -> Series | bool: ...
- @doc(make_doc("all", ndim=2))
+ @doc(make_doc("all", ndim=1))
def all(
self,
axis: Axis | None = 0,
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 858d2ba82a969..4b5a3b5ffdb40 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -11881,9 +11881,9 @@ def last_valid_index(self) -> Hashable:
Returns
-------
-{name1} or {name2}
- If level is specified, then, {name2} is returned; otherwise, {name1}
- is returned.
+{name2} or {name1}
+ If axis=None, then a scalar boolean is returned.
+ Otherwise a Series is returned with index matching the index argument.
{see_also}
{examples}"""
| - [ ] closes #57088 | https://api.github.com/repos/pandas-dev/pandas/pulls/58091 | 2024-03-31T01:28:35Z | 2024-04-01T17:32:02Z | 2024-04-01T17:32:02Z | 2024-04-01T17:32:08Z |
CI: Remove deprecated methods from code_checks.sh | diff --git a/ci/code_checks.sh b/ci/code_checks.sh
index 0c4e6641444f1..3cdbbc2b93a01 100755
--- a/ci/code_checks.sh
+++ b/ci/code_checks.sh
@@ -84,7 +84,6 @@ if [[ -z "$CHECK" || "$CHECK" == "docstrings" ]]; then
-i "pandas.DataFrame.assign SA01" \
-i "pandas.DataFrame.at_time PR01" \
-i "pandas.DataFrame.axes SA01" \
- -i "pandas.DataFrame.backfill PR01,SA01" \
-i "pandas.DataFrame.bfill SA01" \
-i "pandas.DataFrame.columns SA01" \
-i "pandas.DataFrame.copy SA01" \
@@ -104,7 +103,6 @@ if [[ -z "$CHECK" || "$CHECK" == "docstrings" ]]; then
-i "pandas.DataFrame.mean RT03,SA01" \
-i "pandas.DataFrame.median RT03,SA01" \
-i "pandas.DataFrame.min RT03" \
- -i "pandas.DataFrame.pad PR01,SA01" \
-i "pandas.DataFrame.plot PR02,SA01" \
-i "pandas.DataFrame.pop SA01" \
-i "pandas.DataFrame.prod RT03" \
@@ -119,7 +117,6 @@ if [[ -z "$CHECK" || "$CHECK" == "docstrings" ]]; then
-i "pandas.DataFrame.sparse.to_dense SA01" \
-i "pandas.DataFrame.std PR01,RT03,SA01" \
-i "pandas.DataFrame.sum RT03" \
- -i "pandas.DataFrame.swapaxes PR01,SA01" \
-i "pandas.DataFrame.swaplevel SA01" \
-i "pandas.DataFrame.to_feather SA01" \
-i "pandas.DataFrame.to_markdown SA01" \
| - [ ] xref #58065
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/58090 | 2024-03-31T00:22:21Z | 2024-04-01T17:27:25Z | 2024-04-01T17:27:25Z | 2024-04-01T17:27:31Z |
BLD: Build wheels using numpy 2.0rc1 | diff --git a/.circleci/config.yml b/.circleci/config.yml
index ea93575ac9430..6f134c9a7a7bd 100644
--- a/.circleci/config.yml
+++ b/.circleci/config.yml
@@ -72,10 +72,6 @@ jobs:
no_output_timeout: 30m # Sometimes the tests won't generate any output, make sure the job doesn't get killed by that
command: |
pip3 install cibuildwheel==2.15.0
- # When this is a nightly wheel build, allow picking up NumPy 2.0 dev wheels:
- if [[ "$IS_SCHEDULE_DISPATCH" == "true" || "$IS_PUSH" != 'true' ]]; then
- export CIBW_ENVIRONMENT="PIP_EXTRA_INDEX_URL=https://pypi.anaconda.org/scientific-python-nightly-wheels/simple"
- fi
cibuildwheel --prerelease-pythons --output-dir wheelhouse
environment:
diff --git a/.github/workflows/wheels.yml b/.github/workflows/wheels.yml
index 470c044d2e99e..d6cfd3b0ad257 100644
--- a/.github/workflows/wheels.yml
+++ b/.github/workflows/wheels.yml
@@ -148,18 +148,6 @@ jobs:
CIBW_PRERELEASE_PYTHONS: True
CIBW_BUILD: ${{ matrix.python[0] }}-${{ matrix.buildplat[1] }}
- - name: Build nightly wheels (with NumPy pre-release)
- if: ${{ (env.IS_SCHEDULE_DISPATCH == 'true' && env.IS_PUSH != 'true') }}
- uses: pypa/cibuildwheel@v2.17.0
- with:
- package-dir: ./dist/${{ startsWith(matrix.buildplat[1], 'macosx') && env.sdist_name || needs.build_sdist.outputs.sdist_file }}
- env:
- # The nightly wheels should be build witht he NumPy 2.0 pre-releases
- # which requires the additional URL.
- CIBW_ENVIRONMENT: PIP_EXTRA_INDEX_URL=https://pypi.anaconda.org/scientific-python-nightly-wheels/simple
- CIBW_PRERELEASE_PYTHONS: True
- CIBW_BUILD: ${{ matrix.python[0] }}-${{ matrix.buildplat[1] }}
-
- name: Set up Python
uses: mamba-org/setup-micromamba@v1
with:
diff --git a/pyproject.toml b/pyproject.toml
index 5f5b013ca8461..259d003de92d5 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -152,6 +152,9 @@ setup = ['--vsenv'] # For Windows
skip = "cp36-* cp37-* cp38-* pp* *_i686 *_ppc64le *_s390x"
build-verbosity = "3"
environment = {LDFLAGS="-Wl,--strip-all"}
+# TODO: remove this once numpy 2.0 proper releases
+# and specify numpy 2.0 as a dependency in [build-system] requires in pyproject.toml
+before-build = "pip install numpy==2.0.0rc1"
test-requires = "hypothesis>=6.46.1 pytest>=7.3.2 pytest-xdist>=2.2.0"
test-command = """
PANDAS_CI='1' python -c 'import pandas as pd; \
@@ -160,7 +163,9 @@ test-command = """
"""
[tool.cibuildwheel.windows]
-before-build = "pip install delvewheel"
+# TODO: remove this once numpy 2.0 proper releases
+# and specify numpy 2.0 as a dependency in [build-system] requires in pyproject.toml
+before-build = "pip install delvewheel numpy==2.0.0rc1"
repair-wheel-command = "delvewheel repair -w {dest_dir} {wheel}"
[[tool.cibuildwheel.overrides]]
- [ ] closes #55519
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/58087 | 2024-03-30T17:21:12Z | 2024-04-01T17:36:41Z | 2024-04-01T17:36:41Z | 2024-04-02T02:45:12Z |
ENH: Enable fillna(value=None) | diff --git a/doc/source/user_guide/missing_data.rst b/doc/source/user_guide/missing_data.rst
index 2e104ac06f9f4..5149bd30dbbef 100644
--- a/doc/source/user_guide/missing_data.rst
+++ b/doc/source/user_guide/missing_data.rst
@@ -386,6 +386,27 @@ Replace NA with a scalar value
df
df.fillna(0)
+When the data has object dtype, you can control what type of NA values are present.
+
+.. ipython:: python
+
+ df = pd.DataFrame({"a": [pd.NA, np.nan, None]}, dtype=object)
+ df
+ df.fillna(None)
+ df.fillna(np.nan)
+ df.fillna(pd.NA)
+
+However when the dtype is not object, these will all be replaced with the proper NA value for the dtype.
+
+.. ipython:: python
+
+ data = {"np": [1.0, np.nan, np.nan, 2], "arrow": pd.array([1.0, pd.NA, pd.NA, 2], dtype="float64[pyarrow]")}
+ df = pd.DataFrame(data)
+ df
+ df.fillna(None)
+ df.fillna(np.nan)
+ df.fillna(pd.NA)
+
Fill gaps forward or backward
.. ipython:: python
diff --git a/doc/source/whatsnew/v3.0.0.rst b/doc/source/whatsnew/v3.0.0.rst
index e05cc87d1af14..e4a5f5e855fc7 100644
--- a/doc/source/whatsnew/v3.0.0.rst
+++ b/doc/source/whatsnew/v3.0.0.rst
@@ -37,6 +37,7 @@ Other enhancements
- Users can globally disable any ``PerformanceWarning`` by setting the option ``mode.performance_warnings`` to ``False`` (:issue:`56920`)
- :meth:`Styler.format_index_names` can now be used to format the index and column names (:issue:`48936` and :issue:`47489`)
- :meth:`DataFrame.cummin`, :meth:`DataFrame.cummax`, :meth:`DataFrame.cumprod` and :meth:`DataFrame.cumsum` methods now have a ``numeric_only`` parameter (:issue:`53072`)
+- :meth:`DataFrame.fillna` and :meth:`Series.fillna` can now accept ``value=None``; for non-object dtype the corresponding NA value will be used (:issue:`57723`)
.. ---------------------------------------------------------------------------
.. _whatsnew_300.notable_bug_fixes:
diff --git a/pandas/core/arrays/_mixins.py b/pandas/core/arrays/_mixins.py
index 123dc679a83ea..cbd0221cc2082 100644
--- a/pandas/core/arrays/_mixins.py
+++ b/pandas/core/arrays/_mixins.py
@@ -328,7 +328,7 @@ def _pad_or_backfill(
return new_values
@doc(ExtensionArray.fillna)
- def fillna(self, value=None, limit: int | None = None, copy: bool = True) -> Self:
+ def fillna(self, value, limit: int | None = None, copy: bool = True) -> Self:
mask = self.isna()
# error: Argument 2 to "check_value_size" has incompatible type
# "ExtensionArray"; expected "ndarray"
@@ -347,8 +347,7 @@ def fillna(self, value=None, limit: int | None = None, copy: bool = True) -> Sel
new_values[mask] = value
else:
# We validate the fill_value even if there is nothing to fill
- if value is not None:
- self._validate_setitem_value(value)
+ self._validate_setitem_value(value)
if not copy:
new_values = self[:]
diff --git a/pandas/core/arrays/arrow/array.py b/pandas/core/arrays/arrow/array.py
index 34ca81e36cbc5..1154130b9bed3 100644
--- a/pandas/core/arrays/arrow/array.py
+++ b/pandas/core/arrays/arrow/array.py
@@ -1077,7 +1077,7 @@ def _pad_or_backfill(
@doc(ExtensionArray.fillna)
def fillna(
self,
- value: object | ArrayLike | None = None,
+ value: object | ArrayLike,
limit: int | None = None,
copy: bool = True,
) -> Self:
diff --git a/pandas/core/arrays/interval.py b/pandas/core/arrays/interval.py
index af666a591b1bc..86f58b48ea3be 100644
--- a/pandas/core/arrays/interval.py
+++ b/pandas/core/arrays/interval.py
@@ -892,7 +892,7 @@ def max(self, *, axis: AxisInt | None = None, skipna: bool = True) -> IntervalOr
indexer = obj.argsort()[-1]
return obj[indexer]
- def fillna(self, value=None, limit: int | None = None, copy: bool = True) -> Self:
+ def fillna(self, value, limit: int | None = None, copy: bool = True) -> Self:
"""
Fill NA/NaN values using the specified method.
diff --git a/pandas/core/arrays/masked.py b/pandas/core/arrays/masked.py
index d20d7f98b8aa8..190888d281ea9 100644
--- a/pandas/core/arrays/masked.py
+++ b/pandas/core/arrays/masked.py
@@ -236,7 +236,7 @@ def _pad_or_backfill(
return new_values
@doc(ExtensionArray.fillna)
- def fillna(self, value=None, limit: int | None = None, copy: bool = True) -> Self:
+ def fillna(self, value, limit: int | None = None, copy: bool = True) -> Self:
mask = self._mask
value = missing.check_value_size(value, mask, len(self))
diff --git a/pandas/core/arrays/sparse/array.py b/pandas/core/arrays/sparse/array.py
index 134702099371d..522d86fb165f6 100644
--- a/pandas/core/arrays/sparse/array.py
+++ b/pandas/core/arrays/sparse/array.py
@@ -706,7 +706,7 @@ def isna(self) -> Self: # type: ignore[override]
def fillna(
self,
- value=None,
+ value,
limit: int | None = None,
copy: bool = True,
) -> Self:
@@ -736,8 +736,6 @@ def fillna(
When ``self.fill_value`` is not NA, the result dtype will be
``self.dtype``. Again, this preserves the amount of memory used.
"""
- if value is None:
- raise ValueError("Must specify 'value'.")
new_values = np.where(isna(self.sp_values), value, self.sp_values)
if self._null_fill_value:
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index e1c1b21249362..523ca9de201bf 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -6752,7 +6752,7 @@ def _pad_or_backfill(
@overload
def fillna(
self,
- value: Hashable | Mapping | Series | DataFrame = ...,
+ value: Hashable | Mapping | Series | DataFrame,
*,
axis: Axis | None = ...,
inplace: Literal[False] = ...,
@@ -6762,7 +6762,7 @@ def fillna(
@overload
def fillna(
self,
- value: Hashable | Mapping | Series | DataFrame = ...,
+ value: Hashable | Mapping | Series | DataFrame,
*,
axis: Axis | None = ...,
inplace: Literal[True],
@@ -6772,7 +6772,7 @@ def fillna(
@overload
def fillna(
self,
- value: Hashable | Mapping | Series | DataFrame = ...,
+ value: Hashable | Mapping | Series | DataFrame,
*,
axis: Axis | None = ...,
inplace: bool = ...,
@@ -6786,7 +6786,7 @@ def fillna(
)
def fillna(
self,
- value: Hashable | Mapping | Series | DataFrame | None = None,
+ value: Hashable | Mapping | Series | DataFrame,
*,
axis: Axis | None = None,
inplace: bool = False,
@@ -6827,6 +6827,12 @@ def fillna(
reindex : Conform object to new index.
asfreq : Convert TimeSeries to specified frequency.
+ Notes
+ -----
+ For non-object dtype, ``value=None`` will use the NA value of the dtype.
+ See more details in the :ref:`Filling missing data<missing_data.fillna>`
+ section.
+
Examples
--------
>>> df = pd.DataFrame(
@@ -6909,101 +6915,92 @@ def fillna(
axis = 0
axis = self._get_axis_number(axis)
- if value is None:
- raise ValueError("Must specify a fill 'value'.")
- else:
- if self.ndim == 1:
- if isinstance(value, (dict, ABCSeries)):
- if not len(value):
- # test_fillna_nonscalar
- if inplace:
- return None
- return self.copy(deep=False)
- from pandas import Series
-
- value = Series(value)
- value = value.reindex(self.index)
- value = value._values
- elif not is_list_like(value):
- pass
- else:
- raise TypeError(
- '"value" parameter must be a scalar, dict '
- "or Series, but you passed a "
- f'"{type(value).__name__}"'
- )
+ if self.ndim == 1:
+ if isinstance(value, (dict, ABCSeries)):
+ if not len(value):
+ # test_fillna_nonscalar
+ if inplace:
+ return None
+ return self.copy(deep=False)
+ from pandas import Series
+
+ value = Series(value)
+ value = value.reindex(self.index)
+ value = value._values
+ elif not is_list_like(value):
+ pass
+ else:
+ raise TypeError(
+ '"value" parameter must be a scalar, dict '
+ "or Series, but you passed a "
+ f'"{type(value).__name__}"'
+ )
- new_data = self._mgr.fillna(value=value, limit=limit, inplace=inplace)
+ new_data = self._mgr.fillna(value=value, limit=limit, inplace=inplace)
- elif isinstance(value, (dict, ABCSeries)):
- if axis == 1:
- raise NotImplementedError(
- "Currently only can fill "
- "with dict/Series column "
- "by column"
- )
- result = self if inplace else self.copy(deep=False)
- for k, v in value.items():
- if k not in result:
- continue
+ elif isinstance(value, (dict, ABCSeries)):
+ if axis == 1:
+ raise NotImplementedError(
+ "Currently only can fill with dict/Series column by column"
+ )
+ result = self if inplace else self.copy(deep=False)
+ for k, v in value.items():
+ if k not in result:
+ continue
- res_k = result[k].fillna(v, limit=limit)
+ res_k = result[k].fillna(v, limit=limit)
- if not inplace:
- result[k] = res_k
+ if not inplace:
+ result[k] = res_k
+ else:
+ # We can write into our existing column(s) iff dtype
+ # was preserved.
+ if isinstance(res_k, ABCSeries):
+ # i.e. 'k' only shows up once in self.columns
+ if res_k.dtype == result[k].dtype:
+ result.loc[:, k] = res_k
+ else:
+ # Different dtype -> no way to do inplace.
+ result[k] = res_k
else:
- # We can write into our existing column(s) iff dtype
- # was preserved.
- if isinstance(res_k, ABCSeries):
- # i.e. 'k' only shows up once in self.columns
- if res_k.dtype == result[k].dtype:
- result.loc[:, k] = res_k
+ # see test_fillna_dict_inplace_nonunique_columns
+ locs = result.columns.get_loc(k)
+ if isinstance(locs, slice):
+ locs = np.arange(self.shape[1])[locs]
+ elif isinstance(locs, np.ndarray) and locs.dtype.kind == "b":
+ locs = locs.nonzero()[0]
+ elif not (
+ isinstance(locs, np.ndarray) and locs.dtype.kind == "i"
+ ):
+ # Should never be reached, but let's cover our bases
+ raise NotImplementedError(
+ "Unexpected get_loc result, please report a bug at "
+ "https://github.com/pandas-dev/pandas"
+ )
+
+ for i, loc in enumerate(locs):
+ res_loc = res_k.iloc[:, i]
+ target = self.iloc[:, loc]
+
+ if res_loc.dtype == target.dtype:
+ result.iloc[:, loc] = res_loc
else:
- # Different dtype -> no way to do inplace.
- result[k] = res_k
- else:
- # see test_fillna_dict_inplace_nonunique_columns
- locs = result.columns.get_loc(k)
- if isinstance(locs, slice):
- locs = np.arange(self.shape[1])[locs]
- elif (
- isinstance(locs, np.ndarray) and locs.dtype.kind == "b"
- ):
- locs = locs.nonzero()[0]
- elif not (
- isinstance(locs, np.ndarray) and locs.dtype.kind == "i"
- ):
- # Should never be reached, but let's cover our bases
- raise NotImplementedError(
- "Unexpected get_loc result, please report a bug at "
- "https://github.com/pandas-dev/pandas"
- )
-
- for i, loc in enumerate(locs):
- res_loc = res_k.iloc[:, i]
- target = self.iloc[:, loc]
-
- if res_loc.dtype == target.dtype:
- result.iloc[:, loc] = res_loc
- else:
- result.isetitem(loc, res_loc)
- if inplace:
- return self._update_inplace(result)
- else:
- return result
+ result.isetitem(loc, res_loc)
+ if inplace:
+ return self._update_inplace(result)
+ else:
+ return result
- elif not is_list_like(value):
- if axis == 1:
- result = self.T.fillna(value=value, limit=limit).T
- new_data = result._mgr
- else:
- new_data = self._mgr.fillna(
- value=value, limit=limit, inplace=inplace
- )
- elif isinstance(value, ABCDataFrame) and self.ndim == 2:
- new_data = self.where(self.notna(), value)._mgr
+ elif not is_list_like(value):
+ if axis == 1:
+ result = self.T.fillna(value=value, limit=limit).T
+ new_data = result._mgr
else:
- raise ValueError(f"invalid fill value with a {type(value)}")
+ new_data = self._mgr.fillna(value=value, limit=limit, inplace=inplace)
+ elif isinstance(value, ABCDataFrame) and self.ndim == 2:
+ new_data = self.where(self.notna(), value)._mgr
+ else:
+ raise ValueError(f"invalid fill value with a {type(value)}")
result = self._constructor_from_mgr(new_data, axes=new_data.axes)
if inplace:
diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index d5517a210b39d..69f916bb3f769 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -2543,7 +2543,7 @@ def notna(self) -> npt.NDArray[np.bool_]:
notnull = notna
- def fillna(self, value=None):
+ def fillna(self, value):
"""
Fill NA/NaN values with the specified value.
diff --git a/pandas/core/indexes/multi.py b/pandas/core/indexes/multi.py
index 4affa1337aa2a..9df0d26ce622a 100644
--- a/pandas/core/indexes/multi.py
+++ b/pandas/core/indexes/multi.py
@@ -1675,7 +1675,7 @@ def duplicated(self, keep: DropKeep = "first") -> npt.NDArray[np.bool_]:
# (previously declared in base class "IndexOpsMixin")
_duplicated = duplicated # type: ignore[misc]
- def fillna(self, value=None, downcast=None):
+ def fillna(self, value, downcast=None):
"""
fillna is not implemented for MultiIndex
"""
diff --git a/pandas/tests/extension/base/missing.py b/pandas/tests/extension/base/missing.py
index 328c6cd6164fb..4b9234a9904a2 100644
--- a/pandas/tests/extension/base/missing.py
+++ b/pandas/tests/extension/base/missing.py
@@ -68,6 +68,12 @@ def test_fillna_scalar(self, data_missing):
expected = data_missing.fillna(valid)
tm.assert_extension_array_equal(result, expected)
+ def test_fillna_with_none(self, data_missing):
+ # GH#57723
+ result = data_missing.fillna(None)
+ expected = data_missing
+ tm.assert_extension_array_equal(result, expected)
+
def test_fillna_limit_pad(self, data_missing):
arr = data_missing.take([1, 0, 0, 0, 1])
result = pd.Series(arr).ffill(limit=2)
diff --git a/pandas/tests/extension/decimal/test_decimal.py b/pandas/tests/extension/decimal/test_decimal.py
index a2721908e858f..504bafc145108 100644
--- a/pandas/tests/extension/decimal/test_decimal.py
+++ b/pandas/tests/extension/decimal/test_decimal.py
@@ -144,6 +144,14 @@ def test_fillna_series(self, data_missing):
):
super().test_fillna_series(data_missing)
+ def test_fillna_with_none(self, data_missing):
+ # GH#57723
+ # EAs that don't have special logic for None will raise, unlike pandas'
+ # which interpret None as the NA value for the dtype.
+ msg = "conversion from NoneType to Decimal is not supported"
+ with pytest.raises(TypeError, match=msg):
+ super().test_fillna_with_none(data_missing)
+
@pytest.mark.parametrize("dropna", [True, False])
def test_value_counts(self, all_data, dropna):
all_data = all_data[:10]
diff --git a/pandas/tests/extension/json/test_json.py b/pandas/tests/extension/json/test_json.py
index 6ecbf2063f203..22ac9627f6cda 100644
--- a/pandas/tests/extension/json/test_json.py
+++ b/pandas/tests/extension/json/test_json.py
@@ -149,6 +149,13 @@ def test_fillna_frame(self):
"""We treat dictionaries as a mapping in fillna, not a scalar."""
super().test_fillna_frame()
+ def test_fillna_with_none(self, data_missing):
+ # GH#57723
+ # EAs that don't have special logic for None will raise, unlike pandas'
+ # which interpret None as the NA value for the dtype.
+ with pytest.raises(AssertionError):
+ super().test_fillna_with_none(data_missing)
+
@pytest.mark.parametrize(
"limit_area, input_ilocs, expected_ilocs",
[
diff --git a/pandas/tests/frame/methods/test_fillna.py b/pandas/tests/frame/methods/test_fillna.py
index b8f67138889cc..e858c123e4dae 100644
--- a/pandas/tests/frame/methods/test_fillna.py
+++ b/pandas/tests/frame/methods/test_fillna.py
@@ -64,8 +64,8 @@ def test_fillna_datetime(self, datetime_frame):
padded.loc[padded.index[-5:], "A"] == padded.loc[padded.index[-5], "A"]
).all()
- msg = "Must specify a fill 'value'"
- with pytest.raises(ValueError, match=msg):
+ msg = r"missing 1 required positional argument: 'value'"
+ with pytest.raises(TypeError, match=msg):
datetime_frame.fillna()
@pytest.mark.xfail(using_pyarrow_string_dtype(), reason="can't fill 0 in string")
@@ -779,3 +779,17 @@ def test_ffill_bfill_limit_area(data, expected_data, method, kwargs):
expected = DataFrame(expected_data)
result = getattr(df, method)(**kwargs)
tm.assert_frame_equal(result, expected)
+
+
+@pytest.mark.parametrize("test_frame", [True, False])
+@pytest.mark.parametrize("dtype", ["float", "object"])
+def test_fillna_with_none_object(test_frame, dtype):
+ # GH#57723
+ obj = Series([1, np.nan, 3], dtype=dtype)
+ if test_frame:
+ obj = obj.to_frame()
+ result = obj.fillna(value=None)
+ expected = Series([1, None, 3], dtype=dtype)
+ if test_frame:
+ expected = expected.to_frame()
+ tm.assert_equal(result, expected)
| - [x] closes #57723 (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
The one behavior introduced here I think is important to highlight is using `None` with non-object dtypes. In such a case we treat `None` as the NA value for the given dtype for NumPy float/datetime and pandas' EA dtypes. However for external EAs, by default we try to put `None` in the values and raise if this fails.
For object dtype, we replace any NA value with `None`.
cc @Dr-Irv | https://api.github.com/repos/pandas-dev/pandas/pulls/58085 | 2024-03-30T14:14:53Z | 2024-04-15T17:22:40Z | 2024-04-15T17:22:40Z | 2024-04-15T20:32:33Z |
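The NA-sentinel distinction drawn in the PR description above can be illustrated with a small sketch. This is not code from the PR; it assumes only a recent pandas with `pd.NA` available, and it uses an explicit fill value so it runs the same on versions that predate `fillna(value=None)`:

```python
# Illustration of the NA sentinels discussed in the PR body above.
# All three of None, np.nan and pd.NA register as missing, which is why
# fillna(value=None) needs a rule for which sentinel gets written into
# an object-dtype column.
import numpy as np
import pandas as pd

print(pd.isna(None), pd.isna(np.nan), pd.isna(pd.NA))  # True True True

# Filling with an explicit scalar behaves the same on any pandas version;
# the PR extends this path so that value=None is also accepted.
s = pd.Series([1.0, np.nan, 3.0])
print(s.fillna(0.0).tolist())  # [1.0, 0.0, 3.0]
```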
CLN: move raising TypeError for `interpolate` with `object` dtype to Block | diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 858d2ba82a969..c6f7e6296bc13 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -7749,11 +7749,6 @@ def interpolate(
raise ValueError("'method' should be a string, not None.")
obj, should_transpose = (self.T, True) if axis == 1 else (self, False)
- # GH#53631
- if np.any(obj.dtypes == object):
- raise TypeError(
- f"{type(self).__name__} cannot interpolate with object dtype."
- )
if isinstance(obj.index, MultiIndex) and method != "linear":
raise ValueError(
diff --git a/pandas/core/internals/blocks.py b/pandas/core/internals/blocks.py
index 468ec32ce7760..7be1d5d95ffdf 100644
--- a/pandas/core/internals/blocks.py
+++ b/pandas/core/internals/blocks.py
@@ -1388,12 +1388,10 @@ def interpolate(
# If there are no NAs, then interpolate is a no-op
return [self.copy(deep=False)]
- # TODO(3.0): this case will not be reachable once GH#53638 is enforced
if self.dtype == _dtype_obj:
- # only deal with floats
- # bc we already checked that can_hold_na, we don't have int dtype here
- # test_interp_basic checks that we make a copy here
- return [self.copy(deep=False)]
+ # GH#53631
+ name = {1: "Series", 2: "DataFrame"}[self.ndim]
+ raise TypeError(f"{name} cannot interpolate with object dtype.")
copy, refs = self._get_refs_and_copy(inplace)
diff --git a/pandas/tests/series/methods/test_interpolate.py b/pandas/tests/series/methods/test_interpolate.py
index c5df1fd498938..1008c2c87dc9e 100644
--- a/pandas/tests/series/methods/test_interpolate.py
+++ b/pandas/tests/series/methods/test_interpolate.py
@@ -790,11 +790,9 @@ def test_interpolate_unsorted_index(self, ascending, expected_values):
def test_interpolate_asfreq_raises(self):
ser = Series(["a", None, "b"], dtype=object)
- msg2 = "Series cannot interpolate with object dtype"
- msg = "Invalid fill method"
- with pytest.raises(TypeError, match=msg2):
- with pytest.raises(ValueError, match=msg):
- ser.interpolate(method="asfreq")
+ msg = "Can not interpolate with method=asfreq"
+ with pytest.raises(ValueError, match=msg):
+ ser.interpolate(method="asfreq")
def test_interpolate_fill_value(self):
# GH#54920
| xref #57820
moved raising `TypeError` for `interpolate` with object dtype from ``NDFrame`` to ``Block`` | https://api.github.com/repos/pandas-dev/pandas/pulls/58083 | 2024-03-30T00:45:24Z | 2024-04-01T20:49:57Z | 2024-04-01T20:49:57Z | 2024-04-01T20:50:04Z |
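As a quick sketch of the behavior being guarded (an illustration, not code from this PR): linear interpolation is only meaningful for numeric data, which is why object dtype is rejected at the Block level above. The numeric case works on any recent pandas:

```python
# interpolate() fills numeric gaps linearly; object-dtype data has no
# meaningful interpolation, which is the case the Block-level TypeError
# in the diff above now guards against.
import numpy as np
import pandas as pd

s = pd.Series([1.0, np.nan, 3.0])
print(s.interpolate().tolist())  # [1.0, 2.0, 3.0]
```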
Backport PR #58075 on branch 2.2.x (DOC: whatsnew note for #57553) | diff --git a/doc/source/whatsnew/v2.2.2.rst b/doc/source/whatsnew/v2.2.2.rst
index 9e1a883d47cf8..0dac3660c76b2 100644
--- a/doc/source/whatsnew/v2.2.2.rst
+++ b/doc/source/whatsnew/v2.2.2.rst
@@ -15,6 +15,7 @@ Fixed regressions
~~~~~~~~~~~~~~~~~
- :meth:`DataFrame.__dataframe__` was producing incorrect data buffers when a column's type was a pandas nullable one with missing values (:issue:`56702`)
- :meth:`DataFrame.__dataframe__` was producing incorrect data buffers when a column's type was a pyarrow nullable one with missing values (:issue:`57664`)
+- Avoid issuing a spurious ``DeprecationWarning`` when a custom :class:`DataFrame` or :class:`Series` subclass method is called (:issue:`57553`)
- Fixed regression in precision of :func:`to_datetime` with string and ``unit`` input (:issue:`57051`)
.. ---------------------------------------------------------------------------
| Backport PR #58075: DOC: whatsnew note for #57553 | https://api.github.com/repos/pandas-dev/pandas/pulls/58080 | 2024-03-29T21:40:25Z | 2024-04-01T21:23:48Z | 2024-04-01T21:23:48Z | 2024-04-01T21:23:50Z |
CLN: Remove unnecessary block in apply_str | diff --git a/pandas/core/apply.py b/pandas/core/apply.py
index de2fd9394e2fa..e8df24850f7a8 100644
--- a/pandas/core/apply.py
+++ b/pandas/core/apply.py
@@ -564,22 +564,10 @@ def apply_str(self) -> DataFrame | Series:
"axis" not in arg_names or func in ("corrwith", "skew")
):
raise ValueError(f"Operation {func} does not support axis=1")
- if "axis" in arg_names:
- if isinstance(obj, (SeriesGroupBy, DataFrameGroupBy)):
- # Try to avoid FutureWarning for deprecated axis keyword;
- # If self.axis matches the axis we would get by not passing
- # axis, we safely exclude the keyword.
-
- default_axis = 0
- if func in ["idxmax", "idxmin"]:
- # DataFrameGroupBy.idxmax, idxmin axis defaults to self.axis,
- # whereas other axis keywords default to 0
- default_axis = self.obj.axis
-
- if default_axis != self.axis:
- self.kwargs["axis"] = self.axis
- else:
- self.kwargs["axis"] = self.axis
+ if "axis" in arg_names and not isinstance(
+ obj, (SeriesGroupBy, DataFrameGroupBy)
+ ):
+ self.kwargs["axis"] = self.axis
return self._apply_str(obj, func, *self.args, **self.kwargs)
def apply_list_or_dict_like(self) -> DataFrame | Series:
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
Due to #57109, "axis" will not be an argument for groupby ops.
Hat tip to @jbrockmendel for finding this. | https://api.github.com/repos/pandas-dev/pandas/pulls/58077 | 2024-03-29T19:30:21Z | 2024-03-29T21:39:18Z | 2024-03-29T21:39:18Z | 2024-03-29T22:07:23Z |
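For context, the `apply_str` path being simplified here is what dispatches string function names passed to `agg`/`apply`. A minimal sketch of that dispatch (not code from the PR; the column and key names are made up for illustration):

```python
# When agg/apply receives a function *name* rather than a callable,
# pandas resolves it through the apply_str path touched above and
# calls the corresponding bound method on the object.
import pandas as pd

df = pd.DataFrame({"key": ["a", "a", "b"], "val": [1, 2, 3]})
print(df.groupby("key")["val"].agg("sum").tolist())  # [3, 3]
print(pd.Series([1, 2, 3]).agg("sum"))  # 6
```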