title | diff | body | url | created_at | closed_at | merged_at | updated_at |
|---|---|---|---|---|---|---|---|
DOC: correct DataFrame.pivot docstring | diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 1798a35168265..dfe7e90c134fc 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -3868,9 +3868,8 @@ def last_valid_index(self):
def pivot(self, index=None, columns=None, values=None):
"""
Reshape data (produce a "pivot" table) based on column values. Uses
- unique values from index / columns to form axes and return either
- DataFrame or Panel, depending on whether you request a single value
- column (DataFrame) or all columns (Panel)
+ unique values from index / columns to form axes of the resulting
+ DataFrame.
Parameters
----------
@@ -3880,7 +3879,20 @@ def pivot(self, index=None, columns=None, values=None):
columns : string or object
Column name to use to make new frame's columns
values : string or object, optional
- Column name to use for populating new frame's values
+ Column name to use for populating new frame's values. If not
+ specified, all remaining columns will be used and the result will
+ have hierarchically indexed columns
+
+ Returns
+ -------
+ pivoted : DataFrame
+
+ See also
+ --------
+ DataFrame.pivot_table : generalization of pivot that can handle
+ duplicate values for one index/column pair
+ DataFrame.unstack : pivot based on the index values instead of a
+ column
Notes
-----
@@ -3889,30 +3901,30 @@ def pivot(self, index=None, columns=None, values=None):
Examples
--------
+
+ >>> df = pd.DataFrame({'foo': ['one','one','one','two','two','two'],
+ 'bar': ['A', 'B', 'C', 'A', 'B', 'C'],
+ 'baz': [1, 2, 3, 4, 5, 6]})
>>> df
foo bar baz
- 0 one A 1.
- 1 one B 2.
- 2 one C 3.
- 3 two A 4.
- 4 two B 5.
- 5 two C 6.
-
- >>> df.pivot('foo', 'bar', 'baz')
+ 0 one A 1
+ 1 one B 2
+ 2 one C 3
+ 3 two A 4
+ 4 two B 5
+ 5 two C 6
+
+ >>> df.pivot(index='foo', columns='bar', values='baz')
A B C
one 1 2 3
two 4 5 6
- >>> df.pivot('foo', 'bar')['baz']
+ >>> df.pivot(index='foo', columns='bar')['baz']
A B C
one 1 2 3
two 4 5 6
- Returns
- -------
- pivoted : DataFrame
- If no values column specified, will have hierarchically indexed
- columns
+
"""
from pandas.core.reshape import pivot
return pivot(self, index=index, columns=columns, values=values)
| The docstring's mention of a Panel being returned is not correct. You get hierarchically indexed (MultiIndex) columns instead, I noticed.
| https://api.github.com/repos/pandas-dev/pandas/pulls/14430 | 2016-10-15T20:29:41Z | 2016-10-22T09:50:20Z | 2016-10-22T09:50:20Z | 2016-10-22T09:50:20Z |
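A minimal sketch of the behavior the corrected docstring describes (run against a recent pandas; keyword arguments as in the updated example): `pivot` always returns a DataFrame, and omitting `values` yields hierarchically indexed columns rather than a Panel.

```python
import pandas as pd

# The docstring's example data.
df = pd.DataFrame({'foo': ['one', 'one', 'one', 'two', 'two', 'two'],
                   'bar': ['A', 'B', 'C', 'A', 'B', 'C'],
                   'baz': [1, 2, 3, 4, 5, 6]})

# Single values column: a plain DataFrame.
pivoted = df.pivot(index='foo', columns='bar', values='baz')
print(pivoted)
# bar  A  B  C
# foo
# one  1  2  3
# two  4  5  6

# No values column: all remaining columns are kept and the result
# has MultiIndex columns (not a Panel).
wide = df.pivot(index='foo', columns='bar')
print(isinstance(wide.columns, pd.MultiIndex))  # True
```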
Bug: Grouping by index and column fails on DataFrame with single index (GH14327) | diff --git a/doc/source/whatsnew/v0.19.1.txt b/doc/source/whatsnew/v0.19.1.txt
index 5c03408cbf20f..0046e2553ad6b 100644
--- a/doc/source/whatsnew/v0.19.1.txt
+++ b/doc/source/whatsnew/v0.19.1.txt
@@ -46,3 +46,4 @@ Bug Fixes
- Bug in ``pd.concat`` where names of the ``keys`` were not propagated to the resulting ``MultiIndex`` (:issue:`14252`)
- Bug in ``MultiIndex.set_levels`` where illegal level values were still set after raising an error (:issue:`13754`)
- Bug in ``DataFrame.to_json`` where ``lines=True`` and a value contained a ``}`` character (:issue:`14391`)
+- Bug in ``df.groupby`` causing an ``AttributeError`` when grouping a single index frame by a column and the index level (:issue:`14327`)
diff --git a/pandas/core/groupby.py b/pandas/core/groupby.py
index 3c376e3188eac..5223c0ac270f3 100644
--- a/pandas/core/groupby.py
+++ b/pandas/core/groupby.py
@@ -2201,36 +2201,12 @@ def __init__(self, index, grouper=None, obj=None, name=None, level=None,
raise AssertionError('Level %s not in index' % str(level))
level = index.names.index(level)
- inds = index.labels[level]
- level_index = index.levels[level]
-
if self.name is None:
self.name = index.names[level]
- # XXX complete hack
-
- if grouper is not None:
- level_values = index.levels[level].take(inds)
- self.grouper = level_values.map(self.grouper)
- else:
- # all levels may not be observed
- labels, uniques = algos.factorize(inds, sort=True)
-
- if len(uniques) > 0 and uniques[0] == -1:
- # handle NAs
- mask = inds != -1
- ok_labels, uniques = algos.factorize(inds[mask], sort=True)
-
- labels = np.empty(len(inds), dtype=inds.dtype)
- labels[mask] = ok_labels
- labels[~mask] = -1
-
- if len(uniques) < len(level_index):
- level_index = level_index.take(uniques)
+ self.grouper, self._labels, self._group_index = \
+ index._get_grouper_for_level(self.grouper, level)
- self._labels = labels
- self._group_index = level_index
- self.grouper = level_index.take(labels)
else:
if isinstance(self.grouper, (list, tuple)):
self.grouper = com._asarray_tuplesafe(self.grouper)
diff --git a/pandas/indexes/base.py b/pandas/indexes/base.py
index 5082fc84982c6..1c24a0db34b2b 100644
--- a/pandas/indexes/base.py
+++ b/pandas/indexes/base.py
@@ -432,6 +432,36 @@ def _update_inplace(self, result, **kwargs):
# guard when called from IndexOpsMixin
raise TypeError("Index can't be updated inplace")
+ _index_shared_docs['_get_grouper_for_level'] = """
+ Get index grouper corresponding to an index level
+
+ Parameters
+ ----------
+ mapper: Group mapping function or None
+ Function mapping index values to groups
+ level : int or None
+ Index level
+
+ Returns
+ -------
+ grouper : Index
+ Index of values to group on
+ labels : ndarray of int or None
+ Array of locations in level_index
+ uniques : Index or None
+ Index of unique values for level
+ """
+
+ @Appender(_index_shared_docs['_get_grouper_for_level'])
+ def _get_grouper_for_level(self, mapper, level=None):
+ assert level is None or level == 0
+ if mapper is None:
+ grouper = self
+ else:
+ grouper = self.map(mapper)
+
+ return grouper, None, None
+
def is_(self, other):
"""
More flexible, faster check like ``is`` but that works through views
diff --git a/pandas/indexes/multi.py b/pandas/indexes/multi.py
index 0c465da24a17e..a9f452db69659 100644
--- a/pandas/indexes/multi.py
+++ b/pandas/indexes/multi.py
@@ -539,6 +539,37 @@ def _format_native_types(self, na_rep='nan', **kwargs):
return mi.values
+ @Appender(_index_shared_docs['_get_grouper_for_level'])
+ def _get_grouper_for_level(self, mapper, level):
+ indexer = self.labels[level]
+ level_index = self.levels[level]
+
+ if mapper is not None:
+ # Handle group mapping function and return
+ level_values = self.levels[level].take(indexer)
+ grouper = level_values.map(mapper)
+ return grouper, None, None
+
+ labels, uniques = algos.factorize(indexer, sort=True)
+
+ if len(uniques) > 0 and uniques[0] == -1:
+ # Handle NAs
+ mask = indexer != -1
+ ok_labels, uniques = algos.factorize(indexer[mask],
+ sort=True)
+
+ labels = np.empty(len(indexer), dtype=indexer.dtype)
+ labels[mask] = ok_labels
+ labels[~mask] = -1
+
+ if len(uniques) < len(level_index):
+ # Remove unobserved levels from level_index
+ level_index = level_index.take(uniques)
+
+ grouper = level_index.take(labels)
+
+ return grouper, labels, level_index
+
@property
def _constructor(self):
return MultiIndex.from_tuples
diff --git a/pandas/tests/test_groupby.py b/pandas/tests/test_groupby.py
index 02917ab18c29f..f3791ee1d5c91 100644
--- a/pandas/tests/test_groupby.py
+++ b/pandas/tests/test_groupby.py
@@ -458,6 +458,39 @@ def test_grouper_creation_bug(self):
expected = s.groupby(level='one').sum()
assert_series_equal(result, expected)
+ def test_grouper_column_and_index(self):
+ # GH 14327
+
+ # Grouping a multi-index frame by a column and an index level should
+ # be equivalent to resetting the index and grouping by two columns
+ idx = pd.MultiIndex.from_tuples([('a', 1), ('a', 2), ('a', 3),
+ ('b', 1), ('b', 2), ('b', 3)])
+ idx.names = ['outer', 'inner']
+ df_multi = pd.DataFrame({"A": np.arange(6),
+ 'B': ['one', 'one', 'two',
+ 'two', 'one', 'one']},
+ index=idx)
+ result = df_multi.groupby(['B', pd.Grouper(level='inner')]).mean()
+ expected = df_multi.reset_index().groupby(['B', 'inner']).mean()
+ assert_frame_equal(result, expected)
+
+ # Test the reverse grouping order
+ result = df_multi.groupby([pd.Grouper(level='inner'), 'B']).mean()
+ expected = df_multi.reset_index().groupby(['inner', 'B']).mean()
+ assert_frame_equal(result, expected)
+
+ # Grouping a single-index frame by a column and the index should
+ # be equivalent to resetting the index and grouping by two columns
+ df_single = df_multi.reset_index('outer')
+ result = df_single.groupby(['B', pd.Grouper(level='inner')]).mean()
+ expected = df_single.reset_index().groupby(['B', 'inner']).mean()
+ assert_frame_equal(result, expected)
+
+ # Test the reverse grouping order
+ result = df_single.groupby([pd.Grouper(level='inner'), 'B']).mean()
+ expected = df_single.reset_index().groupby(['inner', 'B']).mean()
+ assert_frame_equal(result, expected)
+
def test_grouper_getting_correct_binner(self):
# GH 10063
| - [x] closes #14327
- [x] tests added / passed
- [x] passes `git diff upstream/master | flake8 --diff`
- [x] whatsnew entry
This PR is a continuation of #14333. See discussion there for explanation of why this new PR was needed.
| https://api.github.com/repos/pandas-dev/pandas/pulls/14428 | 2016-10-14T23:39:26Z | 2016-10-15T19:25:41Z | 2016-10-15T19:25:41Z | 2016-10-15T19:25:48Z |
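A small sketch (data and names are illustrative, not from the PR's tests) of what this fix enables: grouping a single-index frame by a column together with a `pd.Grouper` on the named index level, equivalent to resetting the index and grouping by two columns.

```python
import numpy as np
import pandas as pd

# Single-index frame with a named index level.
df = pd.DataFrame({'A': np.arange(6),
                   'B': ['one', 'one', 'two', 'two', 'one', 'one']},
                  index=pd.Index([1, 2, 3, 1, 2, 3], name='inner'))

# Group by a column and the index level in one call.
result = df.groupby(['B', pd.Grouper(level='inner')])['A'].mean()

# Equivalent: reset the index and group by two columns.
expected = df.reset_index().groupby(['B', 'inner'])['A'].mean()
print(result.equals(expected))  # True
```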
BUG: String indexing against object dtype may raise AttributeError | diff --git a/doc/source/whatsnew/v0.19.1.txt b/doc/source/whatsnew/v0.19.1.txt
index 2a421e5d2eacd..6dada92099a7f 100644
--- a/doc/source/whatsnew/v0.19.1.txt
+++ b/doc/source/whatsnew/v0.19.1.txt
@@ -37,6 +37,7 @@ Bug Fixes
- Bug in localizing an ambiguous timezone when a boolean is passed (:issue:`14402`)
+- Bug in string indexing against data with ``object`` ``Index`` may raise ``AttributeError`` (:issue:`14424`)
diff --git a/pandas/indexes/base.py b/pandas/indexes/base.py
index 5082fc84982c6..8b3ec86a44f11 100644
--- a/pandas/indexes/base.py
+++ b/pandas/indexes/base.py
@@ -2936,6 +2936,11 @@ def _wrap_joined_index(self, joined, other):
name = self.name if self.name == other.name else None
return Index(joined, name=name)
+ def _get_string_slice(self, key, use_lhs=True, use_rhs=True):
+ # this is for partial string indexing,
+ # overridden in DatetimeIndex, TimedeltaIndex and PeriodIndex
+ raise NotImplementedError
+
def slice_indexer(self, start=None, end=None, step=None, kind=None):
"""
For an ordered Index, compute the slice indexer for input labels and
diff --git a/pandas/tests/indexing/test_indexing.py b/pandas/tests/indexing/test_indexing.py
index fa406a27bef69..a50d3d28e5a11 100644
--- a/pandas/tests/indexing/test_indexing.py
+++ b/pandas/tests/indexing/test_indexing.py
@@ -3613,6 +3613,27 @@ def test_iloc_non_unique_indexing(self):
result = df2.loc[idx]
tm.assert_frame_equal(result, expected, check_index_type=False)
+ def test_string_slice(self):
+ # GH 14424
+ # string indexing against datetimelike with object
+ # dtype should properly raise KeyError
+ df = pd.DataFrame([1], pd.Index([pd.Timestamp('2011-01-01')],
+ dtype=object))
+ self.assertTrue(df.index.is_all_dates)
+ with tm.assertRaises(KeyError):
+ df['2011']
+
+ with tm.assertRaises(KeyError):
+ df.loc['2011', 0]
+
+ df = pd.DataFrame()
+ self.assertFalse(df.index.is_all_dates)
+ with tm.assertRaises(KeyError):
+ df['2011']
+
+ with tm.assertRaises(KeyError):
+ df.loc['2011', 0]
+
def test_mi_access(self):
# GH 4145
| - [x] tests added / passed
- [x] passes `git diff upstream/master | flake8 --diff`
- [x] whatsnew entry
String indexing against an object-dtype index may raise `AttributeError` rather than `KeyError`.
```
df = pd.DataFrame([1], pd.Index([pd.Timestamp('2011-01-01')], dtype=object))
df.index.is_all_dates
# True
df['2011']
# AttributeError: 'Index' object has no attribute '_get_string_slice'
```
| https://api.github.com/repos/pandas-dev/pandas/pulls/14424 | 2016-10-14T11:21:48Z | 2016-10-22T09:54:33Z | 2016-10-22T09:54:33Z | 2016-10-23T04:50:43Z |
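A short reproduction of the fixed behavior, assuming a pandas version that includes this patch: a partial string lookup against an object-dtype index of timestamps now fails with `KeyError`, not `AttributeError`.

```python
import pandas as pd

# Object-dtype index holding a Timestamp, as in the bug report.
df = pd.DataFrame([1], index=pd.Index([pd.Timestamp('2011-01-01')],
                                      dtype=object))

# Before the fix this raised AttributeError
# ('Index' object has no attribute '_get_string_slice').
try:
    df['2011']
    caught = None
except KeyError:
    caught = 'KeyError'
except AttributeError:
    caught = 'AttributeError'

print(caught)  # KeyError
```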
Period factorization | diff --git a/asv_bench/benchmarks/groupby.py b/asv_bench/benchmarks/groupby.py
index e12b00dd06b39..5f3671012e6d5 100644
--- a/asv_bench/benchmarks/groupby.py
+++ b/asv_bench/benchmarks/groupby.py
@@ -548,6 +548,32 @@ def time_groupby_sum(self):
self.df.groupby(['a'])['b'].sum()
+class groupby_period(object):
+ # GH 14338
+ goal_time = 0.2
+
+ def make_grouper(self, N):
+ return pd.period_range('1900-01-01', freq='D', periods=N)
+
+ def setup(self):
+ N = 10000
+ self.grouper = self.make_grouper(N)
+ self.df = pd.DataFrame(np.random.randn(N, 2))
+
+ def time_groupby_sum(self):
+ self.df.groupby(self.grouper).sum()
+
+
+class groupby_datetime(groupby_period):
+ def make_grouper(self, N):
+ return pd.date_range('1900-01-01', freq='D', periods=N)
+
+
+class groupby_datetimetz(groupby_period):
+ def make_grouper(self, N):
+ return pd.date_range('1900-01-01', freq='D', periods=N,
+ tz='US/Central')
+
#----------------------------------------------------------------------
# Series.value_counts
diff --git a/doc/source/whatsnew/v0.19.1.txt b/doc/source/whatsnew/v0.19.1.txt
index 3edb8c1fa9071..8843a7849c200 100644
--- a/doc/source/whatsnew/v0.19.1.txt
+++ b/doc/source/whatsnew/v0.19.1.txt
@@ -20,7 +20,7 @@ Highlights include:
Performance Improvements
~~~~~~~~~~~~~~~~~~~~~~~~
-
+ - Fixed performance regression in factorization of ``Period`` data (:issue:`14338`)
diff --git a/pandas/core/algorithms.py b/pandas/core/algorithms.py
index ee59d6552bb2f..8644d4568e44d 100644
--- a/pandas/core/algorithms.py
+++ b/pandas/core/algorithms.py
@@ -285,18 +285,27 @@ def factorize(values, sort=False, order=None, na_sentinel=-1, size_hint=None):
note: an array of Periods will ignore sort as it returns an always sorted
PeriodIndex
"""
- from pandas import Index, Series, DatetimeIndex
-
- vals = np.asarray(values)
-
- # localize to UTC
- is_datetimetz_type = is_datetimetz(values)
- if is_datetimetz_type:
- values = DatetimeIndex(values)
- vals = values.asi8
+ from pandas import Index, Series, DatetimeIndex, PeriodIndex
+
+ # handling two possibilities here
+ # - for a numpy datetimelike simply view as i8 then cast back
+ # - for an extension datetimelike view as i8 then
+ # reconstruct from boxed values to transfer metadata
+ dtype = None
+ if needs_i8_conversion(values):
+ if is_period_dtype(values):
+ values = PeriodIndex(values)
+ vals = values.asi8
+ elif is_datetimetz(values):
+ values = DatetimeIndex(values)
+ vals = values.asi8
+ else:
+ # numpy dtype
+ dtype = values.dtype
+ vals = values.view(np.int64)
+ else:
+ vals = np.asarray(values)
- is_datetime = is_datetime64_dtype(vals)
- is_timedelta = is_timedelta64_dtype(vals)
(hash_klass, vec_klass), vals = _get_data_algo(vals, _hashtables)
table = hash_klass(size_hint or len(vals))
@@ -311,13 +320,9 @@ def factorize(values, sort=False, order=None, na_sentinel=-1, size_hint=None):
uniques, labels = safe_sort(uniques, labels, na_sentinel=na_sentinel,
assume_unique=True)
- if is_datetimetz_type:
- # reset tz
- uniques = values._shallow_copy(uniques)
- elif is_datetime:
- uniques = uniques.astype('M8[ns]')
- elif is_timedelta:
- uniques = uniques.astype('m8[ns]')
+ if dtype is not None:
+ uniques = uniques.astype(dtype)
+
if isinstance(values, Index):
uniques = values._shallow_copy(uniques, name=None)
elif isinstance(values, Series):
| - [x] closes #14338
- [x] tests not needed / passed
- [x] passes `git diff upstream/master | flake8 --diff`
- [x] whatsnew entry
asv
```
before after ratio
[c41c6511] [96b364a4]
- 2.44s 46.28ms 0.02 groupby.groupby_period.time_groupby_sum
```
Continuation of #14348
| https://api.github.com/repos/pandas-dev/pandas/pulls/14419 | 2016-10-13T20:17:07Z | 2016-10-15T19:27:11Z | 2016-10-15T19:27:11Z | 2016-11-30T01:02:08Z |
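A sketch of the code path this patch speeds up, assuming a pandas with the fix applied: factorizing period data goes through the `asi8` integer view, and the `uniques` come back as a `PeriodIndex` with the frequency metadata intact.

```python
import pandas as pd

# Daily periods with each value repeated, so factorization has work to do.
pi = pd.period_range('1900-01-01', freq='D', periods=5).repeat(2)

codes, uniques = pd.factorize(pi)
print(codes)   # [0 0 1 1 2 2 3 3 4 4]
print(uniques) # PeriodIndex of the 5 unique daily periods, freq='D'
```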
DOC: pydata/pandas -> pandas-dev/pandas | diff --git a/asv_bench/benchmarks/attrs_caching.py b/asv_bench/benchmarks/attrs_caching.py
index 2b10cb88a3134..de9aa18937985 100644
--- a/asv_bench/benchmarks/attrs_caching.py
+++ b/asv_bench/benchmarks/attrs_caching.py
@@ -20,4 +20,4 @@ def setup(self):
self.cur_index = self.df.index
def time_setattr_dataframe_index(self):
- self.df.index = self.cur_index
\ No newline at end of file
+ self.df.index = self.cur_index
diff --git a/asv_bench/benchmarks/ctors.py b/asv_bench/benchmarks/ctors.py
index 265ffbc7261ca..f68cf9399c546 100644
--- a/asv_bench/benchmarks/ctors.py
+++ b/asv_bench/benchmarks/ctors.py
@@ -49,4 +49,4 @@ def setup(self):
self.s = Series(([Timestamp('20110101'), Timestamp('20120101'), Timestamp('20130101')] * 1000))
def time_index_from_series_ctor(self):
- Index(self.s)
\ No newline at end of file
+ Index(self.s)
diff --git a/asv_bench/benchmarks/frame_ctor.py b/asv_bench/benchmarks/frame_ctor.py
index 85f3c1628bd8b..6f40611e68531 100644
--- a/asv_bench/benchmarks/frame_ctor.py
+++ b/asv_bench/benchmarks/frame_ctor.py
@@ -1703,4 +1703,4 @@ def setup(self):
self.dict_list = [dict(zip(self.columns, row)) for row in self.frame.values]
def time_series_ctor_from_dict(self):
- Series(self.some_dict)
\ No newline at end of file
+ Series(self.some_dict)
diff --git a/asv_bench/benchmarks/hdfstore_bench.py b/asv_bench/benchmarks/hdfstore_bench.py
index 7638cc2a0f8df..659fc4941da54 100644
--- a/asv_bench/benchmarks/hdfstore_bench.py
+++ b/asv_bench/benchmarks/hdfstore_bench.py
@@ -348,4 +348,4 @@ def remove(self, f):
try:
os.remove(self.f)
except:
- pass
\ No newline at end of file
+ pass
diff --git a/asv_bench/benchmarks/index_object.py b/asv_bench/benchmarks/index_object.py
index a0a1b560d36f3..2c94f9b2b1e8c 100644
--- a/asv_bench/benchmarks/index_object.py
+++ b/asv_bench/benchmarks/index_object.py
@@ -344,4 +344,4 @@ def setup(self):
self.mi = MultiIndex.from_product([self.level1, self.level2])
def time_multiindex_with_datetime_level_sliced(self):
- self.mi[:10].values
\ No newline at end of file
+ self.mi[:10].values
diff --git a/asv_bench/benchmarks/io_sql.py b/asv_bench/benchmarks/io_sql.py
index 9a6b21f9e067a..c583ac1768c90 100644
--- a/asv_bench/benchmarks/io_sql.py
+++ b/asv_bench/benchmarks/io_sql.py
@@ -212,4 +212,4 @@ def setup(self):
self.df = DataFrame({'float1': randn(10000), 'float2': randn(10000), 'string1': (['foo'] * 10000), 'bool1': ([True] * 10000), 'int1': np.random.randint(0, 100000, size=10000), }, index=self.index)
def time_sql_write_sqlalchemy(self):
- self.df.to_sql('test1', self.engine, if_exists='replace')
\ No newline at end of file
+ self.df.to_sql('test1', self.engine, if_exists='replace')
diff --git a/asv_bench/benchmarks/panel_ctor.py b/asv_bench/benchmarks/panel_ctor.py
index 0b0e73847aa96..4f6fd4a5a2df8 100644
--- a/asv_bench/benchmarks/panel_ctor.py
+++ b/asv_bench/benchmarks/panel_ctor.py
@@ -61,4 +61,4 @@ def setup(self):
self.data_frames[x] = self.df
def time_panel_from_dict_two_different_indexes(self):
- Panel.from_dict(self.data_frames)
\ No newline at end of file
+ Panel.from_dict(self.data_frames)
diff --git a/asv_bench/benchmarks/panel_methods.py b/asv_bench/benchmarks/panel_methods.py
index 90118eaf6e407..0bd572db2211a 100644
--- a/asv_bench/benchmarks/panel_methods.py
+++ b/asv_bench/benchmarks/panel_methods.py
@@ -53,4 +53,4 @@ def setup(self):
self.panel = Panel(np.random.randn(100, len(self.index), 1000))
def time_panel_shift_minor(self):
- self.panel.shift(1, axis='minor')
\ No newline at end of file
+ self.panel.shift(1, axis='minor')
diff --git a/asv_bench/benchmarks/replace.py b/asv_bench/benchmarks/replace.py
index e9f33ebfce0bd..869ddd8d6fa49 100644
--- a/asv_bench/benchmarks/replace.py
+++ b/asv_bench/benchmarks/replace.py
@@ -45,4 +45,4 @@ def setup(self):
self.ts = Series(np.random.randn(self.N), index=self.rng)
def time_replace_replacena(self):
- self.ts.replace(np.nan, 0.0, inplace=True)
\ No newline at end of file
+ self.ts.replace(np.nan, 0.0, inplace=True)
diff --git a/asv_bench/benchmarks/reshape.py b/asv_bench/benchmarks/reshape.py
index 604fa5092a231..ab235e085986c 100644
--- a/asv_bench/benchmarks/reshape.py
+++ b/asv_bench/benchmarks/reshape.py
@@ -73,4 +73,4 @@ def setup(self):
break
def time_unstack_sparse_keyspace(self):
- self.idf.unstack()
\ No newline at end of file
+ self.idf.unstack()
diff --git a/asv_bench/benchmarks/stat_ops.py b/asv_bench/benchmarks/stat_ops.py
index daf5135e64c40..12fbb2478c2a5 100644
--- a/asv_bench/benchmarks/stat_ops.py
+++ b/asv_bench/benchmarks/stat_ops.py
@@ -258,4 +258,4 @@ def time_rolling_skew(self):
rolling_skew(self.arr, self.win)
def time_rolling_kurt(self):
- rolling_kurt(self.arr, self.win)
\ No newline at end of file
+ rolling_kurt(self.arr, self.win)
diff --git a/asv_bench/benchmarks/strings.py b/asv_bench/benchmarks/strings.py
index e4f91b1b9c0c6..d64606214ca6a 100644
--- a/asv_bench/benchmarks/strings.py
+++ b/asv_bench/benchmarks/strings.py
@@ -390,4 +390,4 @@ def time_strings_upper(self):
self.many.str.upper()
def make_series(self, letters, strlen, size):
- return Series([str(x) for x in np.fromiter(IT.cycle(letters), count=(size * strlen), dtype='|S1').view('|S{}'.format(strlen))])
\ No newline at end of file
+ return Series([str(x) for x in np.fromiter(IT.cycle(letters), count=(size * strlen), dtype='|S1').view('|S{}'.format(strlen))])
diff --git a/doc/README.rst b/doc/README.rst
index a93ad32a4c8f8..a3733846d9ed1 100644
--- a/doc/README.rst
+++ b/doc/README.rst
@@ -155,9 +155,9 @@ Where to start?
---------------
There are a number of issues listed under `Docs
-<https://github.com/pydata/pandas/issues?labels=Docs&sort=updated&state=open>`_
+<https://github.com/pandas-dev/pandas/issues?labels=Docs&sort=updated&state=open>`_
and `Good as first PR
-<https://github.com/pydata/pandas/issues?labels=Good+as+first+PR&sort=updated&state=open>`_
+<https://github.com/pandas-dev/pandas/issues?labels=Good+as+first+PR&sort=updated&state=open>`_
where you could start out.
Or maybe you have an idea of your own, by using pandas, looking for something
diff --git a/doc/_templates/autosummary/accessor_attribute.rst b/doc/_templates/autosummary/accessor_attribute.rst
index e38a9f22f9d99..a2f0eb5e068c4 100644
--- a/doc/_templates/autosummary/accessor_attribute.rst
+++ b/doc/_templates/autosummary/accessor_attribute.rst
@@ -3,4 +3,4 @@
.. currentmodule:: {{ module.split('.')[0] }}
-.. autoaccessorattribute:: {{ [module.split('.')[1], objname]|join('.') }}
\ No newline at end of file
+.. autoaccessorattribute:: {{ [module.split('.')[1], objname]|join('.') }}
diff --git a/doc/_templates/autosummary/accessor_method.rst b/doc/_templates/autosummary/accessor_method.rst
index 8175d8615ceb2..43dfc3b813120 100644
--- a/doc/_templates/autosummary/accessor_method.rst
+++ b/doc/_templates/autosummary/accessor_method.rst
@@ -3,4 +3,4 @@
.. currentmodule:: {{ module.split('.')[0] }}
-.. autoaccessormethod:: {{ [module.split('.')[1], objname]|join('.') }}
\ No newline at end of file
+.. autoaccessormethod:: {{ [module.split('.')[1], objname]|join('.') }}
diff --git a/doc/source/categorical.rst b/doc/source/categorical.rst
index f52f72b49dd31..090998570a358 100644
--- a/doc/source/categorical.rst
+++ b/doc/source/categorical.rst
@@ -973,7 +973,7 @@ are not numeric data (even in the case that ``.categories`` is numeric).
print("TypeError: " + str(e))
.. note::
- If such a function works, please file a bug at https://github.com/pydata/pandas!
+ If such a function works, please file a bug at https://github.com/pandas-dev/pandas!
dtype in apply
~~~~~~~~~~~~~~
diff --git a/doc/source/comparison_with_sas.rst b/doc/source/comparison_with_sas.rst
index 85d432b546f21..7ec91d251f15d 100644
--- a/doc/source/comparison_with_sas.rst
+++ b/doc/source/comparison_with_sas.rst
@@ -116,7 +116,7 @@ Reading External Data
Like SAS, pandas provides utilities for reading in data from
many formats. The ``tips`` dataset, found within the pandas
-tests (`csv <https://raw.github.com/pydata/pandas/master/pandas/tests/data/tips.csv>`_)
+tests (`csv <https://raw.github.com/pandas-dev/pandas/master/pandas/tests/data/tips.csv>`_)
will be used in many of the following examples.
SAS provides ``PROC IMPORT`` to read csv data into a data set.
@@ -131,7 +131,7 @@ The pandas method is :func:`read_csv`, which works similarly.
.. ipython:: python
- url = 'https://raw.github.com/pydata/pandas/master/pandas/tests/data/tips.csv'
+ url = 'https://raw.github.com/pandas-dev/pandas/master/pandas/tests/data/tips.csv'
tips = pd.read_csv(url)
tips.head()
diff --git a/doc/source/comparison_with_sql.rst b/doc/source/comparison_with_sql.rst
index 099a0e9469058..7962e0e69faa1 100644
--- a/doc/source/comparison_with_sql.rst
+++ b/doc/source/comparison_with_sql.rst
@@ -23,7 +23,7 @@ structure.
.. ipython:: python
- url = 'https://raw.github.com/pydata/pandas/master/pandas/tests/data/tips.csv'
+ url = 'https://raw.github.com/pandas-dev/pandas/master/pandas/tests/data/tips.csv'
tips = pd.read_csv(url)
tips.head()
diff --git a/doc/source/conf.py b/doc/source/conf.py
index fd3a2493a53e8..6ccd83c741f7a 100644
--- a/doc/source/conf.py
+++ b/doc/source/conf.py
@@ -301,9 +301,9 @@
autosummary_generate = glob.glob("*.rst")
# extlinks alias
-extlinks = {'issue': ('https://github.com/pydata/pandas/issues/%s',
+extlinks = {'issue': ('https://github.com/pandas-dev/pandas/issues/%s',
'GH'),
- 'wiki': ('https://github.com/pydata/pandas/wiki/%s',
+ 'wiki': ('https://github.com/pandas-dev/pandas/wiki/%s',
'wiki ')}
ipython_exec_lines = [
@@ -468,10 +468,10 @@ def linkcode_resolve(domain, info):
fn = os.path.relpath(fn, start=os.path.dirname(pandas.__file__))
if '+' in pandas.__version__:
- return "http://github.com/pydata/pandas/blob/master/pandas/%s%s" % (
+ return "http://github.com/pandas-dev/pandas/blob/master/pandas/%s%s" % (
fn, linespec)
else:
- return "http://github.com/pydata/pandas/blob/v%s/pandas/%s%s" % (
+ return "http://github.com/pandas-dev/pandas/blob/v%s/pandas/%s%s" % (
pandas.__version__, fn, linespec)
diff --git a/doc/source/contributing.rst b/doc/source/contributing.rst
index 7f336abcaa6d7..3e500291db859 100644
--- a/doc/source/contributing.rst
+++ b/doc/source/contributing.rst
@@ -14,11 +14,11 @@ All contributions, bug reports, bug fixes, documentation improvements,
enhancements and ideas are welcome.
If you are simply looking to start working with the *pandas* codebase, navigate to the
-`GitHub "issues" tab <https://github.com/pydata/pandas/issues>`_ and start looking through
+`GitHub "issues" tab <https://github.com/pandas-dev/pandas/issues>`_ and start looking through
interesting issues. There are a number of issues listed under `Docs
-<https://github.com/pydata/pandas/issues?labels=Docs&sort=updated&state=open>`_
+<https://github.com/pandas-dev/pandas/issues?labels=Docs&sort=updated&state=open>`_
and `Difficulty Novice
-<https://github.com/pydata/pandas/issues?q=is%3Aopen+is%3Aissue+label%3A%22Difficulty+Novice%22>`_
+<https://github.com/pandas-dev/pandas/issues?q=is%3Aopen+is%3Aissue+label%3A%22Difficulty+Novice%22>`_
where you could start out.
Or maybe through using *pandas* you have an idea of your own or are looking for something
@@ -27,7 +27,7 @@ about it!
Feel free to ask questions on the `mailing list
<https://groups.google.com/forum/?fromgroups#!forum/pydata>`_ or on `Gitter
-<https://gitter.im/pydata/pandas>`_.
+<https://gitter.im/pandas-dev/pandas>`_.
Bug reports and enhancement requests
====================================
@@ -79,7 +79,7 @@ It can very quickly become overwhelming, but sticking to the guidelines below wi
straightforward and mostly trouble free. As always, if you are having difficulties please
feel free to ask for help.
-The code is hosted on `GitHub <https://www.github.com/pydata/pandas>`_. To
+The code is hosted on `GitHub <https://www.github.com/pandas-dev/pandas>`_. To
contribute you will need to sign up for a `free GitHub account
<https://github.com/signup/free>`_. We use `Git <http://git-scm.com/>`_ for
version control to allow many people to work together on the project.
@@ -103,12 +103,12 @@ Forking
-------
You will need your own fork to work on the code. Go to the `pandas project
-page <https://github.com/pydata/pandas>`_ and hit the ``Fork`` button. You will
+page <https://github.com/pandas-dev/pandas>`_ and hit the ``Fork`` button. You will
want to clone your fork to your machine::
git clone git@github.com:your-user-name/pandas.git pandas-yourname
cd pandas-yourname
- git remote add upstream git://github.com/pydata/pandas.git
+ git remote add upstream git://github.com/pandas-dev/pandas.git
This creates the directory `pandas-yourname` and connects your repository to
the upstream (main project) *pandas* repository.
@@ -467,7 +467,7 @@ and make these changes with::
pep8radius master --diff --in-place
Additional standards are outlined on the `code style wiki
-page <https://github.com/pydata/pandas/wiki/Code-Style-and-Conventions>`_.
+page <https://github.com/pandas-dev/pandas/wiki/Code-Style-and-Conventions>`_.
Please try to maintain backward compatibility. *pandas* has lots of users with lots of
existing code, so don't break it if at all possible. If you think breakage is required,
@@ -501,7 +501,7 @@ All tests should go into the ``tests`` subdirectory of the specific package.
This folder contains many current examples of tests, and we suggest looking to these for
inspiration. If your test requires working with files or
network connectivity, there is more information on the `testing page
-<https://github.com/pydata/pandas/wiki/Testing>`_ of the wiki.
+<https://github.com/pandas-dev/pandas/wiki/Testing>`_ of the wiki.
The ``pandas.util.testing`` module has many special ``assert`` functions that
make it easier to make statements about whether Series or DataFrame objects are
@@ -639,7 +639,7 @@ on Travis-CI. The first step is to create a `service account
Integration tests for ``pandas.io.gbq`` are skipped in pull requests because
the credentials that are required for running Google BigQuery integration
tests are `encrypted <https://docs.travis-ci.com/user/encrypting-files/>`__
-on Travis-CI and are only accessible from the pydata/pandas repository. The
+on Travis-CI and are only accessible from the pandas-dev/pandas repository. The
credentials won't be available on forks of pandas. Here are the steps to run
gbq integration tests on a forked repository:
@@ -688,7 +688,7 @@ performance regressions.
You can run specific benchmarks using the ``-r`` flag, which takes a regular expression.
-See the `performance testing wiki <https://github.com/pydata/pandas/wiki/Performance-Testing>`_ for information
+See the `performance testing wiki <https://github.com/pandas-dev/pandas/wiki/Performance-Testing>`_ for information
on how to write a benchmark.
Documenting your code
@@ -712,8 +712,8 @@ directive is used. The sphinx syntax for that is:
This will put the text *New in version 0.17.0* wherever you put the sphinx
directive. This should also be put in the docstring when adding a new function
-or method (`example <https://github.com/pydata/pandas/blob/v0.16.2/pandas/core/generic.py#L1959>`__)
-or a new keyword argument (`example <https://github.com/pydata/pandas/blob/v0.16.2/pandas/core/frame.py#L1171>`__).
+or method (`example <https://github.com/pandas-dev/pandas/blob/v0.16.2/pandas/core/generic.py#L1959>`__)
+or a new keyword argument (`example <https://github.com/pandas-dev/pandas/blob/v0.16.2/pandas/core/frame.py#L1171>`__).
Contributing your changes to *pandas*
=====================================
@@ -806,8 +806,8 @@ like::
origin git@github.com:yourname/pandas.git (fetch)
origin git@github.com:yourname/pandas.git (push)
- upstream git://github.com/pydata/pandas.git (fetch)
- upstream git://github.com/pydata/pandas.git (push)
+ upstream git://github.com/pandas-dev/pandas.git (fetch)
+ upstream git://github.com/pandas-dev/pandas.git (push)
Now your code is on GitHub, but it is not yet a part of the *pandas* project. For that to
happen, a pull request needs to be submitted on GitHub.
diff --git a/doc/source/cookbook.rst b/doc/source/cookbook.rst
index 38a816060e1bc..a4ba21d495790 100644
--- a/doc/source/cookbook.rst
+++ b/doc/source/cookbook.rst
@@ -200,7 +200,7 @@ The :ref:`indexing <indexing>` docs.
df[(df.AAA <= 6) & (df.index.isin([0,2,4]))]
`Use loc for label-oriented slicing and iloc positional slicing
-<https://github.com/pydata/pandas/issues/2904>`__
+<https://github.com/pandas-dev/pandas/issues/2904>`__
.. ipython:: python
@@ -410,7 +410,7 @@ Sorting
df.sort_values(by=('Labs', 'II'), ascending=False)
`Partial Selection, the need for sortedness;
-<https://github.com/pydata/pandas/issues/2995>`__
+<https://github.com/pandas-dev/pandas/issues/2995>`__
Levels
******
@@ -787,7 +787,7 @@ The :ref:`Resample <timeseries.resampling>` docs.
<http://stackoverflow.com/questions/14569223/timegrouper-pandas>`__
`Using TimeGrouper and another grouping to create subgroups, then apply a custom function
-<https://github.com/pydata/pandas/issues/3791>`__
+<https://github.com/pandas-dev/pandas/issues/3791>`__
`Resampling with custom periods
<http://stackoverflow.com/questions/15408156/resampling-with-custom-periods>`__
@@ -823,7 +823,7 @@ ignore_index is needed in pandas < v0.13, and depending on df construction
df = df1.append(df2,ignore_index=True); df
`Self Join of a DataFrame
-<https://github.com/pydata/pandas/issues/2996>`__
+<https://github.com/pandas-dev/pandas/issues/2996>`__
.. ipython:: python
@@ -936,7 +936,7 @@ using that handle to read.
<http://stackoverflow.com/questions/15555005/get-inferred-dataframe-types-iteratively-using-chunksize>`__
`Dealing with bad lines
-<http://github.com/pydata/pandas/issues/2886>`__
+<http://github.com/pandas-dev/pandas/issues/2886>`__
`Dealing with bad lines II
<http://nipunbatra.github.io/2013/06/reading-unclean-data-csv-using-pandas/>`__
@@ -1075,7 +1075,7 @@ The :ref:`HDFStores <io.hdf5>` docs
<http://stackoverflow.com/questions/13926089/selecting-columns-from-pandas-hdfstore-table>`__
`Managing heterogeneous data using a linked multiple table hierarchy
-<http://github.com/pydata/pandas/issues/3032>`__
+<http://github.com/pandas-dev/pandas/issues/3032>`__
`Merging on-disk tables with millions of rows
<http://stackoverflow.com/questions/14614512/merging-two-tables-with-millions-of-rows-in-python/14617925#14617925>`__
@@ -1216,7 +1216,7 @@ Timedeltas
The :ref:`Timedeltas <timedeltas.timedeltas>` docs.
`Using timedeltas
-<http://github.com/pydata/pandas/pull/2899>`__
+<http://github.com/pandas-dev/pandas/pull/2899>`__
.. ipython:: python
diff --git a/doc/source/ecosystem.rst b/doc/source/ecosystem.rst
index 17ebd1f163f4f..d42d1a9091421 100644
--- a/doc/source/ecosystem.rst
+++ b/doc/source/ecosystem.rst
@@ -143,7 +143,7 @@ both "column wise min/max and global min/max coloring."
API
-----
-`pandas-datareader <https://github.com/pydata/pandas-datareader>`__
+`pandas-datareader <https://github.com/pandas-dev/pandas-datareader>`__
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
``pandas-datareader`` is a remote data access library for pandas. ``pandas.io`` from pandas < 0.17.0 is now refactored/split-off to and importable from ``pandas_datareader`` (PyPI:``pandas-datareader``). Many/most of the supported APIs have at least a documentation paragraph in the `pandas-datareader docs <https://pandas-datareader.readthedocs.org/en/latest/>`_:
diff --git a/doc/source/gotchas.rst b/doc/source/gotchas.rst
index 99d7486cde2d0..cfac5c257184d 100644
--- a/doc/source/gotchas.rst
+++ b/doc/source/gotchas.rst
@@ -391,7 +391,7 @@ This is because ``reindex_like`` silently inserts ``NaNs`` and the ``dtype``
changes accordingly. This can cause some issues when using ``numpy`` ``ufuncs``
such as ``numpy.logical_and``.
-See the `this old issue <https://github.com/pydata/pandas/issues/2388>`__ for a more
+See the `this old issue <https://github.com/pandas-dev/pandas/issues/2388>`__ for a more
detailed discussion.
Parsing Dates from Text Files
diff --git a/doc/source/install.rst b/doc/source/install.rst
index 6295e6f6cbb68..f1b05d3579e5c 100644
--- a/doc/source/install.rst
+++ b/doc/source/install.rst
@@ -13,7 +13,7 @@ This is the recommended installation method for most users.
Instructions for installing from source,
`PyPI <http://pypi.python.org/pypi/pandas>`__, various Linux distributions, or a
-`development version <http://github.com/pydata/pandas>`__ are also provided.
+`development version <http://github.com/pandas-dev/pandas>`__ are also provided.
Python version support
----------------------
diff --git a/doc/source/io.rst b/doc/source/io.rst
index c07cfe4cd5574..1a8ccdf7b2d86 100644
--- a/doc/source/io.rst
+++ b/doc/source/io.rst
@@ -2035,7 +2035,7 @@ You can even pass in an instance of ``StringIO`` if you so desire
that having so many network-accessing functions slows down the documentation
build. If you spot an error or an example that doesn't run, please do not
hesitate to report it over on `pandas GitHub issues page
- <http://www.github.com/pydata/pandas/issues>`__.
+ <http://www.github.com/pandas-dev/pandas/issues>`__.
Read a URL and match a table that contains specific text
diff --git a/doc/source/overview.rst b/doc/source/overview.rst
index b1addddc2121d..92caeec319169 100644
--- a/doc/source/overview.rst
+++ b/doc/source/overview.rst
@@ -81,7 +81,7 @@ Getting Support
---------------
The first stop for pandas issues and ideas is the `Github Issue Tracker
-<https://github.com/pydata/pandas/issues>`__. If you have a general question,
+<https://github.com/pandas-dev/pandas/issues>`__. If you have a general question,
pandas community experts can answer through `Stack Overflow
<http://stackoverflow.com/questions/tagged/pandas>`__.
@@ -103,7 +103,7 @@ training, and consulting for pandas.
pandas is only made possible by a group of people around the world like you
who have contributed new code, bug reports, fixes, comments and ideas. A
-complete list can be found `on Github <http://www.github.com/pydata/pandas/contributors>`__.
+complete list can be found `on Github <http://www.github.com/pandas-dev/pandas/contributors>`__.
Development Team
----------------
diff --git a/doc/source/r_interface.rst b/doc/source/r_interface.rst
index f3df1ebdf25cb..b487fbc883c72 100644
--- a/doc/source/r_interface.rst
+++ b/doc/source/r_interface.rst
@@ -71,7 +71,7 @@ The ``convert_to_r_matrix`` function can be replaced by the normal
Not all conversion functions in rpy2 are working exactly the same as the
current methods in pandas. If you experience problems or limitations in
comparison to the ones in pandas, please report this at the
- `issue tracker <https://github.com/pydata/pandas/issues>`_.
+ `issue tracker <https://github.com/pandas-dev/pandas/issues>`_.
See also the documentation of the `rpy2 <http://rpy2.bitbucket.org/>`__ project.
diff --git a/doc/source/release.rst b/doc/source/release.rst
index 7e987fcff31b3..d210065f04459 100644
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -20,7 +20,7 @@ Release Notes
*************
This is the list of changes to pandas between each release. For full details,
-see the commit logs at http://github.com/pydata/pandas
+see the commit logs at http://github.com/pandas-dev/pandas
**What is it**
@@ -33,7 +33,7 @@ analysis / manipulation tool available in any language.
**Where to get it**
-* Source code: http://github.com/pydata/pandas
+* Source code: http://github.com/pandas-dev/pandas
* Binary installers on PyPI: http://pypi.python.org/pypi/pandas
* Documentation: http://pandas.pydata.org
diff --git a/doc/source/remote_data.rst b/doc/source/remote_data.rst
index 019aa82fed1aa..e2c713ac8519a 100644
--- a/doc/source/remote_data.rst
+++ b/doc/source/remote_data.rst
@@ -13,7 +13,7 @@ DataReader
The sub-package ``pandas.io.data`` is removed in favor of a separately
installable `pandas-datareader package
-<https://github.com/pydata/pandas-datareader>`_. This will allow the data
+<https://github.com/pandas-dev/pandas-datareader>`_. This will allow the data
modules to be independently updated to your pandas installation. The API for
``pandas-datareader v0.1.1`` is the same as in ``pandas v0.16.1``.
(:issue:`8961`)
diff --git a/doc/source/visualization.rst b/doc/source/visualization.rst
index 6e05c3ff0457a..e3b186abe53fc 100644
--- a/doc/source/visualization.rst
+++ b/doc/source/visualization.rst
@@ -892,7 +892,7 @@ for Fourier series. By coloring these curves differently for each class
it is possible to visualize data clustering. Curves belonging to samples
of the same class will usually be closer together and form larger structures.
-**Note**: The "Iris" dataset is available `here <https://raw.github.com/pydata/pandas/master/pandas/tests/data/iris.csv>`__.
+**Note**: The "Iris" dataset is available `here <https://raw.github.com/pandas-dev/pandas/master/pandas/tests/data/iris.csv>`__.
.. ipython:: python
@@ -1044,7 +1044,7 @@ forces acting on our sample are at an equilibrium) is where a dot representing
our sample will be drawn. Depending on which class that sample belongs it will
be colored differently.
-**Note**: The "Iris" dataset is available `here <https://raw.github.com/pydata/pandas/master/pandas/tests/data/iris.csv>`__.
+**Note**: The "Iris" dataset is available `here <https://raw.github.com/pandas-dev/pandas/master/pandas/tests/data/iris.csv>`__.
.. ipython:: python
diff --git a/pandas/compat/__init__.py b/pandas/compat/__init__.py
index 990018f2f7f3b..1b8930dcae0f1 100644
--- a/pandas/compat/__init__.py
+++ b/pandas/compat/__init__.py
@@ -392,7 +392,7 @@ def __reduce__(self): # optional, for pickle support
return type(self), args, None, None, list(self.items())
-# https://github.com/pydata/pandas/pull/9123
+# https://github.com/pandas-dev/pandas/pull/9123
def is_platform_little_endian():
""" am I little endian """
return sys.byteorder == 'little'
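The compat helper in the hunk above reduces to a one-line check on `sys.byteorder`. A minimal standalone sketch of the same idea (the function name mirrors the pandas helper, but this version is illustrative, not the library code):

```python
import sys

def is_platform_little_endian():
    """Return True when the interpreter runs on a little-endian platform."""
    return sys.byteorder == 'little'

# sys.byteorder is always one of the two canonical values.
endianness = sys.byteorder
```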
diff --git a/pandas/computation/tests/test_eval.py b/pandas/computation/tests/test_eval.py
index 72fbc3906cafb..f480eae2dd04d 100644
--- a/pandas/computation/tests/test_eval.py
+++ b/pandas/computation/tests/test_eval.py
@@ -1693,11 +1693,11 @@ def test_result_types(self):
self.check_result_type(np.float64, np.float64)
def test_result_types2(self):
- # xref https://github.com/pydata/pandas/issues/12293
+ # xref https://github.com/pandas-dev/pandas/issues/12293
raise nose.SkipTest("unreliable tests on complex128")
# Did not test complex64 because DataFrame is converting it to
- # complex128. Due to https://github.com/pydata/pandas/issues/10952
+ # complex128. Due to https://github.com/pandas-dev/pandas/issues/10952
self.check_result_type(np.complex128, np.complex128)
def test_undefined_func(self):
diff --git a/pandas/core/categorical.py b/pandas/core/categorical.py
index db48f2a46eaf3..9efaff6060909 100644
--- a/pandas/core/categorical.py
+++ b/pandas/core/categorical.py
@@ -1681,7 +1681,7 @@ def __setitem__(self, key, value):
else:
# There is a bug in numpy, which does not accept a Series as a
# indexer
- # https://github.com/pydata/pandas/issues/6168
+ # https://github.com/pandas-dev/pandas/issues/6168
# https://github.com/numpy/numpy/issues/4240 -> fixed in numpy 1.9
# FIXME: remove when numpy 1.9 is the lowest numpy version pandas
# accepts...
@@ -1690,7 +1690,7 @@ def __setitem__(self, key, value):
lindexer = self.categories.get_indexer(rvalue)
# FIXME: the following can be removed after GH7820 is fixed:
- # https://github.com/pydata/pandas/issues/7820
+ # https://github.com/pandas-dev/pandas/issues/7820
# float categories do currently return -1 for np.nan, even if np.nan is
# included in the index -> "repair" this here
if isnull(rvalue).any() and isnull(self.categories).any():
diff --git a/pandas/io/data.py b/pandas/io/data.py
index e76790a6ab98b..09c7aef0cde1a 100644
--- a/pandas/io/data.py
+++ b/pandas/io/data.py
@@ -1,6 +1,6 @@
raise ImportError(
"The pandas.io.data module is moved to a separate package "
"(pandas-datareader). After installing the pandas-datareader package "
- "(https://github.com/pydata/pandas-datareader), you can change "
+ "(https://github.com/pandas-dev/pandas-datareader), you can change "
"the import ``from pandas.io import data, wb`` to "
"``from pandas_datareader import data, wb``.")
diff --git a/pandas/io/gbq.py b/pandas/io/gbq.py
index d6f8660f20ef6..8038cc500f6cd 100644
--- a/pandas/io/gbq.py
+++ b/pandas/io/gbq.py
@@ -236,7 +236,7 @@ def get_user_account_credentials(self):
return credentials
def get_service_account_credentials(self):
- # Bug fix for https://github.com/pydata/pandas/issues/12572
+ # Bug fix for https://github.com/pandas-dev/pandas/issues/12572
# We need to know that a supported version of oauth2client is installed
# Test that either of the following is installed:
# - SignedJwtAssertionCredentials from oauth2client.client
diff --git a/pandas/io/tests/json/test_pandas.py b/pandas/io/tests/json/test_pandas.py
index 47bdd25572fc7..ffac5d5f4746e 100644
--- a/pandas/io/tests/json/test_pandas.py
+++ b/pandas/io/tests/json/test_pandas.py
@@ -767,7 +767,7 @@ def test_round_trip_exception_(self):
@network
def test_url(self):
- url = 'https://api.github.com/repos/pydata/pandas/issues?per_page=5'
+ url = 'https://api.github.com/repos/pandas-dev/pandas/issues?per_page=5' # noqa
result = read_json(url, convert_dates=True)
for c in ['created_at', 'closed_at', 'updated_at']:
self.assertEqual(result[c].dtype, 'datetime64[ns]')
diff --git a/pandas/io/tests/parser/common.py b/pandas/io/tests/parser/common.py
index 0b59b695e1dca..0219e16391be8 100644
--- a/pandas/io/tests/parser/common.py
+++ b/pandas/io/tests/parser/common.py
@@ -629,7 +629,7 @@ def test_read_csv_parse_simple_list(self):
@tm.network
def test_url(self):
# HTTP(S)
- url = ('https://raw.github.com/pydata/pandas/master/'
+ url = ('https://raw.github.com/pandas-dev/pandas/master/'
'pandas/io/tests/parser/data/salary.table.csv')
url_table = self.read_table(url)
dirpath = tm.get_data_path()
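The `test_url` test above exercises the parser against a raw GitHub URL. The same parsing path can be exercised offline by handing `read_csv` a file-like object; a sketch with made-up data standing in for `salary.table.csv`, not the pandas test itself:

```python
import io
import pandas as pd

# An in-memory buffer stands in for the CSV fetched over HTTP.
buf = io.StringIO("name,salary\nalice,100\nbob,200\n")
df = pd.read_csv(buf)

# Two rows, two columns, parsed the same way a URL response would be.
```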
diff --git a/pandas/io/tests/parser/test_network.py b/pandas/io/tests/parser/test_network.py
index 8b8a6de36fc03..7e2f039853e2f 100644
--- a/pandas/io/tests/parser/test_network.py
+++ b/pandas/io/tests/parser/test_network.py
@@ -23,7 +23,7 @@ def setUp(self):
@tm.network
def test_url_gz(self):
- url = ('https://raw.github.com/pydata/pandas/'
+ url = ('https://raw.github.com/pandas-dev/pandas/'
'master/pandas/io/tests/parser/data/salary.table.gz')
url_table = read_table(url, compression="gzip", engine="python")
tm.assert_frame_equal(url_table, self.local_table)
diff --git a/pandas/io/tests/test_excel.py b/pandas/io/tests/test_excel.py
index d163b05aa01d4..998e71076b7c0 100644
--- a/pandas/io/tests/test_excel.py
+++ b/pandas/io/tests/test_excel.py
@@ -543,7 +543,7 @@ def test_read_xlrd_Book(self):
@tm.network
def test_read_from_http_url(self):
- url = ('https://raw.github.com/pydata/pandas/master/'
+ url = ('https://raw.github.com/pandas-dev/pandas/master/'
'pandas/io/tests/data/test1' + self.ext)
url_table = read_excel(url)
local_table = self.get_exceldf('test1')
diff --git a/pandas/io/tests/test_gbq.py b/pandas/io/tests/test_gbq.py
index 0ea4b5204e150..cca1580b84195 100644
--- a/pandas/io/tests/test_gbq.py
+++ b/pandas/io/tests/test_gbq.py
@@ -150,7 +150,7 @@ def _test_imports():
raise ImportError(
"pandas requires httplib2 for Google BigQuery support")
- # Bug fix for https://github.com/pydata/pandas/issues/12572
+ # Bug fix for https://github.com/pandas-dev/pandas/issues/12572
# We need to know that a supported version of oauth2client is installed
# Test that either of the following is installed:
# - SignedJwtAssertionCredentials from oauth2client.client
@@ -651,7 +651,7 @@ def test_download_dataset_larger_than_200k_rows(self):
self.assertEqual(len(df.drop_duplicates()), test_size)
def test_zero_rows(self):
- # Bug fix for https://github.com/pydata/pandas/issues/10273
+ # Bug fix for https://github.com/pandas-dev/pandas/issues/10273
df = gbq.read_gbq("SELECT title, id "
"FROM [publicdata:samples.wikipedia] "
"WHERE timestamp=-9999999",
diff --git a/pandas/io/tests/test_packers.py b/pandas/io/tests/test_packers.py
index cf61ad9a35935..91042775ba19d 100644
--- a/pandas/io/tests/test_packers.py
+++ b/pandas/io/tests/test_packers.py
@@ -544,7 +544,7 @@ def test_sparse_frame(self):
class TestCompression(TestPackers):
- """See https://github.com/pydata/pandas/pull/9783
+ """See https://github.com/pandas-dev/pandas/pull/9783
"""
def setUp(self):
diff --git a/pandas/io/tests/test_sql.py b/pandas/io/tests/test_sql.py
index 198a4017b5af7..af8989baabbc0 100644
--- a/pandas/io/tests/test_sql.py
+++ b/pandas/io/tests/test_sql.py
@@ -1610,7 +1610,7 @@ def test_double_precision(self):
def test_connectable_issue_example(self):
# This tests the example raised in issue
- # https://github.com/pydata/pandas/issues/10104
+ # https://github.com/pandas-dev/pandas/issues/10104
def foo(connection):
query = 'SELECT test_foo_data FROM test_foo_data'
diff --git a/pandas/io/wb.py b/pandas/io/wb.py
index 5dc4d9ce1adc4..2183290c7e074 100644
--- a/pandas/io/wb.py
+++ b/pandas/io/wb.py
@@ -1,6 +1,6 @@
raise ImportError(
"The pandas.io.wb module is moved to a separate package "
"(pandas-datareader). After installing the pandas-datareader package "
- "(https://github.com/pydata/pandas-datareader), you can change "
+ "(https://github.com/pandas-dev/pandas-datareader), you can change "
"the import ``from pandas.io import data, wb`` to "
"``from pandas_datareader import data, wb``.")
diff --git a/pandas/tests/formats/test_style.py b/pandas/tests/formats/test_style.py
index 3083750e582fc..2fec04b9c1aa3 100644
--- a/pandas/tests/formats/test_style.py
+++ b/pandas/tests/formats/test_style.py
@@ -144,7 +144,7 @@ def test_set_properties_subset(self):
self.assertEqual(result, expected)
def test_empty_index_name_doesnt_display(self):
- # https://github.com/pydata/pandas/pull/12090#issuecomment-180695902
+ # https://github.com/pandas-dev/pandas/pull/12090#issuecomment-180695902
df = pd.DataFrame({'A': [1, 2], 'B': [3, 4], 'C': [5, 6]})
result = df.style._translate()
@@ -175,7 +175,7 @@ def test_empty_index_name_doesnt_display(self):
self.assertEqual(result['head'], expected)
def test_index_name(self):
- # https://github.com/pydata/pandas/issues/11655
+ # https://github.com/pandas-dev/pandas/issues/11655
df = pd.DataFrame({'A': [1, 2], 'B': [3, 4], 'C': [5, 6]})
result = df.set_index('A').style._translate()
@@ -195,7 +195,7 @@ def test_index_name(self):
self.assertEqual(result['head'], expected)
def test_multiindex_name(self):
- # https://github.com/pydata/pandas/issues/11655
+ # https://github.com/pandas-dev/pandas/issues/11655
df = pd.DataFrame({'A': [1, 2], 'B': [3, 4], 'C': [5, 6]})
result = df.set_index(['A', 'B']).style._translate()
@@ -217,7 +217,7 @@ def test_multiindex_name(self):
self.assertEqual(result['head'], expected)
def test_numeric_columns(self):
- # https://github.com/pydata/pandas/issues/12125
+ # https://github.com/pandas-dev/pandas/issues/12125
# smoke test for _translate
df = pd.DataFrame({0: [1, 2, 3]})
df.style._translate()
diff --git a/pandas/tests/indexing/test_categorical.py b/pandas/tests/indexing/test_categorical.py
index 2cb62a60f885b..9ef2802cb950f 100644
--- a/pandas/tests/indexing/test_categorical.py
+++ b/pandas/tests/indexing/test_categorical.py
@@ -392,7 +392,7 @@ def test_boolean_selection(self):
def test_indexing_with_category(self):
- # https://github.com/pydata/pandas/issues/12564
+ # https://github.com/pandas-dev/pandas/issues/12564
# consistent result if comparing as Dataframe
cat = DataFrame({'A': ['foo', 'bar', 'baz']})
diff --git a/pandas/tests/plotting/test_boxplot_method.py b/pandas/tests/plotting/test_boxplot_method.py
index 333792c5ffdb2..0916693ade2ce 100644
--- a/pandas/tests/plotting/test_boxplot_method.py
+++ b/pandas/tests/plotting/test_boxplot_method.py
@@ -100,7 +100,7 @@ def test_boxplot_return_type_none(self):
@slow
def test_boxplot_return_type_legacy(self):
- # API change in https://github.com/pydata/pandas/pull/7096
+ # API change in https://github.com/pandas-dev/pandas/pull/7096
import matplotlib as mpl # noqa
df = DataFrame(randn(6, 4),
diff --git a/pandas/tests/plotting/test_frame.py b/pandas/tests/plotting/test_frame.py
index 4d0c1e9213b17..87cf89ebf0a9d 100644
--- a/pandas/tests/plotting/test_frame.py
+++ b/pandas/tests/plotting/test_frame.py
@@ -84,7 +84,7 @@ def test_plot(self):
# We have to redo it here because _check_plot_works does two plots,
# once without an ax kwarg and once with an ax kwarg and the new sharex
# behaviour does not remove the visibility of the latter axis (as ax is
- # present). see: https://github.com/pydata/pandas/issues/9737
+ # present). see: https://github.com/pandas-dev/pandas/issues/9737
axes = df.plot(subplots=True, title='blah')
self._check_axes_shape(axes, axes_num=3, layout=(3, 1))
@@ -927,7 +927,7 @@ def test_plot_scatter_with_c(self):
# Ensure that we can pass an np.array straight through to matplotlib,
# this functionality was accidentally removed previously.
- # See https://github.com/pydata/pandas/issues/8852 for bug report
+ # See https://github.com/pandas-dev/pandas/issues/8852 for bug report
#
# Exercise colormap path and non-colormap path as they are independent
#
@@ -2115,7 +2115,7 @@ def test_pie_df_nan(self):
self.assertEqual(result, expected)
# legend labels
# NaN's not included in legend with subplots
- # see https://github.com/pydata/pandas/issues/8390
+ # see https://github.com/pandas-dev/pandas/issues/8390
self.assertEqual([x.get_text() for x in
ax.get_legend().get_texts()],
base_expected[:i] + base_expected[i + 1:])
@@ -2336,9 +2336,9 @@ def _check_errorbar_color(containers, expected, has_err='has_xerr'):
@slow
def test_sharex_and_ax(self):
- # https://github.com/pydata/pandas/issues/9737 using gridspec, the axis
- # in fig.get_axis() are sorted differently than pandas expected them,
- # so make sure that only the right ones are removed
+ # https://github.com/pandas-dev/pandas/issues/9737 using gridspec,
+ # the axes in fig.get_axis() are sorted differently than pandas
+ # expected them, so make sure that only the right ones are removed
import matplotlib.pyplot as plt
plt.close('all')
gs, axes = _generate_4_axes_via_gridspec()
@@ -2388,9 +2388,9 @@ def _check(axes):
@slow
def test_sharey_and_ax(self):
- # https://github.com/pydata/pandas/issues/9737 using gridspec, the axis
- # in fig.get_axis() are sorted differently than pandas expected them,
- # so make sure that only the right ones are removed
+ # https://github.com/pandas-dev/pandas/issues/9737 using gridspec,
+ # the axes in fig.get_axis() are sorted differently than pandas
+ # expected them, so make sure that only the right ones are removed
import matplotlib.pyplot as plt
gs, axes = _generate_4_axes_via_gridspec()
diff --git a/pandas/tests/series/test_datetime_values.py b/pandas/tests/series/test_datetime_values.py
index 8f2ab0ed28839..ed441f2f85572 100644
--- a/pandas/tests/series/test_datetime_values.py
+++ b/pandas/tests/series/test_datetime_values.py
@@ -273,7 +273,7 @@ def f():
self.assertRaises(com.SettingWithCopyError, f)
def test_dt_accessor_no_new_attributes(self):
- # https://github.com/pydata/pandas/issues/10673
+ # https://github.com/pandas-dev/pandas/issues/10673
s = Series(date_range('20130101', periods=5, freq='D'))
with tm.assertRaisesRegexp(AttributeError,
"You cannot add any new attribute"):
diff --git a/pandas/tests/series/test_operators.py b/pandas/tests/series/test_operators.py
index f688ec2d43789..086946d05d7a6 100644
--- a/pandas/tests/series/test_operators.py
+++ b/pandas/tests/series/test_operators.py
@@ -1412,7 +1412,7 @@ def tester(a, b):
# NotImplemented
# this is an alignment issue; these are equivalent
- # https://github.com/pydata/pandas/issues/5284
+ # https://github.com/pandas-dev/pandas/issues/5284
self.assertRaises(ValueError, lambda: d.__and__(s, axis='columns'))
self.assertRaises(ValueError, tester, s, d)
diff --git a/pandas/tests/test_algos.py b/pandas/tests/test_algos.py
index 092e02ee261a0..f89f41abd0d35 100644
--- a/pandas/tests/test_algos.py
+++ b/pandas/tests/test_algos.py
@@ -595,7 +595,7 @@ def test_categorical_zeroes(self):
tm.assert_series_equal(result, expected, check_index_type=True)
def test_dropna(self):
- # https://github.com/pydata/pandas/issues/9443#issuecomment-73719328
+ # https://github.com/pandas-dev/pandas/issues/9443#issuecomment-73719328
tm.assert_series_equal(
pd.Series([True, True, False]).value_counts(dropna=True),
diff --git a/pandas/tests/test_categorical.py b/pandas/tests/test_categorical.py
index a494a0d53b123..f01fff035a3c5 100644
--- a/pandas/tests/test_categorical.py
+++ b/pandas/tests/test_categorical.py
@@ -191,7 +191,7 @@ def f():
cat = pd.Categorical([1, 2, 3, np.nan], categories=[1, 2, 3])
self.assertTrue(is_integer_dtype(cat.categories))
- # https://github.com/pydata/pandas/issues/3678
+ # https://github.com/pandas-dev/pandas/issues/3678
cat = pd.Categorical([np.nan, 1, 2, 3])
self.assertTrue(is_integer_dtype(cat.categories))
@@ -618,7 +618,7 @@ def test_describe(self):
index=exp_index)
tm.assert_frame_equal(desc, expected)
- # https://github.com/pydata/pandas/issues/3678
+ # https://github.com/pandas-dev/pandas/issues/3678
# describe should work with NaN
cat = pd.Categorical([np.nan, 1, 2, 2])
desc = cat.describe()
@@ -1547,7 +1547,7 @@ def test_memory_usage(self):
self.assertTrue(abs(diff) < 100)
def test_searchsorted(self):
- # https://github.com/pydata/pandas/issues/8420
+ # https://github.com/pandas-dev/pandas/issues/8420
s1 = pd.Series(['apple', 'bread', 'bread', 'cheese', 'milk'])
s2 = pd.Series(['apple', 'bread', 'bread', 'cheese', 'milk', 'donuts'])
c1 = pd.Categorical(s1, ordered=True)
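`test_searchsorted` (GH 8420, linked above) covers `searchsorted` on ordered Categoricals; the underlying contract is the NumPy-style one — return the insertion index that keeps the values sorted. A minimal illustration on a plain, already-sorted Series (the data mirrors the test's fixtures but the snippet is only a sketch):

```python
import pandas as pd

s = pd.Series(['apple', 'bread', 'cheese', 'milk'])

# Index at which 'donuts' would be inserted to keep the values sorted:
# after 'cheese', before 'milk'.
pos = s.searchsorted('donuts')
```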
@@ -1633,7 +1633,7 @@ def test_reflected_comparison_with_scalars(self):
np.array([False, True, True]))
def test_comparison_with_unknown_scalars(self):
- # https://github.com/pydata/pandas/issues/9836#issuecomment-92123057
+ # https://github.com/pandas-dev/pandas/issues/9836#issuecomment-92123057
# and following comparisons with scalars not in categories should raise
# for unequal comps, but not for equal/not equal
cat = pd.Categorical([1, 2, 3], ordered=True)
@@ -3829,7 +3829,7 @@ def f():
self.assertRaises(TypeError, f)
- # https://github.com/pydata/pandas/issues/9836#issuecomment-92123057
+ # https://github.com/pandas-dev/pandas/issues/9836#issuecomment-92123057
# and following comparisons with scalars not in categories should raise
# for unequal comps, but not for equal/not equal
cat = Series(Categorical(list("abc"), ordered=True))
@@ -4303,14 +4303,14 @@ def test_cat_accessor_api(self):
self.assertFalse(hasattr(invalid, 'cat'))
def test_cat_accessor_no_new_attributes(self):
- # https://github.com/pydata/pandas/issues/10673
+ # https://github.com/pandas-dev/pandas/issues/10673
c = Series(list('aabbcde')).astype('category')
with tm.assertRaisesRegexp(AttributeError,
"You cannot add any new attribute"):
c.cat.xlabel = "a"
def test_str_accessor_api_for_categorical(self):
- # https://github.com/pydata/pandas/issues/10661
+ # https://github.com/pandas-dev/pandas/issues/10661
from pandas.core.strings import StringMethods
s = Series(list('aabb'))
s = s + " " + s
@@ -4385,7 +4385,7 @@ def test_str_accessor_api_for_categorical(self):
self.assertFalse(hasattr(invalid, 'str'))
def test_dt_accessor_api_for_categorical(self):
- # https://github.com/pydata/pandas/issues/10661
+ # https://github.com/pandas-dev/pandas/issues/10661
from pandas.tseries.common import Properties
from pandas.tseries.index import date_range, DatetimeIndex
from pandas.tseries.period import period_range, PeriodIndex
diff --git a/pandas/tests/test_config.py b/pandas/tests/test_config.py
index 62ad4c5aa4338..ea226851c9101 100644
--- a/pandas/tests/test_config.py
+++ b/pandas/tests/test_config.py
@@ -427,7 +427,7 @@ def f3(key):
def test_option_context_scope(self):
# Ensure that creating a context does not affect the existing
# environment as it is supposed to be used with the `with` statement.
- # See https://github.com/pydata/pandas/issues/8514
+ # See https://github.com/pandas-dev/pandas/issues/8514
original_value = 60
context_value = 10
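`test_option_context_scope` (GH 8514, referenced above) asserts that an `option_context` only affects options while the `with` block is active. The intended usage pattern looks like this (`display.max_rows` is just an example option):

```python
import pandas as pd

original = pd.get_option('display.max_rows')

# The option changes only inside the `with` block...
with pd.option_context('display.max_rows', 10):
    inside = pd.get_option('display.max_rows')

# ...and is restored on exit.
restored = pd.get_option('display.max_rows')
```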
diff --git a/pandas/tests/test_groupby.py b/pandas/tests/test_groupby.py
index 01c1d48c6d5c0..02917ab18c29f 100644
--- a/pandas/tests/test_groupby.py
+++ b/pandas/tests/test_groupby.py
@@ -6443,7 +6443,7 @@ def test_transform_doesnt_clobber_ints(self):
def test_groupby_categorical_two_columns(self):
- # https://github.com/pydata/pandas/issues/8138
+ # https://github.com/pandas-dev/pandas/issues/8138
d = {'cat':
pd.Categorical(["a", "b", "a", "b"], categories=["a", "b", "c"],
ordered=True),
diff --git a/pandas/tests/test_strings.py b/pandas/tests/test_strings.py
index 4019bbe20ea1a..9a3505c3421e0 100644
--- a/pandas/tests/test_strings.py
+++ b/pandas/tests/test_strings.py
@@ -2604,7 +2604,7 @@ def test_cat_on_filtered_index(self):
self.assertEqual(str_multiple.loc[1], '2011 2 2')
def test_str_cat_raises_intuitive_error(self):
- # https://github.com/pydata/pandas/issues/11334
+ # https://github.com/pandas-dev/pandas/issues/11334
s = Series(['a', 'b', 'c', 'd'])
message = "Did you mean to supply a `sep` keyword?"
with tm.assertRaisesRegexp(ValueError, message):
@@ -2661,7 +2661,7 @@ def test_index_str_accessor_visibility(self):
idx.str
def test_str_accessor_no_new_attributes(self):
- # https://github.com/pydata/pandas/issues/10673
+ # https://github.com/pandas-dev/pandas/issues/10673
s = Series(list('aabbcde'))
with tm.assertRaisesRegexp(AttributeError,
"You cannot add any new attribute"):
diff --git a/pandas/tools/plotting.py b/pandas/tools/plotting.py
index 7fd0b1044f9d7..d46dc4d355b4c 100644
--- a/pandas/tools/plotting.py
+++ b/pandas/tools/plotting.py
@@ -722,7 +722,7 @@ def parallel_coordinates(frame, class_column, cols=None, ax=None, color=None,
>>> from pandas import read_csv
>>> from pandas.tools.plotting import parallel_coordinates
>>> from matplotlib import pyplot as plt
- >>> df = read_csv('https://raw.github.com/pydata/pandas/master'
+ >>> df = read_csv('https://raw.github.com/pandas-dev/pandas/master'
'/pandas/tests/data/iris.csv')
>>> parallel_coordinates(df, 'Name', color=('#556270',
'#4ECDC4', '#C7F464'))
@@ -2773,7 +2773,7 @@ def plot_group(keys, values, ax):
if by is not None:
# Prefer array return type for 2-D plots to match the subplot layout
- # https://github.com/pydata/pandas/pull/12216#issuecomment-241175580
+ # https://github.com/pandas-dev/pandas/pull/12216#issuecomment-241175580
result = _grouped_plot_by_column(plot_group, data, columns=columns,
by=by, grid=grid, figsize=figsize,
ax=ax, layout=layout,
diff --git a/pandas/tseries/resample.py b/pandas/tseries/resample.py
index f1a209053445a..d02c403cb3c66 100644
--- a/pandas/tseries/resample.py
+++ b/pandas/tseries/resample.py
@@ -1281,7 +1281,7 @@ def _adjust_dates_anchored(first, last, offset, closed='right', base=0):
# error caused by resampling across multiple days when a one day period is
# not a multiple of the frequency.
#
- # See https://github.com/pydata/pandas/issues/8683
+ # See https://github.com/pandas-dev/pandas/issues/8683
first_tzinfo = first.tzinfo
first = first.tz_localize(None)
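The `_adjust_dates_anchored` fix referenced above (GH 8683) operates on tz-naive timestamps: it strips the timezone with `tz_localize(None)`, adjusts the endpoints, and re-localizes afterwards. The stripping step itself behaves like this (a sketch of the Timestamp API, not the resample internals):

```python
import pandas as pd

ts = pd.Timestamp('2014-10-14 23:06:23', tz='US/Eastern')

# tz_localize(None) drops the timezone but keeps the local wall-clock time.
naive = ts.tz_localize(None)
```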
diff --git a/pandas/tseries/tests/test_offsets.py b/pandas/tseries/tests/test_offsets.py
index b3da62c8d2db5..1735ac4e2efa5 100644
--- a/pandas/tseries/tests/test_offsets.py
+++ b/pandas/tseries/tests/test_offsets.py
@@ -4606,7 +4606,7 @@ def test_parse_time_string(self):
self.assertEqual(reso, reso_lower)
def test_parse_time_quarter_w_dash(self):
- # https://github.com/pydata/pandas/issue/9688
+ # https://github.com/pandas-dev/pandas/issues/9688
pairs = [('1988-Q2', '1988Q2'), ('2Q-1988', '2Q1988'), ]
for dashed, normal in pairs:
diff --git a/pandas/tseries/tests/test_resample.py b/pandas/tseries/tests/test_resample.py
index 204808dd510a0..9d3d27f3224b4 100644
--- a/pandas/tseries/tests/test_resample.py
+++ b/pandas/tseries/tests/test_resample.py
@@ -1678,7 +1678,7 @@ def test_resample_anchored_multiday(self):
# start date gets used to determine the offset. Fixes issue where
# a one day period is not a multiple of the frequency.
#
- # See: https://github.com/pydata/pandas/issues/8683
+ # See: https://github.com/pandas-dev/pandas/issues/8683
index = pd.date_range(
'2014-10-14 23:06:23.206', periods=3, freq='400L'
diff --git a/pandas/tseries/tests/test_timezones.py b/pandas/tseries/tests/test_timezones.py
index a85a606075911..714a596406c03 100644
--- a/pandas/tseries/tests/test_timezones.py
+++ b/pandas/tseries/tests/test_timezones.py
@@ -903,7 +903,7 @@ def test_utc_with_system_utc(self):
def test_tz_convert_hour_overflow_dst(self):
# Regression test for:
- # https://github.com/pydata/pandas/issues/13306
+ # https://github.com/pandas-dev/pandas/issues/13306
# sorted case US/Eastern -> UTC
ts = ['2008-05-12 09:50:00',
@@ -943,7 +943,7 @@ def test_tz_convert_hour_overflow_dst(self):
def test_tz_convert_hour_overflow_dst_timestamps(self):
# Regression test for:
- # https://github.com/pydata/pandas/issues/13306
+ # https://github.com/pandas-dev/pandas/issues/13306
tz = self.tzstr('US/Eastern')
@@ -985,7 +985,7 @@ def test_tz_convert_hour_overflow_dst_timestamps(self):
def test_tslib_tz_convert_trans_pos_plus_1__bug(self):
# Regression test for tslib.tz_convert(vals, tz1, tz2).
- # See https://github.com/pydata/pandas/issues/4496 for details.
+ # See https://github.com/pandas-dev/pandas/issues/4496 for details.
for freq, n in [('H', 1), ('T', 60), ('S', 3600)]:
idx = date_range(datetime(2011, 3, 26, 23),
datetime(2011, 3, 27, 1), freq=freq)
diff --git a/scripts/find_undoc_args.py b/scripts/find_undoc_args.py
index f00273bc75199..49273bacccf98 100755
--- a/scripts/find_undoc_args.py
+++ b/scripts/find_undoc_args.py
@@ -19,7 +19,7 @@
parser.add_argument('-m', '--module', metavar='MODULE', type=str,required=True,
help='name of package to import and examine',action='store')
parser.add_argument('-G', '--github_repo', metavar='REPO', type=str,required=False,
- help='github project where the the code lives, e.g. "pydata/pandas"',
+ help='github project where the the code lives, e.g. "pandas-dev/pandas"',
default=None,action='store')
args = parser.parse_args()
diff --git a/scripts/gen_release_notes.py b/scripts/gen_release_notes.py
index 02ba4f57c189d..7e4ffca59a0ab 100644
--- a/scripts/gen_release_notes.py
+++ b/scripts/gen_release_notes.py
@@ -46,7 +46,7 @@ def get_issues():
def _get_page(page_number):
- gh_url = ('https://api.github.com/repos/pydata/pandas/issues?'
+ gh_url = ('https://api.github.com/repos/pandas-dev/pandas/issues?'
'milestone=*&state=closed&assignee=*&page=%d') % page_number
with urlopen(gh_url) as resp:
rs = resp.readlines()[0]
diff --git a/scripts/touchup_gh_issues.py b/scripts/touchup_gh_issues.py
index 96ee220f55a02..8aa6d426156f0 100755
--- a/scripts/touchup_gh_issues.py
+++ b/scripts/touchup_gh_issues.py
@@ -14,7 +14,7 @@
pat = "((?:\s*GH\s*)?)#(\d{3,4})([^_]|$)?"
rep_pat = r"\1GH\2_\3"
-anchor_pat = ".. _GH{id}: https://github.com/pydata/pandas/issues/{id}"
+anchor_pat = ".. _GH{id}: https://github.com/pandas-dev/pandas/issues/{id}"
section_pat = "^pandas\s[\d\.]+\s*$"
diff --git a/vb_suite/perf_HEAD.py b/vb_suite/perf_HEAD.py
index c14a1795f01e0..143d943b9eadf 100755
--- a/vb_suite/perf_HEAD.py
+++ b/vb_suite/perf_HEAD.py
@@ -192,7 +192,7 @@ def get_build_results(build):
return convert_json_to_df(r_url)
-def get_all_results(repo_id=53976): # travis pydata/pandas id
+def get_all_results(repo_id=53976): # travis pandas-dev/pandas id
"""Fetches the VBENCH results for all travis builds, and returns a list of result df
unsuccesful individual vbenches are dropped.
diff --git a/vb_suite/suite.py b/vb_suite/suite.py
index 70a6278c0852d..45053b6610896 100644
--- a/vb_suite/suite.py
+++ b/vb_suite/suite.py
@@ -67,7 +67,7 @@
TMP_DIR = config.get('setup', 'tmp_dir')
except:
REPO_PATH = os.path.abspath(os.path.join(os.path.dirname(__file__), "../"))
- REPO_URL = 'git@github.com:pydata/pandas.git'
+ REPO_URL = 'git@github.com:pandas-dev/pandas.git'
DB_PATH = os.path.join(REPO_PATH, 'vb_suite/benchmarks.db')
TMP_DIR = os.path.join(HOME, 'tmp/vb_pandas')
@@ -138,7 +138,7 @@ def generate_rst_files(benchmarks):
The ``.pandas_vb_common`` setup script can be found here_
-.. _here: https://github.com/pydata/pandas/tree/master/vb_suite
+.. _here: https://github.com/pandas-dev/pandas/tree/master/vb_suite
Produced on a machine with
| as main repo was updated
| https://api.github.com/repos/pandas-dev/pandas/pulls/14409 | 2016-10-12T23:46:34Z | 2016-10-13T19:59:21Z | 2016-10-13T19:59:21Z | 2016-10-13T19:59:31Z |
Convert readthedocs links for their .org -> .io migration for hosted projects | diff --git a/.github/CONTRIBUTING.md b/.github/CONTRIBUTING.md
index 352acee23df2d..cf604822d6eea 100644
--- a/.github/CONTRIBUTING.md
+++ b/.github/CONTRIBUTING.md
@@ -278,7 +278,7 @@ Please try to maintain backward compatibility. *pandas* has lots of users with l
Adding tests is one of the most common requests after code is pushed to *pandas*. Therefore, it is worth getting in the habit of writing tests ahead of time so this is never an issue.
-Like many packages, *pandas* uses the [Nose testing system](http://nose.readthedocs.org/en/latest/index.html) and the convenient extensions in [numpy.testing](http://docs.scipy.org/doc/numpy/reference/routines.testing.html).
+Like many packages, *pandas* uses the [Nose testing system](https://nose.readthedocs.io/en/latest/index.html) and the convenient extensions in [numpy.testing](http://docs.scipy.org/doc/numpy/reference/routines.testing.html).
#### Writing tests
@@ -323,7 +323,7 @@ Performance matters and it is worth considering whether your code has introduced
>
> The asv benchmark suite was translated from the previous framework, vbench, so many stylistic issues are likely a result of automated transformation of the code.
-To use asv you will need either `conda` or `virtualenv`. For more details please check the [asv installation webpage](http://asv.readthedocs.org/en/latest/installing.html).
+To use asv you will need either `conda` or `virtualenv`. For more details please check the [asv installation webpage](https://asv.readthedocs.io/en/latest/installing.html).
To install asv:
@@ -360,7 +360,7 @@ This command is equivalent to:
This will launch every test only once, display stderr from the benchmarks, and use your local `python` that comes from your `$PATH`.
-Information on how to write a benchmark can be found in the [asv documentation](http://asv.readthedocs.org/en/latest/writing_benchmarks.html).
+Information on how to write a benchmark can be found in the [asv documentation](https://asv.readthedocs.io/en/latest/writing_benchmarks.html).
#### Running the vbench performance test suite (phasing out)
diff --git a/doc/source/conf.py b/doc/source/conf.py
index fd3a2493a53e8..4f916c6ba5290 100644
--- a/doc/source/conf.py
+++ b/doc/source/conf.py
@@ -295,7 +295,7 @@
'python': ('http://docs.python.org/3', None),
'numpy': ('http://docs.scipy.org/doc/numpy', None),
'scipy': ('http://docs.scipy.org/doc/scipy/reference', None),
- 'py': ('http://pylib.readthedocs.org/en/latest/', None)
+ 'py': ('https://pylib.readthedocs.io/en/latest/', None)
}
import glob
autosummary_generate = glob.glob("*.rst")
diff --git a/doc/source/contributing.rst b/doc/source/contributing.rst
index 7f336abcaa6d7..446a40a7ec4b4 100644
--- a/doc/source/contributing.rst
+++ b/doc/source/contributing.rst
@@ -360,7 +360,7 @@ follow the Numpy Docstring Standard (see above), but you don't need to install
this because a local copy of numpydoc is included in the *pandas* source
code.
`nbconvert <https://nbconvert.readthedocs.io/en/latest/>`_ and
-`nbformat <http://nbformat.readthedocs.io/en/latest/>`_ are required to build
+`nbformat <https://nbformat.readthedocs.io/en/latest/>`_ are required to build
the Jupyter notebooks included in the documentation.
If you have a conda environment named ``pandas_dev``, you can install the extra
@@ -490,7 +490,7 @@ Adding tests is one of the most common requests after code is pushed to *pandas*
it is worth getting in the habit of writing tests ahead of time so this is never an issue.
Like many packages, *pandas* uses the `Nose testing system
-<http://nose.readthedocs.org/en/latest/index.html>`_ and the convenient
+<https://nose.readthedocs.io/en/latest/index.html>`_ and the convenient
extensions in `numpy.testing
<http://docs.scipy.org/doc/numpy/reference/routines.testing.html>`_.
@@ -569,7 +569,7 @@ supports both python2 and python3.
To use all features of asv, you will need either ``conda`` or
``virtualenv``. For more details please check the `asv installation
-webpage <http://asv.readthedocs.org/en/latest/installing.html>`_.
+webpage <https://asv.readthedocs.io/en/latest/installing.html>`_.
To install asv::
@@ -624,7 +624,7 @@ This will display stderr from the benchmarks, and use your local
``python`` that comes from your ``$PATH``.
Information on how to write a benchmark and how to use asv can be found in the
-`asv documentation <http://asv.readthedocs.org/en/latest/writing_benchmarks.html>`_.
+`asv documentation <https://asv.readthedocs.io/en/latest/writing_benchmarks.html>`_.
.. _contributing.gbq_integration_tests:
diff --git a/doc/source/cookbook.rst b/doc/source/cookbook.rst
index 38a816060e1bc..27462a08b0011 100644
--- a/doc/source/cookbook.rst
+++ b/doc/source/cookbook.rst
@@ -877,7 +877,7 @@ The :ref:`Plotting <visualization>` docs.
<http://stackoverflow.com/questions/17891493/annotating-points-from-a-pandas-dataframe-in-matplotlib-plot>`__
`Generate Embedded plots in excel files using Pandas, Vincent and xlsxwriter
-<http://pandas-xlsxwriter-charts.readthedocs.org/en/latest/introduction.html>`__
+<https://pandas-xlsxwriter-charts.readthedocs.io/>`__
`Boxplot for each quartile of a stratifying variable
<http://stackoverflow.com/questions/23232989/boxplot-stratified-by-column-in-python-pandas>`__
diff --git a/doc/source/ecosystem.rst b/doc/source/ecosystem.rst
index 17ebd1f163f4f..087b265ee83f2 100644
--- a/doc/source/ecosystem.rst
+++ b/doc/source/ecosystem.rst
@@ -145,7 +145,7 @@ API
`pandas-datareader <https://github.com/pydata/pandas-datareader>`__
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-``pandas-datareader`` is a remote data access library for pandas. ``pandas.io`` from pandas < 0.17.0 is now refactored/split-off to and importable from ``pandas_datareader`` (PyPI:``pandas-datareader``). Many/most of the supported APIs have at least a documentation paragraph in the `pandas-datareader docs <https://pandas-datareader.readthedocs.org/en/latest/>`_:
+``pandas-datareader`` is a remote data access library for pandas. ``pandas.io`` from pandas < 0.17.0 is now refactored/split-off to and importable from ``pandas_datareader`` (PyPI:``pandas-datareader``). Many/most of the supported APIs have at least a documentation paragraph in the `pandas-datareader docs <https://pandas-datareader.readthedocs.io/en/latest/>`_:
The following data feeds are available:
@@ -170,7 +170,7 @@ PyDatastream is a Python interface to the
SOAP API to return indexed Pandas DataFrames or Panels with financial data.
This package requires valid credentials for this API (non free).
-`pandaSDMX <http://pandasdmx.readthedocs.org>`__
+`pandaSDMX <https://pandasdmx.readthedocs.io>`__
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
pandaSDMX is an extensible library to retrieve and acquire statistical data
and metadata disseminated in
@@ -215,7 +215,7 @@ dimensional arrays, rather than the tabular data for which pandas excels.
Out-of-core
-------------
-`Dask <https://dask.readthedocs.org/en/latest/>`__
+`Dask <https://dask.readthedocs.io/en/latest/>`__
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Dask is a flexible parallel computing library for analytics. Dask
diff --git a/doc/source/install.rst b/doc/source/install.rst
index 6295e6f6cbb68..73685e0be8e7e 100644
--- a/doc/source/install.rst
+++ b/doc/source/install.rst
@@ -189,7 +189,7 @@ pandas is equipped with an exhaustive set of unit tests covering about 97% of
the codebase as of this writing. To run it on your machine to verify that
everything is working (and you have all of the dependencies, soft and hard,
installed), make sure you have `nose
-<http://readthedocs.org/docs/nose/en/latest/>`__ and run:
+<https://nose.readthedocs.io/en/latest/>`__ and run:
::
diff --git a/doc/source/io.rst b/doc/source/io.rst
index c07cfe4cd5574..811fca4344121 100644
--- a/doc/source/io.rst
+++ b/doc/source/io.rst
@@ -2639,8 +2639,8 @@ config options <options>` ``io.excel.xlsx.writer`` and
``io.excel.xls.writer``. pandas will fall back on `openpyxl`_ for ``.xlsx``
files if `Xlsxwriter`_ is not available.
-.. _XlsxWriter: http://xlsxwriter.readthedocs.org
-.. _openpyxl: http://openpyxl.readthedocs.org/
+.. _XlsxWriter: https://xlsxwriter.readthedocs.io
+.. _openpyxl: https://openpyxl.readthedocs.io/
.. _xlwt: http://www.python-excel.org
To specify which writer you want to use, you can pass an engine keyword
diff --git a/doc/source/r_interface.rst b/doc/source/r_interface.rst
index f3df1ebdf25cb..bde97d88a0ee7 100644
--- a/doc/source/r_interface.rst
+++ b/doc/source/r_interface.rst
@@ -17,7 +17,7 @@ rpy2 / R interface
In v0.16.0, the ``pandas.rpy`` interface has been **deprecated and will be
removed in a future version**. Similar functionality can be accessed
- through the `rpy2 <http://rpy2.readthedocs.io/>`__ project.
+ through the `rpy2 <https://rpy2.readthedocs.io/>`__ project.
See the :ref:`updating <rpy.updating>` section for a guide to port your
code from the ``pandas.rpy`` to ``rpy2`` functions.
diff --git a/doc/source/tutorials.rst b/doc/source/tutorials.rst
index e92798ea17448..c25e734a046b2 100644
--- a/doc/source/tutorials.rst
+++ b/doc/source/tutorials.rst
@@ -138,7 +138,7 @@ Modern Pandas
Excel charts with pandas, vincent and xlsxwriter
------------------------------------------------
-- `Using Pandas and XlsxWriter to create Excel charts <http://pandas-xlsxwriter-charts.readthedocs.org/>`_
+- `Using Pandas and XlsxWriter to create Excel charts <https://pandas-xlsxwriter-charts.readthedocs.io/>`_
Various Tutorials
-----------------
diff --git a/doc/source/whatsnew/v0.14.0.txt b/doc/source/whatsnew/v0.14.0.txt
index a91e0ab9e4961..181cd401c85d6 100644
--- a/doc/source/whatsnew/v0.14.0.txt
+++ b/doc/source/whatsnew/v0.14.0.txt
@@ -401,7 +401,7 @@ through SQLAlchemy (:issue:`2717`, :issue:`4163`, :issue:`5950`, :issue:`6292`).
All databases supported by SQLAlchemy can be used, such
as PostgreSQL, MySQL, Oracle, Microsoft SQL server (see documentation of
SQLAlchemy on `included dialects
-<http://sqlalchemy.readthedocs.org/en/latest/dialects/index.html>`_).
+<https://sqlalchemy.readthedocs.io/en/latest/dialects/index.html>`_).
The functionality of providing DBAPI connection objects will only be supported
for sqlite3 in the future. The ``'mysql'`` flavor is deprecated.
diff --git a/doc/source/whatsnew/v0.17.0.txt b/doc/source/whatsnew/v0.17.0.txt
index fc13224d3fe6e..9cb299593076d 100644
--- a/doc/source/whatsnew/v0.17.0.txt
+++ b/doc/source/whatsnew/v0.17.0.txt
@@ -141,7 +141,7 @@ as well as the ``.sum()`` operation.
Releasing of the GIL could benefit an application that uses threads for user interactions (e.g. QT_), or performing multi-threaded computations. A nice example of a library that can handle these types of computation-in-parallel is the dask_ library.
-.. _dask: https://dask.readthedocs.org/en/latest/
+.. _dask: https://dask.readthedocs.io/en/latest/
.. _QT: https://wiki.python.org/moin/PyQt
.. _whatsnew_0170.plot:
| As per [their blog post of the 27th April](https://blog.readthedocs.com/securing-subdomains/) ‘Securing subdomains’:
> Starting today, Read the Docs will start hosting projects from subdomains on the domain readthedocs.io, instead of on readthedocs.org. This change addresses some security concerns around site cookies while hosting user generated data on the same domain as our dashboard.
Test Plan: Manually visited all the links I’ve modified.
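The edits were made by hand, but the substitution is mechanical; a hypothetical helper capturing the rewrite (the function name is made up, and only hosted-project subdomain URLs are touched):

```python
import re

def to_rtd_io(url):
    # Hosted projects moved from <project>.readthedocs.org to
    # <project>.readthedocs.io, served over https; other domains
    # (e.g. docs.scipy.org) are left alone.
    return re.sub(r'http(s)?://([\w-]+)\.readthedocs\.org',
                  r'https://\2.readthedocs.io', url)
```

For example, `to_rtd_io('http://nose.readthedocs.org/en/latest/index.html')` yields the `https://nose.readthedocs.io/...` form used throughout this diff.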
| https://api.github.com/repos/pandas-dev/pandas/pulls/14406 | 2016-10-12T21:40:28Z | 2016-10-12T22:10:31Z | 2016-10-12T22:10:31Z | 2016-10-12T22:10:37Z |
BUG: Dataframe constructor when given dict with None value | diff --git a/doc/source/whatsnew/v0.19.1.txt b/doc/source/whatsnew/v0.19.1.txt
index 3edb8c1fa9071..6dddebecd06e8 100644
--- a/doc/source/whatsnew/v0.19.1.txt
+++ b/doc/source/whatsnew/v0.19.1.txt
@@ -32,6 +32,7 @@ Bug Fixes
~~~~~~~~~
+- Bug in ``pd.DataFrame`` where constructor fails when given dict with ``None`` value (:issue:`14381`)
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 1c6b13885dd01..188204d83d985 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -2915,8 +2915,8 @@ def create_from_value(value, index, dtype):
return subarr
- # scalar like
- if subarr.ndim == 0:
+ # scalar like, GH
+ if getattr(subarr, 'ndim', 0) == 0:
if isinstance(data, list): # pragma: no cover
subarr = np.array(data, dtype=object)
elif index is not None:
diff --git a/pandas/tests/frame/test_constructors.py b/pandas/tests/frame/test_constructors.py
index d21db5ba52a45..e55ba3e161ed9 100644
--- a/pandas/tests/frame/test_constructors.py
+++ b/pandas/tests/frame/test_constructors.py
@@ -259,6 +259,14 @@ def test_constructor_dict(self):
frame = DataFrame({'A': [], 'B': []}, columns=['A', 'B'])
self.assert_index_equal(frame.index, Index([], dtype=np.int64))
+ # GH 14381
+ # Dict with None value
+ frame_none = DataFrame(dict(a=None), index=[0])
+ frame_none_list = DataFrame(dict(a=[None]), index=[0])
+ tm.assert_equal(frame_none.get_value(0, 'a'), None)
+ tm.assert_equal(frame_none_list.get_value(0, 'a'), None)
+ tm.assert_frame_equal(frame_none, frame_none_list)
+
# GH10856
# dict with scalar values should raise error, even if columns passed
with tm.assertRaises(ValueError):
| - [x] closes #14381
- [x] tests added / passed
- [x] passes `git diff upstream/master | flake8 --diff`
- [x] whatsnew entry
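The fix itself is the guarded attribute access in `pandas/core/series.py`; in isolation (a minimal sketch, not the full constructor path), the idea is:

```python
import numpy as np

def is_scalar_like(value):
    # None (and anything else without an .ndim attribute) previously
    # raised AttributeError on `value.ndim`; getattr with a default of 0
    # routes such values down the scalar-broadcast path instead.
    return getattr(value, 'ndim', 0) == 0
```

With this guard, `None` is treated like any other scalar and broadcast across the requested index, which is what the new `DataFrame(dict(a=None), index=[0])` test exercises.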
| https://api.github.com/repos/pandas-dev/pandas/pulls/14392 | 2016-10-11T01:38:12Z | 2016-10-31T20:53:51Z | 2016-10-31T20:53:51Z | 2016-10-31T20:54:03Z |
BUG: Convert float freqstrs to ints at finer resolution | diff --git a/doc/source/whatsnew/v0.19.1.txt b/doc/source/whatsnew/v0.19.1.txt
index db5bd22393e64..545b4380d9b75 100644
--- a/doc/source/whatsnew/v0.19.1.txt
+++ b/doc/source/whatsnew/v0.19.1.txt
@@ -58,4 +58,4 @@ Bug Fixes
- Bug in ``df.groupby`` causing an ``AttributeError`` when grouping a single index frame by a column and the index level (:issue`14327`)
- Bug in ``df.groupby`` where ``TypeError`` raised when ``pd.Grouper(key=...)`` is passed in a list (:issue:`14334`)
- Bug in ``pd.pivot_table`` may raise ``TypeError`` or ``ValueError`` when ``index`` or ``columns``
- is not scalar and ``values`` is not specified (:issue:`14380`)
\ No newline at end of file
+ is not scalar and ``values`` is not specified (:issue:`14380`)
diff --git a/doc/source/whatsnew/v0.20.0.txt b/doc/source/whatsnew/v0.20.0.txt
index d0009efd2d994..5cc9d575521f3 100644
--- a/doc/source/whatsnew/v0.20.0.txt
+++ b/doc/source/whatsnew/v0.20.0.txt
@@ -52,6 +52,9 @@ Other enhancements
- ``pd.read_excel`` now preserves sheet order when using ``sheetname=None`` (:issue:`9930`)
+
+- Multiple offset aliases with decimal points are now supported (e.g. '0.5min' is parsed as '30s') (:issue:`8419`)
+
- New ``UnsortedIndexError`` (subclass of ``KeyError``) raised when indexing/slicing into an
unsorted MultiIndex (:issue:`11897`). This allows differentiation between errors due to lack
of sorting or an incorrect key. See :ref:`here <advanced.unsorted>`
diff --git a/pandas/src/period.pyx b/pandas/src/period.pyx
index 5565f25937394..2d92b9f192328 100644
--- a/pandas/src/period.pyx
+++ b/pandas/src/period.pyx
@@ -45,12 +45,12 @@ cdef bint PY2 = version_info[0] == 2
cdef int64_t NPY_NAT = util.get_nat()
-cdef int US_RESO = frequencies.US_RESO
-cdef int MS_RESO = frequencies.MS_RESO
-cdef int S_RESO = frequencies.S_RESO
-cdef int T_RESO = frequencies.T_RESO
-cdef int H_RESO = frequencies.H_RESO
-cdef int D_RESO = frequencies.D_RESO
+cdef int RESO_US = frequencies.RESO_US
+cdef int RESO_MS = frequencies.RESO_MS
+cdef int RESO_SEC = frequencies.RESO_SEC
+cdef int RESO_MIN = frequencies.RESO_MIN
+cdef int RESO_HR = frequencies.RESO_HR
+cdef int RESO_DAY = frequencies.RESO_DAY
cdef extern from "period_helper.h":
ctypedef struct date_info:
@@ -516,7 +516,7 @@ cpdef resolution(ndarray[int64_t] stamps, tz=None):
cdef:
Py_ssize_t i, n = len(stamps)
pandas_datetimestruct dts
- int reso = D_RESO, curr_reso
+ int reso = RESO_DAY, curr_reso
if tz is not None:
tz = maybe_get_tz(tz)
@@ -535,20 +535,20 @@ cpdef resolution(ndarray[int64_t] stamps, tz=None):
cdef inline int _reso_stamp(pandas_datetimestruct *dts):
if dts.us != 0:
if dts.us % 1000 == 0:
- return MS_RESO
- return US_RESO
+ return RESO_MS
+ return RESO_US
elif dts.sec != 0:
- return S_RESO
+ return RESO_SEC
elif dts.min != 0:
- return T_RESO
+ return RESO_MIN
elif dts.hour != 0:
- return H_RESO
- return D_RESO
+ return RESO_HR
+ return RESO_DAY
cdef _reso_local(ndarray[int64_t] stamps, object tz):
cdef:
Py_ssize_t n = len(stamps)
- int reso = D_RESO, curr_reso
+ int reso = RESO_DAY, curr_reso
ndarray[int64_t] trans, deltas, pos
pandas_datetimestruct dts
diff --git a/pandas/tseries/frequencies.py b/pandas/tseries/frequencies.py
index ac094c1f545f3..e0c602bf5a037 100644
--- a/pandas/tseries/frequencies.py
+++ b/pandas/tseries/frequencies.py
@@ -38,32 +38,55 @@ class FreqGroup(object):
FR_NS = 12000
-US_RESO = 0
-MS_RESO = 1
-S_RESO = 2
-T_RESO = 3
-H_RESO = 4
-D_RESO = 5
+RESO_NS = 0
+RESO_US = 1
+RESO_MS = 2
+RESO_SEC = 3
+RESO_MIN = 4
+RESO_HR = 5
+RESO_DAY = 6
class Resolution(object):
- # defined in period.pyx
- # note that these are different from freq codes
- RESO_US = US_RESO
- RESO_MS = MS_RESO
- RESO_SEC = S_RESO
- RESO_MIN = T_RESO
- RESO_HR = H_RESO
- RESO_DAY = D_RESO
+ RESO_US = RESO_US
+ RESO_MS = RESO_MS
+ RESO_SEC = RESO_SEC
+ RESO_MIN = RESO_MIN
+ RESO_HR = RESO_HR
+ RESO_DAY = RESO_DAY
_reso_str_map = {
+ RESO_NS: 'nanosecond',
RESO_US: 'microsecond',
RESO_MS: 'millisecond',
RESO_SEC: 'second',
RESO_MIN: 'minute',
RESO_HR: 'hour',
- RESO_DAY: 'day'}
+ RESO_DAY: 'day'
+ }
+
+ # factor to multiply a value by to convert it to the next finer grained
+ # resolution
+ _reso_mult_map = {
+ RESO_NS: None,
+ RESO_US: 1000,
+ RESO_MS: 1000,
+ RESO_SEC: 1000,
+ RESO_MIN: 60,
+ RESO_HR: 60,
+ RESO_DAY: 24
+ }
+
+ _reso_str_bump_map = {
+ 'D': 'H',
+ 'H': 'T',
+ 'T': 'S',
+ 'S': 'L',
+ 'L': 'U',
+ 'U': 'N',
+ 'N': None
+ }
_str_reso_map = dict([(v, k) for k, v in compat.iteritems(_reso_str_map)])
@@ -160,6 +183,47 @@ def get_reso_from_freq(cls, freq):
"""
return cls.get_reso(cls.get_str_from_freq(freq))
+ @classmethod
+ def get_stride_from_decimal(cls, value, freq):
+ """
+ Convert freq with decimal stride into a higher freq with integer stride
+
+ Parameters
+ ----------
+ value : integer or float
+ freq : string
+ Frequency string
+
+ Raises
+ ------
+ ValueError
+ If the float cannot be converted to an integer at any resolution.
+
+ Example
+ -------
+ >>> Resolution.get_stride_from_decimal(1.5, 'T')
+ (90, 'S')
+
+ >>> Resolution.get_stride_from_decimal(1.04, 'H')
+ (3744, 'S')
+
+ >>> Resolution.get_stride_from_decimal(1, 'D')
+ (1, 'D')
+ """
+
+ if np.isclose(value % 1, 0):
+ return int(value), freq
+ else:
+ start_reso = cls.get_reso_from_freq(freq)
+ if start_reso == 0:
+ raise ValueError(
+ "Could not convert to integer offset at any resolution"
+ )
+
+ next_value = cls._reso_mult_map[start_reso] * value
+ next_name = cls._reso_str_bump_map[freq]
+ return cls.get_stride_from_decimal(next_value, next_name)
+
def get_to_timestamp_base(base):
"""
@@ -472,12 +536,17 @@ def to_offset(freq):
splitted[2::4]):
if sep != '' and not sep.isspace():
raise ValueError('separator must be spaces')
- offset = get_offset(name)
+ prefix = _lite_rule_alias.get(name) or name
if stride_sign is None:
stride_sign = -1 if stride.startswith('-') else 1
if not stride:
stride = 1
+ if prefix in Resolution._reso_str_bump_map.keys():
+ stride, name = Resolution.get_stride_from_decimal(
+ float(stride), prefix
+ )
stride = int(stride)
+ offset = get_offset(name)
offset = offset * int(np.fabs(stride) * stride_sign)
if delta is None:
delta = offset
@@ -493,7 +562,9 @@ def to_offset(freq):
# hack to handle WOM-1MON
-opattern = re.compile(r'([\-]?\d*)\s*([A-Za-z]+([\-][\dA-Za-z\-]+)?)')
+opattern = re.compile(
+ r'([\-]?\d*|[\-]?\d*\.\d*)\s*([A-Za-z]+([\-][\dA-Za-z\-]+)?)'
+)
def _base_and_stride(freqstr):
diff --git a/pandas/tseries/tests/test_frequencies.py b/pandas/tseries/tests/test_frequencies.py
index 5ba98f15aed8d..dfb7b26371d7a 100644
--- a/pandas/tseries/tests/test_frequencies.py
+++ b/pandas/tseries/tests/test_frequencies.py
@@ -39,6 +39,21 @@ def test_to_offset_multiple(self):
expected = offsets.Hour(3)
assert (result == expected)
+ freqstr = '2h 20.5min'
+ result = frequencies.to_offset(freqstr)
+ expected = offsets.Second(8430)
+ assert (result == expected)
+
+ freqstr = '1.5min'
+ result = frequencies.to_offset(freqstr)
+ expected = offsets.Second(90)
+ assert (result == expected)
+
+ freqstr = '0.5S'
+ result = frequencies.to_offset(freqstr)
+ expected = offsets.Milli(500)
+ assert (result == expected)
+
freqstr = '15l500u'
result = frequencies.to_offset(freqstr)
expected = offsets.Micro(15500)
@@ -49,6 +64,16 @@ def test_to_offset_multiple(self):
expected = offsets.Milli(10075)
assert (result == expected)
+ freqstr = '1s0.25ms'
+ result = frequencies.to_offset(freqstr)
+ expected = offsets.Micro(1000250)
+ assert (result == expected)
+
+ freqstr = '1s0.25L'
+ result = frequencies.to_offset(freqstr)
+ expected = offsets.Micro(1000250)
+ assert (result == expected)
+
freqstr = '2800N'
result = frequencies.to_offset(freqstr)
expected = offsets.Nano(2800)
@@ -107,10 +132,8 @@ def test_to_offset_invalid(self):
frequencies.to_offset('-2-3U')
with tm.assertRaisesRegexp(ValueError, 'Invalid frequency: -2D:3H'):
frequencies.to_offset('-2D:3H')
-
- # ToDo: Must be fixed in #8419
- with tm.assertRaisesRegexp(ValueError, 'Invalid frequency: .5S'):
- frequencies.to_offset('.5S')
+ with tm.assertRaisesRegexp(ValueError, 'Invalid frequency: 1.5.0S'):
+ frequencies.to_offset('1.5.0S')
# split offsets with spaces are valid
assert frequencies.to_offset('2D 3H') == offsets.Hour(51)
@@ -379,6 +402,26 @@ def test_freq_to_reso(self):
result = Reso.get_freq(Reso.get_str(Reso.get_reso_from_freq(freq)))
self.assertEqual(freq, result)
+ def test_resolution_bumping(self):
+ # GH 14378
+ Reso = frequencies.Resolution
+
+ self.assertEqual(Reso.get_stride_from_decimal(1.5, 'T'), (90, 'S'))
+ self.assertEqual(Reso.get_stride_from_decimal(62.4, 'T'), (3744, 'S'))
+ self.assertEqual(Reso.get_stride_from_decimal(1.04, 'H'), (3744, 'S'))
+ self.assertEqual(Reso.get_stride_from_decimal(1, 'D'), (1, 'D'))
+ self.assertEqual(Reso.get_stride_from_decimal(0.342931, 'H'),
+ (1234551600, 'U'))
+ self.assertEqual(Reso.get_stride_from_decimal(1.2345, 'D'),
+ (106660800, 'L'))
+
+ with self.assertRaises(ValueError):
+ Reso.get_stride_from_decimal(0.5, 'N')
+
+ # too much precision in the input can prevent
+ with self.assertRaises(ValueError):
+ Reso.get_stride_from_decimal(0.3429324798798269273987982, 'H')
+
def test_get_freq_code(self):
# freqstr
self.assertEqual(frequencies.get_freq_code('A'),
diff --git a/pandas/tseries/tests/test_tslib.py b/pandas/tseries/tests/test_tslib.py
index b45f867be65dd..58ec1561b2535 100644
--- a/pandas/tseries/tests/test_tslib.py
+++ b/pandas/tseries/tests/test_tslib.py
@@ -14,7 +14,7 @@
from pandas.tseries.index import date_range, DatetimeIndex
from pandas.tseries.frequencies import (
get_freq,
- US_RESO, MS_RESO, S_RESO, H_RESO, D_RESO, T_RESO
+ RESO_US, RESO_MS, RESO_SEC, RESO_HR, RESO_DAY, RESO_MIN
)
import pandas.tseries.tools as tools
import pandas.tseries.offsets as offsets
@@ -1528,11 +1528,11 @@ def test_resolution(self):
for freq, expected in zip(['A', 'Q', 'M', 'D', 'H', 'T',
'S', 'L', 'U'],
- [D_RESO, D_RESO,
- D_RESO, D_RESO,
- H_RESO, T_RESO,
- S_RESO, MS_RESO,
- US_RESO]):
+ [RESO_DAY, RESO_DAY,
+ RESO_DAY, RESO_DAY,
+ RESO_HR, RESO_MIN,
+ RESO_SEC, RESO_MS,
+ RESO_US]):
for tz in [None, 'Asia/Tokyo', 'US/Eastern',
'dateutil/US/Eastern']:
idx = date_range(start='2013-04-01', periods=30, freq=freq,
| - [x] closes #8419
- [x] tests added / passed
- [x] passes `git diff upstream/master | flake8 --diff`
- [x] whatsnew entry
Passing `'0.5min'` as a frequency string should generate 30 second
intervals, rather than five minute intervals. By recursively increasing
resolution until one is found for which the frequency is an integer,
this commit ensures that this is the case for resolutions from days down
to microseconds.
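A standalone sketch of that recursion (the table mirrors the patch's `_reso_str_bump_map` and `_reso_mult_map`; the absolute-difference check stands in for the patch's `np.isclose(value % 1, 0)`):

```python
# offset alias -> (next finer alias, multiplier), e.g. 1 minute = 60 seconds
_bump = {'D': ('H', 24), 'H': ('T', 60), 'T': ('S', 60),
         'S': ('L', 1000), 'L': ('U', 1000), 'U': ('N', 1000)}

def stride_from_decimal(value, freq):
    # tolerate float noise such as 1.04 * 60 * 60 == 3744.0000000000005
    if abs(value - round(value)) < 1e-7:
        return round(value), freq
    if freq not in _bump:
        raise ValueError('Could not convert to integer offset '
                         'at any resolution')
    next_freq, factor = _bump[freq]
    return stride_from_decimal(value * factor, next_freq)
```

So `stride_from_decimal(0.5, 'T')` gives `(30, 'S')`: the 30-second interpretation of `'0.5min'`, while inputs with too much precision (e.g. `0.5` at the nanosecond resolution `'N'`) raise `ValueError` because no finer resolution remains.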
| https://api.github.com/repos/pandas-dev/pandas/pulls/14378 | 2016-10-08T16:32:21Z | 2016-12-14T16:08:04Z | 2016-12-14T16:08:04Z | 2019-01-18T18:25:57Z |
DOC: add 0.19.1 whatsnew file | diff --git a/doc/source/whatsnew.rst b/doc/source/whatsnew.rst
index 77dc249aeb788..2a1f2cc47d48e 100644
--- a/doc/source/whatsnew.rst
+++ b/doc/source/whatsnew.rst
@@ -18,6 +18,8 @@ What's New
These are new features and improvements of note in each release.
+.. include:: whatsnew/v0.19.1.txt
+
.. include:: whatsnew/v0.19.0.txt
.. include:: whatsnew/v0.18.1.txt
diff --git a/doc/source/whatsnew/v0.19.1.txt b/doc/source/whatsnew/v0.19.1.txt
new file mode 100644
index 0000000000000..1c5f4915bb3a4
--- /dev/null
+++ b/doc/source/whatsnew/v0.19.1.txt
@@ -0,0 +1,32 @@
+.. _whatsnew_0191:
+
+v0.19.1 (????, 2016)
+---------------------
+
+This is a minor bug-fix release from 0.19.0 and includes a large number of
+bug fixes along with several new features, enhancements, and performance improvements.
+We recommend that all users upgrade to this version.
+
+Highlights include:
+
+
+.. contents:: What's new in v0.19.1
+ :local:
+ :backlinks: none
+
+
+.. _whatsnew_0191.performance:
+
+Performance Improvements
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+
+
+
+
+
+
+.. _whatsnew_0191.bug_fixes:
+
+Bug Fixes
+~~~~~~~~~
| https://api.github.com/repos/pandas-dev/pandas/pulls/14366 | 2016-10-06T09:16:03Z | 2016-10-07T19:25:31Z | 2016-10-07T19:25:31Z | 2016-10-07T19:25:31Z | |
DOC: Remove old warning from dsintro.rst | diff --git a/doc/source/dsintro.rst b/doc/source/dsintro.rst
index 6063e3e8bce45..cc69367017aed 100644
--- a/doc/source/dsintro.rst
+++ b/doc/source/dsintro.rst
@@ -41,12 +41,6 @@ categories of functionality and methods in separate sections.
Series
------
-.. warning::
-
- In 0.13.0 ``Series`` has internally been refactored to no longer sub-class ``ndarray``
- but instead subclass ``NDFrame``, similarly to the rest of the pandas containers. This should be
- a transparent change with only very limited API implications (See the :ref:`Internal Refactoring<whatsnew_0130.refactoring>`)
-
:class:`Series` is a one-dimensional labeled array capable of holding any data
type (integers, strings, floating point numbers, Python objects, etc.). The axis
labels are collectively referred to as the **index**. The basic method to create a Series is to call:
| - [ ] closes #xxxx
- [ ] tests added / passed
- [ ] passes `git diff upstream/master | flake8 --diff`
- [ ] whatsnew entry
The warning is about something that has been fixed for almost 3 years. Every time a new user excited about pandas starts reading the docs, they have to waste brain-cycles ignoring that big red warning bubble.
| https://api.github.com/repos/pandas-dev/pandas/pulls/14365 | 2016-10-06T09:00:24Z | 2016-10-06T09:25:23Z | 2016-10-06T09:25:23Z | 2016-10-06T15:46:36Z |
BLD/CI: cython cache pxd files | diff --git a/ci/prep_cython_cache.sh b/ci/prep_cython_cache.sh
index 6f16dce2fb431..cadc356b641f9 100755
--- a/ci/prep_cython_cache.sh
+++ b/ci/prep_cython_cache.sh
@@ -3,8 +3,8 @@
ls "$HOME/.cache/"
PYX_CACHE_DIR="$HOME/.cache/pyxfiles"
-pyx_file_list=`find ${TRAVIS_BUILD_DIR} -name "*.pyx"`
-pyx_cache_file_list=`find ${PYX_CACHE_DIR} -name "*.pyx"`
+pyx_file_list=`find ${TRAVIS_BUILD_DIR} -name "*.pyx" -o -name "*.pxd"`
+pyx_cache_file_list=`find ${PYX_CACHE_DIR} -name "*.pyx" -o -name "*.pxd"`
CACHE_File="$HOME/.cache/cython_files.tar"
diff --git a/ci/submit_cython_cache.sh b/ci/submit_cython_cache.sh
index 4f60df0ccb2d8..5c98c3df61736 100755
--- a/ci/submit_cython_cache.sh
+++ b/ci/submit_cython_cache.sh
@@ -2,7 +2,7 @@
CACHE_File="$HOME/.cache/cython_files.tar"
PYX_CACHE_DIR="$HOME/.cache/pyxfiles"
-pyx_file_list=`find ${TRAVIS_BUILD_DIR} -name "*.pyx"`
+pyx_file_list=`find ${TRAVIS_BUILD_DIR} -name "*.pyx" -o -name "*.pxd"`
rm -rf $CACHE_File
rm -rf $PYX_CACHE_DIR
 | Currently the cython cache on travis doesn't pick up changes in `.pxd` files. Most of this commit history is trial and error - but 479c311 shows this working:
https://travis-ci.org/pydata/pandas/jobs/166041112
```
$ ci/prep_cython_cache.sh
cython_files.tar motd.legal-displayed pip pyxfiles
Cache available - checking pyx diff
util.pxd has changed:
--- /home/travis/build/pydata/pandas/pandas/src/util.pxd	2016-10-08 13:01:48.255250369 +0000
+++ /home/travis/.cache/pyxfiles/home/travis/build/pydata/pandas/pandas/src/util.pxd	2016-10-06 11:04:00.000000000 +0000
@@ -97,5 +97,6 @@ cdef inline bint _checknan(object val):
     return not cnp.PyArray_Check(val) and val != val
+
 cdef inline bint is_period_object(object val):
     return getattr(val, '_typ', '_typ') == 'period'
In a PR
Rebuilding cythonized files
Use cache (Blank if not set) = true
Clear cache (1=YES) = 1
```
xref #14359
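As a quick illustration of the `find` change (the demo paths are hypothetical), the `-o` operator lets a single invocation collect both `.pyx` and `.pxd` files:

```shell
# demo of the -name ... -o -name ... pattern now used by the cache scripts
demo_dir=$(mktemp -d)
touch "$demo_dir/a.pyx" "$demo_dir/b.pxd" "$demo_dir/c.py"
# matches a.pyx and b.pxd, but not c.py
find "$demo_dir" -name "*.pyx" -o -name "*.pxd"
```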
| https://api.github.com/repos/pandas-dev/pandas/pulls/14363 | 2016-10-06T01:22:24Z | 2016-10-12T09:15:49Z | 2016-10-12T09:15:49Z | 2016-11-30T01:01:34Z |
DOC: Correct uniqueness of index for Series | diff --git a/pandas/core/series.py b/pandas/core/series.py
index 1c6b13885dd01..8a98f5cdf7e21 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -102,11 +102,11 @@ class Series(base.IndexOpsMixin, strings.StringAccessorMixin,
"""
One-dimensional ndarray with axis labels (including time series).
- Labels need not be unique but must be any hashable type. The object
+ Labels need not be unique but must be a hashable type. The object
supports both integer- and label-based indexing and provides a host of
methods for performing operations involving the index. Statistical
methods from ndarray have been overridden to automatically exclude
- missing data (currently represented as NaN)
+ missing data (currently represented as NaN).
Operations between Series (+, -, /, *, **) align values based on their
associated index values-- they need not be the same length. The result
@@ -117,8 +117,8 @@ class Series(base.IndexOpsMixin, strings.StringAccessorMixin,
data : array-like, dict, or scalar value
Contains data stored in Series
index : array-like or Index (1d)
- Values must be unique and hashable, same length as data. Index
- object (or other iterable of same length as data) Will default to
+ Values must be hashable and have the same length as `data`.
+ Non-unique index values are allowed. Will default to
RangeIndex(len(data)) if not provided. If both a dict and index
sequence are used, the index will override the keys found in the
dict.
| closes #7808
Just wanted to fix the docstring to reflect the fact that the index labels neither need to be unique ~~nor hashable~~.
| https://api.github.com/repos/pandas-dev/pandas/pulls/14344 | 2016-10-04T05:54:18Z | 2016-11-25T10:05:30Z | 2016-11-25T10:05:30Z | 2016-11-30T02:38:51Z |
BUG: astype falsely converts inf to integer, patch for Numpy (GH14265) | diff --git a/doc/source/whatsnew/v0.20.0.txt b/doc/source/whatsnew/v0.20.0.txt
index f534c67273560..8fdef39a3ae98 100644
--- a/doc/source/whatsnew/v0.20.0.txt
+++ b/doc/source/whatsnew/v0.20.0.txt
@@ -118,3 +118,5 @@ Performance Improvements
Bug Fixes
~~~~~~~~~
+
+- Bug in ``astype()`` where ``inf`` values were incorrectly converted to integers. Now raises a ``ValueError`` with ``astype()`` for Series and DataFrames (:issue:`14265`)
\ No newline at end of file
diff --git a/pandas/sparse/tests/test_array.py b/pandas/sparse/tests/test_array.py
index 1c9b6119cf665..f210f70ad1940 100644
--- a/pandas/sparse/tests/test_array.py
+++ b/pandas/sparse/tests/test_array.py
@@ -361,7 +361,7 @@ def test_astype(self):
arr.astype('i8')
arr = SparseArray([0, np.nan, 0, 1], fill_value=0)
- msg = "Cannot convert NA to integer"
+ msg = 'Cannot convert non-finite values \(NA or inf\) to integer'
with tm.assertRaisesRegexp(ValueError, msg):
arr.astype('i8')
diff --git a/pandas/tests/frame/test_dtypes.py b/pandas/tests/frame/test_dtypes.py
index 817770b9da610..61030c262a44b 100644
--- a/pandas/tests/frame/test_dtypes.py
+++ b/pandas/tests/frame/test_dtypes.py
@@ -353,9 +353,17 @@ def test_astype_with_view(self):
tf = self.frame.astype(np.float64)
casted = tf.astype(np.int64, copy=False) # noqa
- def test_astype_cast_nan_int(self):
- df = DataFrame(data={"Values": [1.0, 2.0, 3.0, np.nan]})
- self.assertRaises(ValueError, df.astype, np.int64)
+ def test_astype_cast_nan_inf_int(self):
+ # GH14265, check nan and inf raise error when converting to int
+ types = [np.int32, np.int64]
+ values = [np.nan, np.inf]
+ msg = 'Cannot convert non-finite values \(NA or inf\) to integer'
+
+ for this_type in types:
+ for this_val in values:
+ df = DataFrame([this_val])
+ with tm.assertRaisesRegexp(ValueError, msg):
+ df.astype(this_type)
def test_astype_str(self):
# GH9757
diff --git a/pandas/tests/series/test_dtypes.py b/pandas/tests/series/test_dtypes.py
index 9a406dfa10c35..3eafbaf912797 100644
--- a/pandas/tests/series/test_dtypes.py
+++ b/pandas/tests/series/test_dtypes.py
@@ -42,9 +42,17 @@ def test_dtype(self):
assert_series_equal(self.ts.get_ftype_counts(), Series(
1, ['float64:dense']))
- def test_astype_cast_nan_int(self):
- df = Series([1.0, 2.0, 3.0, np.nan])
- self.assertRaises(ValueError, df.astype, np.int64)
+ def test_astype_cast_nan_inf_int(self):
+ # GH14265, check nan and inf raise error when converting to int
+ types = [np.int32, np.int64]
+ values = [np.nan, np.inf]
+ msg = 'Cannot convert non-finite values \(NA or inf\) to integer'
+
+ for this_type in types:
+ for this_val in values:
+ s = Series([this_val])
+ with self.assertRaisesRegexp(ValueError, msg):
+ s.astype(this_type)
def test_astype_cast_object_int(self):
arr = Series(["car", "house", "tree", "1"])
diff --git a/pandas/types/cast.py b/pandas/types/cast.py
index a79862eb195b6..d4beab5655e5c 100644
--- a/pandas/types/cast.py
+++ b/pandas/types/cast.py
@@ -527,8 +527,10 @@ def _astype_nansafe(arr, dtype, copy=True):
elif (np.issubdtype(arr.dtype, np.floating) and
np.issubdtype(dtype, np.integer)):
- if np.isnan(arr).any():
- raise ValueError('Cannot convert NA to integer')
+ if not np.isfinite(arr).all():
+ raise ValueError('Cannot convert non-finite values (NA or inf) to '
+ 'integer')
+
elif arr.dtype == np.object_ and np.issubdtype(dtype.type, np.integer):
# work around NumPy brokenness, #1987
return lib.astype_intsafe(arr.ravel(), dtype).reshape(arr.shape)
| - [x] closes #14265
- [x] tests added / passed
- [x] passes `git diff upstream/master | flake8 --diff`
- [x] whatsnew entry
A bug in NumPy causes inf values to be silently converted to integers. I added a ValueError exception, similar to the existing one raised when trying to convert NaN to an integer.
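A minimal pure-Python sketch of the guard this patch adds in `_astype_nansafe` (the helper name `safe_int_cast` is made up for illustration): the cast is refused whenever any value is NaN or infinite, instead of letting NumPy silently wrap the value:

```python
import math

def safe_int_cast(values):
    # sketch of the check added in pandas/types/cast.py: refuse to
    # convert non-finite floats (NaN or inf) to integer
    if not all(math.isfinite(v) for v in values):
        raise ValueError('Cannot convert non-finite values (NA or inf) '
                         'to integer')
    return [int(v) for v in values]
```

`safe_int_cast([1.0, 2.0])` succeeds, while any NaN or inf in the input raises.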
| https://api.github.com/repos/pandas-dev/pandas/pulls/14343 | 2016-10-04T05:09:10Z | 2016-12-11T22:23:50Z | 2016-12-11T22:23:49Z | 2016-12-14T05:10:09Z |
DOC: fix some sphinx build issues | diff --git a/doc/source/categorical.rst b/doc/source/categorical.rst
index b1795cb37200c..f52f72b49dd31 100644
--- a/doc/source/categorical.rst
+++ b/doc/source/categorical.rst
@@ -757,12 +757,18 @@ Use ``.astype`` or ``union_categoricals`` to get ``category`` result.
Following table summarizes the results of ``Categoricals`` related concatenations.
-| arg1 | arg2 | result |
-|---------|-------------------------------------------|---------|
-| category | category (identical categories) | category |
-| category | category (different categories, both not ordered) | object (dtype is inferred) |
++----------+--------------------------------------------------------+----------------------------+
+| arg1 | arg2 | result |
++==========+========================================================+============================+
+| category | category (identical categories) | category |
++----------+--------------------------------------------------------+----------------------------+
+| category | category (different categories, both not ordered) | object (dtype is inferred) |
++----------+--------------------------------------------------------+----------------------------+
| category | category (different categories, either one is ordered) | object (dtype is inferred) |
-| category | not category | object (dtype is inferred) |
++----------+--------------------------------------------------------+----------------------------+
+| category | not category | object (dtype is inferred) |
++----------+--------------------------------------------------------+----------------------------+
+
Getting Data In/Out
-------------------
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 4aa1ac4a47090..697438df87d4f 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -3999,7 +3999,7 @@ def asfreq(self, freq, method=None, how=None, normalize=False):
converted : type of caller
To learn more about the frequency strings, please see `this link
-<http://pandas.pydata.org/pandas-docs/stable/timeseries.html#offset-aliases>`__.
+ <http://pandas.pydata.org/pandas-docs/stable/timeseries.html#offset-aliases>`__.
"""
from pandas.tseries.resample import asfreq
return asfreq(self, freq, method=method, how=how, normalize=normalize)
diff --git a/pandas/indexes/base.py b/pandas/indexes/base.py
index 557b9b2b17e95..5082fc84982c6 100644
--- a/pandas/indexes/base.py
+++ b/pandas/indexes/base.py
@@ -1994,7 +1994,7 @@ def symmetric_difference(self, other, result_name=None):
``symmetric_difference`` contains elements that appear in either
``idx1`` or ``idx2`` but not both. Equivalent to the Index created by
``idx1.difference(idx2) | idx2.difference(idx1)`` with duplicates
- dropped.
+ dropped.
Examples
--------
diff --git a/pandas/tseries/index.py b/pandas/tseries/index.py
index ff6c0b85a1e5c..f68750e242f1f 100644
--- a/pandas/tseries/index.py
+++ b/pandas/tseries/index.py
@@ -196,6 +196,9 @@ class DatetimeIndex(DatelikeOps, TimelikeOps, DatetimeIndexOpsMixin,
name : object
Name to be stored in the index
+ Notes
+ -----
+
To learn more about the frequency strings, please see `this link
<http://pandas.pydata.org/pandas-docs/stable/timeseries.html#offset-aliases>`__.
"""
diff --git a/pandas/tseries/tdi.py b/pandas/tseries/tdi.py
index f1e199adeebfc..c1b0936edaff9 100644
--- a/pandas/tseries/tdi.py
+++ b/pandas/tseries/tdi.py
@@ -112,6 +112,9 @@ class TimedeltaIndex(DatetimeIndexOpsMixin, TimelikeOps, Int64Index):
name : object
Name to be stored in the index
+ Notes
+ -----
+
To learn more about the frequency strings, please see `this link
<http://pandas.pydata.org/pandas-docs/stable/timeseries.html#offset-aliases>`__.
"""
| The table here: http://pandas-docs.github.io/pandas-docs-travis/categorical.html#concatenation is apparently not building well. Rst ...
| https://api.github.com/repos/pandas-dev/pandas/pulls/14332 | 2016-10-01T21:33:08Z | 2016-10-02T08:58:10Z | 2016-10-02T08:58:10Z | 2016-10-02T08:58:11Z |
TST: fix period tests for numpy 1.9.3 (GH14183) | diff --git a/pandas/tseries/tests/test_period.py b/pandas/tseries/tests/test_period.py
index 62cfcf7f1360e..e314081eac373 100644
--- a/pandas/tseries/tests/test_period.py
+++ b/pandas/tseries/tests/test_period.py
@@ -23,7 +23,8 @@
from pandas.compat.numpy import np_datetime64_compat
from pandas import (Series, DataFrame,
- _np_version_under1p9, _np_version_under1p12)
+ _np_version_under1p9, _np_version_under1p10,
+ _np_version_under1p12)
from pandas import tslib
import pandas.util.testing as tm
@@ -4177,7 +4178,7 @@ def test_pi_ops_errors(self):
with tm.assertRaises(TypeError):
np.add(obj, ng)
- if _np_version_under1p9:
+ if _np_version_under1p10:
self.assertIs(np.add(ng, obj), NotImplemented)
else:
with tm.assertRaises(TypeError):
@@ -4186,7 +4187,7 @@ def test_pi_ops_errors(self):
with tm.assertRaises(TypeError):
np.subtract(obj, ng)
- if _np_version_under1p9:
+ if _np_version_under1p10:
self.assertIs(np.subtract(ng, obj), NotImplemented)
else:
with tm.assertRaises(TypeError):
@@ -4293,7 +4294,7 @@ def test_pi_sub_period(self):
tm.assert_index_equal(result, exp)
result = np.subtract(pd.Period('2012-01', freq='M'), idx)
- if _np_version_under1p9:
+ if _np_version_under1p10:
self.assertIs(result, NotImplemented)
else:
tm.assert_index_equal(result, exp)
| Partly addresses #14183
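The flag swap above can be sketched with a tiny version comparison (the `np_version_under` helper is hypothetical; pandas defines precomputed flags like `_np_version_under1p10`): NumPy before 1.10 returns `NotImplemented` from ufuncs with unknown operands, while newer NumPy raises `TypeError`, so the tests branch on the installed version:

```python
def np_version_under(version, bound):
    # hypothetical helper mirroring pandas' _np_version_under1pX flags:
    # compare dotted version strings component-by-component as integers
    parse = lambda v: tuple(int(part) for part in v.split('.'))
    return parse(version) < parse(bound)

# numpy 1.9.3 takes the NotImplemented branch; 1.10+ takes the TypeError branch
branch = 'NotImplemented' if np_version_under('1.9.3', '1.10') else 'TypeError'
```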
| https://api.github.com/repos/pandas-dev/pandas/pulls/14331 | 2016-10-01T20:27:12Z | 2016-10-02T08:45:53Z | 2016-10-02T08:45:53Z | 2016-10-02T08:45:54Z |
BUG: mixed freq timeseries plotting with shared axes (GH13341) | diff --git a/doc/source/whatsnew/v0.19.2.txt b/doc/source/whatsnew/v0.19.2.txt
index 49c8330490ed1..52bcc7d054629 100644
--- a/doc/source/whatsnew/v0.19.2.txt
+++ b/doc/source/whatsnew/v0.19.2.txt
@@ -38,7 +38,8 @@ Bug Fixes
- Bug in ``pd.cut`` with negative values and a single bin (:issue:`14652`)
- Bug in ``pd.to_numeric`` where a 0 was not unsigned on a ``downcast='unsigned'`` argument (:issue:`14401`)
-
+- Bug in plotting regular and irregular timeseries using shared axes
+ (``sharex=True`` or ``ax.twinx()``) (:issue:`13341`, :issue:`14322`).
diff --git a/pandas/tests/plotting/test_datetimelike.py b/pandas/tests/plotting/test_datetimelike.py
index 0f7bc02e24915..f07aadba175f2 100644
--- a/pandas/tests/plotting/test_datetimelike.py
+++ b/pandas/tests/plotting/test_datetimelike.py
@@ -778,6 +778,41 @@ def test_mixed_freq_irreg_period(self):
irreg.plot()
ps.plot()
+ def test_mixed_freq_shared_ax(self):
+
+ # GH13341, using sharex=True
+ idx1 = date_range('2015-01-01', periods=3, freq='M')
+ idx2 = idx1[:1].union(idx1[2:])
+ s1 = Series(range(len(idx1)), idx1)
+ s2 = Series(range(len(idx2)), idx2)
+
+ fig, (ax1, ax2) = self.plt.subplots(nrows=2, sharex=True)
+ s1.plot(ax=ax1)
+ s2.plot(ax=ax2)
+
+ self.assertEqual(ax1.freq, 'M')
+ self.assertEqual(ax2.freq, 'M')
+ self.assertEqual(ax1.lines[0].get_xydata()[0, 0],
+ ax2.lines[0].get_xydata()[0, 0])
+
+ # using twinx
+ fig, ax1 = self.plt.subplots()
+ ax2 = ax1.twinx()
+ s1.plot(ax=ax1)
+ s2.plot(ax=ax2)
+
+ self.assertEqual(ax1.lines[0].get_xydata()[0, 0],
+ ax2.lines[0].get_xydata()[0, 0])
+
+ # TODO (GH14330, GH14322)
+ # plotting the irregular first does not yet work
+ # fig, ax1 = plt.subplots()
+ # ax2 = ax1.twinx()
+ # s2.plot(ax=ax1)
+ # s1.plot(ax=ax2)
+ # self.assertEqual(ax1.lines[0].get_xydata()[0, 0],
+ # ax2.lines[0].get_xydata()[0, 0])
+
@slow
def test_to_weekly_resampling(self):
idxh = date_range('1/1/1999', periods=52, freq='W')
diff --git a/pandas/tseries/plotting.py b/pandas/tseries/plotting.py
index fe64af67af0ed..89aecf2acc07e 100644
--- a/pandas/tseries/plotting.py
+++ b/pandas/tseries/plotting.py
@@ -162,18 +162,37 @@ def _decorate_axes(ax, freq, kwargs):
ax.date_axis_info = None
-def _get_freq(ax, series):
- # get frequency from data
- freq = getattr(series.index, 'freq', None)
- if freq is None:
- freq = getattr(series.index, 'inferred_freq', None)
-
+def _get_ax_freq(ax):
+ """
+ Get the freq attribute of the ax object if set.
+ Also checks shared axes (eg when using secondary yaxis, sharex=True
+ or twinx)
+ """
ax_freq = getattr(ax, 'freq', None)
if ax_freq is None:
+ # check for left/right ax in case of secondary yaxis
if hasattr(ax, 'left_ax'):
ax_freq = getattr(ax.left_ax, 'freq', None)
elif hasattr(ax, 'right_ax'):
ax_freq = getattr(ax.right_ax, 'freq', None)
+ if ax_freq is None:
+ # check if a shared ax (sharex/twinx) has already freq set
+ shared_axes = ax.get_shared_x_axes().get_siblings(ax)
+ if len(shared_axes) > 1:
+ for shared_ax in shared_axes:
+ ax_freq = getattr(shared_ax, 'freq', None)
+ if ax_freq is not None:
+ break
+ return ax_freq
+
+
+def _get_freq(ax, series):
+ # get frequency from data
+ freq = getattr(series.index, 'freq', None)
+ if freq is None:
+ freq = getattr(series.index, 'inferred_freq', None)
+
+ ax_freq = _get_ax_freq(ax)
# use axes freq if no data freq
if freq is None:
@@ -191,7 +210,7 @@ def _get_freq(ax, series):
def _use_dynamic_x(ax, data):
freq = _get_index_freq(data)
- ax_freq = getattr(ax, 'freq', None)
+ ax_freq = _get_ax_freq(ax)
if freq is None: # convert irregular if axes has freq info
freq = ax_freq
@@ -244,7 +263,7 @@ def _maybe_convert_index(ax, data):
freq = freq.rule_code
if freq is None:
- freq = getattr(ax, 'freq', None)
+ freq = _get_ax_freq(ax)
if freq is None:
raise ValueError('Could not get frequency alias for plotting')
| Closes #13341, partly closes #14322 (that example still does not work when first plotting the irregular series)
cc @sinhrks @TomAugspurger
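The lookup order of the new `_get_ax_freq` helper can be sketched with plain stub objects (the `StubAx` class is hypothetical, standing in for a matplotlib `Axes`): the axes' own `freq` first, then the left/right secondary axes, then any shared (`sharex`/`twinx`) siblings:

```python
class StubAx(object):
    """Hypothetical stand-in for a matplotlib Axes decorated by pandas."""
    def __init__(self, freq=None, shared=()):
        if freq is not None:
            self.freq = freq
        # stub approximation of ax.get_shared_x_axes().get_siblings(ax)
        self._siblings = [self] + list(shared)

    def get_siblings(self):
        return self._siblings


def get_ax_freq(ax):
    # mirrors the lookup order of pandas.tseries.plotting._get_ax_freq
    ax_freq = getattr(ax, 'freq', None)
    if ax_freq is None:
        # secondary y-axis case
        if hasattr(ax, 'left_ax'):
            ax_freq = getattr(ax.left_ax, 'freq', None)
        elif hasattr(ax, 'right_ax'):
            ax_freq = getattr(ax.right_ax, 'freq', None)
    if ax_freq is None:
        # shared-axes case (sharex=True or twinx)
        siblings = ax.get_siblings()
        if len(siblings) > 1:
            for shared_ax in siblings:
                ax_freq = getattr(shared_ax, 'freq', None)
                if ax_freq is not None:
                    break
    return ax_freq
```

With `ax1 = StubAx(freq='M')` and `ax2 = StubAx(shared=[ax1])`, `get_ax_freq(ax2)` now picks up `'M'` from the sibling, which is exactly what the sharex/twinx fix needs.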
| https://api.github.com/repos/pandas-dev/pandas/pulls/14330 | 2016-10-01T20:05:31Z | 2016-11-26T09:13:05Z | 2016-11-26T09:13:05Z | 2016-11-26T09:13:05Z |
to_latex encoding issue | diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 1cc689528caaa..6fb0090dea114 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -1635,7 +1635,8 @@ def to_latex(self, buf=None, columns=None, col_space=None, header=True,
When set to False prevents from escaping latex special
characters in column names.
encoding : str, default None
- Default encoding is ascii in Python 2 and utf-8 in Python 3
+ A string representing the encoding to use in the output file,
+ defaults to 'ascii' on Python 2 and 'utf-8' on Python 3.
decimal : string, default '.'
Character recognized as decimal separator, e.g. ',' in Europe
diff --git a/pandas/formats/format.py b/pandas/formats/format.py
index e5089983ac8f7..7706666142a64 100644
--- a/pandas/formats/format.py
+++ b/pandas/formats/format.py
@@ -654,6 +654,9 @@ def to_latex(self, column_format=None, longtable=False, encoding=None):
latex_renderer = LatexFormatter(self, column_format=column_format,
longtable=longtable)
+ if encoding is None:
+ encoding = 'ascii' if compat.PY2 else 'utf-8'
+
if hasattr(self.buf, 'write'):
latex_renderer.write_result(self.buf)
elif isinstance(self.buf, compat.string_types):
diff --git a/pandas/tests/formats/test_format.py b/pandas/tests/formats/test_format.py
index 58e9b30e7f624..3bbfd621d2342 100644
--- a/pandas/tests/formats/test_format.py
+++ b/pandas/tests/formats/test_format.py
@@ -2823,7 +2823,7 @@ def test_to_latex_filename(self):
if compat.PY3: # python3: pandas default encoding is utf-8
with tm.ensure_clean('test.tex') as path:
df.to_latex(path)
- with codecs.open(path, 'r') as f:
+ with codecs.open(path, 'r', encoding='utf-8') as f:
self.assertEqual(df.to_latex(), f.read())
else:
# python2 default encoding is ascii, so an error should be raised
| - [x] closes #14275
- [x] tests added / passed
- [x] passes `git diff upstream/master | flake8 --diff`
- [ ] whatsnew entry
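The underlying issue can be reproduced with a plain `codecs` round-trip (the temp path is illustrative): writing and reading the same file must agree on the encoding, which is why the test now opens the file with an explicit `encoding='utf-8'`:

```python
import codecs
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), 'test.tex')
text = u'caf\u00e9 \\tablename'
# write with utf-8, the default to_latex now uses on Python 3
with codecs.open(path, 'w', encoding='utf-8') as f:
    f.write(text)
# reading back must name the same encoding explicitly
with codecs.open(path, 'r', encoding='utf-8') as f:
    assert f.read() == text
```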
| https://api.github.com/repos/pandas-dev/pandas/pulls/14329 | 2016-10-01T17:34:21Z | 2016-10-02T11:56:59Z | 2016-10-02T11:56:58Z | 2016-10-02T11:57:09Z |
Enforce boolean types | diff --git a/doc/source/whatsnew/v0.20.0.txt b/doc/source/whatsnew/v0.20.0.txt
index 23f2589adde89..0148a47068beb 100644
--- a/doc/source/whatsnew/v0.20.0.txt
+++ b/doc/source/whatsnew/v0.20.0.txt
@@ -247,6 +247,7 @@ Other API Changes
- ``pd.read_csv()`` will now issue a ``ParserWarning`` whenever there are conflicting values provided by the ``dialect`` parameter and the user (:issue:`14898`)
 - ``pd.read_csv()`` will now raise a ``ValueError`` for the C engine if the quote character is larger than one byte (:issue:`11592`)
+- ``inplace`` arguments now require a boolean value, else a ``ValueError`` is thrown (:issue:`14189`)
.. _whatsnew_0200.deprecations:
diff --git a/pandas/computation/eval.py b/pandas/computation/eval.py
index fffde4d9db867..a0a08e4a968cc 100644
--- a/pandas/computation/eval.py
+++ b/pandas/computation/eval.py
@@ -11,6 +11,7 @@
from pandas.computation.scope import _ensure_scope
from pandas.compat import string_types
from pandas.computation.engines import _engines
+from pandas.util.validators import validate_bool_kwarg
def _check_engine(engine):
@@ -231,6 +232,7 @@ def eval(expr, parser='pandas', engine=None, truediv=True,
pandas.DataFrame.query
pandas.DataFrame.eval
"""
+ inplace = validate_bool_kwarg(inplace, 'inplace')
first_expr = True
if isinstance(expr, string_types):
_check_expression(expr)
diff --git a/pandas/computation/tests/test_eval.py b/pandas/computation/tests/test_eval.py
index ffa2cb0684b72..1b577a574350d 100644
--- a/pandas/computation/tests/test_eval.py
+++ b/pandas/computation/tests/test_eval.py
@@ -1970,6 +1970,15 @@ def test_negate_lt_eq_le():
for engine, parser in product(_engines, expr._parsers):
yield check_negate_lt_eq_le, engine, parser
+class TestValidate(tm.TestCase):
+
+ def test_validate_bool_args(self):
+ invalid_values = [1, "True", [1,2,3], 5.0]
+
+ for value in invalid_values:
+ with self.assertRaises(ValueError):
+ pd.eval("2+2", inplace=value)
+
if __name__ == '__main__':
nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'],
diff --git a/pandas/core/base.py b/pandas/core/base.py
index 49e43a60403ca..77272f7721b32 100644
--- a/pandas/core/base.py
+++ b/pandas/core/base.py
@@ -8,6 +8,7 @@
from pandas.types.missing import isnull
from pandas.types.generic import ABCDataFrame, ABCSeries, ABCIndexClass
from pandas.types.common import is_object_dtype, is_list_like, is_scalar
+from pandas.util.validators import validate_bool_kwarg
from pandas.core import common as com
import pandas.core.nanops as nanops
@@ -1178,6 +1179,7 @@ def searchsorted(self, value, side='left', sorter=None):
False: 'first'})
@Appender(_shared_docs['drop_duplicates'] % _indexops_doc_kwargs)
def drop_duplicates(self, keep='first', inplace=False):
+ inplace = validate_bool_kwarg(inplace, 'inplace')
if isinstance(self, ABCIndexClass):
if self.is_unique:
return self._shallow_copy()
diff --git a/pandas/core/categorical.py b/pandas/core/categorical.py
index 0562736038483..5980f872f951f 100644
--- a/pandas/core/categorical.py
+++ b/pandas/core/categorical.py
@@ -35,6 +35,7 @@
deprecate_kwarg, Substitution)
from pandas.util.terminal import get_terminal_size
+from pandas.util.validators import validate_bool_kwarg
from pandas.core.config import get_option
@@ -615,6 +616,7 @@ def set_ordered(self, value, inplace=False):
Whether or not to set the ordered attribute inplace or return a copy
of this categorical with ordered set to the value
"""
+ inplace = validate_bool_kwarg(inplace, 'inplace')
self._validate_ordered(value)
cat = self if inplace else self.copy()
cat._ordered = value
@@ -631,6 +633,7 @@ def as_ordered(self, inplace=False):
Whether or not to set the ordered attribute inplace or return a copy
of this categorical with ordered set to True
"""
+ inplace = validate_bool_kwarg(inplace, 'inplace')
return self.set_ordered(True, inplace=inplace)
def as_unordered(self, inplace=False):
@@ -643,6 +646,7 @@ def as_unordered(self, inplace=False):
Whether or not to set the ordered attribute inplace or return a copy
of this categorical with ordered set to False
"""
+ inplace = validate_bool_kwarg(inplace, 'inplace')
return self.set_ordered(False, inplace=inplace)
def _get_ordered(self):
@@ -702,6 +706,7 @@ def set_categories(self, new_categories, ordered=None, rename=False,
remove_categories
remove_unused_categories
"""
+ inplace = validate_bool_kwarg(inplace, 'inplace')
new_categories = self._validate_categories(new_categories)
cat = self if inplace else self.copy()
if rename:
@@ -754,6 +759,7 @@ def rename_categories(self, new_categories, inplace=False):
remove_unused_categories
set_categories
"""
+ inplace = validate_bool_kwarg(inplace, 'inplace')
cat = self if inplace else self.copy()
cat.categories = new_categories
if not inplace:
@@ -794,6 +800,7 @@ def reorder_categories(self, new_categories, ordered=None, inplace=False):
remove_unused_categories
set_categories
"""
+ inplace = validate_bool_kwarg(inplace, 'inplace')
if set(self._categories) != set(new_categories):
raise ValueError("items in new_categories are not the same as in "
"old categories")
@@ -832,6 +839,7 @@ def add_categories(self, new_categories, inplace=False):
remove_unused_categories
set_categories
"""
+ inplace = validate_bool_kwarg(inplace, 'inplace')
if not is_list_like(new_categories):
new_categories = [new_categories]
already_included = set(new_categories) & set(self._categories)
@@ -877,6 +885,7 @@ def remove_categories(self, removals, inplace=False):
remove_unused_categories
set_categories
"""
+ inplace = validate_bool_kwarg(inplace, 'inplace')
if not is_list_like(removals):
removals = [removals]
@@ -917,6 +926,7 @@ def remove_unused_categories(self, inplace=False):
remove_categories
set_categories
"""
+ inplace = validate_bool_kwarg(inplace, 'inplace')
cat = self if inplace else self.copy()
idx, inv = np.unique(cat._codes, return_inverse=True)
@@ -1322,6 +1332,7 @@ def sort_values(self, inplace=False, ascending=True, na_position='last'):
[NaN, NaN, 5.0, 2.0, 2.0]
Categories (2, int64): [2, 5]
"""
+ inplace = validate_bool_kwarg(inplace, 'inplace')
if na_position not in ['last', 'first']:
raise ValueError('invalid na_position: {!r}'.format(na_position))
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index d96fb094f5d5c..b9290c0ce3457 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -23,8 +23,7 @@
import numpy as np
import numpy.ma as ma
-from pandas.types.cast import (_maybe_upcast,
- _infer_dtype_from_scalar,
+from pandas.types.cast import (_maybe_upcast, _infer_dtype_from_scalar,
_possibly_cast_to_datetime,
_possibly_infer_to_datetimelike,
_possibly_convert_platform,
@@ -79,6 +78,7 @@
from pandas import compat
from pandas.compat.numpy import function as nv
from pandas.util.decorators import deprecate_kwarg, Appender, Substitution
+from pandas.util.validators import validate_bool_kwarg
from pandas.tseries.period import PeriodIndex
from pandas.tseries.index import DatetimeIndex
@@ -2164,6 +2164,7 @@ def query(self, expr, inplace=False, **kwargs):
>>> df.query('a > b')
>>> df[df.a > df.b] # same result as the previous expression
"""
+ inplace = validate_bool_kwarg(inplace, 'inplace')
if not isinstance(expr, compat.string_types):
msg = "expr must be a string to be evaluated, {0} given"
raise ValueError(msg.format(type(expr)))
@@ -2230,6 +2231,7 @@ def eval(self, expr, inplace=None, **kwargs):
>>> df.eval('a + b')
>>> df.eval('c = a + b')
"""
+ inplace = validate_bool_kwarg(inplace, 'inplace')
resolvers = kwargs.pop('resolvers', None)
kwargs['level'] = kwargs.pop('level', 0) + 1
if resolvers is None:
@@ -2843,6 +2845,7 @@ def set_index(self, keys, drop=True, append=False, inplace=False,
-------
dataframe : DataFrame
"""
+ inplace = validate_bool_kwarg(inplace, 'inplace')
if not isinstance(keys, list):
keys = [keys]
@@ -2935,6 +2938,7 @@ def reset_index(self, level=None, drop=False, inplace=False, col_level=0,
-------
resetted : DataFrame
"""
+ inplace = validate_bool_kwarg(inplace, 'inplace')
if inplace:
new_obj = self
else:
@@ -3039,6 +3043,7 @@ def dropna(self, axis=0, how='any', thresh=None, subset=None,
-------
dropped : DataFrame
"""
+ inplace = validate_bool_kwarg(inplace, 'inplace')
if isinstance(axis, (tuple, list)):
result = self
for ax in axis:
@@ -3102,6 +3107,7 @@ def drop_duplicates(self, subset=None, keep='first', inplace=False):
-------
deduplicated : DataFrame
"""
+ inplace = validate_bool_kwarg(inplace, 'inplace')
duplicated = self.duplicated(subset, keep=keep)
if inplace:
@@ -3163,7 +3169,7 @@ def f(vals):
@Appender(_shared_docs['sort_values'] % _shared_doc_kwargs)
def sort_values(self, by, axis=0, ascending=True, inplace=False,
kind='quicksort', na_position='last'):
-
+ inplace = validate_bool_kwarg(inplace, 'inplace')
axis = self._get_axis_number(axis)
other_axis = 0 if axis == 1 else 1
@@ -3274,7 +3280,7 @@ def sort(self, columns=None, axis=0, ascending=True, inplace=False,
def sort_index(self, axis=0, level=None, ascending=True, inplace=False,
kind='quicksort', na_position='last', sort_remaining=True,
by=None):
-
+ inplace = validate_bool_kwarg(inplace, 'inplace')
# 10726
if by is not None:
warnings.warn("by argument to sort_index is deprecated, pls use "
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 1680c061ad7d3..0b5767da74cad 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -52,6 +52,7 @@
isidentifier, set_function_name)
import pandas.core.nanops as nanops
from pandas.util.decorators import Appender, Substitution, deprecate_kwarg
+from pandas.util.validators import validate_bool_kwarg
from pandas.core import config
# goal is to be able to define the docs close to function, while still being
@@ -733,6 +734,7 @@ def rename_axis(self, mapper, axis=0, copy=True, inplace=False):
1 2 5
2 3 6
"""
+ inplace = validate_bool_kwarg(inplace, 'inplace')
non_mapper = is_scalar(mapper) or (is_list_like(mapper) and not
is_dict_like(mapper))
if non_mapper:
@@ -1950,6 +1952,7 @@ def drop(self, labels, axis=0, level=None, inplace=False, errors='raise'):
-------
dropped : type of caller
"""
+ inplace = validate_bool_kwarg(inplace, 'inplace')
axis = self._get_axis_number(axis)
axis_name = self._get_axis_name(axis)
axis, axis_ = self._get_axis(axis), axis
@@ -2099,6 +2102,7 @@ def sort_values(self, by, axis=0, ascending=True, inplace=False,
@Appender(_shared_docs['sort_index'] % dict(axes="axes", klass="NDFrame"))
def sort_index(self, axis=0, level=None, ascending=True, inplace=False,
kind='quicksort', na_position='last', sort_remaining=True):
+ inplace = validate_bool_kwarg(inplace, 'inplace')
axis = self._get_axis_number(axis)
axis_name = self._get_axis_name(axis)
labels = self._get_axis(axis)
@@ -2872,6 +2876,7 @@ def consolidate(self, inplace=False):
-------
consolidated : type of caller
"""
+ inplace = validate_bool_kwarg(inplace, 'inplace')
if inplace:
self._consolidate_inplace()
else:
@@ -3267,6 +3272,7 @@ def convert_objects(self, convert_dates=True, convert_numeric=False,
@Appender(_shared_docs['fillna'] % _shared_doc_kwargs)
def fillna(self, value=None, method=None, axis=None, inplace=False,
limit=None, downcast=None):
+ inplace = validate_bool_kwarg(inplace, 'inplace')
if isinstance(value, (list, tuple)):
raise TypeError('"value" parameter must be a scalar or dict, but '
'you passed a "{0}"'.format(type(value).__name__))
@@ -3479,6 +3485,7 @@ def replace(self, to_replace=None, value=None, inplace=False, limit=None,
and play with this method to gain intuition about how it works.
"""
+ inplace = validate_bool_kwarg(inplace, 'inplace')
if not is_bool(regex) and to_replace is not None:
raise AssertionError("'to_replace' must be 'None' if 'regex' is "
"not a bool")
@@ -3714,6 +3721,7 @@ def interpolate(self, method='linear', axis=0, limit=None, inplace=False,
"""
Interpolate values according to different methods.
"""
+ inplace = validate_bool_kwarg(inplace, 'inplace')
if self.ndim > 2:
raise NotImplementedError("Interpolate has not been implemented "
@@ -4627,6 +4635,7 @@ def _where(self, cond, other=np.nan, inplace=False, axis=None, level=None,
Equivalent to public method `where`, except that `other` is not
applied as a function even if callable. Used in __setitem__.
"""
+ inplace = validate_bool_kwarg(inplace, 'inplace')
cond = com._apply_if_callable(cond, self)
@@ -4894,6 +4903,7 @@ def where(self, cond, other=np.nan, inplace=False, axis=None, level=None,
def mask(self, cond, other=np.nan, inplace=False, axis=None, level=None,
try_cast=False, raise_on_error=True):
+ inplace = validate_bool_kwarg(inplace, 'inplace')
cond = com._apply_if_callable(cond, self)
return self.where(~cond, other=other, inplace=inplace, axis=axis,
diff --git a/pandas/core/internals.py b/pandas/core/internals.py
index aa865ae430d4a..289ce150eb46b 100644
--- a/pandas/core/internals.py
+++ b/pandas/core/internals.py
@@ -57,6 +57,7 @@
import pandas.tslib as tslib
import pandas.computation.expressions as expressions
from pandas.util.decorators import cache_readonly
+from pandas.util.validators import validate_bool_kwarg
from pandas.tslib import Timedelta
from pandas import compat, _np_version_under1p9
@@ -360,6 +361,7 @@ def fillna(self, value, limit=None, inplace=False, downcast=None,
""" fillna on the block with the value. If we fail, then convert to
ObjectBlock and try again
"""
+ inplace = validate_bool_kwarg(inplace, 'inplace')
if not self._can_hold_na:
if inplace:
@@ -626,6 +628,7 @@ def replace(self, to_replace, value, inplace=False, filter=None,
compatibility.
"""
+ inplace = validate_bool_kwarg(inplace, 'inplace')
original_to_replace = to_replace
mask = isnull(self.values)
# try to replace, if we raise an error, convert to ObjectBlock and
@@ -897,6 +900,9 @@ def interpolate(self, method='pad', axis=0, index=None, values=None,
inplace=False, limit=None, limit_direction='forward',
fill_value=None, coerce=False, downcast=None, mgr=None,
**kwargs):
+
+ inplace = validate_bool_kwarg(inplace, 'inplace')
+
def check_int_bool(self, inplace):
# Only FloatBlocks will contain NaNs.
# timedelta subclasses IntBlock
@@ -944,6 +950,8 @@ def _interpolate_with_fill(self, method='pad', axis=0, inplace=False,
downcast=None, mgr=None):
""" fillna but using the interpolate machinery """
+ inplace = validate_bool_kwarg(inplace, 'inplace')
+
# if we are coercing, then don't force the conversion
# if the block can't hold the type
if coerce:
@@ -970,6 +978,7 @@ def _interpolate(self, method=None, index=None, values=None,
mgr=None, **kwargs):
""" interpolate using scipy wrappers """
+ inplace = validate_bool_kwarg(inplace, 'inplace')
data = self.values if inplace else self.values.copy()
# only deal with floats
@@ -1514,6 +1523,7 @@ def putmask(self, mask, new, align=True, inplace=False, axis=0,
-------
a new block(s), the result of the putmask
"""
+ inplace = validate_bool_kwarg(inplace, 'inplace')
# use block's copy logic.
# .values may be an Index which does shallow copy by default
@@ -1801,6 +1811,7 @@ def should_store(self, value):
def replace(self, to_replace, value, inplace=False, filter=None,
regex=False, convert=True, mgr=None):
+ inplace = validate_bool_kwarg(inplace, 'inplace')
to_replace_values = np.atleast_1d(to_replace)
if not np.can_cast(to_replace_values, bool):
return self
@@ -1982,6 +1993,9 @@ def replace(self, to_replace, value, inplace=False, filter=None,
def _replace_single(self, to_replace, value, inplace=False, filter=None,
regex=False, convert=True, mgr=None):
+
+ inplace = validate_bool_kwarg(inplace, 'inplace')
+
# to_replace is regex compilable
to_rep_re = regex and is_re_compilable(to_replace)
@@ -3205,6 +3219,8 @@ def replace_list(self, src_list, dest_list, inplace=False, regex=False,
mgr=None):
""" do a list replace """
+ inplace = validate_bool_kwarg(inplace, 'inplace')
+
if mgr is None:
mgr = self
diff --git a/pandas/core/series.py b/pandas/core/series.py
index f656d72296e3a..0b29e8c93a12d 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -66,6 +66,7 @@
import pandas.core.nanops as nanops
import pandas.formats.format as fmt
from pandas.util.decorators import Appender, deprecate_kwarg, Substitution
+from pandas.util.validators import validate_bool_kwarg
import pandas.lib as lib
import pandas.tslib as tslib
@@ -975,6 +976,7 @@ def reset_index(self, level=None, drop=False, name=None, inplace=False):
----------
resetted : DataFrame, or Series if drop == True
"""
+ inplace = validate_bool_kwarg(inplace, 'inplace')
if drop:
new_index = _default_index(len(self))
if level is not None and isinstance(self.index, MultiIndex):
@@ -1175,6 +1177,7 @@ def _set_name(self, name, inplace=False):
inplace : bool
whether to modify `self` directly or return a copy
"""
+ inplace = validate_bool_kwarg(inplace, 'inplace')
ser = self if inplace else self.copy()
ser.name = name
return ser
@@ -1722,6 +1725,7 @@ def update(self, other):
def sort_values(self, axis=0, ascending=True, inplace=False,
kind='quicksort', na_position='last'):
+ inplace = validate_bool_kwarg(inplace, 'inplace')
axis = self._get_axis_number(axis)
# GH 5856/5853
@@ -1774,6 +1778,7 @@ def _try_kind_sort(arr):
def sort_index(self, axis=0, level=None, ascending=True, inplace=False,
kind='quicksort', na_position='last', sort_remaining=True):
+ inplace = validate_bool_kwarg(inplace, 'inplace')
axis = self._get_axis_number(axis)
index = self.index
if level is not None:
@@ -2350,6 +2355,9 @@ def align(self, other, join='outer', axis=None, level=None, copy=True,
@Appender(generic._shared_docs['rename'] % _shared_doc_kwargs)
def rename(self, index=None, **kwargs):
+ kwargs['inplace'] = validate_bool_kwarg(kwargs.get('inplace', False),
+ 'inplace')
+
non_mapping = is_scalar(index) or (is_list_like(index) and
not is_dict_like(index))
if non_mapping:
@@ -2646,6 +2654,7 @@ def dropna(self, axis=0, inplace=False, **kwargs):
inplace : boolean, default False
Do operation in place.
"""
+ inplace = validate_bool_kwarg(inplace, 'inplace')
kwargs.pop('how', None)
if kwargs:
raise TypeError('dropna() got an unexpected keyword '
diff --git a/pandas/sparse/list.py b/pandas/sparse/list.py
index 82de8cd7d3959..d294e65bbf10c 100644
--- a/pandas/sparse/list.py
+++ b/pandas/sparse/list.py
@@ -5,6 +5,7 @@
from pandas.types.common import is_scalar
from pandas.sparse.array import SparseArray
+from pandas.util.validators import validate_bool_kwarg
import pandas._sparse as splib
@@ -78,6 +79,7 @@ def consolidate(self, inplace=True):
If inplace=False, new object, otherwise reference to existing
object
"""
+ inplace = validate_bool_kwarg(inplace, 'inplace')
if not inplace:
result = self.copy()
else:
diff --git a/pandas/tests/frame/test_missing.py b/pandas/tests/frame/test_missing.py
index 8a6cbe44465c1..d7466f5ede06f 100644
--- a/pandas/tests/frame/test_missing.py
+++ b/pandas/tests/frame/test_missing.py
@@ -208,7 +208,7 @@ def test_fillna(self):
# empty frame (GH #2778)
df = DataFrame(columns=['x'])
for m in ['pad', 'backfill']:
- df.x.fillna(method=m, inplace=1)
+ df.x.fillna(method=m, inplace=True)
df.x.fillna(method=m)
# with different dtype (GH3386)
diff --git a/pandas/tests/frame/test_validate.py b/pandas/tests/frame/test_validate.py
new file mode 100644
index 0000000000000..e1ef87bb3271a
--- /dev/null
+++ b/pandas/tests/frame/test_validate.py
@@ -0,0 +1,33 @@
+from unittest import TestCase
+from pandas.core.frame import DataFrame
+
+
+class TestDataFrameValidate(TestCase):
+ """Tests for error handling related to data types of method arguments."""
+ df = DataFrame({'a': [1, 2], 'b': [3, 4]})
+
+ def test_validate_bool_args(self):
+ # Tests for error handling related to boolean arguments.
+ invalid_values = [1, "True", [1, 2, 3], 5.0]
+
+ for value in invalid_values:
+ with self.assertRaises(ValueError):
+ self.df.query('a > b', inplace=value)
+
+ with self.assertRaises(ValueError):
+ self.df.eval('a + b', inplace=value)
+
+ with self.assertRaises(ValueError):
+ self.df.set_index(keys=['a'], inplace=value)
+
+ with self.assertRaises(ValueError):
+ self.df.reset_index(inplace=value)
+
+ with self.assertRaises(ValueError):
+ self.df.dropna(inplace=value)
+
+ with self.assertRaises(ValueError):
+ self.df.drop_duplicates(inplace=value)
+
+ with self.assertRaises(ValueError):
+ self.df.sort_values(by=['a'], inplace=value)
diff --git a/pandas/tests/series/test_validate.py b/pandas/tests/series/test_validate.py
new file mode 100644
index 0000000000000..cf0482b41c80a
--- /dev/null
+++ b/pandas/tests/series/test_validate.py
@@ -0,0 +1,33 @@
+from unittest import TestCase
+from pandas.core.series import Series
+
+
+class TestSeriesValidate(TestCase):
+ """Tests for error handling related to data types of method arguments."""
+ s = Series([1, 2, 3, 4, 5])
+
+ def test_validate_bool_args(self):
+ # Tests for error handling related to boolean arguments.
+ invalid_values = [1, "True", [1, 2, 3], 5.0]
+
+ for value in invalid_values:
+ with self.assertRaises(ValueError):
+ self.s.reset_index(inplace=value)
+
+ with self.assertRaises(ValueError):
+ self.s._set_name(name='hello', inplace=value)
+
+ with self.assertRaises(ValueError):
+ self.s.sort_values(inplace=value)
+
+ with self.assertRaises(ValueError):
+ self.s.sort_index(inplace=value)
+
+ with self.assertRaises(ValueError):
+ self.s.sort_index(inplace=value)
+
+ with self.assertRaises(ValueError):
+ self.s.rename(inplace=value)
+
+ with self.assertRaises(ValueError):
+ self.s.dropna(inplace=value)
diff --git a/pandas/tests/test_base.py b/pandas/tests/test_base.py
index 717eae3e59715..f750936961831 100644
--- a/pandas/tests/test_base.py
+++ b/pandas/tests/test_base.py
@@ -1050,6 +1050,13 @@ def test_searchsorted(self):
index = np.searchsorted(o, max(o), sorter=range(len(o)))
self.assertTrue(0 <= index <= len(o))
+ def test_validate_bool_args(self):
+ invalid_values = [1, "True", [1, 2, 3], 5.0]
+
+ for value in invalid_values:
+ with self.assertRaises(ValueError):
+ self.int_series.drop_duplicates(inplace=value)
+
class TestTranspose(Ops):
errmsg = "the 'axes' parameter is not supported"
diff --git a/pandas/tests/test_categorical.py b/pandas/tests/test_categorical.py
index 23280395427fd..382f1dd1decfb 100644
--- a/pandas/tests/test_categorical.py
+++ b/pandas/tests/test_categorical.py
@@ -1694,6 +1694,41 @@ def test_map(self):
# GH 12766: Return an index not an array
tm.assert_index_equal(result, Index(np.array([1] * 5, dtype=np.int64)))
+ def test_validate_inplace(self):
+ cat = Categorical(['A','B','B','C','A'])
+ invalid_values = [1, "True", [1,2,3], 5.0]
+
+ for value in invalid_values:
+ with self.assertRaises(ValueError):
+ cat.set_ordered(value=True, inplace=value)
+
+ with self.assertRaises(ValueError):
+ cat.as_ordered(inplace=value)
+
+ with self.assertRaises(ValueError):
+ cat.as_unordered(inplace=value)
+
+ with self.assertRaises(ValueError):
+ cat.set_categories(['X','Y','Z'], rename=True, inplace=value)
+
+ with self.assertRaises(ValueError):
+ cat.rename_categories(['X','Y','Z'], inplace=value)
+
+ with self.assertRaises(ValueError):
+ cat.reorder_categories(['X','Y','Z'], ordered=True, inplace=value)
+
+ with self.assertRaises(ValueError):
+ cat.add_categories(new_categories=['D','E','F'], inplace=value)
+
+ with self.assertRaises(ValueError):
+ cat.remove_categories(removals=['D','E','F'], inplace=value)
+
+ with self.assertRaises(ValueError):
+ cat.remove_unused_categories(inplace=value)
+
+ with self.assertRaises(ValueError):
+ cat.sort_values(inplace=value)
+
class TestCategoricalAsBlock(tm.TestCase):
_multiprocess_can_split_ = True
diff --git a/pandas/tests/test_generic.py b/pandas/tests/test_generic.py
index 3500ce913462a..f32990ff32cbe 100644
--- a/pandas/tests/test_generic.py
+++ b/pandas/tests/test_generic.py
@@ -642,6 +642,40 @@ def test_numpy_clip(self):
np.clip, obj,
lower, upper, out=col)
+ def test_validate_bool_args(self):
+ df = DataFrame({'a': [1, 2, 3], 'b': [4, 5, 6]})
+ invalid_values = [1, "True", [1, 2, 3], 5.0]
+
+ for value in invalid_values:
+ with self.assertRaises(ValueError):
+ super(DataFrame, df).rename_axis(mapper={'a': 'x', 'b': 'y'},
+ axis=1, inplace=value)
+
+ with self.assertRaises(ValueError):
+ super(DataFrame, df).drop('a', axis=1, inplace=value)
+
+ with self.assertRaises(ValueError):
+ super(DataFrame, df).sort_index(inplace=value)
+
+ with self.assertRaises(ValueError):
+ super(DataFrame, df).consolidate(inplace=value)
+
+ with self.assertRaises(ValueError):
+ super(DataFrame, df).fillna(value=0, inplace=value)
+
+ with self.assertRaises(ValueError):
+ super(DataFrame, df).replace(to_replace=1, value=7,
+ inplace=value)
+
+ with self.assertRaises(ValueError):
+ super(DataFrame, df).interpolate(inplace=value)
+
+ with self.assertRaises(ValueError):
+ super(DataFrame, df)._where(cond=df.a > 2, inplace=value)
+
+ with self.assertRaises(ValueError):
+ super(DataFrame, df).mask(cond=df.a > 2, inplace=value)
+
class TestSeries(tm.TestCase, Generic):
_typ = Series
diff --git a/pandas/tests/test_internals.py b/pandas/tests/test_internals.py
index 32e8f44e6f258..22addd4c23817 100644
--- a/pandas/tests/test_internals.py
+++ b/pandas/tests/test_internals.py
@@ -859,6 +859,14 @@ def test_single_mgr_ctor(self):
mgr = create_single_mgr('f8', num_rows=5)
self.assertEqual(mgr.as_matrix().tolist(), [0., 1., 2., 3., 4.])
+ def test_validate_bool_args(self):
+ invalid_values = [1, "True", [1, 2, 3], 5.0]
+ bm1 = create_mgr('a,b,c: i8-1; d,e,f: i8-2')
+
+ for value in invalid_values:
+ with self.assertRaises(ValueError):
+ bm1.replace_list([1], [2], inplace=value)
+
class TestIndexing(object):
# Nosetests-style data-driven tests.
diff --git a/pandas/tests/test_util.py b/pandas/tests/test_util.py
index cb12048676d26..ed82604035358 100644
--- a/pandas/tests/test_util.py
+++ b/pandas/tests/test_util.py
@@ -8,7 +8,8 @@
from pandas.util._move import move_into_mutable_buffer, BadMove, stolenbuf
from pandas.util.decorators import deprecate_kwarg
from pandas.util.validators import (validate_args, validate_kwargs,
- validate_args_and_kwargs)
+ validate_args_and_kwargs,
+ validate_bool_kwarg)
import pandas.util.testing as tm
@@ -200,6 +201,22 @@ def test_validation(self):
kwargs = dict(f=None, b=1)
validate_kwargs(self.fname, kwargs, compat_args)
+ def test_validate_bool_kwarg(self):
+ arg_names = ['inplace', 'copy']
+ invalid_values = [1, "True", [1, 2, 3], 5.0]
+ valid_values = [True, False, None]
+
+ for name in arg_names:
+ for value in invalid_values:
+ with tm.assertRaisesRegexp(ValueError,
+ ("For argument \"%s\" expected "
+ "type bool, received type %s") %
+ (name, type(value).__name__)):
+ validate_bool_kwarg(value, name)
+
+ for value in valid_values:
+ tm.assert_equal(validate_bool_kwarg(value, name), value)
+
class TestValidateKwargsAndArgs(tm.TestCase):
fname = 'func'
diff --git a/pandas/util/validators.py b/pandas/util/validators.py
index 964fa9d9b38d5..f22412a2bcd17 100644
--- a/pandas/util/validators.py
+++ b/pandas/util/validators.py
@@ -215,3 +215,12 @@ def validate_args_and_kwargs(fname, args, kwargs,
kwargs.update(args_dict)
validate_kwargs(fname, kwargs, compat_args)
+
+
+def validate_bool_kwarg(value, arg_name):
+ """ Ensures that argument passed in arg_name is of type bool. """
+ if not (is_bool(value) or value is None):
+ raise ValueError('For argument "%s" expected type bool, '
+ 'received type %s.' %
+ (arg_name, type(value).__name__))
+ return value
| - [x] closes #14189
- [x] tests added / passed
- [x] passes `git diff upstream/master | flake8 --diff`
- [x] whatsnew entry
This PR only addresses the `inplace` argument, though there was a comment on #14189 that identifies the issue as being with the `copy` argument as well. I can build this out further to account for all of the frequently occurring boolean arguments.
I also wrote tests for the common function `_enforce_bool_type`, but didn't write individual tests for every method with an `inplace` argument. This is something I can add if it makes sense. I wanted to get something reviewed sooner rather than later to get feedback and ensure that I'm on the right track. Feedback is much appreciated -- thanks!
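The validator this PR adds (see the `pandas/util/validators.py` hunk above) is small enough to sketch standalone. This is an illustrative re-implementation, not the pandas source; the real validator uses pandas' `is_bool`, which also accepts NumPy booleans:

```python
def validate_bool_kwarg(value, arg_name):
    """Accept only True, False, or None for a boolean keyword.

    Mirrors the behavior added in this PR: truthy-but-non-bool values
    such as 1 or "True" are rejected with a ValueError.
    """
    if not (isinstance(value, bool) or value is None):
        raise ValueError('For argument "%s" expected type bool, '
                         'received type %s.'
                         % (arg_name, type(value).__name__))
    return value


# Valid values pass through unchanged.
print(validate_bool_kwarg(True, "inplace"))
print(validate_bool_kwarg(None, "inplace"))

# A truthy non-bool like inplace=1 now raises instead of silently working.
try:
    validate_bool_kwarg(1, "inplace")
except ValueError as exc:
    print("rejected:", exc)
```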
| https://api.github.com/repos/pandas-dev/pandas/pulls/14318 | 2016-09-29T06:10:37Z | 2017-01-06T12:33:21Z | 2017-01-06T12:33:21Z | 2017-01-06T12:33:35Z |
PERF: unnecessary materialization of a MultiIndex.values when introspecting memory_usage | diff --git a/doc/source/whatsnew/v0.19.0.txt b/doc/source/whatsnew/v0.19.0.txt
index 2f8baae416dea..f4110cba68c31 100644
--- a/doc/source/whatsnew/v0.19.0.txt
+++ b/doc/source/whatsnew/v0.19.0.txt
@@ -1409,7 +1409,7 @@ Performance Improvements
- Improved performance of ``factorize`` of datetime with timezone (:issue:`13750`)
- Improved performance of by lazily creating indexing hashtables on larger Indexes (:issue:`14266`)
- Improved performance of ``groupby.groups`` (:issue:`14293`)
-
+- Unnecessary materializing of a MultiIndex when introspecting for memory usage (:issue:`14308`)
.. _whatsnew_0190.bug_fixes:
diff --git a/pandas/indexes/multi.py b/pandas/indexes/multi.py
index e6aefaeb01a15..1ab5dbb737739 100644
--- a/pandas/indexes/multi.py
+++ b/pandas/indexes/multi.py
@@ -413,10 +413,27 @@ def _shallow_copy(self, values=None, **kwargs):
def dtype(self):
return np.dtype('O')
+ @Appender(Index.memory_usage.__doc__)
+ def memory_usage(self, deep=False):
+ # we are overwriting our base class to avoid
+ # computing .values here which could materialize
+ # a tuple representation unnecessarily
+ return self._nbytes(deep)
+
@cache_readonly
def nbytes(self):
""" return the number of bytes in the underlying data """
- level_nbytes = sum((i.nbytes for i in self.levels))
+ return self._nbytes(False)
+
+ def _nbytes(self, deep=False):
+ """
+ return the number of bytes in the underlying data
+ deeply introspect the level data if deep=True
+
+ *this is an internal routine*
+
+ """
+ level_nbytes = sum((i.memory_usage(deep=deep) for i in self.levels))
label_nbytes = sum((i.nbytes for i in self.labels))
names_nbytes = sum((getsizeof(i) for i in self.names))
return level_nbytes + label_nbytes + names_nbytes
diff --git a/pandas/tests/frame/test_repr_info.py b/pandas/tests/frame/test_repr_info.py
index 66e592c013fb1..5e5e9abda1200 100644
--- a/pandas/tests/frame/test_repr_info.py
+++ b/pandas/tests/frame/test_repr_info.py
@@ -381,3 +381,27 @@ def test_info_memory_usage(self):
# deep=True, and add on some GC overhead
diff = df.memory_usage(deep=True).sum() - sys.getsizeof(df)
self.assertTrue(abs(diff) < 100)
+
+ def test_info_memory_usage_bug_on_multiindex(self):
+ # GH 14308
+ # memory usage introspection should not materialize .values
+
+ from string import ascii_uppercase as uppercase
+
+ def memory_usage(f):
+ return f.memory_usage(deep=True).sum()
+
+ N = 100
+ M = len(uppercase)
+ index = pd.MultiIndex.from_product([list(uppercase),
+ pd.date_range('20160101',
+ periods=N)],
+ names=['id', 'date'])
+ df = DataFrame({'value': np.random.randn(N * M)}, index=index)
+
+ unstacked = df.unstack('id')
+ self.assertEqual(df.values.nbytes, unstacked.values.nbytes)
+ self.assertTrue(memory_usage(df) > memory_usage(unstacked))
+
+ # high upper bound
+ self.assertTrue(memory_usage(unstacked) - memory_usage(df) < 2000)
| ```
In [2]: import string
...: import pandas as pd
...: import numpy as np
...:
...: def memory_usage(f):
...: return f.memory_usage(deep=True).sum()
...:
...: N = 100
...: M = len(string.uppercase)
...: df = pd.DataFrame({'value' : np.random.randn(N*M)},
...: index=pd.MultiIndex.from_product([list(string.uppercase),
...: pd.date_range('20160101',periods=N)],
...: names=['id','date'])
...: )
...:
...:
...: stacked = df.unstack('id')
...:
...: assert df.values.nbytes == stacked.values.nbytes
...:
In [3]: memory_usage(df)
Out[3]: 145600
In [4]: memory_usage(stacked)
Out[4]: 21600
In [7]: df.info(memory_usage='deep')
<class 'pandas.core.frame.DataFrame'>
MultiIndex: 2600 entries, (A, 2016-01-01 00:00:00) to (Z, 2016-04-09 00:00:00)
Data columns (total 1 columns):
value 2600 non-null float64
dtypes: float64(1)
memory usage: 142.2 KB
In [8]: stacked.info(memory_usage='deep')
<class 'pandas.core.frame.DataFrame'>
DatetimeIndex: 100 entries, 2016-01-01 to 2016-04-09
Freq: D
Data columns (total 26 columns):
(value, A) 100 non-null float64
(value, B) 100 non-null float64
(value, C) 100 non-null float64
(value, D) 100 non-null float64
(value, E) 100 non-null float64
(value, F) 100 non-null float64
(value, G) 100 non-null float64
(value, H) 100 non-null float64
(value, I) 100 non-null float64
(value, J) 100 non-null float64
(value, K) 100 non-null float64
(value, L) 100 non-null float64
(value, M) 100 non-null float64
(value, N) 100 non-null float64
(value, O) 100 non-null float64
(value, P) 100 non-null float64
(value, Q) 100 non-null float64
(value, R) 100 non-null float64
(value, S) 100 non-null float64
(value, T) 100 non-null float64
(value, U) 100 non-null float64
(value, V) 100 non-null float64
(value, W) 100 non-null float64
(value, X) 100 non-null float64
(value, Y) 100 non-null float64
(value, Z) 100 non-null float64
dtypes: float64(26)
memory usage: 21.1 KB
```
with this PR
```
In [2]: memory_usage(df)
Out[2]: 27088
In [3]: memory_usage(stacked)
Out[3]: 21600
In [4]: df.info(memory_usage='deep')
<class 'pandas.core.frame.DataFrame'>
MultiIndex: 2600 entries, (A, 2016-01-01 00:00:00) to (Z, 2016-04-09 00:00:00)
Data columns (total 1 columns):
value 2600 non-null float64
dtypes: float64(1)
memory usage: 26.5 KB
In [5]: stacked.info(memory_usage='deep')
<class 'pandas.core.frame.DataFrame'>
DatetimeIndex: 100 entries, 2016-01-01 to 2016-04-09
Freq: D
Data columns (total 26 columns):
(value, A) 100 non-null float64
(value, B) 100 non-null float64
(value, C) 100 non-null float64
(value, D) 100 non-null float64
(value, E) 100 non-null float64
(value, F) 100 non-null float64
(value, G) 100 non-null float64
(value, H) 100 non-null float64
(value, I) 100 non-null float64
(value, J) 100 non-null float64
(value, K) 100 non-null float64
(value, L) 100 non-null float64
(value, M) 100 non-null float64
(value, N) 100 non-null float64
(value, O) 100 non-null float64
(value, P) 100 non-null float64
(value, Q) 100 non-null float64
(value, R) 100 non-null float64
(value, S) 100 non-null float64
(value, T) 100 non-null float64
(value, U) 100 non-null float64
(value, V) 100 non-null float64
(value, W) 100 non-null float64
(value, X) 100 non-null float64
(value, Y) 100 non-null float64
(value, Z) 100 non-null float64
dtypes: float64(26)
memory usage: 21.1 KB
```
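The per-component accounting that the new `_nbytes` performs can be reproduced by hand. A hedged sketch against a modern pandas (where the `labels` attribute from this era is now called `codes`):

```python
import sys

import pandas as pd

mi = pd.MultiIndex.from_product(
    [list("AB"), pd.date_range("2016-01-01", periods=100)],
    names=["id", "date"],
)

# Sum the sizes of the pieces a MultiIndex actually stores: the level
# arrays, the integer codes into them, and the level names.  No tuple
# materialization is needed for this.
level_bytes = sum(level.memory_usage(deep=True) for level in mi.levels)
code_bytes = sum(codes.nbytes for codes in mi.codes)
name_bytes = sum(sys.getsizeof(name) for name in mi.names)
compact = level_bytes + code_bytes + name_bytes

print(compact, mi.memory_usage(deep=True))
```

The two numbers need not match exactly (later pandas versions may account for additional cached structures), but both avoid building `mi.values` as an object array of tuples, which is the cost this PR eliminates.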
| https://api.github.com/repos/pandas-dev/pandas/pulls/14308 | 2016-09-27T22:37:23Z | 2016-09-28T18:01:18Z | 2016-09-28T18:01:18Z | 2016-09-28T18:01:19Z |
API: add dtype= option to python parser | diff --git a/doc/source/io.rst b/doc/source/io.rst
index ee319092c6dd5..b1c151def26af 100644
--- a/doc/source/io.rst
+++ b/doc/source/io.rst
@@ -157,6 +157,9 @@ dtype : Type name or dict of column -> type, default ``None``
Data type for data or columns. E.g. ``{'a': np.float64, 'b': np.int32}``
(unsupported with ``engine='python'``). Use `str` or `object` to preserve and
not interpret dtype.
+
+ .. versionadded:: 0.20.0 support for the Python parser.
+
engine : {``'c'``, ``'python'``}
Parser engine to use. The C engine is faster while the python engine is
currently more feature-complete.
@@ -473,10 +476,9 @@ However, if you wanted for all the data to be coerced, no matter the type, then
using the ``converters`` argument of :func:`~pandas.read_csv` would certainly be
worth trying.
-.. note::
- The ``dtype`` option is currently only supported by the C engine.
- Specifying ``dtype`` with ``engine`` other than 'c' raises a
- ``ValueError``.
+ .. versionadded:: 0.20.0 support for the Python parser.
+
+ The ``dtype`` option is supported by the ``'python'`` engine.
.. note::
In some cases, reading in abnormal data with columns containing mixed dtypes
diff --git a/doc/source/whatsnew/v0.20.0.txt b/doc/source/whatsnew/v0.20.0.txt
index 65b62601c7022..6e3559bee728d 100644
--- a/doc/source/whatsnew/v0.20.0.txt
+++ b/doc/source/whatsnew/v0.20.0.txt
@@ -22,8 +22,17 @@ New features
~~~~~~~~~~~~
+``read_csv`` supports ``dtype`` keyword for python engine
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+The ``dtype`` keyword argument in the :func:`read_csv` function for specifying the types of parsed columns
is now supported with the ``'python'`` engine (:issue:`14295`). See the :ref:`io docs <io.dtypes>` for more information.
+.. ipython:: python
+
+ data = "a,b\n1,2\n3,4"
+ pd.read_csv(StringIO(data), engine='python').dtypes
+ pd.read_csv(StringIO(data), engine='python', dtype={'a':'float64', 'b':'object'}).dtypes
.. _whatsnew_0200.enhancements.other:
diff --git a/pandas/io/parsers.py b/pandas/io/parsers.py
index 929b360854d5b..0736535ce2d67 100755
--- a/pandas/io/parsers.py
+++ b/pandas/io/parsers.py
@@ -17,11 +17,15 @@
zip, string_types, map, u)
from pandas.types.common import (is_integer, _ensure_object,
is_list_like, is_integer_dtype,
- is_float,
- is_scalar)
+ is_float, is_dtype_equal,
+ is_object_dtype,
+ is_scalar, is_categorical_dtype)
+from pandas.types.missing import isnull
+from pandas.types.cast import _astype_nansafe
from pandas.core.index import Index, MultiIndex, RangeIndex
from pandas.core.series import Series
from pandas.core.frame import DataFrame
+from pandas.core.categorical import Categorical
from pandas.core.common import AbstractMethodError
from pandas.core.config import get_option
from pandas.io.date_converters import generic_parser
@@ -111,8 +115,9 @@
are duplicate names in the columns.
dtype : Type name or dict of column -> type, default None
Data type for data or columns. E.g. {'a': np.float64, 'b': np.int32}
- (Unsupported with engine='python'). Use `str` or `object` to preserve and
- not interpret dtype.
+ Use `str` or `object` to preserve and not interpret dtype.
+ If converters are specified, they will be applied INSTEAD
+ of dtype conversion.
%s
converters : dict, default None
Dict of functions for converting values in certain columns. Keys can either
@@ -421,6 +426,7 @@ def _read(filepath_or_buffer, kwds):
'true_values': None,
'false_values': None,
'converters': None,
+ 'dtype': None,
'skipfooter': 0,
'keep_default_na': True,
@@ -461,7 +467,6 @@ def _read(filepath_or_buffer, kwds):
'buffer_lines': None,
'error_bad_lines': True,
'warn_bad_lines': True,
- 'dtype': None,
'float_precision': None
}
@@ -476,7 +481,6 @@ def _read(filepath_or_buffer, kwds):
'buffer_lines',
'error_bad_lines',
'warn_bad_lines',
- 'dtype',
'float_precision',
])
_deprecated_args = set([
@@ -834,9 +838,6 @@ def _clean_options(self, options, engine):
" ignored as it is not supported by the 'python'"
" engine.").format(reason=fallback_reason,
option=arg)
- if arg == 'dtype':
- msg += " (Note the 'converters' option provides"\
- " similar functionality.)"
raise ValueError(msg)
del result[arg]
@@ -1285,7 +1286,7 @@ def _agg_index(self, index, try_parse_dates=True):
col_na_values, col_na_fvalues = _get_na_values(
col_name, self.na_values, self.na_fvalues)
- arr, _ = self._convert_types(arr, col_na_values | col_na_fvalues)
+ arr, _ = self._infer_types(arr, col_na_values | col_na_fvalues)
arrays.append(arr)
index = MultiIndex.from_arrays(arrays, names=self.index_names)
@@ -1293,10 +1294,15 @@ def _agg_index(self, index, try_parse_dates=True):
return index
def _convert_to_ndarrays(self, dct, na_values, na_fvalues, verbose=False,
- converters=None):
+ converters=None, dtypes=None):
result = {}
for c, values in compat.iteritems(dct):
conv_f = None if converters is None else converters.get(c, None)
+ if isinstance(dtypes, dict):
+ cast_type = dtypes.get(c, None)
+ else:
+ # single dtype or None
+ cast_type = dtypes
if self.na_filter:
col_na_values, col_na_fvalues = _get_na_values(
@@ -1304,17 +1310,35 @@ def _convert_to_ndarrays(self, dct, na_values, na_fvalues, verbose=False,
else:
col_na_values, col_na_fvalues = set(), set()
- coerce_type = True
if conv_f is not None:
+ # conv_f applied to data before inference
+ if cast_type is not None:
+ warnings.warn(("Both a converter and dtype were specified "
+ "for column {0} - only the converter will "
+ "be used").format(c), ParserWarning,
+ stacklevel=7)
+
try:
values = lib.map_infer(values, conv_f)
except ValueError:
mask = lib.ismember(values, na_values).view(np.uint8)
values = lib.map_infer_mask(values, conv_f, mask)
- coerce_type = False
- cvals, na_count = self._convert_types(
- values, set(col_na_values) | col_na_fvalues, coerce_type)
+ cvals, na_count = self._infer_types(
+ values, set(col_na_values) | col_na_fvalues,
+ try_num_bool=False)
+ else:
+ # skip inference if specified dtype is object
+ try_num_bool = not (cast_type and is_object_dtype(cast_type))
+
+ # general type inference and conversion
+ cvals, na_count = self._infer_types(
+ values, set(col_na_values) | col_na_fvalues,
+ try_num_bool)
+
+ # type specified in dtype param
+ if cast_type and not is_dtype_equal(cvals, cast_type):
+ cvals = self._cast_types(cvals, cast_type, c)
if issubclass(cvals.dtype.type, np.integer) and self.compact_ints:
cvals = lib.downcast_int64(
@@ -1326,7 +1350,23 @@ def _convert_to_ndarrays(self, dct, na_values, na_fvalues, verbose=False,
print('Filled %d NA values in column %s' % (na_count, str(c)))
return result
- def _convert_types(self, values, na_values, try_num_bool=True):
+ def _infer_types(self, values, na_values, try_num_bool=True):
+ """
+ Infer types of values, possibly casting
+
+ Parameters
+ ----------
+ values : ndarray
+ na_values : set
+ try_num_bool : bool, default True
+ try to cast values to numeric (first preference) or boolean
+
+ Returns
+ -------
+ converted : ndarray
+ na_count : int
+ """
+
na_count = 0
if issubclass(values.dtype.type, (np.number, np.bool_)):
mask = lib.ismember(values, na_values)
@@ -1340,6 +1380,7 @@ def _convert_types(self, values, na_values, try_num_bool=True):
if try_num_bool:
try:
result = lib.maybe_convert_numeric(values, na_values, False)
+ na_count = isnull(result).sum()
except Exception:
result = values
if values.dtype == np.object_:
@@ -1356,6 +1397,38 @@ def _convert_types(self, values, na_values, try_num_bool=True):
return result, na_count
+ def _cast_types(self, values, cast_type, column):
+ """
+ Cast values to specified type
+
+ Parameters
+ ----------
+ values : ndarray
+ cast_type : string or np.dtype
+ dtype to cast values to
+ column : string
+ column name - used only for error reporting
+
+ Returns
+ -------
+ converted : ndarray
+ """
+
+ if is_categorical_dtype(cast_type):
+ # XXX this is for consistency with
+ # c-parser which parses all categories
+ # as strings
+ if not is_object_dtype(values):
+ values = _astype_nansafe(values, str)
+ values = Categorical(values)
+ else:
+ try:
+ values = _astype_nansafe(values, cast_type, copy=True)
+ except ValueError:
+ raise ValueError("Unable to convert column %s to "
+ "type %s" % (column, cast_type))
+ return values
+
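To see the new code path end to end, here is a minimal usage sketch of `read_csv` with `engine='python'` and a `dtype` mapping, including the categories-parsed-as-strings behavior that `_cast_types` implements for consistency with the C parser. It runs against any pandas release that includes this feature:

```python
import io

import pandas as pd

data = "a,b,c\n1,x,3.4\n2,y,4.5"

df = pd.read_csv(
    io.StringIO(data),
    engine="python",
    dtype={"a": "float64", "b": "object", "c": "category"},
)

print(df.dtypes)
# Categories are parsed as strings for consistency with the C parser,
# so column 'c' holds the categories '3.4' and '4.5', not floats.
print(list(df["c"].cat.categories))
```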
def _do_date_conversions(self, names, data):
# returns data, columns
if self.parse_dates is not None:
@@ -1784,6 +1857,7 @@ def __init__(self, f, **kwds):
self.verbose = kwds['verbose']
self.converters = kwds['converters']
+ self.dtype = kwds['dtype']
self.compact_ints = kwds['compact_ints']
self.use_unsigned = kwds['use_unsigned']
@@ -1982,7 +2056,7 @@ def read(self, rows=None):
# DataFrame with the right metadata, even though it's length 0
names = self._maybe_dedup_names(self.orig_names)
index, columns, col_dict = _get_empty_meta(
- names, self.index_col, self.index_names)
+ names, self.index_col, self.index_names, self.dtype)
columns = self._maybe_make_multi_index_columns(
columns, self.col_names)
return index, columns, col_dict
@@ -2033,15 +2107,25 @@ def get_chunk(self, size=None):
def _convert_data(self, data):
# apply converters
- clean_conv = {}
-
- for col, f in compat.iteritems(self.converters):
- if isinstance(col, int) and col not in self.orig_names:
- col = self.orig_names[col]
- clean_conv[col] = f
+ def _clean_mapping(mapping):
+ "converts col numbers to names"
+ clean = {}
+ for col, v in compat.iteritems(mapping):
+ if isinstance(col, int) and col not in self.orig_names:
+ col = self.orig_names[col]
+ clean[col] = v
+ return clean
+
+ clean_conv = _clean_mapping(self.converters)
+ if not isinstance(self.dtype, dict):
+ # handles single dtype applied to all columns
+ clean_dtypes = self.dtype
+ else:
+ clean_dtypes = _clean_mapping(self.dtype)
return self._convert_to_ndarrays(data, self.na_values, self.na_fvalues,
- self.verbose, clean_conv)
+ self.verbose, clean_conv,
+ clean_dtypes)
def _to_recarray(self, data, columns):
dtypes = []
diff --git a/pandas/io/tests/parser/c_parser_only.py b/pandas/io/tests/parser/c_parser_only.py
index 9cbe88d4032a3..c781b0549ee60 100644
--- a/pandas/io/tests/parser/c_parser_only.py
+++ b/pandas/io/tests/parser/c_parser_only.py
@@ -12,10 +12,9 @@
import pandas as pd
import pandas.util.testing as tm
-from pandas import DataFrame, Series, Index, MultiIndex, Categorical
+from pandas import DataFrame
from pandas import compat
from pandas.compat import StringIO, range, lrange
-from pandas.types.dtypes import CategoricalDtype
class CParserTests(object):
@@ -100,29 +99,13 @@ def test_dtype_and_names_error(self):
self.read_csv(StringIO(data), sep=r'\s+', header=None,
names=['a', 'b'], dtype={'a': np.int32})
- def test_passing_dtype(self):
- # see gh-6607
+ def test_unsupported_dtype(self):
df = DataFrame(np.random.rand(5, 2), columns=list(
'AB'), index=['1A', '1B', '1C', '1D', '1E'])
- with tm.ensure_clean('__passing_str_as_dtype__.csv') as path:
+ with tm.ensure_clean('__unsupported_dtype__.csv') as path:
df.to_csv(path)
- # see gh-3795: passing 'str' as the dtype
- result = self.read_csv(path, dtype=str, index_col=0)
- tm.assert_series_equal(result.dtypes, Series(
- {'A': 'object', 'B': 'object'}))
-
- # we expect all object columns, so need to
- # convert to test for equivalence
- result = result.astype(float)
- tm.assert_frame_equal(result, df)
-
- # invalid dtype
- self.assertRaises(TypeError, self.read_csv, path,
- dtype={'A': 'foo', 'B': 'float64'},
- index_col=0)
-
# valid but we don't support it (date)
self.assertRaises(TypeError, self.read_csv, path,
dtype={'A': 'datetime64', 'B': 'float64'},
@@ -141,11 +124,6 @@ def test_passing_dtype(self):
dtype={'A': 'U8'},
index_col=0)
- # see gh-12048: empty frame
- actual = self.read_csv(StringIO('A,B'), dtype=str)
- expected = DataFrame({'A': [], 'B': []}, index=[], dtype=str)
- tm.assert_frame_equal(actual, expected)
-
def test_precise_conversion(self):
# see gh-8002
tm._skip_if_32bit()
@@ -178,104 +156,6 @@ def error(val):
self.assertTrue(sum(precise_errors) <= sum(normal_errors))
self.assertTrue(max(precise_errors) <= max(normal_errors))
- def test_pass_dtype(self):
- data = """\
-one,two
-1,2.5
-2,3.5
-3,4.5
-4,5.5"""
-
- result = self.read_csv(StringIO(data), dtype={'one': 'u1', 1: 'S1'})
- self.assertEqual(result['one'].dtype, 'u1')
- self.assertEqual(result['two'].dtype, 'object')
-
- def test_categorical_dtype(self):
- # GH 10153
- data = """a,b,c
-1,a,3.4
-1,a,3.4
-2,b,4.5"""
- expected = pd.DataFrame({'a': Categorical(['1', '1', '2']),
- 'b': Categorical(['a', 'a', 'b']),
- 'c': Categorical(['3.4', '3.4', '4.5'])})
- actual = self.read_csv(StringIO(data), dtype='category')
- tm.assert_frame_equal(actual, expected)
-
- actual = self.read_csv(StringIO(data), dtype=CategoricalDtype())
- tm.assert_frame_equal(actual, expected)
-
- actual = self.read_csv(StringIO(data), dtype={'a': 'category',
- 'b': 'category',
- 'c': CategoricalDtype()})
- tm.assert_frame_equal(actual, expected)
-
- actual = self.read_csv(StringIO(data), dtype={'b': 'category'})
- expected = pd.DataFrame({'a': [1, 1, 2],
- 'b': Categorical(['a', 'a', 'b']),
- 'c': [3.4, 3.4, 4.5]})
- tm.assert_frame_equal(actual, expected)
-
- actual = self.read_csv(StringIO(data), dtype={1: 'category'})
- tm.assert_frame_equal(actual, expected)
-
- # unsorted
- data = """a,b,c
-1,b,3.4
-1,b,3.4
-2,a,4.5"""
- expected = pd.DataFrame({'a': Categorical(['1', '1', '2']),
- 'b': Categorical(['b', 'b', 'a']),
- 'c': Categorical(['3.4', '3.4', '4.5'])})
- actual = self.read_csv(StringIO(data), dtype='category')
- tm.assert_frame_equal(actual, expected)
-
- # missing
- data = """a,b,c
-1,b,3.4
-1,nan,3.4
-2,a,4.5"""
- expected = pd.DataFrame({'a': Categorical(['1', '1', '2']),
- 'b': Categorical(['b', np.nan, 'a']),
- 'c': Categorical(['3.4', '3.4', '4.5'])})
- actual = self.read_csv(StringIO(data), dtype='category')
- tm.assert_frame_equal(actual, expected)
-
- def test_categorical_dtype_encoding(self):
- # GH 10153
- pth = tm.get_data_path('unicode_series.csv')
- encoding = 'latin-1'
- expected = self.read_csv(pth, header=None, encoding=encoding)
- expected[1] = Categorical(expected[1])
- actual = self.read_csv(pth, header=None, encoding=encoding,
- dtype={1: 'category'})
- tm.assert_frame_equal(actual, expected)
-
- pth = tm.get_data_path('utf16_ex.txt')
- encoding = 'utf-16'
- expected = self.read_table(pth, encoding=encoding)
- expected = expected.apply(Categorical)
- actual = self.read_table(pth, encoding=encoding, dtype='category')
- tm.assert_frame_equal(actual, expected)
-
- def test_categorical_dtype_chunksize(self):
- # GH 10153
- data = """a,b
-1,a
-1,b
-1,b
-2,c"""
- expecteds = [pd.DataFrame({'a': [1, 1],
- 'b': Categorical(['a', 'b'])}),
- pd.DataFrame({'a': [1, 2],
- 'b': Categorical(['b', 'c'])},
- index=[2, 3])]
- actuals = self.read_csv(StringIO(data), dtype={'b': 'category'},
- chunksize=2)
-
- for actual, expected in zip(actuals, expecteds):
- tm.assert_frame_equal(actual, expected)
-
def test_pass_dtype_as_recarray(self):
if compat.is_platform_windows() and self.low_memory:
raise nose.SkipTest(
@@ -295,66 +175,6 @@ def test_pass_dtype_as_recarray(self):
self.assertEqual(result['one'].dtype, 'u1')
self.assertEqual(result['two'].dtype, 'S1')
- def test_empty_pass_dtype(self):
- data = 'one,two'
- result = self.read_csv(StringIO(data), dtype={'one': 'u1'})
-
- expected = DataFrame({'one': np.empty(0, dtype='u1'),
- 'two': np.empty(0, dtype=np.object)})
- tm.assert_frame_equal(result, expected, check_index_type=False)
-
- def test_empty_with_index_pass_dtype(self):
- data = 'one,two'
- result = self.read_csv(StringIO(data), index_col=['one'],
- dtype={'one': 'u1', 1: 'f'})
-
- expected = DataFrame({'two': np.empty(0, dtype='f')},
- index=Index([], dtype='u1', name='one'))
- tm.assert_frame_equal(result, expected, check_index_type=False)
-
- def test_empty_with_multiindex_pass_dtype(self):
- data = 'one,two,three'
- result = self.read_csv(StringIO(data), index_col=['one', 'two'],
- dtype={'one': 'u1', 1: 'f8'})
-
- exp_idx = MultiIndex.from_arrays([np.empty(0, dtype='u1'),
- np.empty(0, dtype='O')],
- names=['one', 'two'])
- expected = DataFrame(
- {'three': np.empty(0, dtype=np.object)}, index=exp_idx)
- tm.assert_frame_equal(result, expected, check_index_type=False)
-
- def test_empty_with_mangled_column_pass_dtype_by_names(self):
- data = 'one,one'
- result = self.read_csv(StringIO(data), dtype={
- 'one': 'u1', 'one.1': 'f'})
-
- expected = DataFrame(
- {'one': np.empty(0, dtype='u1'), 'one.1': np.empty(0, dtype='f')})
- tm.assert_frame_equal(result, expected, check_index_type=False)
-
- def test_empty_with_mangled_column_pass_dtype_by_indexes(self):
- data = 'one,one'
- result = self.read_csv(StringIO(data), dtype={0: 'u1', 1: 'f'})
-
- expected = DataFrame(
- {'one': np.empty(0, dtype='u1'), 'one.1': np.empty(0, dtype='f')})
- tm.assert_frame_equal(result, expected, check_index_type=False)
-
- def test_empty_with_dup_column_pass_dtype_by_indexes(self):
- # see gh-9424
- expected = pd.concat([Series([], name='one', dtype='u1'),
- Series([], name='one.1', dtype='f')], axis=1)
-
- data = 'one,one'
- result = self.read_csv(StringIO(data), dtype={0: 'u1', 1: 'f'})
- tm.assert_frame_equal(result, expected, check_index_type=False)
-
- data = ''
- result = self.read_csv(StringIO(data), names=['one', 'one'],
- dtype={0: 'u1', 1: 'f'})
- tm.assert_frame_equal(result, expected, check_index_type=False)
-
def test_usecols_dtypes(self):
data = """\
1,2,3
@@ -400,16 +220,6 @@ def test_custom_lineterminator(self):
tm.assert_frame_equal(result, expected)
- def test_raise_on_passed_int_dtype_with_nas(self):
- # see gh-2631
- data = """YEAR, DOY, a
-2001,106380451,10
-2001,,11
-2001,106380451,67"""
- self.assertRaises(ValueError, self.read_csv, StringIO(data),
- sep=",", skipinitialspace=True,
- dtype={'DOY': np.int64})
-
def test_parse_ragged_csv(self):
data = """1,2,3
1,2,3,4
@@ -561,49 +371,3 @@ def test_internal_null_byte(self):
result = self.read_csv(StringIO(data), names=names)
tm.assert_frame_equal(result, expected)
-
- def test_empty_dtype(self):
- # see gh-14712
- data = 'a,b'
-
- expected = pd.DataFrame(columns=['a', 'b'], dtype=np.float64)
- result = self.read_csv(StringIO(data), header=0, dtype=np.float64)
- tm.assert_frame_equal(result, expected)
-
- expected = pd.DataFrame({'a': pd.Categorical([]),
- 'b': pd.Categorical([])},
- index=[])
- result = self.read_csv(StringIO(data), header=0,
- dtype='category')
- tm.assert_frame_equal(result, expected)
-
- expected = pd.DataFrame(columns=['a', 'b'], dtype='datetime64[ns]')
- result = self.read_csv(StringIO(data), header=0,
- dtype='datetime64[ns]')
- tm.assert_frame_equal(result, expected)
-
- expected = pd.DataFrame({'a': pd.Series([], dtype='timedelta64[ns]'),
- 'b': pd.Series([], dtype='timedelta64[ns]')},
- index=[])
- result = self.read_csv(StringIO(data), header=0,
- dtype='timedelta64[ns]')
- tm.assert_frame_equal(result, expected)
-
- expected = pd.DataFrame(columns=['a', 'b'])
- expected['a'] = expected['a'].astype(np.float64)
- result = self.read_csv(StringIO(data), header=0,
- dtype={'a': np.float64})
- tm.assert_frame_equal(result, expected)
-
- expected = pd.DataFrame(columns=['a', 'b'])
- expected['a'] = expected['a'].astype(np.float64)
- result = self.read_csv(StringIO(data), header=0,
- dtype={0: np.float64})
- tm.assert_frame_equal(result, expected)
-
- expected = pd.DataFrame(columns=['a', 'b'])
- expected['a'] = expected['a'].astype(np.int32)
- expected['b'] = expected['b'].astype(np.float64)
- result = self.read_csv(StringIO(data), header=0,
- dtype={'a': np.int32, 1: np.float64})
- tm.assert_frame_equal(result, expected)
diff --git a/pandas/io/tests/parser/dtypes.py b/pandas/io/tests/parser/dtypes.py
new file mode 100644
index 0000000000000..18c37b31f6480
--- /dev/null
+++ b/pandas/io/tests/parser/dtypes.py
@@ -0,0 +1,274 @@
+# -*- coding: utf-8 -*-
+
+"""
+Tests dtype specification during parsing
+for all of the parsers defined in parsers.py
+"""
+
+import numpy as np
+import pandas as pd
+import pandas.util.testing as tm
+
+from pandas import DataFrame, Series, Index, MultiIndex, Categorical
+from pandas.compat import StringIO
+from pandas.types.dtypes import CategoricalDtype
+from pandas.io.common import ParserWarning
+
+
+class DtypeTests(object):
+ def test_passing_dtype(self):
+ # see gh-6607
+ df = DataFrame(np.random.rand(5, 2).round(4), columns=list(
+ 'AB'), index=['1A', '1B', '1C', '1D', '1E'])
+
+ with tm.ensure_clean('__passing_str_as_dtype__.csv') as path:
+ df.to_csv(path)
+
+ # see gh-3795: passing 'str' as the dtype
+ result = self.read_csv(path, dtype=str, index_col=0)
+ expected = df.astype(str)
+ tm.assert_frame_equal(result, expected)
+
+ # for parsing, interpret object as str
+ result = self.read_csv(path, dtype=object, index_col=0)
+ tm.assert_frame_equal(result, expected)
+
+ # we expect all object columns, so need to
+ # convert to test for equivalence
+ result = result.astype(float)
+ tm.assert_frame_equal(result, df)
+
+ # invalid dtype
+ self.assertRaises(TypeError, self.read_csv, path,
+ dtype={'A': 'foo', 'B': 'float64'},
+ index_col=0)
+
+ # see gh-12048: empty frame
+ actual = self.read_csv(StringIO('A,B'), dtype=str)
+ expected = DataFrame({'A': [], 'B': []}, index=[], dtype=str)
+ tm.assert_frame_equal(actual, expected)
+
+ def test_pass_dtype(self):
+ data = """\
+one,two
+1,2.5
+2,3.5
+3,4.5
+4,5.5"""
+
+ result = self.read_csv(StringIO(data), dtype={'one': 'u1', 1: 'S1'})
+ self.assertEqual(result['one'].dtype, 'u1')
+ self.assertEqual(result['two'].dtype, 'object')
+
+ def test_categorical_dtype(self):
+ # GH 10153
+ data = """a,b,c
+1,a,3.4
+1,a,3.4
+2,b,4.5"""
+ expected = pd.DataFrame({'a': Categorical(['1', '1', '2']),
+ 'b': Categorical(['a', 'a', 'b']),
+ 'c': Categorical(['3.4', '3.4', '4.5'])})
+ actual = self.read_csv(StringIO(data), dtype='category')
+ tm.assert_frame_equal(actual, expected)
+
+ actual = self.read_csv(StringIO(data), dtype=CategoricalDtype())
+ tm.assert_frame_equal(actual, expected)
+
+ actual = self.read_csv(StringIO(data), dtype={'a': 'category',
+ 'b': 'category',
+ 'c': CategoricalDtype()})
+ tm.assert_frame_equal(actual, expected)
+
+ actual = self.read_csv(StringIO(data), dtype={'b': 'category'})
+ expected = pd.DataFrame({'a': [1, 1, 2],
+ 'b': Categorical(['a', 'a', 'b']),
+ 'c': [3.4, 3.4, 4.5]})
+ tm.assert_frame_equal(actual, expected)
+
+ actual = self.read_csv(StringIO(data), dtype={1: 'category'})
+ tm.assert_frame_equal(actual, expected)
+
+ # unsorted
+ data = """a,b,c
+1,b,3.4
+1,b,3.4
+2,a,4.5"""
+ expected = pd.DataFrame({'a': Categorical(['1', '1', '2']),
+ 'b': Categorical(['b', 'b', 'a']),
+ 'c': Categorical(['3.4', '3.4', '4.5'])})
+ actual = self.read_csv(StringIO(data), dtype='category')
+ tm.assert_frame_equal(actual, expected)
+
+ # missing
+ data = """a,b,c
+1,b,3.4
+1,nan,3.4
+2,a,4.5"""
+ expected = pd.DataFrame({'a': Categorical(['1', '1', '2']),
+ 'b': Categorical(['b', np.nan, 'a']),
+ 'c': Categorical(['3.4', '3.4', '4.5'])})
+ actual = self.read_csv(StringIO(data), dtype='category')
+ tm.assert_frame_equal(actual, expected)
+
+ def test_categorical_dtype_encoding(self):
+ # GH 10153
+ pth = tm.get_data_path('unicode_series.csv')
+ encoding = 'latin-1'
+ expected = self.read_csv(pth, header=None, encoding=encoding)
+ expected[1] = Categorical(expected[1])
+ actual = self.read_csv(pth, header=None, encoding=encoding,
+ dtype={1: 'category'})
+ tm.assert_frame_equal(actual, expected)
+
+ pth = tm.get_data_path('utf16_ex.txt')
+ encoding = 'utf-16'
+ expected = self.read_table(pth, encoding=encoding)
+ expected = expected.apply(Categorical)
+ actual = self.read_table(pth, encoding=encoding, dtype='category')
+ tm.assert_frame_equal(actual, expected)
+
+ def test_categorical_dtype_chunksize(self):
+ # GH 10153
+ data = """a,b
+1,a
+1,b
+1,b
+2,c"""
+ expecteds = [pd.DataFrame({'a': [1, 1],
+ 'b': Categorical(['a', 'b'])}),
+ pd.DataFrame({'a': [1, 2],
+ 'b': Categorical(['b', 'c'])},
+ index=[2, 3])]
+ actuals = self.read_csv(StringIO(data), dtype={'b': 'category'},
+ chunksize=2)
+
+ for actual, expected in zip(actuals, expecteds):
+ tm.assert_frame_equal(actual, expected)
+
+ def test_empty_pass_dtype(self):
+ data = 'one,two'
+ result = self.read_csv(StringIO(data), dtype={'one': 'u1'})
+
+ expected = DataFrame({'one': np.empty(0, dtype='u1'),
+ 'two': np.empty(0, dtype=np.object)})
+ tm.assert_frame_equal(result, expected, check_index_type=False)
+
+ def test_empty_with_index_pass_dtype(self):
+ data = 'one,two'
+ result = self.read_csv(StringIO(data), index_col=['one'],
+ dtype={'one': 'u1', 1: 'f'})
+
+ expected = DataFrame({'two': np.empty(0, dtype='f')},
+ index=Index([], dtype='u1', name='one'))
+ tm.assert_frame_equal(result, expected, check_index_type=False)
+
+ def test_empty_with_multiindex_pass_dtype(self):
+ data = 'one,two,three'
+ result = self.read_csv(StringIO(data), index_col=['one', 'two'],
+ dtype={'one': 'u1', 1: 'f8'})
+
+ exp_idx = MultiIndex.from_arrays([np.empty(0, dtype='u1'),
+ np.empty(0, dtype='O')],
+ names=['one', 'two'])
+ expected = DataFrame(
+ {'three': np.empty(0, dtype=np.object)}, index=exp_idx)
+ tm.assert_frame_equal(result, expected, check_index_type=False)
+
+ def test_empty_with_mangled_column_pass_dtype_by_names(self):
+ data = 'one,one'
+ result = self.read_csv(StringIO(data), dtype={
+ 'one': 'u1', 'one.1': 'f'})
+
+ expected = DataFrame(
+ {'one': np.empty(0, dtype='u1'), 'one.1': np.empty(0, dtype='f')})
+ tm.assert_frame_equal(result, expected, check_index_type=False)
+
+ def test_empty_with_mangled_column_pass_dtype_by_indexes(self):
+ data = 'one,one'
+ result = self.read_csv(StringIO(data), dtype={0: 'u1', 1: 'f'})
+
+ expected = DataFrame(
+ {'one': np.empty(0, dtype='u1'), 'one.1': np.empty(0, dtype='f')})
+ tm.assert_frame_equal(result, expected, check_index_type=False)
+
+ def test_empty_with_dup_column_pass_dtype_by_indexes(self):
+ # see gh-9424
+ expected = pd.concat([Series([], name='one', dtype='u1'),
+ Series([], name='one.1', dtype='f')], axis=1)
+
+ data = 'one,one'
+ result = self.read_csv(StringIO(data), dtype={0: 'u1', 1: 'f'})
+ tm.assert_frame_equal(result, expected, check_index_type=False)
+
+ data = ''
+ result = self.read_csv(StringIO(data), names=['one', 'one'],
+ dtype={0: 'u1', 1: 'f'})
+ tm.assert_frame_equal(result, expected, check_index_type=False)
+
+ def test_raise_on_passed_int_dtype_with_nas(self):
+ # see gh-2631
+ data = """YEAR, DOY, a
+2001,106380451,10
+2001,,11
+2001,106380451,67"""
+ self.assertRaises(ValueError, self.read_csv, StringIO(data),
+ sep=",", skipinitialspace=True,
+ dtype={'DOY': np.int64})
+
+ def test_dtype_with_converter(self):
+ data = """a,b
+1.1,2.2
+1.2,2.3"""
+ # dtype spec ignored if converted specified
+ with tm.assert_produces_warning(ParserWarning):
+ result = self.read_csv(StringIO(data), dtype={'a': 'i8'},
+ converters={'a': lambda x: str(x)})
+ expected = DataFrame({'a': ['1.1', '1.2'], 'b': [2.2, 2.3]})
+ tm.assert_frame_equal(result, expected)
+
+ def test_empty_dtype(self):
+ # see gh-14712
+ data = 'a,b'
+
+ expected = pd.DataFrame(columns=['a', 'b'], dtype=np.float64)
+ result = self.read_csv(StringIO(data), header=0, dtype=np.float64)
+ tm.assert_frame_equal(result, expected)
+
+ expected = pd.DataFrame({'a': pd.Categorical([]),
+ 'b': pd.Categorical([])},
+ index=[])
+ result = self.read_csv(StringIO(data), header=0,
+ dtype='category')
+ tm.assert_frame_equal(result, expected)
+
+ expected = pd.DataFrame(columns=['a', 'b'], dtype='datetime64[ns]')
+ result = self.read_csv(StringIO(data), header=0,
+ dtype='datetime64[ns]')
+ tm.assert_frame_equal(result, expected)
+
+ expected = pd.DataFrame({'a': pd.Series([], dtype='timedelta64[ns]'),
+ 'b': pd.Series([], dtype='timedelta64[ns]')},
+ index=[])
+ result = self.read_csv(StringIO(data), header=0,
+ dtype='timedelta64[ns]')
+ tm.assert_frame_equal(result, expected)
+
+ expected = pd.DataFrame(columns=['a', 'b'])
+ expected['a'] = expected['a'].astype(np.float64)
+ result = self.read_csv(StringIO(data), header=0,
+ dtype={'a': np.float64})
+ tm.assert_frame_equal(result, expected)
+
+ expected = pd.DataFrame(columns=['a', 'b'])
+ expected['a'] = expected['a'].astype(np.float64)
+ result = self.read_csv(StringIO(data), header=0,
+ dtype={0: np.float64})
+ tm.assert_frame_equal(result, expected)
+
+ expected = pd.DataFrame(columns=['a', 'b'])
+ expected['a'] = expected['a'].astype(np.int32)
+ expected['b'] = expected['b'].astype(np.float64)
+ result = self.read_csv(StringIO(data), header=0,
+ dtype={'a': np.int32, 1: np.float64})
+ tm.assert_frame_equal(result, expected)
diff --git a/pandas/io/tests/parser/test_parsers.py b/pandas/io/tests/parser/test_parsers.py
index 6001c85ae76b1..6cca2e35e1135 100644
--- a/pandas/io/tests/parser/test_parsers.py
+++ b/pandas/io/tests/parser/test_parsers.py
@@ -22,6 +22,7 @@
from .compression import CompressionTests
from .multithread import MultithreadTests
from .python_parser_only import PythonParserTests
+from .dtypes import DtypeTests
class BaseParser(CommentTests, CompressionTests,
@@ -29,7 +30,8 @@ class BaseParser(CommentTests, CompressionTests,
IndexColTests, MultithreadTests,
NAvaluesTests, ParseDatesTests,
ParserTests, SkipRowsTests,
- UsecolsTests, QuotingTests):
+ UsecolsTests, QuotingTests,
+ DtypeTests):
def read_csv(self, *args, **kwargs):
raise NotImplementedError
diff --git a/pandas/io/tests/parser/test_unsupported.py b/pandas/io/tests/parser/test_unsupported.py
index 5d60c20854a83..ffd1cfa9a2538 100644
--- a/pandas/io/tests/parser/test_unsupported.py
+++ b/pandas/io/tests/parser/test_unsupported.py
@@ -44,16 +44,6 @@ def test_c_engine(self):
data = 'a b c\n1 2 3'
msg = 'does not support'
- # specify C-unsupported options with python-unsupported option
- # (options will be ignored on fallback, raise)
- with tm.assertRaisesRegexp(ValueError, msg):
- read_table(StringIO(data), sep=None,
- delim_whitespace=False, dtype={'a': float})
- with tm.assertRaisesRegexp(ValueError, msg):
- read_table(StringIO(data), sep=r'\s', dtype={'a': float})
- with tm.assertRaisesRegexp(ValueError, msg):
- read_table(StringIO(data), skipfooter=1, dtype={'a': float})
-
# specify C engine with unsupported options (raise)
with tm.assertRaisesRegexp(ValueError, msg):
read_table(StringIO(data), engine='c',
diff --git a/pandas/parser.pyx b/pandas/parser.pyx
index 6b43dfbabc4a0..6760e822960f1 100644
--- a/pandas/parser.pyx
+++ b/pandas/parser.pyx
@@ -13,7 +13,7 @@ from cpython cimport (PyObject, PyBytes_FromString,
PyUnicode_Check, PyUnicode_AsUTF8String,
PyErr_Occurred, PyErr_Fetch)
from cpython.ref cimport PyObject, Py_XDECREF
-from io.common import ParserError, DtypeWarning, EmptyDataError
+from io.common import ParserError, DtypeWarning, EmptyDataError, ParserWarning
# Import CParserError as alias of ParserError for backwards compatibility.
# Ultimately, we want to remove this import. See gh-12665 and gh-14479.
@@ -987,7 +987,7 @@ cdef class TextReader:
Py_ssize_t i, nused
kh_str_t *na_hashset = NULL
int start, end
- object name, na_flist
+ object name, na_flist, col_dtype = None
bint na_filter = 0
Py_ssize_t num_cols
@@ -1043,14 +1043,34 @@ cdef class TextReader:
else:
na_filter = 0
+ col_dtype = None
+ if self.dtype is not None:
+ if isinstance(self.dtype, dict):
+ if name in self.dtype:
+ col_dtype = self.dtype[name]
+ elif i in self.dtype:
+ col_dtype = self.dtype[i]
+ else:
+ if self.dtype.names:
+ # structured array
+ col_dtype = np.dtype(self.dtype.descr[i][1])
+ else:
+ col_dtype = self.dtype
+
if conv:
+ if col_dtype is not None:
+ warnings.warn(("Both a converter and dtype were specified "
+ "for column {0} - only the converter will "
+ "be used").format(name), ParserWarning,
+ stacklevel=5)
results[i] = _apply_converter(conv, self.parser, i, start, end,
self.c_encoding)
continue
# Should return as the desired dtype (inferred or specified)
col_res, na_count = self._convert_tokens(
- i, start, end, name, na_filter, na_hashset, na_flist)
+ i, start, end, name, na_filter, na_hashset,
+ na_flist, col_dtype)
if na_filter:
self._free_na_set(na_hashset)
@@ -1075,32 +1095,17 @@ cdef class TextReader:
cdef inline _convert_tokens(self, Py_ssize_t i, int start, int end,
object name, bint na_filter,
kh_str_t *na_hashset,
- object na_flist):
- cdef:
- object col_dtype = None
-
- if self.dtype is not None:
- if isinstance(self.dtype, dict):
- if name in self.dtype:
- col_dtype = self.dtype[name]
- elif i in self.dtype:
- col_dtype = self.dtype[i]
- else:
- if self.dtype.names:
- # structured array
- col_dtype = np.dtype(self.dtype.descr[i][1])
- else:
- col_dtype = self.dtype
+ object na_flist, object col_dtype):
- if col_dtype is not None:
- col_res, na_count = self._convert_with_dtype(
- col_dtype, i, start, end, na_filter,
- 1, na_hashset, na_flist)
+ if col_dtype is not None:
+ col_res, na_count = self._convert_with_dtype(
+ col_dtype, i, start, end, na_filter,
+ 1, na_hashset, na_flist)
- # Fallback on the parse (e.g. we requested int dtype,
- # but its actually a float).
- if col_res is not None:
- return col_res, na_count
+ # Fallback on the parse (e.g. we requested int dtype,
+ # but its actually a float).
+ if col_res is not None:
+ return col_res, na_count
if i in self.noconvert:
return self._string_convert(i, start, end, na_filter, na_hashset)
| - [x] part of #12686
- [x] tests added / passed
- [x] passes `git diff upstream/master | flake8 --diff`
- [x] whatsnew entry
Ultimately I'm working towards #8212 (types in excel parser), which should be pretty straightforward after this.
Right now the tests are moved from `c_parser_only.py`; may need to add some more.
cc @gfyoung
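The core of the python-parser change above is normalizing a mapping that may be keyed by column name or by column position. A minimal stdlib sketch of that `_clean_mapping` logic (the standalone function name and sample data are illustrative, not the pandas API):

```python
def clean_mapping(mapping, orig_names):
    # Convert integer column keys to the corresponding column names,
    # so dtype/converters dicts can be keyed either way.
    clean = {}
    for col, value in mapping.items():
        if isinstance(col, int) and col not in orig_names:
            col = orig_names[col]
        clean[col] = value
    return clean

# mirrors test_pass_dtype above: dtype keyed by name and by position
names = ['one', 'two']
print(clean_mapping({'one': 'u1', 1: 'S1'}, names))  # -> {'one': 'u1', 'two': 'S1'}
```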
| https://api.github.com/repos/pandas-dev/pandas/pulls/14295 | 2016-09-24T18:12:19Z | 2016-11-26T09:12:22Z | 2016-11-26T09:12:22Z | 2016-11-30T01:00:57Z |
DOC: Typo fix in ordered_merge warning | diff --git a/pandas/tools/merge.py b/pandas/tools/merge.py
index 6521acbd0b733..8cdde8d92b28f 100644
--- a/pandas/tools/merge.py
+++ b/pandas/tools/merge.py
@@ -146,7 +146,7 @@ def ordered_merge(left, right, on=None,
left_by=None, right_by=None,
fill_method=None, suffixes=('_x', '_y')):
- warnings.warn("ordered_merge is deprecated and replace by merged_ordered",
+ warnings.warn("ordered_merge is deprecated and replaced by merge_ordered",
FutureWarning, stacklevel=2)
return merge_ordered(left, right, on=on,
left_on=left_on, right_on=right_on,
| https://api.github.com/repos/pandas-dev/pandas/pulls/14271 | 2016-09-21T13:52:06Z | 2016-09-22T16:39:03Z | 2016-09-22T16:39:03Z | 2016-09-24T11:41:21Z | |
Update Github issue template | diff --git a/.github/ISSUE_TEMPLATE.md b/.github/ISSUE_TEMPLATE.md
index 8a9f717e1c428..6f91eba1ad239 100644
--- a/.github/ISSUE_TEMPLATE.md
+++ b/.github/ISSUE_TEMPLATE.md
@@ -1,6 +1,15 @@
-#### Code Sample, a copy-pastable example if possible
+#### A small, complete example of the issue
+
+```python
+# Your code here
+
+```
#### Expected Output
-#### output of ``pd.show_versions()``
+#### Output of ``pd.show_versions()``
+
+<details>
+# Paste the output here
+</details>
| Mostly just using the `details` tag to reduce some clutter in all the issues, e.g.
---
#### A small, complete example of the issue
``` python
# Your code here
def fib(n):
return fib(n-1) + fib(n-2)
```
#### Expected Output
Nothing
#### Output of `pd.show_versions()`
<details>
## INSTALLED VERSIONS
commit: None
python: 3.5.2.final.0
python-bits: 64
OS: Darwin
OS-release: 15.6.0
machine: x86_64
processor: i386
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: en_US.UTF-8
pandas: 0.19.0rc1+21.ge596cbf
nose: 1.3.7
pip: 8.1.2
setuptools: 26.1.1
Cython: 0.25a0
numpy: 1.11.1
scipy: 0.18.0
statsmodels: 0.8.0rc1
xarray: 0.7.2
IPython: 5.1.0
sphinx: 1.4.6
patsy: 0.4.1
dateutil: 2.5.3
pytz: 2016.6.1
blosc: 1.4.1
bottleneck: 1.0.0
tables: 3.2.2
numexpr: 2.6.1
matplotlib: 1.5.2
openpyxl: 2.3.5
xlrd: 0.9.4
xlwt: 1.0.0
xlsxwriter: None
lxml: 3.4.4
bs4: 4.4.1
html5lib: 0.9999999
httplib2: 0.9.2
apiclient: 1.5.1
sqlalchemy: 1.0.12
pymysql: 0.7.6.None
psycopg2: 2.6.2 (dt dec pq3 ext lo64)
jinja2: 2.8
boto: 2.39.0
pandas_datareader: None
</details>
| https://api.github.com/repos/pandas-dev/pandas/pulls/14268 | 2016-09-21T12:22:46Z | 2016-09-23T21:38:17Z | 2016-09-23T21:38:17Z | 2017-04-05T02:08:33Z |
TST: Fix generator tests to run | diff --git a/pandas/computation/tests/test_eval.py b/pandas/computation/tests/test_eval.py
index c50944f0a4d3b..02ed11c65706c 100644
--- a/pandas/computation/tests/test_eval.py
+++ b/pandas/computation/tests/test_eval.py
@@ -758,21 +758,21 @@ def check_chained_cmp_op(self, lhs, cmp1, mid, cmp2, rhs):
# typecasting rules consistency with python
# issue #12388
-class TestTypeCasting(tm.TestCase):
+class TestTypeCasting(object):
def check_binop_typecasting(self, engine, parser, op, dt):
tm.skip_if_no_ne(engine)
df = mkdf(5, 3, data_gen_f=f, dtype=dt)
s = 'df {} 3'.format(op)
res = pd.eval(s, engine=engine, parser=parser)
- self.assertTrue(df.values.dtype == dt)
- self.assertTrue(res.values.dtype == dt)
+ assert df.values.dtype == dt
+ assert res.values.dtype == dt
assert_frame_equal(res, eval(s))
s = '3 {} df'.format(op)
res = pd.eval(s, engine=engine, parser=parser)
- self.assertTrue(df.values.dtype == dt)
- self.assertTrue(res.values.dtype == dt)
+ assert df.values.dtype == dt
+ assert res.values.dtype == dt
assert_frame_equal(res, eval(s))
def test_binop_typecasting(self):
| Closes #14244
This class inheriting from TestCase caused
the yield-based generator tests to not actually
run. The new output is
```
nosetests pandas/computation/tests/test_eval.py:TestTypeCasting
........................................
----------------------------------------------------------------------
Ran 40 tests in 0.264s
OK
```
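For context, nose only expands yield-based generator tests on plain classes; on a `unittest.TestCase` subclass the unittest machinery takes over, and a test method that merely returns a generator is never iterated, so its checks silently never execute. A hedged, stdlib-only illustration of that failure mode (class and method names are made up for the example; nose itself is not imported):

```python
import unittest

class PlainClassTests(object):
    # nose would expand this into one test per yielded (func, arg) pair
    def test_ops(self):
        for op in ['+', '-', '*']:
            yield self.check_op, op

    def check_op(self, op):
        assert eval('2 {} 1'.format(op)) in (1, 2, 3)

class TestCaseTests(unittest.TestCase):
    # under unittest, calling this just builds a generator object;
    # nothing inside the loop body ever runs unless it is iterated
    def test_ops(self):
        for op in ['+', '-', '*']:
            yield op

gen = TestCaseTests('test_ops').test_ops()
print(type(gen).__name__)  # -> generator (no check has executed yet)
```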
| https://api.github.com/repos/pandas-dev/pandas/pulls/14245 | 2016-09-18T14:38:51Z | 2016-09-19T20:11:32Z | 2016-09-19T20:11:32Z | 2017-04-05T02:08:32Z |
TST/TEMP: fix pyqt to 4.x for plotting tests | diff --git a/ci/requirements-2.7-64.run b/ci/requirements-2.7-64.run
index 42b5a789ae31a..ce085a6ebf91c 100644
--- a/ci/requirements-2.7-64.run
+++ b/ci/requirements-2.7-64.run
@@ -16,3 +16,4 @@ bottleneck
html5lib
beautiful-soup
jinja2=2.8
+pyqt=4.11.4
diff --git a/ci/requirements-2.7.run b/ci/requirements-2.7.run
index 560d6571b8771..eec7886fed38d 100644
--- a/ci/requirements-2.7.run
+++ b/ci/requirements-2.7.run
@@ -21,3 +21,4 @@ beautiful-soup=4.2.1
statsmodels
jinja2=2.8
xarray
+pyqt=4.11.4
diff --git a/ci/requirements-3.5-64.run b/ci/requirements-3.5-64.run
index 96de21e3daa5e..1dc88ed2c94af 100644
--- a/ci/requirements-3.5-64.run
+++ b/ci/requirements-3.5-64.run
@@ -10,3 +10,4 @@ numexpr
pytables
matplotlib
blosc
+pyqt=4.11.4
diff --git a/ci/requirements-3.5.run b/ci/requirements-3.5.run
index 333641caf26c4..d9ce708585a33 100644
--- a/ci/requirements-3.5.run
+++ b/ci/requirements-3.5.run
@@ -18,6 +18,7 @@ pymysql
psycopg2
xarray
boto
+pyqt=4.11.4
# incompat with conda ATM
# beautiful-soup
To fix breaking tests (the latest Travis build on master is broken, as are all recent PR builds), see https://github.com/matplotlib/matplotlib/issues/7124
| https://api.github.com/repos/pandas-dev/pandas/pulls/14240 | 2016-09-17T10:05:09Z | 2016-09-18T09:39:00Z | 2016-09-18T09:39:00Z | 2016-09-18T09:39:00Z |
BUG: set_levels set illegal levels. | diff --git a/doc/source/whatsnew/v0.19.0.txt b/doc/source/whatsnew/v0.19.0.txt
index 60847469aa02c..8e7e95c071ea4 100644
--- a/doc/source/whatsnew/v0.19.0.txt
+++ b/doc/source/whatsnew/v0.19.0.txt
@@ -1560,6 +1560,6 @@ Bug Fixes
- Bug in ``.to_string()`` when called with an integer ``line_width`` and ``index=False`` raises an UnboundLocalError exception because ``idx`` referenced before assignment.
- Bug in ``eval()`` where the ``resolvers`` argument would not accept a list (:issue:`14095`)
- Bugs in ``stack``, ``get_dummies``, ``make_axis_dummies`` which don't preserve categorical dtypes in (multi)indexes (:issue:`13854`)
-- ``PeridIndex`` can now accept ``list`` and ``array`` which contains ``pd.NaT`` (:issue:`13430`)
+- ``PeriodIndex`` can now accept ``list`` and ``array`` which contains ``pd.NaT`` (:issue:`13430`)
- Bug in ``df.groupby`` where ``.median()`` returns arbitrary values if grouped dataframe contains empty bins (:issue:`13629`)
- Bug in ``Index.copy()`` where ``name`` parameter was ignored (:issue:`14302`)
diff --git a/doc/source/whatsnew/v0.19.1.txt b/doc/source/whatsnew/v0.19.1.txt
index 21b9fac6ffacf..92c7746d1c023 100644
--- a/doc/source/whatsnew/v0.19.1.txt
+++ b/doc/source/whatsnew/v0.19.1.txt
@@ -31,3 +31,4 @@ Performance Improvements
Bug Fixes
~~~~~~~~~
- Bug in ``pd.concat`` where names of the ``keys`` were not propagated to the resulting ``MultiIndex`` (:issue:`14252`)
+- Bug in ``MultiIndex.set_levels`` where illegal level values were still set after raising an error (:issue:`13754`)
diff --git a/pandas/indexes/multi.py b/pandas/indexes/multi.py
index 1ab5dbb737739..0c465da24a17e 100644
--- a/pandas/indexes/multi.py
+++ b/pandas/indexes/multi.py
@@ -116,12 +116,27 @@ def __new__(cls, levels=None, labels=None, sortorder=None, names=None,
return result
- def _verify_integrity(self):
- """Raises ValueError if length of levels and labels don't match or any
- label would exceed level bounds"""
+ def _verify_integrity(self, labels=None, levels=None):
+ """
+
+ Parameters
+ ----------
+ labels : optional list
+ Labels to check for validity. Defaults to current labels.
+ levels : optional list
+ Levels to check for validity. Defaults to current levels.
+
+ Raises
+ ------
+ ValueError
+ * if length of levels and labels don't match or any label would
+ exceed level bounds
+ """
# NOTE: Currently does not check, among other things, that cached
# nlevels matches nor that sortorder matches actually sortorder.
- labels, levels = self.labels, self.levels
+ labels = labels or self.labels
+ levels = levels or self.levels
+
if len(levels) != len(labels):
raise ValueError("Length of levels and labels must match. NOTE:"
" this index is in an inconsistent state.")
@@ -162,6 +177,9 @@ def _set_levels(self, levels, level=None, copy=False, validate=True,
new_levels[l] = _ensure_index(v, copy=copy)._shallow_copy()
new_levels = FrozenList(new_levels)
+ if verify_integrity:
+ self._verify_integrity(levels=new_levels)
+
names = self.names
self._levels = new_levels
if any(names):
@@ -170,9 +188,6 @@ def _set_levels(self, levels, level=None, copy=False, validate=True,
self._tuples = None
self._reset_cache()
- if verify_integrity:
- self._verify_integrity()
-
def set_levels(self, levels, level=None, inplace=False,
verify_integrity=True):
"""
@@ -268,13 +283,13 @@ def _set_labels(self, labels, level=None, copy=False, validate=True,
lab, lev, copy=copy)._shallow_copy()
new_labels = FrozenList(new_labels)
+ if verify_integrity:
+ self._verify_integrity(labels=new_labels)
+
self._labels = new_labels
self._tuples = None
self._reset_cache()
- if verify_integrity:
- self._verify_integrity()
-
def set_labels(self, labels, level=None, inplace=False,
verify_integrity=True):
"""
diff --git a/pandas/tests/indexes/test_multi.py b/pandas/tests/indexes/test_multi.py
index cd9ce0102ca1e..fdc5a2eaec812 100644
--- a/pandas/tests/indexes/test_multi.py
+++ b/pandas/tests/indexes/test_multi.py
@@ -149,14 +149,14 @@ def test_set_levels(self):
levels = self.index.levels
new_levels = [[lev + 'a' for lev in level] for level in levels]
- def assert_matching(actual, expected):
+ def assert_matching(actual, expected, check_dtype=False):
# avoid specifying internal representation
# as much as possible
self.assertEqual(len(actual), len(expected))
for act, exp in zip(actual, expected):
act = np.asarray(act)
- exp = np.asarray(exp, dtype=np.object_)
- tm.assert_numpy_array_equal(act, exp)
+ exp = np.asarray(exp)
+ tm.assert_numpy_array_equal(act, exp, check_dtype=check_dtype)
# level changing [w/o mutation]
ind2 = self.index.set_levels(new_levels)
@@ -204,6 +204,31 @@ def assert_matching(actual, expected):
assert_matching(ind2.levels, new_levels)
assert_matching(self.index.levels, levels)
+ # illegal level changing should not change levels
+ # GH 13754
+ original_index = self.index.copy()
+ for inplace in [True, False]:
+ with assertRaisesRegexp(ValueError, "^On"):
+ self.index.set_levels(['c'], level=0, inplace=inplace)
+ assert_matching(self.index.levels, original_index.levels,
+ check_dtype=True)
+
+ with assertRaisesRegexp(ValueError, "^On"):
+ self.index.set_labels([0, 1, 2, 3, 4, 5], level=0,
+ inplace=inplace)
+ assert_matching(self.index.labels, original_index.labels,
+ check_dtype=True)
+
+ with assertRaisesRegexp(TypeError, "^Levels"):
+ self.index.set_levels('c', level=0, inplace=inplace)
+ assert_matching(self.index.levels, original_index.levels,
+ check_dtype=True)
+
+ with assertRaisesRegexp(TypeError, "^Labels"):
+ self.index.set_labels(1, level=0, inplace=inplace)
+ assert_matching(self.index.labels, original_index.labels,
+ check_dtype=True)
+
def test_set_labels(self):
# side note - you probably wouldn't want to use levels and labels
# directly like this - but it is possible.
| - [x] closes #13754
- [x] tests added / passed
- [x] passes `git diff upstream/master | flake8 --diff`
- [x] whatsnew entry
`MultiIndex.set_levels`, when given illegal level values, raises an error.
When `inplace=True`, though, the illegal level values are still accepted. This
commit fixes that behavior by checking that the proposed level values are legal
before setting them.
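A minimal standalone sketch of the guarded behavior (run on a recent pandas; the index contents here are only illustrative, not taken from the test suite):

```python
import pandas as pd

idx = pd.MultiIndex.from_arrays([["a", "a", "b"], [1, 2, 1]])

# Passing a single level value while the codes still reference two
# positions should raise, and must leave the index untouched.
try:
    idx.set_levels(["x"], level=0)
    raised = False
except ValueError:
    raised = True

print(raised)               # the illegal assignment was rejected
print(list(idx.levels[0]))  # original levels survive
```

The point of the fix is the second print: verifying integrity *before* mutating means a failed `set_levels` cannot leave the index half-updated.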
| https://api.github.com/repos/pandas-dev/pandas/pulls/14236 | 2016-09-16T18:53:30Z | 2016-10-10T12:30:22Z | 2016-10-10T12:30:22Z | 2016-10-10T12:30:37Z |
ENH: Allow usecols to accept callable (GH14154) | diff --git a/doc/source/io.rst b/doc/source/io.rst
index f22374553e9c3..75f36c5274cd2 100644
--- a/doc/source/io.rst
+++ b/doc/source/io.rst
@@ -126,13 +126,23 @@ index_col : int or sequence or ``False``, default ``None``
MultiIndex is used. If you have a malformed file with delimiters at the end of
each line, you might consider ``index_col=False`` to force pandas to *not* use
the first column as the index (row names).
-usecols : array-like, default ``None``
- Return a subset of the columns. All elements in this array must either
+usecols : array-like or callable, default ``None``
+ Return a subset of the columns. If array-like, all elements must either
be positional (i.e. integer indices into the document columns) or strings
that correspond to column names provided either by the user in `names` or
- inferred from the document header row(s). For example, a valid `usecols`
- parameter would be [0, 1, 2] or ['foo', 'bar', 'baz']. Using this parameter
- results in much faster parsing time and lower memory usage.
+ inferred from the document header row(s). For example, a valid array-like
+ `usecols` parameter would be [0, 1, 2] or ['foo', 'bar', 'baz'].
+
+ If callable, the callable function will be evaluated against the column names,
+ returning names where the callable function evaluates to True:
+
+ .. ipython:: python
+
+ data = 'col1,col2,col3\na,b,1\na,b,2\nc,d,3'
+ pd.read_csv(StringIO(data))
+ pd.read_csv(StringIO(data), usecols=lambda x: x.upper() in ['COL1', 'COL3'])
+
+ Using this parameter results in much faster parsing time and lower memory usage.
as_recarray : boolean, default ``False``
DEPRECATED: this argument will be removed in a future version. Please call
``pd.read_csv(...).to_records()`` instead.
@@ -617,7 +627,9 @@ Filtering columns (``usecols``)
+++++++++++++++++++++++++++++++
The ``usecols`` argument allows you to select any subset of the columns in a
-file, either using the column names or position numbers:
+file, either using the column names, position numbers or a callable:
+
+.. versionadded:: 0.20.0 support for callable `usecols` arguments
.. ipython:: python
@@ -625,6 +637,7 @@ file, either using the column names or position numbers:
pd.read_csv(StringIO(data))
pd.read_csv(StringIO(data), usecols=['b', 'd'])
pd.read_csv(StringIO(data), usecols=[0, 2, 3])
+ pd.read_csv(StringIO(data), usecols=lambda x: x.upper() in ['A', 'C'])
Comments and Empty Lines
''''''''''''''''''''''''
diff --git a/doc/source/whatsnew/v0.20.0.txt b/doc/source/whatsnew/v0.20.0.txt
index 9774c3ec9cc7f..0bfd755aae40c 100644
--- a/doc/source/whatsnew/v0.20.0.txt
+++ b/doc/source/whatsnew/v0.20.0.txt
@@ -52,6 +52,7 @@ Other enhancements
- ``pd.read_excel`` now preserves sheet order when using ``sheetname=None`` (:issue:`9930`)
- ``pd.cut`` and ``pd.qcut`` now support datetime64 and timedelta64 dtypes (issue:`14714`)
- ``Series`` provides a ``to_excel`` method to output Excel files (:issue:`8825`)
+- The ``usecols`` argument in ``pd.read_csv`` now accepts a callable function as a value (:issue:`14154`)
.. _whatsnew_0200.api_breaking:
@@ -106,4 +107,4 @@ Performance Improvements
.. _whatsnew_0200.bug_fixes:
Bug Fixes
-~~~~~~~~~
\ No newline at end of file
+~~~~~~~~~
diff --git a/pandas/io/parsers.py b/pandas/io/parsers.py
index ef839297c80d3..30443f894a64d 100755
--- a/pandas/io/parsers.py
+++ b/pandas/io/parsers.py
@@ -90,13 +90,18 @@
MultiIndex is used. If you have a malformed file with delimiters at the end
of each line, you might consider index_col=False to force pandas to _not_
use the first column as the index (row names)
-usecols : array-like, default None
- Return a subset of the columns. All elements in this array must either
+usecols : array-like or callable, default None
+ Return a subset of the columns. If array-like, all elements must either
be positional (i.e. integer indices into the document columns) or strings
that correspond to column names provided either by the user in `names` or
- inferred from the document header row(s). For example, a valid `usecols`
- parameter would be [0, 1, 2] or ['foo', 'bar', 'baz']. Using this parameter
- results in much faster parsing time and lower memory usage.
+ inferred from the document header row(s). For example, a valid array-like
+ `usecols` parameter would be [0, 1, 2] or ['foo', 'bar', 'baz'].
+
+ If callable, the callable function will be evaluated against the column
+ names, returning names where the callable function evaluates to True. An
+ example of a valid callable argument would be ``lambda x: x.upper() in
+ ['AAA', 'BBB', 'DDD']``. Using this parameter results in much faster
+ parsing time and lower memory usage.
as_recarray : boolean, default False
DEPRECATED: this argument will be removed in a future version. Please call
`pd.read_csv(...).to_records()` instead.
@@ -977,17 +982,33 @@ def _is_index_col(col):
return col is not None and col is not False
+def _evaluate_usecols(usecols, names):
+ """
+ Check whether or not the 'usecols' parameter
+ is a callable. If so, enumerates the 'names'
+ parameter and returns a set of indices for
+ each entry in 'names' that evaluates to True.
+ If not a callable, returns 'usecols'.
+ """
+ if callable(usecols):
+ return set([i for i, name in enumerate(names)
+ if usecols(name)])
+ return usecols
+
+
def _validate_usecols_arg(usecols):
"""
Check whether or not the 'usecols' parameter
- contains all integers (column selection by index)
- or strings (column by name). Raises a ValueError
- if that is not the case.
+ contains all integers (column selection by index),
+ strings (column by name) or is a callable. Raises
+ a ValueError if that is not the case.
"""
- msg = ("The elements of 'usecols' must "
- "either be all strings, all unicode, or all integers")
+ msg = ("'usecols' must either be all strings, all unicode, "
+ "all integers or a callable")
if usecols is not None:
+ if callable(usecols):
+ return usecols
usecols_dtype = lib.infer_dtype(usecols)
if usecols_dtype not in ('empty', 'integer',
'string', 'unicode'):
@@ -1499,11 +1520,12 @@ def __init__(self, src, **kwds):
self.orig_names = self.names[:]
if self.usecols:
- if len(self.names) > len(self.usecols):
+ usecols = _evaluate_usecols(self.usecols, self.orig_names)
+ if len(self.names) > len(usecols):
self.names = [n for i, n in enumerate(self.names)
- if (i in self.usecols or n in self.usecols)]
+ if (i in usecols or n in usecols)]
- if len(self.names) < len(self.usecols):
+ if len(self.names) < len(usecols):
raise ValueError("Usecols do not match names.")
self._set_noconvert_columns()
@@ -1665,9 +1687,10 @@ def read(self, nrows=None):
def _filter_usecols(self, names):
# hackish
- if self.usecols is not None and len(names) != len(self.usecols):
+ usecols = _evaluate_usecols(self.usecols, names)
+ if usecols is not None and len(names) != len(usecols):
names = [name for i, name in enumerate(names)
- if i in self.usecols or name in self.usecols]
+ if i in usecols or name in usecols]
return names
def _get_index_names(self):
@@ -2291,7 +2314,9 @@ def _handle_usecols(self, columns, usecols_key):
usecols_key is used if there are string usecols.
"""
if self.usecols is not None:
- if any([isinstance(col, string_types) for col in self.usecols]):
+ if callable(self.usecols):
+ col_indices = _evaluate_usecols(self.usecols, usecols_key)
+ elif any([isinstance(u, string_types) for u in self.usecols]):
if len(columns) > 1:
raise ValueError("If using multiple headers, usecols must "
"be integers.")
diff --git a/pandas/io/tests/parser/usecols.py b/pandas/io/tests/parser/usecols.py
index 5051171ccb8f0..26b4b5b8ec7d1 100644
--- a/pandas/io/tests/parser/usecols.py
+++ b/pandas/io/tests/parser/usecols.py
@@ -23,8 +23,9 @@ def test_raise_on_mixed_dtype_usecols(self):
1000,2000,3000
4000,5000,6000
"""
- msg = ("The elements of 'usecols' must "
- "either be all strings, all unicode, or all integers")
+
+ msg = ("'usecols' must either be all strings, all unicode, "
+ "all integers or a callable")
usecols = [0, 'b', 2]
with tm.assertRaisesRegexp(ValueError, msg):
@@ -302,8 +303,8 @@ def test_usecols_with_mixed_encoding_strings(self):
3.568935038,7,False,a
'''
- msg = ("The elements of 'usecols' must "
- "either be all strings, all unicode, or all integers")
+ msg = ("'usecols' must either be all strings, all unicode, "
+ "all integers or a callable")
with tm.assertRaisesRegexp(ValueError, msg):
self.read_csv(StringIO(s), usecols=[u'AAA', b'BBB'])
@@ -366,3 +367,31 @@ def test_np_array_usecols(self):
expected = DataFrame([[1, 2]], columns=usecols)
result = self.read_csv(StringIO(data), usecols=usecols)
tm.assert_frame_equal(result, expected)
+
+ def test_callable_usecols(self):
+ # See gh-14154
+ s = '''AaA,bBb,CCC,ddd
+ 0.056674973,8,True,a
+ 2.613230982,2,False,b
+ 3.568935038,7,False,a
+ '''
+
+ data = {
+ 'AaA': {
+ 0: 0.056674972999999997,
+ 1: 2.6132309819999997,
+ 2: 3.5689350380000002
+ },
+ 'bBb': {0: 8, 1: 2, 2: 7},
+ 'ddd': {0: 'a', 1: 'b', 2: 'a'}
+ }
+ expected = DataFrame(data)
+ df = self.read_csv(StringIO(s), usecols=lambda x:
+ x.upper() in ['AAA', 'BBB', 'DDD'])
+ tm.assert_frame_equal(df, expected)
+
+ # Check that a callable returning only False returns
+ # an empty DataFrame
+ expected = DataFrame()
+ df = self.read_csv(StringIO(s), usecols=lambda x: False)
+ tm.assert_frame_equal(df, expected)
diff --git a/pandas/parser.pyx b/pandas/parser.pyx
index 6760e822960f1..d94a4ef278dee 100644
--- a/pandas/parser.pyx
+++ b/pandas/parser.pyx
@@ -300,8 +300,9 @@ cdef class TextReader:
object compression
object mangle_dupe_cols
object tupleize_cols
+ object usecols
list dtype_cast_order
- set noconvert, usecols
+ set noconvert
def __cinit__(self, source,
delimiter=b',',
@@ -437,7 +438,10 @@ cdef class TextReader:
# suboptimal
if usecols is not None:
self.has_usecols = 1
- self.usecols = set(usecols)
+ if callable(usecols):
+ self.usecols = usecols
+ else:
+ self.usecols = set(usecols)
# XXX
if skipfooter > 0:
@@ -701,7 +705,6 @@ cdef class TextReader:
cdef StringPath path = _string_path(self.c_encoding)
header = []
-
if self.parser.header_start >= 0:
# Header is in the file
@@ -821,7 +824,8 @@ cdef class TextReader:
# 'data has %d fields'
# % (passed_count, field_count))
- if self.has_usecols and self.allow_leading_cols:
+ if self.has_usecols and self.allow_leading_cols and \
+ not callable(self.usecols):
nuse = len(self.usecols)
if nuse == passed_count:
self.leading_cols = 0
@@ -1019,13 +1023,20 @@ cdef class TextReader:
if i < self.leading_cols:
# Pass through leading columns always
name = i
- elif self.usecols and nused == len(self.usecols):
+ elif self.usecols and not callable(self.usecols) and \
+ nused == len(self.usecols):
# Once we've gathered all requested columns, stop. GH5766
break
else:
name = self._get_column_name(i, nused)
- if self.has_usecols and not (i in self.usecols or
- name in self.usecols):
+ usecols = set()
+ if callable(self.usecols):
+ if self.usecols(name):
+ usecols = set([i])
+ else:
+ usecols = self.usecols
+ if self.has_usecols and not (i in usecols or
+ name in usecols):
continue
nused += 1
- [x] closes #14154
- [x] tests added / passed
- [x] passes `git diff upstream/master | flake8 --diff`
- [x] whatsnew entry
<img width="1515" alt="asv_bench" src="https://cloud.githubusercontent.com/assets/609873/18575075/942a9240-7ba1-11e6-9dca-bab8b9987f31.png">
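For reference, a small self-contained illustration of the callable form on a current pandas (the data and column names are made up):

```python
import io
import pandas as pd

data = "col1,col2,col3\na,b,1\na,b,2\nc,d,3"

# The callable is evaluated against each column name; only columns for
# which it returns True are parsed.
df = pd.read_csv(io.StringIO(data),
                 usecols=lambda name: name.upper() in ["COL1", "COL3"])
print(list(df.columns))  # ['col1', 'col3']
```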
| https://api.github.com/repos/pandas-dev/pandas/pulls/14234 | 2016-09-16T04:08:04Z | 2016-12-06T11:38:06Z | 2016-12-06T11:38:06Z | 2017-12-12T15:39:47Z |
DOC: added example to Series.map showing use of na_action parameter (GH14231) | diff --git a/pandas/core/series.py b/pandas/core/series.py
index 8379c8bcdcae8..1c6b13885dd01 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -2099,10 +2099,19 @@ def map(self, arg, na_action=None):
----------
arg : function, dict, or Series
na_action : {None, 'ignore'}
- If 'ignore', propagate NA values
+ If 'ignore', propagate NA values, without passing them to the
+ mapping function
+
+ Returns
+ -------
+ y : Series
+ same index as caller
Examples
--------
+
+ Map inputs to outputs
+
>>> x
one 1
two 2
@@ -2118,10 +2127,27 @@ def map(self, arg, na_action=None):
two bar
three baz
- Returns
- -------
- y : Series
- same index as caller
+ Use na_action to control whether NA values are affected by the mapping
+ function.
+
+ >>> s = pd.Series([1, 2, 3, np.nan])
+
+ >>> s.map(lambda x: 'this is a string {}'.format(x),
+ ... na_action=None)
+ 0 this is a string 1.0
+ 1 this is a string 2.0
+ 2 this is a string 3.0
+ 3 this is a string nan
+ dtype: object
+
+ >>> s.map(lambda x: 'this is a string {}'.format(x),
+ ... na_action='ignore')
+ 0 this is a string 1.0
+ 1 this is a string 2.0
+ 2 this is a string 3.0
+ 3 NaN
+ dtype: object
+
"""
if is_extension_type(self.dtype):
| - [x] closes #14231
- [x] tests passed
- [x] passes `git diff upstream/master | flake8 --diff`
Added example to Series.map showing use of na_action parameter.
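A runnable version of the documented behavior (sketch on a current pandas):

```python
import numpy as np
import pandas as pd

s = pd.Series([1.0, 2.0, np.nan])

# With na_action='ignore', NaN is propagated without ever being passed
# to the mapping function; with na_action=None it would be formatted too.
mapped = s.map("value: {}".format, na_action="ignore")
print(mapped.iloc[0])           # 'value: 1.0'
print(pd.isna(mapped.iloc[2]))  # True
```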
| https://api.github.com/repos/pandas-dev/pandas/pulls/14232 | 2016-09-15T23:16:35Z | 2016-09-16T23:20:39Z | 2016-09-16T23:20:39Z | 2016-12-14T05:11:15Z |
BUG: fix alignment in series ops (GH14227) | diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index a3cac2d6f9f2f..4aa1ac4a47090 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -20,6 +20,7 @@
is_numeric_dtype,
is_datetime64_dtype,
is_timedelta64_dtype,
+ is_datetime64tz_dtype,
is_list_like,
is_dict_like,
is_re_compilable)
@@ -4438,13 +4439,23 @@ def _align_frame(self, other, join='outer', axis=None, level=None,
left = left.fillna(axis=fill_axis, method=method, limit=limit)
right = right.fillna(axis=fill_axis, method=method, limit=limit)
+ # if DatetimeIndex have different tz, convert to UTC
+ if is_datetime64tz_dtype(left.index):
+ if left.index.tz != right.index.tz:
+ if join_index is not None:
+ left.index = join_index
+ right.index = join_index
+
return left.__finalize__(self), right.__finalize__(other)
def _align_series(self, other, join='outer', axis=None, level=None,
copy=True, fill_value=None, method=None, limit=None,
fill_axis=0):
+
+ is_series = isinstance(self, ABCSeries)
+
# series/series compat, other must always be a Series
- if isinstance(self, ABCSeries):
+ if is_series:
if axis:
raise ValueError('cannot align series to a series other than '
'axis 0')
@@ -4503,6 +4514,15 @@ def _align_series(self, other, join='outer', axis=None, level=None,
left = left.fillna(fill_value, method=method, limit=limit,
axis=fill_axis)
right = right.fillna(fill_value, method=method, limit=limit)
+
+ # if DatetimeIndex have different tz, convert to UTC
+ if is_series or (not is_series and axis == 0):
+ if is_datetime64tz_dtype(left.index):
+ if left.index.tz != right.index.tz:
+ if join_index is not None:
+ left.index = join_index
+ right.index = join_index
+
return left.__finalize__(self), right.__finalize__(other)
def _where(self, cond, other=np.nan, inplace=False, axis=None, level=None,
diff --git a/pandas/core/ops.py b/pandas/core/ops.py
index 237b9394dfc25..7cff1104c50be 100644
--- a/pandas/core/ops.py
+++ b/pandas/core/ops.py
@@ -622,12 +622,6 @@ def _align_method_SERIES(left, right, align_asobject=False):
left, right = left.align(right, copy=False)
- index, lidx, ridx = left.index.join(right.index, how='outer',
- return_indexers=True)
- # if DatetimeIndex have different tz, convert to UTC
- left.index = index
- right.index = index
-
return left, right
diff --git a/pandas/tests/series/test_operators.py b/pandas/tests/series/test_operators.py
index 24c26276ea24d..f688ec2d43789 100644
--- a/pandas/tests/series/test_operators.py
+++ b/pandas/tests/series/test_operators.py
@@ -1810,3 +1810,11 @@ def test_dti_tz_convert_to_utc(self):
res = Series([1, 2], index=idx1) + Series([1, 1], index=idx2)
assert_series_equal(res, Series([np.nan, 3, np.nan], index=base))
+
+ def test_op_duplicate_index(self):
+ # GH14227
+ s1 = Series([1, 2], index=[1, 1])
+ s2 = Series([10, 10], index=[1, 2])
+ result = s1 + s2
+ expected = pd.Series([11, 12, np.nan], index=[1, 1, 2])
+ assert_series_equal(result, expected)
diff --git a/pandas/tseries/tests/test_timezones.py b/pandas/tseries/tests/test_timezones.py
index b8247fe01b3f2..a85a606075911 100644
--- a/pandas/tseries/tests/test_timezones.py
+++ b/pandas/tseries/tests/test_timezones.py
@@ -1290,6 +1290,28 @@ def test_align_aware(self):
self.assertEqual(df1.index.tz, new1.index.tz)
self.assertEqual(df2.index.tz, new2.index.tz)
+ # different timezones convert to UTC
+
+ # frame
+ df1_central = df1.tz_convert('US/Central')
+ new1, new2 = df1.align(df1_central)
+ self.assertEqual(new1.index.tz, pytz.UTC)
+ self.assertEqual(new2.index.tz, pytz.UTC)
+
+ # series
+ new1, new2 = df1[0].align(df1_central[0])
+ self.assertEqual(new1.index.tz, pytz.UTC)
+ self.assertEqual(new2.index.tz, pytz.UTC)
+
+ # combination
+ new1, new2 = df1.align(df1_central[0], axis=0)
+ self.assertEqual(new1.index.tz, pytz.UTC)
+ self.assertEqual(new2.index.tz, pytz.UTC)
+
+ new1, new2 = df1[0].align(df1_central, axis=0)
+ self.assertEqual(new1.index.tz, pytz.UTC)
+ self.assertEqual(new2.index.tz, pytz.UTC)
+
def test_append_aware(self):
rng1 = date_range('1/1/2011 01:00', periods=1, freq='H',
tz='US/Eastern')
| - [x] closes #14227
- [x] tests added / passed
- [x] passes `git diff upstream/master | flake8 --diff`
- [x] whatsnew entry
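The duplicate-index case exercised by the new test, as a standalone sketch:

```python
import numpy as np
import pandas as pd

s1 = pd.Series([1, 2], index=[1, 1])
s2 = pd.Series([10, 10], index=[1, 2])

# The duplicated label 1 aligns against both of its occurrences on the
# left, and the unmatched label 2 yields NaN.
result = s1 + s2
print(list(result.index))  # [1, 1, 2]
```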
| https://api.github.com/repos/pandas-dev/pandas/pulls/14230 | 2016-09-15T17:55:24Z | 2016-09-30T21:23:35Z | 2016-09-30T21:23:35Z | 2016-09-30T21:23:56Z |
DOC: #14195. to_csv warns regarding quoting behaviour for floats | diff --git a/doc/source/io.rst b/doc/source/io.rst
index d436fa52918d3..3661d4b4cdff7 100644
--- a/doc/source/io.rst
+++ b/doc/source/io.rst
@@ -1481,7 +1481,7 @@ function takes a number of arguments. Only the first is required.
- ``encoding``: a string representing the encoding to use if the contents are
non-ASCII, for python versions prior to 3
- ``line_terminator``: Character sequence denoting line end (default '\\n')
- - ``quoting``: Set quoting rules as in csv module (default csv.QUOTE_MINIMAL)
+ - ``quoting``: Set quoting rules as in csv module (default csv.QUOTE_MINIMAL). Note that if you have set a `float_format` then floats are converted to strings and csv.QUOTE_NONNUMERIC will treat them as non-numeric
- ``quotechar``: Character used to quote fields (default '"')
- ``doublequote``: Control quoting of ``quotechar`` in fields (default True)
- ``escapechar``: Character used to escape ``sep`` and ``quotechar`` when
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 1cc689528caaa..0b446c26c977d 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -1345,7 +1345,9 @@ def to_csv(self, path_or_buf=None, sep=",", na_rep='', float_format=None,
The newline character or character sequence to use in the output
file
quoting : optional constant from csv module
- defaults to csv.QUOTE_MINIMAL
+ defaults to csv.QUOTE_MINIMAL. If you have set a `float_format`
+ then floats are converted to strings and thus csv.QUOTE_NONNUMERIC
+ will treat them as non-numeric
quotechar : string (length 1), default '\"'
character used to quote fields
doublequote : boolean, default True
| - [x] closes #14195
- [ ] passes `git diff upstream/master | flake8 --diff`
Added a small warning that if `float_format` is set then floats will be quoted even if csv.QUOTE_NONNUMERIC is set
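The interaction can be seen directly (a sketch on a current pandas; the frame contents are arbitrary):

```python
import csv
import io
import pandas as pd

df = pd.DataFrame({"a": [1.5, 2.25]})

# float_format converts the floats to strings before quoting is applied,
# so csv.QUOTE_NONNUMERIC then quotes them as if they were non-numeric.
out = df.to_csv(float_format="%.2f", quoting=csv.QUOTE_NONNUMERIC,
                index=False)
print(out)
```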
| https://api.github.com/repos/pandas-dev/pandas/pulls/14228 | 2016-09-15T16:45:57Z | 2016-10-06T16:04:57Z | 2016-10-06T16:04:57Z | 2016-10-06T16:04:58Z |
BUG: GH13629 Binned groupby median function calculates median on empt… | diff --git a/doc/source/whatsnew/v0.19.0.txt b/doc/source/whatsnew/v0.19.0.txt
index d3239c4562765..6933cbedb5d67 100644
--- a/doc/source/whatsnew/v0.19.0.txt
+++ b/doc/source/whatsnew/v0.19.0.txt
@@ -1572,3 +1572,4 @@ Bug Fixes
- Bug in ``eval()`` where the ``resolvers`` argument would not accept a list (:issue:`14095`)
- Bugs in ``stack``, ``get_dummies``, ``make_axis_dummies`` which don't preserve categorical dtypes in (multi)indexes (:issue:`13854`)
- ``PeridIndex`` can now accept ``list`` and ``array`` which contains ``pd.NaT`` (:issue:`13430`)
+- Bug in ``df.groupby`` where ``.median()`` returns arbitrary values if grouped dataframe contains empty bins (:issue:`13629`)
diff --git a/pandas/algos.pyx b/pandas/algos.pyx
index de5c5fc661d4d..8710ef34504d1 100644
--- a/pandas/algos.pyx
+++ b/pandas/algos.pyx
@@ -992,7 +992,7 @@ def is_lexsorted(list list_of_arrays):
def groupby_indices(dict ids, ndarray[int64_t] labels,
ndarray[int64_t] counts):
"""
- turn group_labels output into a combined indexer maping the labels to
+ turn group_labels output into a combined indexer mapping the labels to
indexers
Parameters
@@ -1313,6 +1313,9 @@ cdef inline float64_t _median_linear(float64_t* a, int n):
cdef float64_t result
cdef float64_t* tmp
+ if n == 0:
+ return NaN
+
# count NAs
for i in range(n):
if a[i] != a[i]:
diff --git a/pandas/core/groupby.py b/pandas/core/groupby.py
index 66e30229cd52b..7ed84b970d9c3 100644
--- a/pandas/core/groupby.py
+++ b/pandas/core/groupby.py
@@ -4424,12 +4424,13 @@ def _reorder_by_uniques(uniques, labels):
def _groupby_indices(values):
if is_categorical_dtype(values):
-
# we have a categorical, so we can do quite a bit
# bit better than factorizing again
reverse = dict(enumerate(values.categories))
codes = values.codes.astype('int64')
- _, counts = _hash.value_count_int64(codes, False)
+
+ mask = 0 <= codes
+ counts = np.bincount(codes[mask], minlength=values.categories.size)
else:
reverse, codes, counts = _algos.group_labels(
_values_from_object(_ensure_object(values)))
diff --git a/pandas/tests/test_groupby.py b/pandas/tests/test_groupby.py
index 9d8873d843642..492326d0898f0 100644
--- a/pandas/tests/test_groupby.py
+++ b/pandas/tests/test_groupby.py
@@ -799,6 +799,17 @@ def test_get_group(self):
self.assertRaises(ValueError,
lambda: g.get_group(('foo', 'bar', 'baz')))
+ def test_get_group_empty_bins(self):
+ d = pd.DataFrame([3, 1, 7, 6])
+ bins = [0, 5, 10, 15]
+ g = d.groupby(pd.cut(d[0], bins))
+
+ result = g.get_group('(0, 5]')
+ expected = DataFrame([3, 1], index=[0, 1])
+ assert_frame_equal(result, expected)
+
+ self.assertRaises(KeyError, lambda: g.get_group('(10, 15]'))
+
def test_get_group_grouped_by_tuple(self):
# GH 8121
df = DataFrame([[(1, ), (1, 2), (1, ), (1, 2)]], index=['ids']).T
@@ -4415,6 +4426,16 @@ def test_cython_median(self):
xp = df.groupby(labels).median()
assert_frame_equal(rs, xp)
+ def test_median_empty_bins(self):
+ df = pd.DataFrame(np.random.randint(0, 44, 500))
+
+ grps = range(0, 55, 5)
+ bins = pd.cut(df[0], grps)
+
+ result = df.groupby(bins).median()
+ expected = df.groupby(bins).agg(lambda x: x.median())
+ assert_frame_equal(result, expected)
+
def test_groupby_categorical_no_compress(self):
data = Series(np.random.randn(9))
@@ -6123,6 +6144,27 @@ def test__cython_agg_general(self):
exc.args += ('operation: %s' % op, )
raise
+ def test_cython_agg_empty_buckets(self):
+ ops = [('mean', np.mean),
+ ('median', np.median),
+ ('var', lambda x: np.var(x, ddof=1)),
+ ('add', lambda x: np.sum(x) if len(x) > 0 else np.nan),
+ ('prod', np.prod),
+ ('min', np.min),
+ ('max', np.max), ]
+
+ df = pd.DataFrame([11, 12, 13])
+ grps = range(0, 55, 5)
+
+ for op, targop in ops:
+ result = df.groupby(pd.cut(df[0], grps))._cython_agg_general(op)
+ expected = df.groupby(pd.cut(df[0], grps)).agg(lambda x: targop(x))
+ try:
+ tm.assert_frame_equal(result, expected)
+ except BaseException as exc:
+ exc.args += ('operation: %s' % op,)
+ raise
+
def test_cython_group_transform_algos(self):
# GH 4095
dtypes = [np.int8, np.int16, np.int32, np.int64, np.uint8, np.uint32,
- [x] closes #13629
- [x] tests added / passed
- [x] passes `git diff upstream/master | flake8 --diff`
- [x] whatsnew entry
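The empty-bin case from the new tests, as a small sketch (on a current pandas, where `observed=False` keeps empty categories in the result):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"x": [3, 1, 7, 6]})
bins = pd.cut(df["x"], [0, 5, 10, 15])

# The (10, 15] bin is empty; its median should be NaN rather than an
# arbitrary leftover value, which is what this PR fixes.
med = df.groupby(bins, observed=False)["x"].median()
print(med)
```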
| https://api.github.com/repos/pandas-dev/pandas/pulls/14225 | 2016-09-15T01:57:41Z | 2016-09-18T15:57:27Z | 2016-09-18T15:57:27Z | 2016-09-18T22:36:27Z |
DOC: fix incorrect example in unstack docstring (GH14206) | diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index de74b70cdfaac..e46d4c6b928a9 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -3805,28 +3805,29 @@ def unstack(self, level=-1):
... ('two', 'a'), ('two', 'b')])
>>> s = pd.Series(np.arange(1.0, 5.0), index=index)
>>> s
- one a 1
- b 2
- two a 3
- b 4
+ one a 1.0
+ b 2.0
+ two a 3.0
+ b 4.0
dtype: float64
>>> s.unstack(level=-1)
a b
- one 1 2
- two 3 4
+ one 1.0 2.0
+ two 3.0 4.0
>>> s.unstack(level=0)
one two
- a 1 3
- b 2 4
+ a 1.0 3.0
+ b 2.0 4.0
>>> df = s.unstack(level=0)
>>> df.unstack()
- one a 1.
- b 3.
- two a 2.
- b 4.
+ one a 1.0
+ b 2.0
+ two a 3.0
+ b 4.0
+ dtype: float64
Returns
-------
| - [x] closes #14206
The following PR closes #14206 and corrects the Docs as suggested in the issue thread.
@jorisvandenbossche Kindly review.
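The corrected docstring example can be reproduced directly (sketch on a current pandas):

```python
import pandas as pd

index = pd.MultiIndex.from_tuples([("one", "a"), ("one", "b"),
                                   ("two", "a"), ("two", "b")])
s = pd.Series([1.0, 2.0, 3.0, 4.0], index=index)

# Unstacking the innermost level pivots it into columns; float values
# render as 1.0, 2.0, ... rather than the old 1., 2. notation.
df = s.unstack(level=-1)
print(df.loc["one", "a"])  # 1.0
```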
| https://api.github.com/repos/pandas-dev/pandas/pulls/14211 | 2016-09-13T08:13:44Z | 2016-09-13T23:00:15Z | 2016-09-13T23:00:15Z | 2016-09-14T04:39:58Z |
ENH: Add divmod to series. | diff --git a/doc/source/basics.rst b/doc/source/basics.rst
index 1f670fb7fb593..19318aad3d53d 100644
--- a/doc/source/basics.rst
+++ b/doc/source/basics.rst
@@ -188,6 +188,32 @@ And similarly for ``axis="items"`` and ``axis="minor"``.
match the broadcasting behavior of Panel. Though it would require a
transition period so users can change their code...
+Series and Index also support the :func:`divmod` builtin. This function performs
+floor division and the modulo operation at the same time, returning a two-tuple
+of the same type as the left hand side. For example:
+
+.. ipython:: python
+
+ s = pd.Series(np.arange(10))
+ s
+ div, rem = divmod(s, 3)
+ div
+ rem
+
+ idx = pd.Index(np.arange(10))
+ idx
+ div, rem = divmod(idx, 3)
+ div
+ rem
+
+We can also do elementwise :func:`divmod`:
+
+.. ipython:: python
+
+ div, rem = divmod(s, [2, 2, 3, 3, 4, 4, 5, 5, 6, 6])
+ div
+ rem
+
Missing data / operations with fill values
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
diff --git a/doc/source/whatsnew/v0.19.0.txt b/doc/source/whatsnew/v0.19.0.txt
index f3a6736ff9920..ffb6e72019602 100644
--- a/doc/source/whatsnew/v0.19.0.txt
+++ b/doc/source/whatsnew/v0.19.0.txt
@@ -1328,6 +1328,9 @@ Other API Changes
- ``pd.read_csv()`` in the C engine will now issue a ``ParserWarning`` or raise a ``ValueError`` when ``sep`` encoded is more than one character long (:issue:`14065`)
- ``DataFrame.values`` will now return ``float64`` with a ``DataFrame`` of mixed ``int64`` and ``uint64`` dtypes, conforming to ``np.find_common_type`` (:issue:`10364`, :issue:`13917`)
- ``pd.read_stata()`` can now handle some format 111 files, which are produced by SAS when generating Stata dta files (:issue:`11526`)
+- ``Series`` and ``Index`` now support ``divmod`` which will return a tuple of
+ series or indices. This behaves like a standard binary operator with regards
+ to broadcasting rules (:issue:`14208`).
.. _whatsnew_0190.deprecations:
diff --git a/pandas/core/ops.py b/pandas/core/ops.py
index b81d62c3cda18..237b9394dfc25 100644
--- a/pandas/core/ops.py
+++ b/pandas/core/ops.py
@@ -39,7 +39,8 @@
def _create_methods(arith_method, comp_method, bool_method,
- use_numexpr, special=False, default_axis='columns'):
+ use_numexpr, special=False, default_axis='columns',
+ have_divmod=False):
# creates actual methods based upon arithmetic, comp and bool method
# constructors.
@@ -127,6 +128,15 @@ def names(x):
names('ror_'), op('|')),
rxor=bool_method(lambda x, y: operator.xor(y, x),
names('rxor'), op('^'))))
+ if have_divmod:
+ # divmod doesn't have an op that is supported by numexpr
+ new_methods['divmod'] = arith_method(
+ divmod,
+ names('divmod'),
+ None,
+ default_axis=default_axis,
+ construct_result=_construct_divmod_result,
+ )
new_methods = dict((names(k), v) for k, v in new_methods.items())
return new_methods
@@ -156,7 +166,7 @@ def add_methods(cls, new_methods, force, select, exclude):
def add_special_arithmetic_methods(cls, arith_method=None,
comp_method=None, bool_method=None,
use_numexpr=True, force=False, select=None,
- exclude=None):
+ exclude=None, have_divmod=False):
"""
Adds the full suite of special arithmetic methods (``__add__``,
``__sub__``, etc.) to the class.
@@ -177,6 +187,9 @@ def add_special_arithmetic_methods(cls, arith_method=None,
if passed, only sets functions with names in select
exclude : iterable of strings (optional)
if passed, will not set functions with names in exclude
+ have_divmod : bool, optional
+ should a divmod method be added? this method is special because it
+ returns a tuple of cls instead of a single element of type cls
"""
# in frame, special methods have default_axis = None, comp methods use
@@ -184,7 +197,7 @@ def add_special_arithmetic_methods(cls, arith_method=None,
new_methods = _create_methods(arith_method, comp_method,
bool_method, use_numexpr, default_axis=None,
- special=True)
+ special=True, have_divmod=have_divmod)
# inplace operators (I feel like these should get passed an `inplace=True`
# or just be removed
@@ -618,8 +631,22 @@ def _align_method_SERIES(left, right, align_asobject=False):
return left, right
+def _construct_result(left, result, index, name, dtype):
+ return left._constructor(result, index=index, name=name, dtype=dtype)
+
+
+def _construct_divmod_result(left, result, index, name, dtype):
+ """divmod returns a tuple of like indexed series instead of a single series.
+ """
+ constructor = left._constructor
+ return (
+ constructor(result[0], index=index, name=name, dtype=dtype),
+ constructor(result[1], index=index, name=name, dtype=dtype),
+ )
+
+
def _arith_method_SERIES(op, name, str_rep, fill_zeros=None, default_axis=None,
- **eval_kwargs):
+ construct_result=_construct_result, **eval_kwargs):
"""
Wrapper function for Series arithmetic operations, to avoid
code duplication.
@@ -692,8 +719,14 @@ def wrapper(left, right, name=name, na_op=na_op):
lvalues = lvalues.values
result = wrap_results(safe_na_op(lvalues, rvalues))
- return left._constructor(result, index=left.index,
- name=name, dtype=dtype)
+ return construct_result(
+ left,
+ result,
+ index=left.index,
+ name=name,
+ dtype=dtype,
+ )
+
return wrapper
@@ -933,6 +966,10 @@ def wrapper(self, other):
'desc': 'Integer division',
'reversed': False,
'reverse': 'rfloordiv'},
+ 'divmod': {'op': 'divmod',
+ 'desc': 'Integer division and modulo',
+ 'reversed': False,
+ 'reverse': None},
'eq': {'op': '==',
'desc': 'Equal to',
@@ -1033,7 +1070,8 @@ def flex_wrapper(self, other, level=None, fill_value=None, axis=0):
series_special_funcs = dict(arith_method=_arith_method_SERIES,
comp_method=_comp_method_SERIES,
- bool_method=_bool_method_SERIES)
+ bool_method=_bool_method_SERIES,
+ have_divmod=True)
_arith_doc_FRAME = """
Binary operator %s with support to substitute a fill_value for missing data in
diff --git a/pandas/indexes/base.py b/pandas/indexes/base.py
index d4ca18a6713b5..f430305f5cb91 100644
--- a/pandas/indexes/base.py
+++ b/pandas/indexes/base.py
@@ -3426,7 +3426,7 @@ def _validate_for_numeric_binop(self, other, op, opstr):
def _add_numeric_methods_binary(cls):
""" add in numeric methods """
- def _make_evaluate_binop(op, opstr, reversed=False):
+ def _make_evaluate_binop(op, opstr, reversed=False, constructor=Index):
def _evaluate_numeric_binop(self, other):
from pandas.tseries.offsets import DateOffset
@@ -3448,7 +3448,7 @@ def _evaluate_numeric_binop(self, other):
attrs = self._maybe_update_attributes(attrs)
with np.errstate(all='ignore'):
result = op(values, other)
- return Index(result, **attrs)
+ return constructor(result, **attrs)
return _evaluate_numeric_binop
@@ -3478,6 +3478,15 @@ def _evaluate_numeric_binop(self, other):
cls.__rdiv__ = _make_evaluate_binop(
operator.div, '__div__', reversed=True)
+ cls.__divmod__ = _make_evaluate_binop(
+ divmod,
+ '__divmod__',
+ constructor=lambda result, **attrs: (
+ Index(result[0], **attrs),
+ Index(result[1], **attrs),
+ ),
+ )
+
@classmethod
def _add_numeric_methods_unary(cls):
""" add in numeric unary methods """
diff --git a/pandas/tests/indexes/test_numeric.py b/pandas/tests/indexes/test_numeric.py
index d3a89b301ae46..51d8c95f9d783 100644
--- a/pandas/tests/indexes/test_numeric.py
+++ b/pandas/tests/indexes/test_numeric.py
@@ -73,6 +73,30 @@ def test_numeric_compat(self):
self.assertRaises(ValueError, lambda: idx * idx[0:3])
self.assertRaises(ValueError, lambda: idx * np.array([1, 2]))
+ result = divmod(idx, 2)
+ with np.errstate(all='ignore'):
+ div, mod = divmod(idx.values, 2)
+ expected = Index(div), Index(mod)
+ for r, e in zip(result, expected):
+ tm.assert_index_equal(r, e)
+
+ result = divmod(idx, np.full_like(idx.values, 2))
+ with np.errstate(all='ignore'):
+ div, mod = divmod(idx.values, np.full_like(idx.values, 2))
+ expected = Index(div), Index(mod)
+ for r, e in zip(result, expected):
+ tm.assert_index_equal(r, e)
+
+ result = divmod(idx, Series(np.full_like(idx.values, 2)))
+ with np.errstate(all='ignore'):
+ div, mod = divmod(
+ idx.values,
+ np.full_like(idx.values, 2),
+ )
+ expected = Index(div), Index(mod)
+ for r, e in zip(result, expected):
+ tm.assert_index_equal(r, e)
+
def test_explicit_conversions(self):
# GH 8608
diff --git a/pandas/tests/series/test_operators.py b/pandas/tests/series/test_operators.py
index 197311868b768..24c26276ea24d 100644
--- a/pandas/tests/series/test_operators.py
+++ b/pandas/tests/series/test_operators.py
@@ -1,6 +1,7 @@
# coding=utf-8
# pylint: disable-msg=E1101,W0612
+from collections import Iterable
from datetime import datetime, timedelta
import operator
from itertools import product, starmap
@@ -19,7 +20,7 @@
from pandas.compat import range, zip
from pandas import compat
from pandas.util.testing import (assert_series_equal, assert_almost_equal,
- assert_frame_equal)
+ assert_frame_equal, assert_index_equal)
import pandas.util.testing as tm
from .common import TestData
@@ -185,6 +186,34 @@ def check_comparators(series, other, check_dtype=True):
check_comparators(self.ts, 5)
check_comparators(self.ts, self.ts + 1, check_dtype=False)
+ def test_divmod(self):
+ def check(series, other):
+ results = divmod(series, other)
+ if isinstance(other, Iterable) and len(series) != len(other):
+ # if the lengths don't match, this is the test where we use
+ # `self.ts[::2]`. Pad every other value in `other_np` with nan.
+ other_np = []
+ for n in other:
+ other_np.append(n)
+ other_np.append(np.nan)
+ else:
+ other_np = other
+ other_np = np.asarray(other_np)
+ with np.errstate(all='ignore'):
+ expecteds = divmod(series.values, np.asarray(other_np))
+
+ for result, expected in zip(results, expecteds):
+ # check the values, name, and index separately
+ assert_almost_equal(np.asarray(result), expected)
+
+ self.assertEqual(result.name, series.name)
+ assert_index_equal(result.index, series.index)
+
+ check(self.ts, self.ts * 2)
+ check(self.ts, self.ts * 0)
+ check(self.ts, self.ts[::2])
+ check(self.ts, 5)
+
def test_operators_empty_int_corner(self):
s1 = Series([], [], dtype=np.int32)
s2 = Series({'x': 0.})
| - [x] closes #8174
- [x] tests added / passed
- [x] passes `git diff upstream/master | flake8 --diff`
- [x] whatsnew entry
Edit: the issue said this is uncommon; however, I recently ran into a case where I was doing arithmetic with fiscal quarters and using divmod seemed more natural.
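As a quick illustration of the behavior this PR adds — `divmod` on a Series returning a tuple of like-indexed Series rather than a single Series — here is a minimal sketch (values chosen arbitrarily):

```python
import pandas as pd

s = pd.Series([7, 8, 9])

# divmod returns a tuple (quotient, remainder) of two Series,
# each aligned to the original index.
q, r = divmod(s, 2)

print(list(q))  # [3, 4, 4]
print(list(r))  # [1, 0, 1]
```

The same tuple-of-results shape applies to the `Index.__divmod__` added in `pandas/indexes/base.py`, which is why `_construct_divmod_result` exists as a separate constructor.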
| https://api.github.com/repos/pandas-dev/pandas/pulls/14208 | 2016-09-12T18:58:01Z | 2016-09-19T21:07:02Z | 2016-09-19T21:07:02Z | 2016-09-19T21:09:29Z |
DOC: add source links to api docs | diff --git a/doc/source/conf.py b/doc/source/conf.py
index a1b71f0279c7a..fd3a2493a53e8 100644
--- a/doc/source/conf.py
+++ b/doc/source/conf.py
@@ -13,6 +13,7 @@
import sys
import os
import re
+import inspect
from pandas.compat import u, PY3
# If extensions (or modules to document with autodoc) are in another directory,
@@ -47,6 +48,7 @@
'sphinx.ext.coverage',
'sphinx.ext.pngmath',
'sphinx.ext.ifconfig',
+ 'sphinx.ext.linkcode',
]
@@ -424,6 +426,55 @@ def get_items(self, names):
return items
+# based on numpy doc/source/conf.py
+def linkcode_resolve(domain, info):
+ """
+ Determine the URL corresponding to Python object
+ """
+ if domain != 'py':
+ return None
+
+ modname = info['module']
+ fullname = info['fullname']
+
+ submod = sys.modules.get(modname)
+ if submod is None:
+ return None
+
+ obj = submod
+ for part in fullname.split('.'):
+ try:
+ obj = getattr(obj, part)
+ except:
+ return None
+
+ try:
+ fn = inspect.getsourcefile(obj)
+ except:
+ fn = None
+ if not fn:
+ return None
+
+ try:
+ source, lineno = inspect.getsourcelines(obj)
+ except:
+ lineno = None
+
+ if lineno:
+ linespec = "#L%d-L%d" % (lineno, lineno + len(source) - 1)
+ else:
+ linespec = ""
+
+ fn = os.path.relpath(fn, start=os.path.dirname(pandas.__file__))
+
+ if '+' in pandas.__version__:
+ return "http://github.com/pydata/pandas/blob/master/pandas/%s%s" % (
+ fn, linespec)
+ else:
+ return "http://github.com/pydata/pandas/blob/v%s/pandas/%s%s" % (
+ pandas.__version__, fn, linespec)
+
+
# remove the docstring of the flags attribute (inherited from numpy ndarray)
# because these give doc build errors (see GH issue 5331)
def remove_flags_docstring(app, what, name, obj, options, lines):
| - [x] closes #14178
- [x] tests not needed
- [x] passes `git diff upstream/master | flake8 --diff`
- [x] whatsnew not needed
cc @jorisvandenbossche - like you mentioned, this doesn't work for everything (properties, accessors, some generated methods)
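The heart of `linkcode_resolve` is mapping a Python object to its source file and a `#Lstart-Lend` fragment via `inspect`; a standalone sketch of that step, using a stdlib function as the target and a hypothetical repo path for the URL:

```python
import inspect
import os.path

# Resolve an importable object to (file, line span), as linkcode_resolve does.
target = os.path.join
fn = inspect.getsourcefile(target)               # e.g. .../posixpath.py
source, lineno = inspect.getsourcelines(target)  # source lines + first line no.
linespec = "#L%d-L%d" % (lineno, lineno + len(source) - 1)

# A hypothetical GitHub URL assembled from the pieces:
url = "http://github.com/pydata/pandas/blob/master/pandas/%s%s" % (
    os.path.basename(fn), linespec)
print(url)
```

`inspect.getsourcelines` raises for objects without retrievable source (C extensions, some generated methods), which is why the real hook wraps it in try/except and returns `None` — and why properties and accessors don't get links.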
| https://api.github.com/repos/pandas-dev/pandas/pulls/14200 | 2016-09-10T14:42:28Z | 2016-09-12T07:57:26Z | 2016-09-12T07:57:26Z | 2016-09-12T11:32:40Z |
Fix: F999 dictionary key '2000q4' repeated with different values | diff --git a/pandas/tseries/tests/test_tslib.py b/pandas/tseries/tests/test_tslib.py
index 6cee45df2a63c..21cfe84f153fa 100644
--- a/pandas/tseries/tests/test_tslib.py
+++ b/pandas/tseries/tests/test_tslib.py
@@ -687,7 +687,6 @@ def test_parsers(self):
'00-Q4': datetime.datetime(2000, 10, 1),
'4Q-2000': datetime.datetime(2000, 10, 1),
'4Q-00': datetime.datetime(2000, 10, 1),
- '2000q4': datetime.datetime(2000, 10, 1),
'00q4': datetime.datetime(2000, 10, 1),
'2005': datetime.datetime(2005, 1, 1),
'2005-11': datetime.datetime(2005, 11, 1),
| Removal of duplicate line to fix lint error
```
pandas/tseries/tests/test_tslib.py:685:18: F999 dictionary key '2000q4' repeated with different values
pandas/tseries/tests/test_tslib.py:690:18: F999 dictionary key '2000q4' repeated with different values
```
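For context, the duplicate key was harmless at runtime — Python keeps only the last value for a repeated literal key — which is exactly why flake8's F999 flags the earlier entry as dead code:

```python
# A repeated literal key silently overwrites the earlier entry;
# no error is raised, so only a linter catches the dead line.
d = {'2000q4': 1, '2000q4': 2}
print(d)  # {'2000q4': 2}
```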
| https://api.github.com/repos/pandas-dev/pandas/pulls/14198 | 2016-09-10T13:15:18Z | 2016-09-10T19:00:13Z | 2016-09-10T19:00:13Z | 2016-09-10T19:00:13Z |
DOC: minor typo in 0.19.0 whatsnew file | diff --git a/doc/source/whatsnew/v0.19.0.txt b/doc/source/whatsnew/v0.19.0.txt
index 3f3ebcb6e5830..7e8e1b15654a0 100644
--- a/doc/source/whatsnew/v0.19.0.txt
+++ b/doc/source/whatsnew/v0.19.0.txt
@@ -296,8 +296,7 @@ Categorical Concatenation
b = pd.Categorical(["a", "b"])
union_categoricals([a, b])
-- ``concat`` and ``append`` now can concat ``category`` dtypes wifht different
-``categories`` as ``object`` dtype (:issue:`13524`)
+- ``concat`` and ``append`` now can concat ``category`` dtypes with different ``categories`` as ``object`` dtype (:issue:`13524`)
**Previous behavior**:
| https://api.github.com/repos/pandas-dev/pandas/pulls/14185 | 2016-09-08T14:36:38Z | 2016-09-08T15:02:13Z | 2016-09-08T15:02:13Z | 2016-09-08T15:02:19Z | |
ENH: Added multicolumn/multirow support for latex | diff --git a/doc/source/options.rst b/doc/source/options.rst
index 77cac6d495d13..10a13ed36df8d 100644
--- a/doc/source/options.rst
+++ b/doc/source/options.rst
@@ -273,151 +273,156 @@ Options are 'right', and 'left'.
Available Options
-----------------
-========================== ============ ==================================
-Option Default Function
-========================== ============ ==================================
-display.chop_threshold None If set to a float value, all float
- values smaller then the given
- threshold will be displayed as
- exactly 0 by repr and friends.
-display.colheader_justify right Controls the justification of
- column headers. used by DataFrameFormatter.
-display.column_space 12 No description available.
-display.date_dayfirst False When True, prints and parses dates
- with the day first, eg 20/01/2005
-display.date_yearfirst False When True, prints and parses dates
- with the year first, eg 2005/01/20
-display.encoding UTF-8 Defaults to the detected encoding
- of the console. Specifies the encoding
- to be used for strings returned by
- to_string, these are generally strings
- meant to be displayed on the console.
-display.expand_frame_repr True Whether to print out the full DataFrame
- repr for wide DataFrames across
- multiple lines, `max_columns` is
- still respected, but the output will
- wrap-around across multiple "pages"
- if its width exceeds `display.width`.
-display.float_format None The callable should accept a floating
- point number and return a string with
- the desired format of the number.
- This is used in some places like
- SeriesFormatter.
- See core.format.EngFormatter for an example.
-display.height 60 Deprecated. Use `display.max_rows` instead.
-display.large_repr truncate For DataFrames exceeding max_rows/max_cols,
- the repr (and HTML repr) can show
- a truncated table (the default from 0.13),
- or switch to the view from df.info()
- (the behaviour in earlier versions of pandas).
- allowable settings, ['truncate', 'info']
-display.latex.repr False Whether to produce a latex DataFrame
- representation for jupyter frontends
- that support it.
-display.latex.escape True Escapes special caracters in Dataframes, when
- using the to_latex method.
-display.latex.longtable False Specifies if the to_latex method of a Dataframe
- uses the longtable format.
-display.line_width 80 Deprecated. Use `display.width` instead.
-display.max_columns 20 max_rows and max_columns are used
- in __repr__() methods to decide if
- to_string() or info() is used to
- render an object to a string. In
- case python/IPython is running in
- a terminal this can be set to 0 and
- pandas will correctly auto-detect
- the width the terminal and swap to
- a smaller format in case all columns
- would not fit vertically. The IPython
- notebook, IPython qtconsole, or IDLE
- do not run in a terminal and hence
- it is not possible to do correct
- auto-detection. 'None' value means
- unlimited.
-display.max_colwidth 50 The maximum width in characters of
- a column in the repr of a pandas
- data structure. When the column overflows,
- a "..." placeholder is embedded in
- the output.
-display.max_info_columns 100 max_info_columns is used in DataFrame.info
- method to decide if per column information
- will be printed.
-display.max_info_rows 1690785 df.info() will usually show null-counts
- for each column. For large frames
- this can be quite slow. max_info_rows
- and max_info_cols limit this null
- check only to frames with smaller
- dimensions then specified.
-display.max_rows 60 This sets the maximum number of rows
- pandas should output when printing
- out various output. For example,
- this value determines whether the
- repr() for a dataframe prints out
- fully or just a summary repr.
- 'None' value means unlimited.
-display.max_seq_items 100 when pretty-printing a long sequence,
- no more then `max_seq_items` will
- be printed. If items are omitted,
- they will be denoted by the addition
- of "..." to the resulting string.
- If set to None, the number of items
- to be printed is unlimited.
-display.memory_usage True This specifies if the memory usage of
- a DataFrame should be displayed when the
- df.info() method is invoked.
-display.multi_sparse True "Sparsify" MultiIndex display (don't
- display repeated elements in outer
- levels within groups)
-display.notebook_repr_html True When True, IPython notebook will
- use html representation for
- pandas objects (if it is available).
-display.pprint_nest_depth 3 Controls the number of nested levels
- to process when pretty-printing
-display.precision 6 Floating point output precision in
- terms of number of places after the
- decimal, for regular formatting as well
- as scientific notation. Similar to
- numpy's ``precision`` print option
-display.show_dimensions truncate Whether to print out dimensions
- at the end of DataFrame repr.
- If 'truncate' is specified, only
- print out the dimensions if the
- frame is truncated (e.g. not display
- all rows and/or columns)
-display.width 80 Width of the display in characters.
- In case python/IPython is running in
- a terminal this can be set to None
- and pandas will correctly auto-detect
- the width. Note that the IPython notebook,
- IPython qtconsole, or IDLE do not run in a
- terminal and hence it is not possible
- to correctly detect the width.
-html.border 1 A ``border=value`` attribute is
- inserted in the ``<table>`` tag
- for the DataFrame HTML repr.
-io.excel.xls.writer xlwt The default Excel writer engine for
- 'xls' files.
-io.excel.xlsm.writer openpyxl The default Excel writer engine for
- 'xlsm' files. Available options:
- 'openpyxl' (the default).
-io.excel.xlsx.writer openpyxl The default Excel writer engine for
- 'xlsx' files.
-io.hdf.default_format None default format writing format, if
- None, then put will default to
- 'fixed' and append will default to
- 'table'
-io.hdf.dropna_table True drop ALL nan rows when appending
- to a table
-mode.chained_assignment warn Raise an exception, warn, or no
- action if trying to use chained
- assignment, The default is warn
-mode.sim_interactive False Whether to simulate interactive mode
- for purposes of testing
-mode.use_inf_as_null False True means treat None, NaN, -INF,
- INF as null (old way), False means
- None and NaN are null, but INF, -INF
- are not null (new way).
-========================== ============ ==================================
+=================================== ============ ==================================
+Option Default Function
+=================================== ============ ==================================
+display.chop_threshold None If set to a float value, all float
+ values smaller then the given
+ threshold will be displayed as
+ exactly 0 by repr and friends.
+display.colheader_justify right Controls the justification of
+ column headers. used by DataFrameFormatter.
+display.column_space 12 No description available.
+display.date_dayfirst False When True, prints and parses dates
+ with the day first, eg 20/01/2005
+display.date_yearfirst False When True, prints and parses dates
+ with the year first, eg 2005/01/20
+display.encoding UTF-8 Defaults to the detected encoding
+ of the console. Specifies the encoding
+ to be used for strings returned by
+ to_string, these are generally strings
+ meant to be displayed on the console.
+display.expand_frame_repr True Whether to print out the full DataFrame
+ repr for wide DataFrames across
+ multiple lines, `max_columns` is
+ still respected, but the output will
+ wrap-around across multiple "pages"
+ if its width exceeds `display.width`.
+display.float_format None The callable should accept a floating
+ point number and return a string with
+ the desired format of the number.
+ This is used in some places like
+ SeriesFormatter.
+ See core.format.EngFormatter for an example.
+display.height 60 Deprecated. Use `display.max_rows` instead.
+display.large_repr truncate For DataFrames exceeding max_rows/max_cols,
+ the repr (and HTML repr) can show
+ a truncated table (the default from 0.13),
+ or switch to the view from df.info()
+ (the behaviour in earlier versions of pandas).
+ allowable settings, ['truncate', 'info']
+display.latex.repr False Whether to produce a latex DataFrame
+ representation for jupyter frontends
+ that support it.
+display.latex.escape True Escapes special caracters in Dataframes, when
+ using the to_latex method.
+display.latex.longtable False Specifies if the to_latex method of a Dataframe
+ uses the longtable format.
+display.latex.multicolumn True Combines columns when using a MultiIndex
+display.latex.multicolumn_format 'l' Alignment of multicolumn labels
+display.latex.multirow False Combines rows when using a MultiIndex.
+ Centered instead of top-aligned,
+ separated by clines.
+display.line_width 80 Deprecated. Use `display.width` instead.
+display.max_columns 20 max_rows and max_columns are used
+ in __repr__() methods to decide if
+ to_string() or info() is used to
+ render an object to a string. In
+ case python/IPython is running in
+ a terminal this can be set to 0 and
+ pandas will correctly auto-detect
+ the width the terminal and swap to
+ a smaller format in case all columns
+ would not fit vertically. The IPython
+ notebook, IPython qtconsole, or IDLE
+ do not run in a terminal and hence
+ it is not possible to do correct
+ auto-detection. 'None' value means
+ unlimited.
+display.max_colwidth 50 The maximum width in characters of
+ a column in the repr of a pandas
+ data structure. When the column overflows,
+ a "..." placeholder is embedded in
+ the output.
+display.max_info_columns 100 max_info_columns is used in DataFrame.info
+ method to decide if per column information
+ will be printed.
+display.max_info_rows 1690785 df.info() will usually show null-counts
+ for each column. For large frames
+ this can be quite slow. max_info_rows
+ and max_info_cols limit this null
+ check only to frames with smaller
+ dimensions then specified.
+display.max_rows 60 This sets the maximum number of rows
+ pandas should output when printing
+ out various output. For example,
+ this value determines whether the
+ repr() for a dataframe prints out
+ fully or just a summary repr.
+ 'None' value means unlimited.
+display.max_seq_items 100 when pretty-printing a long sequence,
+ no more then `max_seq_items` will
+ be printed. If items are omitted,
+ they will be denoted by the addition
+ of "..." to the resulting string.
+ If set to None, the number of items
+ to be printed is unlimited.
+display.memory_usage True This specifies if the memory usage of
+ a DataFrame should be displayed when the
+ df.info() method is invoked.
+display.multi_sparse True "Sparsify" MultiIndex display (don't
+ display repeated elements in outer
+ levels within groups)
+display.notebook_repr_html True When True, IPython notebook will
+ use html representation for
+ pandas objects (if it is available).
+display.pprint_nest_depth 3 Controls the number of nested levels
+ to process when pretty-printing
+display.precision 6 Floating point output precision in
+ terms of number of places after the
+ decimal, for regular formatting as well
+ as scientific notation. Similar to
+ numpy's ``precision`` print option
+display.show_dimensions truncate Whether to print out dimensions
+ at the end of DataFrame repr.
+ If 'truncate' is specified, only
+ print out the dimensions if the
+ frame is truncated (e.g. not display
+ all rows and/or columns)
+display.width 80 Width of the display in characters.
+ In case python/IPython is running in
+ a terminal this can be set to None
+ and pandas will correctly auto-detect
+ the width. Note that the IPython notebook,
+ IPython qtconsole, or IDLE do not run in a
+ terminal and hence it is not possible
+ to correctly detect the width.
+html.border 1 A ``border=value`` attribute is
+ inserted in the ``<table>`` tag
+ for the DataFrame HTML repr.
+io.excel.xls.writer xlwt The default Excel writer engine for
+ 'xls' files.
+io.excel.xlsm.writer openpyxl The default Excel writer engine for
+ 'xlsm' files. Available options:
+ 'openpyxl' (the default).
+io.excel.xlsx.writer openpyxl The default Excel writer engine for
+ 'xlsx' files.
+io.hdf.default_format None default format writing format, if
+ None, then put will default to
+ 'fixed' and append will default to
+ 'table'
+io.hdf.dropna_table True drop ALL nan rows when appending
+ to a table
+mode.chained_assignment warn Raise an exception, warn, or no
+ action if trying to use chained
+ assignment, The default is warn
+mode.sim_interactive False Whether to simulate interactive mode
+ for purposes of testing
+mode.use_inf_as_null False True means treat None, NaN, -INF,
+ INF as null (old way), False means
+ None and NaN are null, but INF, -INF
+ are not null (new way).
+=================================== ============ ==================================
.. _basics.console_output:
diff --git a/doc/source/whatsnew/v0.20.0.txt b/doc/source/whatsnew/v0.20.0.txt
index dca4f890e496b..0991f3873b06f 100644
--- a/doc/source/whatsnew/v0.20.0.txt
+++ b/doc/source/whatsnew/v0.20.0.txt
@@ -182,6 +182,7 @@ Other enhancements
- ``Timedelta.isoformat`` method added for formatting Timedeltas as an `ISO 8601 duration`_. See the :ref:`Timedelta docs <timedeltas.isoformat>` (:issue:`15136`)
- ``pandas.io.json.json_normalize()`` gained the option ``errors='ignore'|'raise'``; the default is ``errors='raise'`` which is backward compatible. (:issue:`14583`)
- ``.select_dtypes()`` now allows the string 'datetimetz' to generically select datetimes with tz (:issue:`14910`)
+- The ``.to_latex()`` method will now accept ``multicolumn`` and ``multirow`` arguments to use the accompanying LaTeX enhancements
- ``pd.merge_asof()`` gained the option ``direction='backward'|'forward'|'nearest'`` (:issue:`14887`)
- ``Series/DataFrame.asfreq()`` have gained a ``fill_value`` parameter, to fill missing values (:issue:`3715`).
- ``Series/DataFrame.resample.asfreq`` have gained a ``fill_value`` parameter, to fill missing values during resampling (:issue:`3715`).
diff --git a/pandas/core/config_init.py b/pandas/core/config_init.py
index d3db633f3aa04..89616890e1de1 100644
--- a/pandas/core/config_init.py
+++ b/pandas/core/config_init.py
@@ -239,14 +239,35 @@
: bool
This specifies if the to_latex method of a Dataframe uses escapes special
characters.
- method. Valid values: False,True
+ Valid values: False,True
"""
pc_latex_longtable = """
:bool
This specifies if the to_latex method of a Dataframe uses the longtable
format.
- method. Valid values: False,True
+ Valid values: False,True
+"""
+
+pc_latex_multicolumn = """
+: bool
+ This specifies if the to_latex method of a Dataframe uses multicolumns
+ to pretty-print MultiIndex columns.
+ Valid values: False,True
+"""
+
+pc_latex_multicolumn_format = """
+: string
+ This specifies the format for multicolumn headers.
+ Can be surrounded with '|'.
+ Valid values: 'l', 'c', 'r', 'p{<width>}'
+"""
+
+pc_latex_multirow = """
+: bool
+ This specifies if the to_latex method of a Dataframe uses multirows
+ to pretty-print MultiIndex rows.
+ Valid values: False,True
"""
style_backup = dict()
@@ -339,6 +360,12 @@ def mpl_style_cb(key):
validator=is_bool)
cf.register_option('latex.longtable', False, pc_latex_longtable,
validator=is_bool)
+ cf.register_option('latex.multicolumn', True, pc_latex_multicolumn,
+ validator=is_bool)
+ cf.register_option('latex.multicolumn_format', 'l', pc_latex_multicolumn,
+ validator=is_text)
+ cf.register_option('latex.multirow', False, pc_latex_multirow,
+ validator=is_bool)
cf.deprecate_option('display.line_width',
msg=pc_line_width_deprecation_warning,
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 26a0a91094e7d..b3e43edc3eb55 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -1614,10 +1614,11 @@ def to_latex(self, buf=None, columns=None, col_space=None, header=True,
index=True, na_rep='NaN', formatters=None, float_format=None,
sparsify=None, index_names=True, bold_rows=True,
column_format=None, longtable=None, escape=None,
- encoding=None, decimal='.'):
- """
+ encoding=None, decimal='.', multicolumn=None,
+ multicolumn_format=None, multirow=None):
+ r"""
Render a DataFrame to a tabular environment table. You can splice
- this into a LaTeX document. Requires \\usepackage{booktabs}.
+ this into a LaTeX document. Requires \usepackage{booktabs}.
`to_latex`-specific options:
@@ -1628,27 +1629,54 @@ def to_latex(self, buf=None, columns=None, col_space=None, header=True,
<https://en.wikibooks.org/wiki/LaTeX/Tables>`__ e.g 'rcl' for 3
columns
longtable : boolean, default will be read from the pandas config module
- default: False
+ Default: False.
Use a longtable environment instead of tabular. Requires adding
- a \\usepackage{longtable} to your LaTeX preamble.
+ a \usepackage{longtable} to your LaTeX preamble.
escape : boolean, default will be read from the pandas config module
- default: True
+ Default: True.
When set to False prevents from escaping latex special
characters in column names.
encoding : str, default None
A string representing the encoding to use in the output file,
defaults to 'ascii' on Python 2 and 'utf-8' on Python 3.
decimal : string, default '.'
- Character recognized as decimal separator, e.g. ',' in Europe
+ Character recognized as decimal separator, e.g. ',' in Europe.
.. versionadded:: 0.18.0
+ multicolumn : boolean, default True
+ Use \multicolumn to enhance MultiIndex columns.
+ The default will be read from the config module.
+
+ .. versionadded:: 0.20.0
+
+ multicolumn_format : str, default 'l'
+ The alignment for multicolumns, similar to `column_format`
+ The default will be read from the config module.
+
+ .. versionadded:: 0.20.0
+
+ multirow : boolean, default False
+ Use \multirow to enhance MultiIndex rows.
+ Requires adding a \usepackage{multirow} to your LaTeX preamble.
+ Will print centered labels (instead of top-aligned)
+ across the contained rows, separating groups via clines.
+ The default will be read from the pandas config module.
+
+ .. versionadded:: 0.20.0
+
"""
# Get defaults from the pandas config
if longtable is None:
longtable = get_option("display.latex.longtable")
if escape is None:
escape = get_option("display.latex.escape")
+ if multicolumn is None:
+ multicolumn = get_option("display.latex.multicolumn")
+ if multicolumn_format is None:
+ multicolumn_format = get_option("display.latex.multicolumn_format")
+ if multirow is None:
+ multirow = get_option("display.latex.multirow")
formatter = fmt.DataFrameFormatter(self, buf=buf, columns=columns,
col_space=col_space, na_rep=na_rep,
@@ -1660,7 +1688,9 @@ def to_latex(self, buf=None, columns=None, col_space=None, header=True,
index_names=index_names,
escape=escape, decimal=decimal)
formatter.to_latex(column_format=column_format, longtable=longtable,
- encoding=encoding)
+ encoding=encoding, multicolumn=multicolumn,
+ multicolumn_format=multicolumn_format,
+ multirow=multirow)
if buf is None:
return formatter.buf.getvalue()
diff --git a/pandas/formats/format.py b/pandas/formats/format.py
index 4c081770e0125..9dde3b0001c31 100644
--- a/pandas/formats/format.py
+++ b/pandas/formats/format.py
@@ -650,13 +650,17 @@ def _join_multiline(self, *strcols):
st = ed
return '\n\n'.join(str_lst)
- def to_latex(self, column_format=None, longtable=False, encoding=None):
+ def to_latex(self, column_format=None, longtable=False, encoding=None,
+ multicolumn=False, multicolumn_format=None, multirow=False):
"""
Render a DataFrame to a LaTeX tabular/longtable environment output.
"""
latex_renderer = LatexFormatter(self, column_format=column_format,
- longtable=longtable)
+ longtable=longtable,
+ multicolumn=multicolumn,
+ multicolumn_format=multicolumn_format,
+ multirow=multirow)
if encoding is None:
encoding = 'ascii' if compat.PY2 else 'utf-8'
@@ -824,11 +828,15 @@ class LatexFormatter(TableFormatter):
HTMLFormatter
"""
- def __init__(self, formatter, column_format=None, longtable=False):
+ def __init__(self, formatter, column_format=None, longtable=False,
+ multicolumn=False, multicolumn_format=None, multirow=False):
self.fmt = formatter
self.frame = self.fmt.frame
self.column_format = column_format
self.longtable = longtable
+ self.multicolumn = multicolumn
+ self.multicolumn_format = multicolumn_format
+ self.multirow = multirow
def write_result(self, buf):
"""
@@ -850,14 +858,21 @@ def get_col_type(dtype):
else:
return 'l'
+ # reestablish the MultiIndex that has been joined by _to_str_column
if self.fmt.index and isinstance(self.frame.index, MultiIndex):
clevels = self.frame.columns.nlevels
strcols.pop(0)
name = any(self.frame.index.names)
+ cname = any(self.frame.columns.names)
+ lastcol = self.frame.index.nlevels - 1
for i, lev in enumerate(self.frame.index.levels):
lev2 = lev.format()
blank = ' ' * len(lev2[0])
- lev3 = [blank] * clevels
+ # display column names in last index-column
+ if cname and i == lastcol:
+ lev3 = [x if x else '{}' for x in self.frame.columns.names]
+ else:
+ lev3 = [blank] * clevels
if name:
lev3.append(lev.name)
for level_idx, group in itertools.groupby(
@@ -885,10 +900,15 @@ def get_col_type(dtype):
buf.write('\\begin{longtable}{%s}\n' % column_format)
buf.write('\\toprule\n')
- nlevels = self.frame.columns.nlevels
+ ilevels = self.frame.index.nlevels
+ clevels = self.frame.columns.nlevels
+ nlevels = clevels
if any(self.frame.index.names):
nlevels += 1
- for i, row in enumerate(zip(*strcols)):
+ strrows = list(zip(*strcols))
+ self.clinebuf = []
+
+ for i, row in enumerate(strrows):
if i == nlevels and self.fmt.header:
buf.write('\\midrule\n') # End of header
if self.longtable:
@@ -910,8 +930,17 @@ def get_col_type(dtype):
if x else '{}') for x in row]
else:
crow = [x if x else '{}' for x in row]
+ if i < clevels and self.fmt.header and self.multicolumn:
+ # sum up columns to multicolumns
+ crow = self._format_multicolumn(crow, ilevels)
+ if (i >= nlevels and self.fmt.index and self.multirow and
+ ilevels > 1):
+ # sum up rows to multirows
+ crow = self._format_multirow(crow, ilevels, i, strrows)
buf.write(' & '.join(crow))
buf.write(' \\\\\n')
+ if self.multirow and i < len(strrows) - 1:
+ self._print_cline(buf, i, len(strcols))
if not self.longtable:
buf.write('\\bottomrule\n')
@@ -919,6 +948,80 @@ def get_col_type(dtype):
else:
buf.write('\\end{longtable}\n')
+ def _format_multicolumn(self, row, ilevels):
+ """
+ Combine columns belonging to a group to a single multicolumn entry
+ according to self.multicolumn_format
+
+ e.g.:
+ a & & & b & c &
+ will become
+ \multicolumn{3}{l}{a} & b & \multicolumn{2}{l}{c}
+ """
+ row2 = list(row[:ilevels])
+ ncol = 1
+ coltext = ''
+
+ def append_col():
+ # write multicolumn if needed
+ if ncol > 1:
+ row2.append('\\multicolumn{{{0:d}}}{{{1:s}}}{{{2:s}}}'
+ .format(ncol, self.multicolumn_format,
+ coltext.strip()))
+ # don't modify where not needed
+ else:
+ row2.append(coltext)
+ for c in row[ilevels:]:
+ # if next col has text, write the previous
+ if c.strip():
+ if coltext:
+ append_col()
+ coltext = c
+ ncol = 1
+ # if not, add it to the previous multicolumn
+ else:
+ ncol += 1
+ # write last column name
+ if coltext:
+ append_col()
+ return row2
+
+ def _format_multirow(self, row, ilevels, i, rows):
+ """
+ Check following rows, whether row should be a multirow
+
+ e.g.: becomes:
+ a & 0 & \multirow{2}{*}{a} & 0 &
+ & 1 & & 1 &
+ b & 0 & \cline{1-2}
+ b & 0 &
+ """
+ for j in range(ilevels):
+ if row[j].strip():
+ nrow = 1
+ for r in rows[i + 1:]:
+ if not r[j].strip():
+ nrow += 1
+ else:
+ break
+ if nrow > 1:
+ # overwrite non-multirow entry
+ row[j] = '\\multirow{{{0:d}}}{{*}}{{{1:s}}}'.format(
+ nrow, row[j].strip())
+ # save when to end the current block with \cline
+ self.clinebuf.append([i + nrow - 1, j + 1])
+ return row
+
+ def _print_cline(self, buf, i, icol):
+ """
+ Print clines after multirow-blocks are finished
+ """
+ for cl in self.clinebuf:
+ if cl[0] == i:
+ buf.write('\cline{{{0:d}-{1:d}}}\n'.format(cl[1], icol))
+ # remove entries that have been written to buffer
+ self.clinebuf = [x for x in self.clinebuf if x[0] != i]
+
class HTMLFormatter(TableFormatter):
diff --git a/pandas/tests/formats/test_to_latex.py b/pandas/tests/formats/test_to_latex.py
index 89e18e1cec06e..17e1e18f03dd6 100644
--- a/pandas/tests/formats/test_to_latex.py
+++ b/pandas/tests/formats/test_to_latex.py
@@ -168,6 +168,24 @@ def test_to_latex_multiindex(self):
assert result == expected
+ # GH 14184
+ df = df.T
+ df.columns.names = ['a', 'b']
+ result = df.to_latex()
+ expected = r"""\begin{tabular}{lrrrrr}
+\toprule
+a & \multicolumn{2}{l}{c1} & \multicolumn{2}{l}{c2} & c3 \\
+b & 0 & 1 & 0 & 1 & 0 \\
+\midrule
+0 & 0 & 4 & 0 & 4 & 0 \\
+1 & 1 & 5 & 1 & 5 & 1 \\
+2 & 2 & 6 & 2 & 6 & 2 \\
+3 & 3 & 7 & 3 & 7 & 3 \\
+\bottomrule
+\end{tabular}
+"""
+ assert result == expected
+
# GH 10660
df = pd.DataFrame({'a': [0, 0, 1, 1],
'b': list('abab'),
@@ -189,16 +207,95 @@ def test_to_latex_multiindex(self):
assert result == expected
result = df.groupby('a').describe().to_latex()
- expected = ('\\begin{tabular}{lrrrrrrrr}\n\\toprule\n{} & c & '
- ' & & & & & & '
- '\\\\\n{} & count & mean & std & min & 25\\% & '
- '50\\% & 75\\% & max \\\\\na & & & '
- ' & & & & & \\\\\n\\midrule\n0 '
- '& 2.0 & 1.5 & 0.707107 & 1.0 & 1.25 & 1.5 & 1.75 '
- '& 2.0 \\\\\n1 & 2.0 & 3.5 & 0.707107 & 3.0 & 3.25 '
- '& 3.5 & 3.75 & 4.0 '
- '\\\\\n\\bottomrule\n\\end{tabular}\n')
+ expected = r"""\begin{tabular}{lrrrrrrrr}
+\toprule
+{} & \multicolumn{8}{l}{c} \\
+{} & count & mean & std & min & 25\% & 50\% & 75\% & max \\
+a & & & & & & & & \\
+\midrule
+0 & 2.0 & 1.5 & 0.707107 & 1.0 & 1.25 & 1.5 & 1.75 & 2.0 \\
+1 & 2.0 & 3.5 & 0.707107 & 3.0 & 3.25 & 3.5 & 3.75 & 4.0 \\
+\bottomrule
+\end{tabular}
+"""
+
+ assert result == expected
+
+ def test_to_latex_multicolumnrow(self):
+ df = pd.DataFrame({
+ ('c1', 0): dict((x, x) for x in range(5)),
+ ('c1', 1): dict((x, x + 5) for x in range(5)),
+ ('c2', 0): dict((x, x) for x in range(5)),
+ ('c2', 1): dict((x, x + 5) for x in range(5)),
+ ('c3', 0): dict((x, x) for x in range(5))
+ })
+ result = df.to_latex()
+ expected = r"""\begin{tabular}{lrrrrr}
+\toprule
+{} & \multicolumn{2}{l}{c1} & \multicolumn{2}{l}{c2} & c3 \\
+{} & 0 & 1 & 0 & 1 & 0 \\
+\midrule
+0 & 0 & 5 & 0 & 5 & 0 \\
+1 & 1 & 6 & 1 & 6 & 1 \\
+2 & 2 & 7 & 2 & 7 & 2 \\
+3 & 3 & 8 & 3 & 8 & 3 \\
+4 & 4 & 9 & 4 & 9 & 4 \\
+\bottomrule
+\end{tabular}
+"""
+ assert result == expected
+ result = df.to_latex(multicolumn=False)
+ expected = r"""\begin{tabular}{lrrrrr}
+\toprule
+{} & c1 & & c2 & & c3 \\
+{} & 0 & 1 & 0 & 1 & 0 \\
+\midrule
+0 & 0 & 5 & 0 & 5 & 0 \\
+1 & 1 & 6 & 1 & 6 & 1 \\
+2 & 2 & 7 & 2 & 7 & 2 \\
+3 & 3 & 8 & 3 & 8 & 3 \\
+4 & 4 & 9 & 4 & 9 & 4 \\
+\bottomrule
+\end{tabular}
+"""
+ assert result == expected
+
+ result = df.T.to_latex(multirow=True)
+ expected = r"""\begin{tabular}{llrrrrr}
+\toprule
+ & & 0 & 1 & 2 & 3 & 4 \\
+\midrule
+\multirow{2}{*}{c1} & 0 & 0 & 1 & 2 & 3 & 4 \\
+ & 1 & 5 & 6 & 7 & 8 & 9 \\
+\cline{1-7}
+\multirow{2}{*}{c2} & 0 & 0 & 1 & 2 & 3 & 4 \\
+ & 1 & 5 & 6 & 7 & 8 & 9 \\
+\cline{1-7}
+c3 & 0 & 0 & 1 & 2 & 3 & 4 \\
+\bottomrule
+\end{tabular}
+"""
+ assert result == expected
+
+ df.index = df.T.index
+ result = df.T.to_latex(multirow=True, multicolumn=True,
+ multicolumn_format='c')
+ expected = r"""\begin{tabular}{llrrrrr}
+\toprule
+ & & \multicolumn{2}{c}{c1} & \multicolumn{2}{c}{c2} & c3 \\
+ & & 0 & 1 & 0 & 1 & 0 \\
+\midrule
+\multirow{2}{*}{c1} & 0 & 0 & 1 & 2 & 3 & 4 \\
+ & 1 & 5 & 6 & 7 & 8 & 9 \\
+\cline{1-7}
+\multirow{2}{*}{c2} & 0 & 0 & 1 & 2 & 3 & 4 \\
+ & 1 & 5 & 6 & 7 & 8 & 9 \\
+\cline{1-7}
+c3 & 0 & 0 & 1 & 2 & 3 & 4 \\
+\bottomrule
+\end{tabular}
+"""
assert result == expected
def test_to_latex_escape(self):
| - [x] closes #13508
- [X] tests added / passed
- [X] passes `git diff upstream/master | flake8 --diff`
- [X] whatsnew entry
Print the names of MultiIndex columns.
Added "multicolumn" and "multirow" flags to to_latex,
which trigger the corresponding LaTeX feature.
multirow adds \cline commands to visually separate sections.
| https://api.github.com/repos/pandas-dev/pandas/pulls/14184 | 2016-09-08T12:14:10Z | 2017-03-03T09:16:46Z | 2017-03-03T09:16:46Z | 2017-03-03T09:17:22Z |
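The grouping rule implemented by `_format_multicolumn` in the diff above (runs of blank header cells are folded into the preceding label as one `\multicolumn` entry) can be tried in isolation. Below is a minimal standalone sketch of that rule; the function name, signature, and defaults are illustrative, not the pandas internals verbatim.

```python
def format_multicolumn(row, ilevels=0, fmt='l'):
    """Collapse blank cells that follow a label into a single
    \\multicolumn entry, mirroring the grouping rule in the diff."""
    out = list(row[:ilevels])   # index columns pass through untouched
    text, ncol = None, 0

    def flush():
        # emit the pending group: a plain cell if it spans one column,
        # otherwise a \multicolumn spanning ncol columns
        if text is None:
            return
        if ncol > 1:
            out.append('\\multicolumn{{{0:d}}}{{{1:s}}}{{{2:s}}}'
                       .format(ncol, fmt, text.strip()))
        else:
            out.append(text)

    for cell in row[ilevels:]:
        if cell.strip():        # a non-blank cell starts a new group
            flush()
            text, ncol = cell, 1
        else:                   # a blank cell widens the current group
            ncol += 1
    flush()                     # write the last pending group
    return out
```

With the docstring's own example, `format_multicolumn(['a', '', '', 'b', 'c', ''])` joins back to `\multicolumn{3}{l}{a} & b & \multicolumn{2}{l}{c}`, matching the behavior described in the PR.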
MAINT: Use __module__ in _DeprecatedModule. | diff --git a/pandas/core/api.py b/pandas/core/api.py
index c0f39e2ac4717..b5e1de2063c7e 100644
--- a/pandas/core/api.py
+++ b/pandas/core/api.py
@@ -31,14 +31,12 @@
# see gh-14094.
from pandas.util.depr_module import _DeprecatedModule
-_alts = ['pandas.tseries.tools', 'pandas.tseries.offsets',
- 'pandas.tseries.frequencies']
_removals = ['day', 'bday', 'businessDay', 'cday', 'customBusinessDay',
'customBusinessMonthEnd', 'customBusinessMonthBegin',
'monthEnd', 'yearEnd', 'yearBegin', 'bmonthEnd', 'bmonthBegin',
'cbmonthEnd', 'cbmonthBegin', 'bquarterEnd', 'quarterEnd',
'byearEnd', 'week']
-datetools = _DeprecatedModule(deprmod='pandas.core.datetools', alts=_alts,
+datetools = _DeprecatedModule(deprmod='pandas.core.datetools',
removals=_removals)
from pandas.core.config import (get_option, set_option, reset_option,
diff --git a/pandas/util/depr_module.py b/pandas/util/depr_module.py
index 7e03a000a50ec..736d2cdaab31c 100644
--- a/pandas/util/depr_module.py
+++ b/pandas/util/depr_module.py
@@ -13,18 +13,11 @@ class _DeprecatedModule(object):
Parameters
----------
deprmod : name of module to be deprecated.
- alts : alternative modules to be used to access objects or methods
- available in module.
removals : objects or methods in module that will no longer be
accessible once module is removed.
"""
- def __init__(self, deprmod, alts=None, removals=None):
+ def __init__(self, deprmod, removals=None):
self.deprmod = deprmod
-
- self.alts = alts
- if self.alts is not None:
- self.alts = frozenset(self.alts)
-
self.removals = removals
if self.removals is not None:
self.removals = frozenset(self.removals)
@@ -33,47 +26,39 @@ def __init__(self, deprmod, alts=None, removals=None):
self.self_dir = frozenset(dir(self.__class__))
def __dir__(self):
- _dir = object.__dir__(self)
-
- if self.removals is not None:
- _dir.extend(list(self.removals))
+ deprmodule = self._import_deprmod()
+ return dir(deprmodule)
- if self.alts is not None:
- for modname in self.alts:
- module = importlib.import_module(modname)
- _dir.extend(dir(module))
+ def __repr__(self):
+ deprmodule = self._import_deprmod()
+ return repr(deprmodule)
- return _dir
+ __str__ = __repr__
def __getattr__(self, name):
if name in self.self_dir:
return object.__getattribute__(self, name)
- if self.removals is not None and name in self.removals:
- with warnings.catch_warnings():
- warnings.filterwarnings('ignore', category=FutureWarning)
- module = importlib.import_module(self.deprmod)
+ deprmodule = self._import_deprmod()
+ obj = getattr(deprmodule, name)
+ if self.removals is not None and name in self.removals:
warnings.warn(
"{deprmod}.{name} is deprecated and will be removed in "
"a future version.".format(deprmod=self.deprmod, name=name),
FutureWarning, stacklevel=2)
+ else:
+ # The object is actually located in another module.
+ warnings.warn(
+ "{deprmod}.{name} is deprecated. Please use "
+ "{modname}.{name} instead.".format(
+ deprmod=self.deprmod, modname=obj.__module__, name=name),
+ FutureWarning, stacklevel=2)
- return object.__getattribute__(module, name)
-
- if self.alts is not None:
- for modname in self.alts:
- module = importlib.import_module(modname)
-
- if hasattr(module, name):
- warnings.warn(
- "{deprmod}.{name} is deprecated. Please use "
- "{modname}.{name} instead.".format(
- deprmod=self.deprmod, modname=modname, name=name),
- FutureWarning, stacklevel=2)
-
- return getattr(module, name)
+ return obj
- raise AttributeError("module '{deprmod}' has no attribute "
- "'{name}'".format(deprmod=self.deprmod,
- name=name))
+ def _import_deprmod(self):
+ with warnings.catch_warnings():
+ warnings.filterwarnings('ignore', category=FutureWarning)
+ deprmodule = importlib.import_module(self.deprmod)
+ return deprmodule
 | Follow-up to #14105. Uses the `__module__` attribute to correctly determine the location of the alternative
module to use.
| https://api.github.com/repos/pandas-dev/pandas/pulls/14181 | 2016-09-08T03:44:04Z | 2016-09-14T08:31:04Z | 2016-09-14T08:31:04Z | 2016-09-14T14:52:06Z |
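The pattern in this PR (a proxy object whose `__getattr__` imports the deprecated module, then reads `obj.__module__` to name the real home of the object in the warning) can be sketched standalone. This is a simplified illustration of the approach, not the pandas class itself; wrapping the stdlib `json` module here is purely for demonstration.

```python
import importlib
import warnings


class DeprecatedModule(object):
    """Proxy for a deprecated module: attribute access warns and
    forwards, using obj.__module__ to point at the object's real home."""

    def __init__(self, deprmod, removals=()):
        self.deprmod = deprmod
        self.removals = frozenset(removals)

    def __getattr__(self, name):
        # only called for attributes not found on the proxy itself
        deprmodule = importlib.import_module(self.deprmod)
        obj = getattr(deprmodule, name)

        if name in self.removals:
            warnings.warn(
                "{deprmod}.{name} is deprecated and will be removed in "
                "a future version.".format(deprmod=self.deprmod, name=name),
                FutureWarning, stacklevel=2)
        else:
            # the object actually lives elsewhere: name it via __module__
            warnings.warn(
                "{deprmod}.{name} is deprecated. Please use "
                "{modname}.{name} instead.".format(
                    deprmod=self.deprmod, modname=obj.__module__, name=name),
                FutureWarning, stacklevel=2)
        return obj
```

Compared with the removed `alts` approach, no list of alternative modules has to be maintained: the forwarded object itself reports where it is defined.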
DOC: clean-up 0.19.0 whatsnew file | diff --git a/doc/source/whatsnew/v0.19.0.txt b/doc/source/whatsnew/v0.19.0.txt
index 9f468ae6785cb..a007500322ed4 100644
--- a/doc/source/whatsnew/v0.19.0.txt
+++ b/doc/source/whatsnew/v0.19.0.txt
@@ -1,25 +1,28 @@
.. _whatsnew_0190:
-v0.19.0 (August ??, 2016)
--------------------------
+v0.19.0 (September ??, 2016)
+----------------------------
-This is a major release from 0.18.1 and includes a small number of API changes, several new features,
+This is a major release from 0.18.1 and includes a number of API changes, several new features,
enhancements, and performance improvements along with a large number of bug fixes. We recommend that all
users upgrade to this version.
-.. warning::
-
- pandas >= 0.19.0 will no longer silence numpy ufunc warnings upon import, see :ref:`here <whatsnew_0190.errstate>`.
-
Highlights include:
- :func:`merge_asof` for asof-style time-series joining, see :ref:`here <whatsnew_0190.enhancements.asof_merge>`
- ``.rolling()`` are now time-series aware, see :ref:`here <whatsnew_0190.enhancements.rolling_ts>`
- :func:`read_csv` now supports parsing ``Categorical`` data, see :ref:`here <whatsnew_0190.enhancements.read_csv_categorical>`
- A function :func:`union_categorical` has been added for combining categoricals, see :ref:`here <whatsnew_0190.enhancements.union_categoricals>`
-- pandas development api, see :ref:`here <whatsnew_0190.dev_api>`
- ``PeriodIndex`` now has its own ``period`` dtype, and changed to be more consistent with other ``Index`` classes. See :ref:`here <whatsnew_0190.api.period>`
-- Sparse data structures now gained enhanced support of ``int`` and ``bool`` dtypes, see :ref:`here <whatsnew_0190.sparse>`
+- Sparse data structures gained enhanced support of ``int`` and ``bool`` dtypes, see :ref:`here <whatsnew_0190.sparse>`
+- Comparison operations with ``Series`` no longer ignores the index, see :ref:`here <whatsnew_0190.api.series_ops>` for an overview of the API changes.
+- Introduction of a pandas development API for utility functions, see :ref:`here <whatsnew_0190.dev_api>`.
+- Deprecation of ``Panel4D`` and ``PanelND``. We recommend to represent these types of n-dimensional data with the `xarray package <http://xarray.pydata.org/en/stable/>`__.
+- Removal of the previously deprecated modules ``pandas.io.data``, ``pandas.io.wb``, ``pandas.tools.rplot``.
+
+.. warning::
+
+ pandas >= 0.19.0 will no longer silence numpy ufunc warnings upon import, see :ref:`here <whatsnew_0190.errstate>`.
.. contents:: What's new in v0.19.0
:local:
@@ -35,7 +38,7 @@ New features
pandas development API
^^^^^^^^^^^^^^^^^^^^^^
-As part of making pandas APi more uniform and accessible in the future, we have created a standard
+As part of making pandas API more uniform and accessible in the future, we have created a standard
sub-package of pandas, ``pandas.api`` to hold public API's. We are starting by exposing type
introspection functions in ``pandas.api.types``. More sub-packages and officially sanctioned API's
will be published in future versions of pandas (:issue:`13147`, :issue:`13634`)
@@ -215,12 +218,12 @@ default of the index) in a DataFrame.
:ref:`Duplicate column names <io.dupe_names>` are now supported in :func:`read_csv` whether
they are in the file or passed in as the ``names`` parameter (:issue:`7160`, :issue:`9424`)
-.. ipython :: python
+.. ipython:: python
data = '0,1,2\n3,4,5'
names = ['a', 'b', 'a']
-Previous Behavior:
+**Previous behavior**:
.. code-block:: ipython
@@ -230,25 +233,25 @@ Previous Behavior:
0 2 1 2
1 5 4 5
-The first ``a`` column contains the same data as the second ``a`` column, when it should have
+The first ``a`` column contained the same data as the second ``a`` column, when it should have
contained the values ``[0, 3]``.
-New Behavior:
+**New behavior**:
-.. ipython :: python
+.. ipython:: python
- In [2]: pd.read_csv(StringIO(data), names=names)
+ pd.read_csv(StringIO(data), names=names)
.. _whatsnew_0190.enhancements.read_csv_categorical:
-:func:`read_csv` supports parsing ``Categorical`` directly
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+``read_csv`` supports parsing ``Categorical`` directly
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The :func:`read_csv` function now supports parsing a ``Categorical`` column when
specified as a dtype (:issue:`10153`). Depending on the structure of the data,
this can result in a faster parse time and lower memory usage compared to
-converting to ``Categorical`` after parsing. See the io :ref:`docs here <io.categorical>`
+converting to ``Categorical`` after parsing. See the io :ref:`docs here <io.categorical>`.
.. ipython:: python
@@ -296,7 +299,7 @@ Categorical Concatenation
- ``concat`` and ``append`` now can concat ``category`` dtypes with different
``categories`` as ``object`` dtype (:issue:`13524`)
-Previous Behavior:
+**Previous behavior**:
.. code-block:: ipython
@@ -305,7 +308,7 @@ Previous Behavior:
In [3]: pd.concat([s1, s2])
ValueError: incompatible categories in categorical concat
-New Behavior:
+**New behavior**:
.. ipython:: python
@@ -407,12 +410,12 @@ After upgrading pandas, you may see *new* ``RuntimeWarnings`` being issued from
.. _whatsnew_0190.get_dummies_dtypes:
-get_dummies dtypes
-^^^^^^^^^^^^^^^^^^
+``get_dummies`` now returns integer dtypes
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
The ``pd.get_dummies`` function now returns dummy-encoded columns as small integers, rather than floats (:issue:`8725`). This should provide an improved memory footprint.
-Previous Behavior:
+**Previous behavior**:
.. code-block:: ipython
@@ -424,22 +427,19 @@ Previous Behavior:
c float64
dtype: object
-New Behavior:
+**New behavior**:
.. ipython:: python
pd.get_dummies(['a', 'b', 'a', 'c']).dtypes
-.. _whatsnew_0190.enhancements.other:
-
-Other enhancements
-^^^^^^^^^^^^^^^^^^
+.. _whatsnew_0190.enhancements.to_numeric_downcast:
-- The ``.get_credentials()`` method of ``GbqConnector`` can now first try to fetch `the application default credentials <https://developers.google.com/identity/protocols/application-default-credentials>`__. See the :ref:`docs <io.bigquery_authentication>` for more details (:issue:`13577`).
+Downcast values to smallest possible dtype in ``to_numeric``
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-- The ``.tz_localize()`` method of ``DatetimeIndex`` and ``Timestamp`` has gained the ``errors`` keyword, so you can potentially coerce nonexistent timestamps to ``NaT``. The default behavior remains to raising a ``NonExistentTimeError`` (:issue:`13057`)
-- ``pd.to_numeric()`` now accepts a ``downcast`` parameter, which will downcast the data if possible to smallest specified numerical dtype (:issue:`13352`)
+``pd.to_numeric()`` now accepts a ``downcast`` parameter, which will downcast the data if possible to smallest specified numerical dtype (:issue:`13352`)
.. ipython:: python
@@ -447,6 +447,16 @@ Other enhancements
pd.to_numeric(s, downcast='unsigned')
pd.to_numeric(s, downcast='integer')
+
+.. _whatsnew_0190.enhancements.other:
+
+Other enhancements
+^^^^^^^^^^^^^^^^^^
+
+- The ``.get_credentials()`` method of ``GbqConnector`` can now first try to fetch `the application default credentials <https://developers.google.com/identity/protocols/application-default-credentials>`__. See the :ref:`docs <io.bigquery_authentication>` for more details (:issue:`13577`).
+
+- The ``.tz_localize()`` method of ``DatetimeIndex`` and ``Timestamp`` has gained the ``errors`` keyword, so you can potentially coerce nonexistent timestamps to ``NaT``. The default behavior remains to raising a ``NonExistentTimeError`` (:issue:`13057`)
+
- ``.to_hdf/read_hdf()`` now accept path objects (e.g. ``pathlib.Path``, ``py.path.local``) for the file path (:issue:`11773`)
- ``Timestamp`` can now accept positional and keyword parameters similar to :func:`datetime.datetime` (:issue:`10758`, :issue:`11630`)
@@ -471,13 +481,10 @@ Other enhancements
df.resample('M', on='date').sum()
df.resample('M', level='d').sum()
-- The ``pd.read_csv()`` with ``engine='python'`` has gained support for the ``decimal`` option (:issue:`12933`)
-- The ``pd.read_csv()`` with ``engine='python'`` has gained support for the ``na_filter`` option (:issue:`13321`)
-- The ``pd.read_csv()`` with ``engine='python'`` has gained support for the ``memory_map`` option (:issue:`13381`)
+- The ``pd.read_csv()`` with ``engine='python'`` has gained support for the
+ ``decimal`` (:issue:`12933`), ``na_filter`` (:issue:`13321`) and the ``memory_map`` option (:issue:`13381`).
- Consistent with the Python API, ``pd.read_csv()`` will now interpret ``+inf`` as positive infinity (:issue:`13274`)
-
- The ``pd.read_html()`` has gained support for the ``na_values``, ``converters``, ``keep_default_na`` options (:issue:`13461`)
-
- ``Categorical.astype()`` now accepts an optional boolean argument ``copy``, effective when dtype is categorical (:issue:`13209`)
- ``DataFrame`` has gained the ``.asof()`` method to return the last non-NaN values according to the selected subset (:issue:`13358`)
- The ``DataFrame`` constructor will now respect key ordering if a list of ``OrderedDict`` objects are passed in (:issue:`13304`)
@@ -504,43 +511,14 @@ Other enhancements
- :meth:`~DataFrame.to_html` now has a ``border`` argument to control the value in the opening ``<table>`` tag. The default is the value of the ``html.border`` option, which defaults to 1. This also affects the notebook HTML repr, but since Jupyter's CSS includes a border-width attribute, the visual effect is the same. (:issue:`11563`).
- Raise ``ImportError`` in the sql functions when ``sqlalchemy`` is not installed and a connection string is used (:issue:`11920`).
- Compatibility with matplotlib 2.0. Older versions of pandas should also work with matplotlib 2.0 (:issue:`13333`)
-
-.. _whatsnew_0190.api:
-
-
-API changes
-~~~~~~~~~~~
-
-
-- ``Timestamp.to_pydatetime`` will issue a ``UserWarning`` when ``warn=True``, and the instance has a non-zero number of nanoseconds, previously this would print a message to stdout. (:issue:`14101`)
-- Non-convertible dates in an excel date column will be returned without conversion and the column will be ``object`` dtype, rather than raising an exception (:issue:`10001`)
-- ``Series.unique()`` with datetime and timezone now returns return array of ``Timestamp`` with timezone (:issue:`13565`)
- ``Timestamp``, ``Period``, ``DatetimeIndex``, ``PeriodIndex`` and ``.dt`` accessor have gained a ``.is_leap_year`` property to check whether the date belongs to a leap year. (:issue:`13727`)
-- ``pd.Timedelta(None)`` is now accepted and will return ``NaT``, mirroring ``pd.Timestamp`` (:issue:`13687`)
-- ``Panel.to_sparse()`` will raise a ``NotImplementedError`` exception when called (:issue:`13778`)
-- ``Index.reshape()`` will raise a ``NotImplementedError`` exception when called (:issue:`12882`)
-- ``.filter()`` enforces mutual exclusion of the keyword arguments. (:issue:`12399`)
-- ``eval``'s upcasting rules for ``float32`` types have been updated to be more consistent with NumPy's rules. New behavior will not upcast to ``float64`` if you multiply a pandas ``float32`` object by a scalar float64. (:issue:`12388`)
-- An ``UnsupportedFunctionCall`` error is now raised if NumPy ufuncs like ``np.mean`` are called on groupby or resample objects (:issue:`12811`)
-- ``__setitem__`` will no longer apply a callable rhs as a function instead of storing it. Call ``where`` directly to get the previous behavior. (:issue:`13299`)
-- Calls to ``.sample()`` will respect the random seed set via ``numpy.random.seed(n)`` (:issue:`13161`)
-- ``Styler.apply`` is now more strict about the outputs your function must return. For ``axis=0`` or ``axis=1``, the output shape must be identical. For ``axis=None``, the output must be a DataFrame with identical columns and index labels. (:issue:`13222`)
-- ``Float64Index.astype(int)`` will now raise ``ValueError`` if ``Float64Index`` contains ``NaN`` values (:issue:`13149`)
-- ``TimedeltaIndex.astype(int)`` and ``DatetimeIndex.astype(int)`` will now return ``Int64Index`` instead of ``np.array`` (:issue:`13209`)
-- Passing ``Period`` with multiple frequencies to normal ``Index`` now returns ``Index`` with ``object`` dtype (:issue:`13664`)
-- ``PeridIndex`` can now accept ``list`` and ``array`` which contains ``pd.NaT`` (:issue:`13430`)
-- ``PeriodIndex.fillna`` with ``Period`` has different freq now coerces to ``object`` dtype (:issue:`13664`)
-- Faceted boxplots from ``DataFrame.boxplot(by=col)`` now return a ``Series`` when ``return_type`` is not None. Previously these returned an ``OrderedDict``. Note that when ``return_type=None``, the default, these still return a 2-D NumPy array. (:issue:`12216`, :issue:`7096`)
- ``astype()`` will now accept a dict of column name to data types mapping as the ``dtype`` argument. (:issue:`12086`)
- The ``pd.read_json`` and ``DataFrame.to_json`` has gained support for reading and writing json lines with ``lines`` option see :ref:`Line delimited json <io.jsonl>` (:issue:`9180`)
-- ``pd.read_hdf`` will now raise a ``ValueError`` instead of ``KeyError``, if a mode other than ``r``, ``r+`` and ``a`` is supplied. (:issue:`13623`)
-- ``pd.read_csv()``, ``pd.read_table()``, and ``pd.read_hdf()`` raise the builtin ``FileNotFoundError`` exception for Python 3.x when called on a nonexistent file; this is back-ported as ``IOError`` in Python 2.x (:issue:`14086`)
-- More informative exceptions are passed through the csv parser. The exception type would now be the original exception type instead of ``CParserError``. (:issue:`13652`)
-- ``pd.read_csv()`` in the C engine will now issue a ``ParserWarning`` or raise a ``ValueError`` when ``sep`` encoded is more than one character long (:issue:`14065`)
-- ``DataFrame.values`` will now return ``float64`` with a ``DataFrame`` of mixed ``int64`` and ``uint64`` dtypes, conforming to ``np.find_common_type`` (:issue:`10364`, :issue:`13917`)
+.. _whatsnew_0190.api:
-.. _whatsnew_0190.api.tolist:
+API changes
+~~~~~~~~~~~
``Series.tolist()`` will now return Python types
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -551,9 +529,8 @@ API changes
.. ipython:: python
s = pd.Series([1,2,3])
- type(s.tolist()[0])
-Previous Behavior:
+**Previous behavior**:
.. code-block:: ipython
@@ -561,7 +538,7 @@ Previous Behavior:
Out[7]:
<class 'numpy.int64'>
-New Behavior:
+**New behavior**:
.. ipython:: python
@@ -572,11 +549,11 @@ New Behavior:
``Series`` operators for different indexes
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-Following ``Series`` operators has been changed to make all operators consistent,
+Following ``Series`` operators have been changed to make all operators consistent,
including ``DataFrame`` (:issue:`1134`, :issue:`4581`, :issue:`13538`)
- ``Series`` comparison operators now raise ``ValueError`` when ``index`` are different.
-- ``Series`` logical operators align both ``index``.
+- ``Series`` logical operators align both ``index`` of left and right hand side.
.. warning::
Until 0.18.1, comparing ``Series`` with the same length, would succeed even if
@@ -607,7 +584,7 @@ Comparison operators raise ``ValueError`` when ``.index`` are different.
Previous Behavior (``Series``):
-``Series`` compares values ignoring ``.index`` as long as both lengthes are the same.
+``Series`` compared values ignoring the ``.index`` as long as both had the same length:
.. code-block:: ipython
@@ -618,7 +595,7 @@ Previous Behavior (``Series``):
C False
dtype: bool
-New Behavior (``Series``):
+**New behavior** (``Series``):
.. code-block:: ipython
@@ -627,13 +604,18 @@ New Behavior (``Series``):
ValueError: Can only compare identically-labeled Series objects
.. note::
+
To achieve the same result as previous versions (compare values based on locations ignoring ``.index``), compare both ``.values``.
.. ipython:: python
s1.values == s2.values
- If you want to compare ``Series`` aligning its ``.index``, see flexible comparison methods section below.
+ If you want to compare ``Series`` aligning its ``.index``, see flexible comparison methods section below:
+
+ .. ipython:: python
+
+ s1.eq(s2)
Current Behavior (``DataFrame``, no change):
@@ -646,9 +628,9 @@ Current Behavior (``DataFrame``, no change):
Logical operators
"""""""""""""""""
-Logical operators align both ``.index``.
+Logical operators align both ``.index`` of left and right hand side.
-Previous behavior (``Series``), only left hand side ``index`` is kept:
+Previous behavior (``Series``), only left hand side ``index`` was kept:
.. code-block:: ipython
@@ -661,7 +643,7 @@ Previous behavior (``Series``), only left hand side ``index`` is kept:
C False
dtype: bool
-New Behavior (``Series``):
+**New behavior** (``Series``):
.. ipython:: python
@@ -673,11 +655,11 @@ New Behavior (``Series``):
``Series`` logical operators fill a ``NaN`` result with ``False``.
.. note::
- To achieve the same result as previous versions (compare values based on locations ignoring ``.index``), compare both ``.values``.
+ To achieve the same result as previous versions (compare values based on only left hand side index), you can use ``reindex_like``:
.. ipython:: python
- s1.values & s2.values
+ s1 & s2.reindex_like(s1)
Current Behavior (``DataFrame``, no change):
@@ -714,7 +696,7 @@ A ``Series`` will now correctly promote its dtype for assignment with incompat v
s = pd.Series()
-Previous Behavior:
+**Previous behavior**:
.. code-block:: ipython
@@ -723,7 +705,7 @@ Previous Behavior:
In [3]: s["b"] = 3.0
TypeError: invalid type promotion
-New Behavior:
+**New behavior**:
.. ipython:: python
@@ -739,7 +721,7 @@ New Behavior:
Previously if ``.to_datetime()`` encountered mixed integers/floats and strings, but no datetimes with ``errors='coerce'`` it would convert all to ``NaT``.
-Previous Behavior:
+**Previous behavior**:
.. code-block:: ipython
@@ -774,7 +756,7 @@ Merging will now preserve the dtype of the join keys (:issue:`8596`)
df2 = pd.DataFrame({'key': [1, 2], 'v1': [20, 30]})
df2
-Previous Behavior:
+**Previous behavior**:
.. code-block:: ipython
@@ -791,7 +773,7 @@ Previous Behavior:
v1 float64
dtype: object
-New Behavior:
+**New behavior**:
We are able to preserve the join keys
@@ -820,7 +802,7 @@ Percentile identifiers in the index of a ``.describe()`` output will now be roun
s = pd.Series([0, 1, 2, 3, 4])
df = pd.DataFrame([0, 1, 2, 3, 4])
-Previous Behavior:
+**Previous behavior**:
The percentiles were rounded to at most one decimal place, which could raise ``ValueError`` for a data frame if the percentiles were duplicated.
@@ -847,7 +829,7 @@ The percentiles were rounded to at most one decimal place, which could raise ``V
...
ValueError: cannot reindex from a duplicate axis
-New Behavior:
+**New behavior**:
.. ipython:: python
@@ -868,10 +850,10 @@ Furthermore:
""""""""""""""""""""""""""""""""""""""""
``PeriodIndex`` now has its own ``period`` dtype. The ``period`` dtype is a
-pandas extension dtype like ``category`` or :ref:`timezone aware dtype <timeseries.timezone_series>` (``datetime64[ns, tz]``). (:issue:`13941`).
+pandas extension dtype like ``category`` or the :ref:`timezone aware dtype <timeseries.timezone_series>` (``datetime64[ns, tz]``). (:issue:`13941`).
As a consequence of this change, ``PeriodIndex`` no longer has an integer dtype:
-Previous Behavior:
+**Previous behavior**:
.. code-block:: ipython
@@ -886,7 +868,7 @@ Previous Behavior:
In [4]: pi.dtype
Out[4]: dtype('int64')
-New Behavior:
+**New behavior**:
.. ipython:: python
@@ -904,14 +886,14 @@ New Behavior:
Previously, ``Period`` has its own ``Period('NaT')`` representation different from ``pd.NaT``. Now ``Period('NaT')`` has been changed to return ``pd.NaT``. (:issue:`12759`, :issue:`13582`)
-Previous Behavior:
+**Previous behavior**:
.. code-block:: ipython
In [5]: pd.Period('NaT', freq='D')
Out[5]: Period('NaT', 'D')
-New Behavior:
+**New behavior**:
These result in ``pd.NaT`` without providing ``freq`` option.
@@ -921,9 +903,9 @@ These result in ``pd.NaT`` without providing ``freq`` option.
pd.Period(None)
-To be compat with ``Period`` addition and subtraction, ``pd.NaT`` now supports addition and subtraction with ``int``. Previously it raises ``ValueError``.
+To be compatible with ``Period`` addition and subtraction, ``pd.NaT`` now supports addition and subtraction with ``int``. Previously it raised ``ValueError``.
-Previous Behavior:
+**Previous behavior**:
.. code-block:: ipython
@@ -931,7 +913,7 @@ Previous Behavior:
...
ValueError: Cannot add integral value to Timestamp without freq.
-New Behavior:
+**New behavior**:
.. ipython:: python
@@ -941,10 +923,10 @@ New Behavior:
``PeriodIndex.values`` now returns array of ``Period`` object
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""""""
-``.values`` is changed to return array of ``Period`` object, rather than array
-of ``int64`` (:issue:`13988`)
+``.values`` is changed to return an array of ``Period`` objects, rather than an array
+of integers (:issue:`13988`).
-Previous Behavior:
+**Previous behavior**:
.. code-block:: ipython
@@ -952,7 +934,7 @@ Previous Behavior:
In [7]: pi.values
array([492, 493])
-New Behavior:
+**New behavior**:
.. ipython:: python
@@ -982,7 +964,7 @@ Previous behavior:
FutureWarning: using '+' to provide set union with Indexes is deprecated, use '|' or .union()
Out[1]: Index(['a', 'b', 'c'], dtype='object')
-The same operation will now perform element-wise addition:
+**New behavior**: the same operation will now perform element-wise addition:
.. ipython:: python
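As a quick sanity check of the element-wise behaviour described above (a sketch run against a post-0.19 pandas, not part of this patch):

```python
import pandas as pd

# '+' on object-dtype Indexes now concatenates element-wise
# instead of performing a set union
result = pd.Index(['a', 'b', 'c']) + pd.Index(['x', 'y', 'z'])
print(result.tolist())  # ['ax', 'by', 'cz']
```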
@@ -1008,7 +990,7 @@ Previous behavior:
FutureWarning: using '-' to provide set differences with datetimelike Indexes is deprecated, use .difference()
Out[1]: DatetimeIndex(['2016-01-01'], dtype='datetime64[ns]', freq=None)
-New behavior:
+**New behavior**:
.. ipython:: python
@@ -1027,7 +1009,7 @@ New behavior:
idx1 = pd.Index([1, 2, 3, np.nan])
idx2 = pd.Index([0, 1, np.nan])
-Previous Behavior:
+**Previous behavior**:
.. code-block:: ipython
@@ -1037,7 +1019,7 @@ Previous Behavior:
In [4]: idx1.symmetric_difference(idx2)
Out[4]: Float64Index([0.0, nan, 2.0, 3.0], dtype='float64')
-New Behavior:
+**New behavior**:
.. ipython:: python
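To illustrate the fixed ``symmetric_difference`` behaviour with the indexes defined above (an illustrative sketch against a current pandas, not part of the patch — ``NaN`` on both sides is now treated as equal and so drops out of the result):

```python
import numpy as np
import pandas as pd

idx1 = pd.Index([1, 2, 3, np.nan])
idx2 = pd.Index([0, 1, np.nan])

# 1 and NaN are common to both indexes, so only 0, 2 and 3 remain
result = idx1.symmetric_difference(idx2)
```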
@@ -1050,12 +1032,11 @@ New Behavior:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
``Index.unique()`` now returns unique values as an
-``Index`` of the appropriate ``dtype``. (:issue:`13395`)
-
+``Index`` of the appropriate ``dtype`` (:issue:`13395`).
Previously, most ``Index`` classes returned ``np.ndarray``, and ``DatetimeIndex``,
``TimedeltaIndex`` and ``PeriodIndex`` returned ``Index`` to keep metadata like timezone.
-Previous Behavior:
+**Previous behavior**:
.. code-block:: ipython
@@ -1063,11 +1044,12 @@ Previous Behavior:
Out[1]: array([1, 2, 3])
In [2]: pd.DatetimeIndex(['2011-01-01', '2011-01-02', '2011-01-03'], tz='Asia/Tokyo').unique()
- Out[2]: DatetimeIndex(['2011-01-01 00:00:00+09:00', '2011-01-02 00:00:00+09:00',
- '2011-01-03 00:00:00+09:00'],
- dtype='datetime64[ns, Asia/Tokyo]', freq=None)
+ Out[2]:
+ DatetimeIndex(['2011-01-01 00:00:00+09:00', '2011-01-02 00:00:00+09:00',
+ '2011-01-03 00:00:00+09:00'],
+ dtype='datetime64[ns, Asia/Tokyo]', freq=None)
-New Behavior:
+**New behavior**:
.. ipython:: python
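A minimal sketch of the new ``Index.unique()`` return type (run against a current pandas; not part of the patch):

```python
import pandas as pd

# unique() on an Index now returns an Index of the appropriate
# dtype, rather than a bare np.ndarray
uniq = pd.Index([1, 1, 2, 3]).unique()
```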
@@ -1076,8 +1058,8 @@ New Behavior:
.. _whatsnew_0190.api.multiindex:
-``MultiIndex`` constructors preserve categorical dtypes
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+``MultiIndex`` constructors, ``groupby`` and ``set_index`` preserve categorical dtypes
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
``MultiIndex.from_arrays`` and ``MultiIndex.from_product`` will now preserve categorical dtype
in ``MultiIndex`` levels. (:issue:`13743`, :issue:`13854`)
@@ -1089,7 +1071,7 @@ in ``MultiIndex`` levels. (:issue:`13743`, :issue:`13854`)
midx = pd.MultiIndex.from_arrays([cat, lvl1])
midx
-Previous Behavior:
+**Previous behavior**:
.. code-block:: ipython
@@ -1099,7 +1081,7 @@ Previous Behavior:
In [5]: midx.get_level_values[0]
Out[5]: Index(['a', 'b'], dtype='object')
-New Behavior:
+**New behavior**: the single level is now a ``CategoricalIndex``:
.. ipython:: python
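A small sketch of the preserved categorical dtype in ``MultiIndex`` levels (illustrative only, run against a current pandas):

```python
import pandas as pd

cat = pd.Categorical(['a', 'b'], categories=list('bac'))
midx = pd.MultiIndex.from_arrays([cat, [1, 2]])

# the categorical level round-trips as a CategoricalIndex,
# keeping the original (non-sorted) category order
lvl = midx.get_level_values(0)
```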
@@ -1115,7 +1097,7 @@ As a consequence, ``groupby`` and ``set_index`` also preserve categorical dtypes
df_grouped = df.groupby(by=['A', 'C']).first()
df_set_idx = df.set_index(['A', 'C'])
-Previous Behavior:
+**Previous behavior**:
.. code-block:: ipython
@@ -1137,7 +1119,7 @@ Previous Behavior:
B int64
dtype: object
-New Behavior:
+**New behavior**:
.. ipython:: python
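The ``set_index`` side of this change can be sketched as follows (an illustrative example against a current pandas, not taken from the patch):

```python
import pandas as pd

df = pd.DataFrame({'A': pd.Categorical(list('bac')), 'B': [1, 2, 3]})

# set_index now keeps the categorical dtype in the resulting index
idx = df.set_index('A').index
```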
@@ -1152,8 +1134,8 @@ New Behavior:
``read_csv`` will progressively enumerate chunks
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-When :func:`read_csv` is called with ``chunksize='n'`` and without specifying an index,
-each chunk used to have an independently generated index from `0`` to ``n-1``.
+When :func:`read_csv` is called with ``chunksize=n`` and without specifying an index,
+each chunk used to have an independently generated index from ``0`` to ``n-1``.
They are now instead given a progressive index, starting from ``0`` for the first chunk,
from ``n`` for the second, and so on, so that, when concatenated, they are identical to
the result of calling :func:`read_csv` without the ``chunksize=`` argument.
@@ -1163,7 +1145,7 @@ the result of calling :func:`read_csv` without the ``chunksize=`` argument.
data = 'A,B\n0,1\n2,3\n4,5\n6,7'
-Previous Behavior:
+**Previous behavior**:
.. code-block:: ipython
@@ -1175,7 +1157,7 @@ Previous Behavior:
0 4 5
1 6 7
-New Behavior:
+**New behavior**:
.. ipython :: python
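The progressive chunk indexing can be demonstrated with the ``data`` string above (a sketch run against a current pandas, not part of the patch):

```python
import io
import pandas as pd

data = 'A,B\n0,1\n2,3\n4,5\n6,7'
chunks = list(pd.read_csv(io.StringIO(data), chunksize=2))

# the second chunk continues where the first left off ...
second_index = chunks[1].index.tolist()
# ... so concatenating the chunks reproduces the un-chunked index
full_index = pd.concat(chunks).index.tolist()
```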
@@ -1188,13 +1170,12 @@ Sparse Changes
These changes allow pandas to handle sparse data with more dtypes, and make for a smoother experience with data handling.
-
``int64`` and ``bool`` support enhancements
"""""""""""""""""""""""""""""""""""""""""""
-Sparse data structures now gained enhanced support of ``int64`` and ``bool`` ``dtype`` (:issue:`667`, :issue:`13849`)
+Sparse data structures have gained enhanced support for ``int64`` and ``bool`` ``dtype`` (:issue:`667`, :issue:`13849`).
-Previously, sparse data were ``float64`` dtype by default, even if all inputs were ``int`` or ``bool`` dtype. You had to specify ``dtype`` explicitly to create sparse data with ``int64`` dtype. Also, ``fill_value`` had to be specified explicitly becuase it's default was ``np.nan`` which doesn't appear in ``int64`` or ``bool`` data.
+Previously, sparse data were ``float64`` dtype by default, even if all inputs were of ``int`` or ``bool`` dtype. You had to specify ``dtype`` explicitly to create sparse data with ``int64`` dtype. Also, ``fill_value`` had to be specified explicitly because the default was ``np.nan`` which doesn't appear in ``int64`` or ``bool`` data.
.. code-block:: ipython
@@ -1221,9 +1202,9 @@ Previously, sparse data were ``float64`` dtype by default, even if all inputs we
IntIndex
Indices: array([0, 1], dtype=int32)
-As of v0.19.0, sparse data keeps the input dtype, and assign more appropriate ``fill_value`` default (``0`` for ``int64`` dtype, ``False`` for ``bool`` dtype).
+As of v0.19.0, sparse data keeps the input dtype, and uses more appropriate ``fill_value`` defaults (``0`` for ``int64`` dtype, ``False`` for ``bool`` dtype).
-.. ipython :: python
+.. ipython:: python
pd.SparseArray([1, 2, 0, 0], dtype=np.int64)
pd.SparseArray([True, False, False, False])
@@ -1235,29 +1216,29 @@ Operators now preserve dtypes
- Sparse data structures can now preserve ``dtype`` after arithmetic ops (:issue:`13848`)
-.. ipython:: python
+ .. ipython:: python
- s = pd.SparseSeries([0, 2, 0, 1], fill_value=0, dtype=np.int64)
- s.dtype
+ s = pd.SparseSeries([0, 2, 0, 1], fill_value=0, dtype=np.int64)
+ s.dtype
- s + 1
+ s + 1
- Sparse data structures now support ``astype`` to convert the internal ``dtype`` (:issue:`13900`)
-.. ipython:: python
+ .. ipython:: python
- s = pd.SparseSeries([1., 0., 2., 0.], fill_value=0)
- s
- s.astype(np.int64)
+ s = pd.SparseSeries([1., 0., 2., 0.], fill_value=0)
+ s
+ s.astype(np.int64)
-``astype`` fails if data contains values which cannot be converted to specified ``dtype``.
-Note that the limitation is applied to ``fill_value`` which default is ``np.nan``.
+ ``astype`` fails if the data contains values which cannot be converted to the specified ``dtype``.
+ Note that the limitation is applied to ``fill_value``, whose default is ``np.nan``.
-.. code-block:: ipython
+ .. code-block:: ipython
- In [7]: pd.SparseSeries([1., np.nan, 2., np.nan], fill_value=np.nan).astype(np.int64)
- Out[7]:
- ValueError: unable to coerce current fill_value nan to int64 dtype
+ In [7]: pd.SparseSeries([1., np.nan, 2., np.nan], fill_value=np.nan).astype(np.int64)
+ Out[7]:
+ ValueError: unable to coerce current fill_value nan to int64 dtype
Other sparse fixes
""""""""""""""""""
@@ -1301,7 +1282,7 @@ These types are the same on many platform, but for 64 bit python on Windows,
``np.int_`` is 32 bits, and ``np.intp`` is 64 bits. Changing this behavior improves performance for many
operations on that platform.
-Previous Behavior:
+**Previous behavior**:
.. code-block:: ipython
@@ -1310,7 +1291,7 @@ Previous Behavior:
In [2]: i.get_indexer(['b', 'b', 'c']).dtype
Out[2]: dtype('int32')
-New Behavior:
+**New behavior**:
.. code-block:: ipython
@@ -1319,6 +1300,35 @@ New Behavior:
In [2]: i.get_indexer(['b', 'b', 'c']).dtype
Out[2]: dtype('int64')
+
+.. _whatsnew_0190.api.other:
+
+Other API Changes
+^^^^^^^^^^^^^^^^^
+
+- ``Timestamp.to_pydatetime`` will issue a ``UserWarning`` when ``warn=True`` and the instance has a non-zero number of nanoseconds; previously this would print a message to stdout (:issue:`14101`)
+- Non-convertible dates in an excel date column will be returned without conversion and the column will be ``object`` dtype, rather than raising an exception (:issue:`10001`)
+- ``Series.unique()`` with datetime and timezone now returns an array of ``Timestamp`` with timezone (:issue:`13565`)
+- ``pd.Timedelta(None)`` is now accepted and will return ``NaT``, mirroring ``pd.Timestamp`` (:issue:`13687`)
+- ``Panel.to_sparse()`` will raise a ``NotImplementedError`` exception when called (:issue:`13778`)
+- ``Index.reshape()`` will raise a ``NotImplementedError`` exception when called (:issue:`12882`)
+- ``.filter()`` enforces mutual exclusion of the keyword arguments. (:issue:`12399`)
+- ``eval``'s upcasting rules for ``float32`` types have been updated to be more consistent with NumPy's rules. New behavior will not upcast to ``float64`` if you multiply a pandas ``float32`` object by a scalar float64. (:issue:`12388`)
+- An ``UnsupportedFunctionCall`` error is now raised if NumPy ufuncs like ``np.mean`` are called on groupby or resample objects (:issue:`12811`)
+- ``__setitem__`` will no longer apply a callable rhs as a function instead of storing it. Call ``where`` directly to get the previous behavior. (:issue:`13299`)
+- Calls to ``.sample()`` will respect the random seed set via ``numpy.random.seed(n)`` (:issue:`13161`)
+- ``Styler.apply`` is now more strict about the outputs your function must return. For ``axis=0`` or ``axis=1``, the output shape must be identical. For ``axis=None``, the output must be a DataFrame with identical columns and index labels. (:issue:`13222`)
+- ``Float64Index.astype(int)`` will now raise ``ValueError`` if ``Float64Index`` contains ``NaN`` values (:issue:`13149`)
+- ``TimedeltaIndex.astype(int)`` and ``DatetimeIndex.astype(int)`` will now return ``Int64Index`` instead of ``np.array`` (:issue:`13209`)
+- Passing ``Period`` objects with multiple frequencies to a regular ``Index`` now returns an ``Index`` with ``object`` dtype (:issue:`13664`)
+- ``PeriodIndex.fillna`` with a ``Period`` of a different freq now coerces to ``object`` dtype (:issue:`13664`)
+- Faceted boxplots from ``DataFrame.boxplot(by=col)`` now return a ``Series`` when ``return_type`` is not None. Previously these returned an ``OrderedDict``. Note that when ``return_type=None``, the default, these still return a 2-D NumPy array. (:issue:`12216`, :issue:`7096`)
+- ``pd.read_hdf`` will now raise a ``ValueError`` instead of ``KeyError``, if a mode other than ``r``, ``r+`` and ``a`` is supplied. (:issue:`13623`)
+- ``pd.read_csv()``, ``pd.read_table()``, and ``pd.read_hdf()`` raise the builtin ``FileNotFoundError`` exception for Python 3.x when called on a nonexistent file; this is back-ported as ``IOError`` in Python 2.x (:issue:`14086`)
+- More informative exceptions are passed through the csv parser. The exception type would now be the original exception type instead of ``CParserError``. (:issue:`13652`)
+- ``pd.read_csv()`` in the C engine will now issue a ``ParserWarning`` or raise a ``ValueError`` when the encoded ``sep`` is more than one character long (:issue:`14065`)
+- ``DataFrame.values`` will now return ``float64`` with a ``DataFrame`` of mixed ``int64`` and ``uint64`` dtypes, conforming to ``np.find_common_type`` (:issue:`10364`, :issue:`13917`)
+
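The last item in the list above (the mixed ``int64``/``uint64`` common type) can be sketched as follows (illustrative only, run against a current pandas):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'a': np.array([1, 2], dtype='int64'),
                   'b': np.array([1, 2], dtype='uint64')})

# no integer type holds both int64 and uint64 without loss,
# so .values falls back to float64
values_dtype = df.values.dtype
```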
.. _whatsnew_0190.deprecations:
Deprecations
@@ -1326,10 +1336,10 @@ Deprecations
- ``Categorical.reshape`` has been deprecated and will be removed in a subsequent release (:issue:`12882`)
- ``Series.reshape`` has been deprecated and will be removed in a subsequent release (:issue:`12882`)
-- ``PeriodIndex.to_datetime`` has been deprecated in favour of ``PeriodIndex.to_timestamp`` (:issue:`8254`)
-- ``Timestamp.to_datetime`` has been deprecated in favour of ``Timestamp.to_pydatetime`` (:issue:`8254`)
+- ``PeriodIndex.to_datetime`` has been deprecated in favor of ``PeriodIndex.to_timestamp`` (:issue:`8254`)
+- ``Timestamp.to_datetime`` has been deprecated in favor of ``Timestamp.to_pydatetime`` (:issue:`8254`)
- ``pandas.core.datetools`` module has been deprecated and will be removed in a subsequent release (:issue:`14094`)
-- ``Index.to_datetime`` and ``DatetimeIndex.to_datetime`` have been deprecated in favour of ``pd.to_datetime`` (:issue:`8254`)
+- ``Index.to_datetime`` and ``DatetimeIndex.to_datetime`` have been deprecated in favor of ``pd.to_datetime`` (:issue:`8254`)
- ``SparseList`` has been deprecated and will be removed in a future version (:issue:`13784`)
- ``DataFrame.to_html()`` and ``DataFrame.to_latex()`` have dropped the ``colSpace`` parameter in favor of ``col_space`` (:issue:`13857`)
- ``DataFrame.to_sql()`` has deprecated the ``flavor`` parameter, as it is superfluous when SQLAlchemy is not installed (:issue:`13611`)
@@ -1350,6 +1360,7 @@ Deprecations
Removal of prior version deprecations/changes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
- The ``SparsePanel`` class has been removed (:issue:`13778`)
- The ``pd.sandbox`` module has been removed in favor of the external library ``pandas-qt`` (:issue:`13670`)
- The ``pandas.io.data`` and ``pandas.io.wb`` modules are removed in favor of
@@ -1359,30 +1370,19 @@ Removal of prior version deprecations/changes
- ``DataFrame.to_csv()`` has dropped the ``engine`` parameter, as was deprecated in 0.17.1 (:issue:`11274`, :issue:`13419`)
- ``DataFrame.to_dict()`` has dropped the ``outtype`` parameter in favor of ``orient`` (:issue:`13627`, :issue:`8486`)
- ``pd.Categorical`` has dropped setting of the ``ordered`` attribute directly in favor of the ``set_ordered`` method (:issue:`13671`)
-- ``pd.Categorical`` has dropped the ``levels`` attribute in favour of ``categories`` (:issue:`8376`)
+- ``pd.Categorical`` has dropped the ``levels`` attribute in favor of ``categories`` (:issue:`8376`)
- ``DataFrame.to_sql()`` has dropped the ``mysql`` option for the ``flavor`` parameter (:issue:`13611`)
-- ``Panel.shift()`` has dropped the ``lags`` parameter in favour of ``periods`` (:issue:`14041`)
-- ``pd.Index`` has dropped the ``diff`` method in favour of ``difference`` (:issue:`13669`)
-
-- ``pd.DataFrame`` has dropped the ``to_wide`` method in favour of ``to_panel`` (:issue:`14039`)
+- ``Panel.shift()`` has dropped the ``lags`` parameter in favor of ``periods`` (:issue:`14041`)
+- ``pd.Index`` has dropped the ``diff`` method in favor of ``difference`` (:issue:`13669`)
+- ``pd.DataFrame`` has dropped the ``to_wide`` method in favor of ``to_panel`` (:issue:`14039`)
- ``Series.to_csv`` has dropped the ``nanRep`` parameter in favor of ``na_rep`` (:issue:`13804`)
- ``Series.xs``, ``DataFrame.xs``, ``Panel.xs``, ``Panel.major_xs``, and ``Panel.minor_xs`` have dropped the ``copy`` parameter (:issue:`13781`)
- ``str.split`` has dropped the ``return_type`` parameter in favor of ``expand`` (:issue:`13701`)
-- Removal of the legacy time rules (offset aliases), deprecated since 0.17.0 (this has been alias since 0.8.0) (:issue:`13590`, :issue:`13868`)
-
- Previous Behavior:
-
- .. code-block:: ipython
-
- In [2]: pd.date_range('2016-07-01', freq='W@MON', periods=3)
- pandas/tseries/frequencies.py:465: FutureWarning: Freq "W@MON" is deprecated, use "W-MON" as alternative.
- Out[2]: DatetimeIndex(['2016-07-04', '2016-07-11', '2016-07-18'], dtype='datetime64[ns]', freq='W-MON')
-
- Now legacy time rules raises ``ValueError``. For the list of currently supported offsets, see :ref:`here <timeseries.offset_aliases>`
-
+- Removal of the legacy time rules (offset aliases), deprecated since 0.17.0 (these have been aliases since 0.8.0) (:issue:`13590`, :issue:`13868`). Legacy time rules now raise ``ValueError``. For the list of currently supported offsets, see :ref:`here <timeseries.offset_aliases>`.
- The default value for the ``return_type`` parameter for ``DataFrame.plot.box`` and ``DataFrame.boxplot`` changed from ``None`` to ``"axes"``. These methods will now return a matplotlib axes by default instead of a dictionary of artists. See :ref:`here <visualization.box.return>` (:issue:`6581`).
- The ``tquery`` and ``uquery`` functions in the ``pandas.io.sql`` module are removed (:issue:`5950`).
+
.. _whatsnew_0190.performance:
Performance Improvements
@@ -1390,8 +1390,7 @@ Performance Improvements
- Improved performance of sparse ``IntIndex.intersect`` (:issue:`13082`)
- Improved performance of sparse arithmetic with ``BlockIndex`` when the number of blocks are large, though recommended to use ``IntIndex`` in such cases (:issue:`13082`)
-- increased performance of ``DataFrame.quantile()`` as it now operates per-block (:issue:`11623`)
-
+- Improved performance of ``DataFrame.quantile()`` as it now operates per-block (:issue:`11623`)
- Improved performance of float64 hash table operations, fixing some very slow indexing and groupby operations in python 3 (:issue:`13166`, :issue:`13334`)
- Improved performance of ``DataFrameGroupBy.transform`` (:issue:`12737`)
- Improved performance of ``Index`` and ``Series`` ``.duplicated`` (:issue:`10235`)
@@ -1402,7 +1401,6 @@ Performance Improvements
- Improved performance of ``factorize`` of datetime with timezone (:issue:`13750`)
-
.. _whatsnew_0190.bug_fixes:
Bug Fixes
@@ -1568,3 +1566,4 @@ Bug Fixes
- Bug in ``eval()`` where the ``resolvers`` argument would not accept a list (:issue:`14095`)
- Bugs in ``stack``, ``get_dummies``, ``make_axis_dummies`` which don't preserve categorical dtypes in (multi)indexes (:issue:`13854`)
+- ``PeriodIndex`` can now accept ``list`` and ``array`` which contain ``pd.NaT`` (:issue:`13430`)
| WIP (it's actually not critical for rc, as we do link to the dev docs (which are updated after rc release) anyway)
| https://api.github.com/repos/pandas-dev/pandas/pulls/14176 | 2016-09-07T10:50:38Z | 2016-09-07T19:15:38Z | 2016-09-07T19:15:38Z | 2016-09-07T19:15:38Z |
Fix trivial typo in comment | diff --git a/pandas/tseries/offsets.py b/pandas/tseries/offsets.py
index f12ba8083f545..051cc8aa4d018 100644
--- a/pandas/tseries/offsets.py
+++ b/pandas/tseries/offsets.py
@@ -814,7 +814,7 @@ def apply(self, other):
if bd != 0:
skip_bd = BusinessDay(n=bd)
- # midnight busienss hour may not on BusinessDay
+ # midnight business hour may not on BusinessDay
if not self.next_bday.onOffset(other):
remain = other - self._prev_opening_time(other)
other = self._next_opening_time(other + skip_bd) + remain
| https://api.github.com/repos/pandas-dev/pandas/pulls/14174 | 2016-09-07T10:28:45Z | 2016-09-07T10:49:24Z | 2016-09-07T10:49:24Z | 2016-09-07T10:49:41Z | |
DOC: cleanup build warnings | diff --git a/doc/source/dsintro.rst b/doc/source/dsintro.rst
index b5ad681426b15..6063e3e8bce45 100644
--- a/doc/source/dsintro.rst
+++ b/doc/source/dsintro.rst
@@ -935,134 +935,20 @@ method:
minor_axis=['a', 'b', 'c', 'd'])
panel.to_frame()
-
-.. _dsintro.panel4d:
-
-Panel4D (Experimental)
-----------------------
-
-.. warning::
-
- In 0.19.0 ``Panel4D`` is deprecated and will be removed in a future version. The recommended way to represent these types of n-dimensional data are with the `xarray package <http://xarray.pydata.org/en/stable/>`__. Pandas provides a :meth:`~Panel4D.to_xarray` method to automate this conversion.
-
-``Panel4D`` is a 4-Dimensional named container very much like a ``Panel``, but
-having 4 named dimensions. It is intended as a test bed for more N-Dimensional named
-containers.
-
- - **labels**: axis 0, each item corresponds to a Panel contained inside
- - **items**: axis 1, each item corresponds to a DataFrame contained inside
- - **major_axis**: axis 2, it is the **index** (rows) of each of the
- DataFrames
- - **minor_axis**: axis 3, it is the **columns** of each of the DataFrames
-
-``Panel4D`` is a sub-class of ``Panel``, so most methods that work on Panels are
-applicable to Panel4D. The following methods are disabled:
-
- - ``join , to_frame , to_excel , to_sparse , groupby``
-
-Construction of Panel4D works in a very similar manner to a ``Panel``
-
-From 4D ndarray with optional axis labels
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-.. ipython:: python
-
- p4d = pd.Panel4D(np.random.randn(2, 2, 5, 4),
- labels=['Label1','Label2'],
- items=['Item1', 'Item2'],
- major_axis=pd.date_range('1/1/2000', periods=5),
- minor_axis=['A', 'B', 'C', 'D'])
- p4d
-
-
-From dict of Panel objects
-~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-.. ipython:: python
-
- data = { 'Label1' : pd.Panel({ 'Item1' : pd.DataFrame(np.random.randn(4, 3)) }),
- 'Label2' : pd.Panel({ 'Item2' : pd.DataFrame(np.random.randn(4, 2)) }) }
- pd.Panel4D(data)
-
-Note that the values in the dict need only be **convertible to Panels**.
-Thus, they can be any of the other valid inputs to Panel as per above.
-
-Slicing
-~~~~~~~
-
-Slicing works in a similar manner to a Panel. ``[]`` slices the first dimension.
-``.ix`` allows you to slice arbitrarily and get back lower dimensional objects
-
-.. ipython:: python
-
- p4d['Label1']
-
-4D -> Panel
-
-.. ipython:: python
-
- p4d.ix[:,:,:,'A']
-
-4D -> DataFrame
-
-.. ipython:: python
-
- p4d.ix[:,:,0,'A']
-
-4D -> Series
-
-.. ipython:: python
-
- p4d.ix[:,0,0,'A']
-
-Transposing
-~~~~~~~~~~~
-
-A Panel4D can be rearranged using its ``transpose`` method (which does not make a
-copy by default unless the data are heterogeneous):
-
-.. ipython:: python
-
- p4d.transpose(3, 2, 1, 0)
-
.. _dsintro.panelnd:
+.. _dsintro.panel4d:
-PanelND (Experimental)
-----------------------
+Panel4D and PanelND (Deprecated)
+--------------------------------
.. warning::
- In 0.19.0 ``PanelND`` is deprecated and will be removed in a future version. The recommended way to represent these types of n-dimensional data are with the `xarray package <http://xarray.pydata.org/en/stable/>`__.
+ In 0.19.0 ``Panel4D`` and ``PanelND`` are deprecated and will be removed in
+ a future version. The recommended way to represent these types of
+ n-dimensional data are with the
+ `xarray package <http://xarray.pydata.org/en/stable/>`__.
+ Pandas provides a :meth:`~Panel4D.to_xarray` method to automate
+ this conversion.
-PanelND is a module with a set of factory functions to enable a user to construct N-dimensional named
-containers like Panel4D, with a custom set of axis labels. Thus a domain-specific container can easily be
-created.
-
-The following creates a Panel5D. A new panel type object must be sliceable into a lower dimensional object.
-Here we slice to a Panel4D.
-
-.. ipython:: python
- :okwarning:
-
- from pandas.core import panelnd
- Panel5D = panelnd.create_nd_panel_factory(
- klass_name = 'Panel5D',
- orders = [ 'cool', 'labels','items','major_axis','minor_axis'],
- slices = { 'labels' : 'labels', 'items' : 'items',
- 'major_axis' : 'major_axis', 'minor_axis' : 'minor_axis' },
- slicer = pd.Panel4D,
- aliases = { 'major' : 'major_axis', 'minor' : 'minor_axis' },
- stat_axis = 2)
-
- p5d = Panel5D(dict(C1 = p4d))
- p5d
-
- # print a slice of our 5D
- p5d.ix['C1',:,:,0:3,:]
-
- # transpose it
- p5d.transpose(1,2,3,4,0)
-
- # look at the shape & dim
- p5d.shape
- p5d.ndim
+See the `docs of a previous version <http://pandas.pydata.org/pandas-docs/version/0.18.1/dsintro.html#panel4d-experimental>`__
+for documentation on these objects.
diff --git a/doc/source/install.rst b/doc/source/install.rst
index f8ee0542ea17e..6295e6f6cbb68 100644
--- a/doc/source/install.rst
+++ b/doc/source/install.rst
@@ -255,6 +255,7 @@ Optional Dependencies
* `matplotlib <http://matplotlib.org/>`__: for plotting
* For Excel I/O:
+
* `xlrd/xlwt <http://www.python-excel.org/>`__: Excel reading (xlrd) and writing (xlwt)
* `openpyxl <http://packages.python.org/openpyxl/>`__: openpyxl version 1.6.1
or higher (but lower than 2.0.0), or version 2.2 or higher, for writing .xlsx files (xlrd >= 0.9.0)
@@ -296,8 +297,8 @@ Optional Dependencies
<html-gotchas>`. It explains issues surrounding the installation and
usage of the above three libraries
* You may need to install an older version of `BeautifulSoup4`_:
- - Versions 4.2.1, 4.1.3 and 4.0.2 have been confirmed for 64 and
- 32-bit Ubuntu/Debian
+ Versions 4.2.1, 4.1.3 and 4.0.2 have been confirmed for 64 and 32-bit
+ Ubuntu/Debian
* Additionally, if you're using `Anaconda`_ you should definitely
read :ref:`the gotchas about HTML parsing libraries <html-gotchas>`
diff --git a/doc/source/sparse.rst b/doc/source/sparse.rst
index b6c5c15bc9081..d3f921f8762cc 100644
--- a/doc/source/sparse.rst
+++ b/doc/source/sparse.rst
@@ -9,7 +9,7 @@
import pandas as pd
import pandas.util.testing as tm
np.set_printoptions(precision=4, suppress=True)
- options.display.max_rows = 15
+ pd.options.display.max_rows = 15
**********************
Sparse data structures
@@ -90,38 +90,10 @@ can be converted back to a regular ndarray by calling ``to_dense``:
SparseList
----------
-.. note:: The ``SparseList`` class has been deprecated and will be removed in a future version.
+The ``SparseList`` class has been deprecated and will be removed in a future version.
+See the `docs of a previous version <http://pandas.pydata.org/pandas-docs/version/0.18.1/sparse.html#sparselist>`__
+for documentation on ``SparseList``.
-``SparseList`` is a list-like data structure for managing a dynamic collection
-of SparseArrays. To create one, simply call the ``SparseList`` constructor with
-a ``fill_value`` (defaulting to ``NaN``):
-
-.. ipython:: python
-
- spl = pd.SparseList()
- spl
-
-The two important methods are ``append`` and ``to_array``. ``append`` can
-accept scalar values or any 1-dimensional sequence:
-
-.. ipython:: python
- :suppress:
-
-.. ipython:: python
-
- spl.append(np.array([1., np.nan, np.nan, 2., 3.]))
- spl.append(5)
- spl.append(sparr)
- spl
-
-As you can see, all of the contents are stored internally as a list of
-memory-efficient ``SparseArray`` objects. Once you've accumulated all of the
-data, you can call ``to_array`` to get a single ``SparseArray`` with all the
-data:
-
-.. ipython:: python
-
- spl.to_array()
SparseIndex objects
-------------------
diff --git a/doc/source/timeseries.rst b/doc/source/timeseries.rst
index 36e492df29983..7ab97c6af3583 100644
--- a/doc/source/timeseries.rst
+++ b/doc/source/timeseries.rst
@@ -1219,7 +1219,7 @@ objects.
ts.shift(1)
The shift method accepts an ``freq`` argument which can accept a
-``DateOffset`` class or other ``timedelta``-like object or also a :ref:`offset alias <timeseries.alias>`:
+``DateOffset`` class or other ``timedelta``-like object or also a :ref:`offset alias <timeseries.offset_aliases>`:
.. ipython:: python
@@ -1494,7 +1494,7 @@ level of ``MultiIndex``, its name or location can be passed to the
.. ipython:: python
- df.resample(level='d').sum()
+ df.resample('M', level='d').sum()
.. _timeseries.periods:
@@ -1630,8 +1630,6 @@ Period Dtypes
``PeriodIndex`` has a custom ``period`` dtype. This is a pandas extension
dtype similar to the :ref:`timezone aware dtype <timeseries.timezone_series>` (``datetime64[ns, tz]``).
-.. _timeseries.timezone_series:
-
The ``period`` dtype holds the ``freq`` attribute and is represented with
``period[freq]`` like ``period[D]`` or ``period[M]``, using :ref:`frequency strings <timeseries.offset_aliases>`.
diff --git a/doc/source/whatsnew/v0.14.1.txt b/doc/source/whatsnew/v0.14.1.txt
index 84f2a77203c41..239d6c9c6e0d4 100644
--- a/doc/source/whatsnew/v0.14.1.txt
+++ b/doc/source/whatsnew/v0.14.1.txt
@@ -156,7 +156,7 @@ Experimental
~~~~~~~~~~~~
- ``pandas.io.data.Options`` has a new method, ``get_all_data``, and now consistently returns a
- multi-indexed ``DataFrame``, see :ref:`the docs <remote_data.yahoo_options>`. (:issue:`5602`)
+ multi-indexed ``DataFrame`` (:issue:`5602`)
- ``io.gbq.read_gbq`` and ``io.gbq.to_gbq`` were refactored to remove the
dependency on the Google ``bq.py`` command line client. This submodule
now uses ``httplib2`` and the Google ``apiclient`` and ``oauth2client`` API client
diff --git a/doc/source/whatsnew/v0.15.1.txt b/doc/source/whatsnew/v0.15.1.txt
index a25e5a80b65fc..cd9298c74539a 100644
--- a/doc/source/whatsnew/v0.15.1.txt
+++ b/doc/source/whatsnew/v0.15.1.txt
@@ -185,8 +185,6 @@ API changes
2014-11-22 call AAPL141122C00110000 1.02
2014-11-28 call AAPL141128C00110000 1.32
- See the Options documentation in :ref:`Remote Data <remote_data.yahoo_options>`
-
.. _whatsnew_0151.datetime64_plotting:
- pandas now also registers the ``datetime64`` dtype in matplotlib's units registry
@@ -257,7 +255,7 @@ Enhancements
- Added support for 3-character ISO and non-standard country codes in :func:`io.wb.download()` (:issue:`8482`)
-- :ref:`World Bank data requests <remote_data.wb>` now will warn/raise based
+- World Bank data requests now will warn/raise based
on an ``errors`` argument, as well as a list of hard-coded country codes and
the World Bank's JSON response. In prior versions, the error messages
didn't look at the World Bank's JSON response. Problem-inducing input were
diff --git a/doc/source/whatsnew/v0.8.0.txt b/doc/source/whatsnew/v0.8.0.txt
index cf6ac7c1e6ad2..4136c108fba57 100644
--- a/doc/source/whatsnew/v0.8.0.txt
+++ b/doc/source/whatsnew/v0.8.0.txt
@@ -59,7 +59,7 @@ Time series changes and improvements
aggregation functions, and control over how the intervals and result labeling
are defined. A suite of high performance Cython/C-based resampling functions
(including Open-High-Low-Close) have also been implemented.
-- Revamp of :ref:`frequency aliases <timeseries.alias>` and support for
+- Revamp of :ref:`frequency aliases <timeseries.offset_aliases>` and support for
**frequency shortcuts** like '15min', or '1h30min'
- New :ref:`DatetimeIndex class <timeseries.datetimeindex>` supports both fixed
frequency and irregular time
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 5a17401ea67b1..ea5dca32945e8 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -3998,7 +3998,7 @@ def asfreq(self, freq, method=None, how=None, normalize=False):
converted : type of caller
To learn more about the frequency strings, please see `this link
- <http://pandas.pydata.org/pandas-docs/stable/timeseries.html#offset-aliases>`__.
+<http://pandas.pydata.org/pandas-docs/stable/timeseries.html#offset-aliases>`__.
"""
from pandas.tseries.resample import asfreq
return asfreq(self, freq, method=method, how=how, normalize=normalize)
diff --git a/pandas/io/gbq.py b/pandas/io/gbq.py
index 068cfee2b2aa2..8f23e82daf2e3 100644
--- a/pandas/io/gbq.py
+++ b/pandas/io/gbq.py
@@ -630,16 +630,20 @@ def read_gbq(query, project_id=None, index_col=None, col_order=None,
https://developers.google.com/api-client-library/python/apis/bigquery/v2
Authentication to the Google BigQuery service is via OAuth 2.0.
+
- If "private_key" is not provided:
- By default "application default credentials" are used.
- .. versionadded:: 0.19.0
+ By default "application default credentials" are used.
+
+ .. versionadded:: 0.19.0
+
+ If default application credentials are not found or are restrictive,
+ user account credentials are used. In this case, you will be asked to
+ grant permissions for product name 'pandas GBQ'.
- If default application credentials are not found or are restrictive,
- user account credentials are used. In this case, you will be asked to
- grant permissions for product name 'pandas GBQ'.
- If "private_key" is provided:
- Service account credentials will be used to authenticate.
+
+ Service account credentials will be used to authenticate.
Parameters
----------
@@ -747,16 +751,20 @@ def to_gbq(dataframe, destination_table, project_id, chunksize=10000,
https://developers.google.com/api-client-library/python/apis/bigquery/v2
Authentication to the Google BigQuery service is via OAuth 2.0.
+
- If "private_key" is not provided:
- By default "application default credentials" are used.
- .. versionadded:: 0.19.0
+ By default "application default credentials" are used.
+
+ .. versionadded:: 0.19.0
+
+ If default application credentials are not found or are restrictive,
+ user account credentials are used. In this case, you will be asked to
+ grant permissions for product name 'pandas GBQ'.
- If default application credentials are not found or are restrictive,
- user account credentials are used. In this case, you will be asked to
- grant permissions for product name 'pandas GBQ'.
- If "private_key" is provided:
- Service account credentials will be used to authenticate.
+
+ Service account credentials will be used to authenticate.
Parameters
----------
| https://api.github.com/repos/pandas-dev/pandas/pulls/14172 | 2016-09-07T09:50:46Z | 2016-09-07T13:57:22Z | 2016-09-07T13:57:22Z | 2016-09-07T13:57:22Z | |
API/DEPR: Remove +/- as setops for DatetimeIndex/PeriodIndex (GH9630) | diff --git a/doc/source/whatsnew/v0.19.0.txt b/doc/source/whatsnew/v0.19.0.txt
index 2d93652ca91db..9345f11aca341 100644
--- a/doc/source/whatsnew/v0.19.0.txt
+++ b/doc/source/whatsnew/v0.19.0.txt
@@ -932,14 +932,16 @@ New Behavior:
Index ``+`` / ``-`` no longer used for set operations
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-Addition and subtraction of the base Index type (not the numeric subclasses)
+Addition and subtraction of the base Index type and of DatetimeIndex
+(not the numeric index types)
previously performed set operations (set union and difference). This
behaviour was already deprecated since 0.15.0 (in favor using the specific
``.union()`` and ``.difference()`` methods), and is now disabled. When
possible, ``+`` and ``-`` are now used for element-wise operations, for
-example for concatenating strings (:issue:`8227`, :issue:`14127`).
+example for concatenating strings or subtracting datetimes
+(:issue:`8227`, :issue:`14127`).
-Previous Behavior:
+Previous behavior:
.. code-block:: ipython
@@ -962,6 +964,23 @@ For example, the behaviour of adding two integer Indexes:
is unchanged. The base ``Index`` is now made consistent with this behaviour.
+Further, because of this change, it is now possible to subtract two
+DatetimeIndex objects resulting in a TimedeltaIndex:
+
+Previous behavior:
+
+.. code-block:: ipython
+
+ In [1]: pd.DatetimeIndex(['2016-01-01', '2016-01-02']) - pd.DatetimeIndex(['2016-01-02', '2016-01-03'])
+ FutureWarning: using '-' to provide set differences with datetimelike Indexes is deprecated, use .difference()
+ Out[1]: DatetimeIndex(['2016-01-01'], dtype='datetime64[ns]', freq=None)
+
+New behavior:
+
+.. ipython:: python
+
+ pd.DatetimeIndex(['2016-01-01', '2016-01-02']) - pd.DatetimeIndex(['2016-01-02', '2016-01-03'])
+
.. _whatsnew_0190.api.difference:
diff --git a/pandas/tseries/base.py b/pandas/tseries/base.py
index 1690a9b229db2..3b676b894d355 100644
--- a/pandas/tseries/base.py
+++ b/pandas/tseries/base.py
@@ -2,7 +2,6 @@
Base and utility classes for tseries type pandas objects.
"""
-import warnings
from datetime import datetime, timedelta
from pandas import compat
@@ -628,10 +627,9 @@ def __add__(self, other):
raise TypeError("cannot add TimedeltaIndex and {typ}"
.format(typ=type(other)))
elif isinstance(other, Index):
- warnings.warn("using '+' to provide set union with "
- "datetimelike Indexes is deprecated, "
- "use .union()", FutureWarning, stacklevel=2)
- return self.union(other)
+ raise TypeError("cannot add {typ1} and {typ2}"
+ .format(typ1=type(self).__name__,
+ typ2=type(other).__name__))
elif isinstance(other, (DateOffset, timedelta, np.timedelta64,
tslib.Timedelta)):
return self._add_delta(other)
@@ -646,6 +644,7 @@ def __add__(self, other):
def __sub__(self, other):
from pandas.core.index import Index
+ from pandas.tseries.index import DatetimeIndex
from pandas.tseries.tdi import TimedeltaIndex
from pandas.tseries.offsets import DateOffset
if isinstance(other, TimedeltaIndex):
@@ -653,13 +652,14 @@ def __sub__(self, other):
elif isinstance(self, TimedeltaIndex) and isinstance(other, Index):
if not isinstance(other, TimedeltaIndex):
raise TypeError("cannot subtract TimedeltaIndex and {typ}"
- .format(typ=type(other)))
+ .format(typ=type(other).__name__))
return self._add_delta(-other)
+ elif isinstance(other, DatetimeIndex):
+ return self._sub_datelike(other)
elif isinstance(other, Index):
- warnings.warn("using '-' to provide set differences with "
- "datetimelike Indexes is deprecated, "
- "use .difference()", FutureWarning, stacklevel=2)
- return self.difference(other)
+ raise TypeError("cannot subtract {typ1} and {typ2}"
+ .format(typ1=type(self).__name__,
+ typ2=type(other).__name__))
elif isinstance(other, (DateOffset, timedelta, np.timedelta64,
tslib.Timedelta)):
return self._add_delta(-other)
diff --git a/pandas/tseries/index.py b/pandas/tseries/index.py
index 351edf1b38352..e26a0548fdc78 100644
--- a/pandas/tseries/index.py
+++ b/pandas/tseries/index.py
@@ -731,19 +731,43 @@ def _add_datelike(self, other):
def _sub_datelike(self, other):
# subtract a datetime from myself, yielding a TimedeltaIndex
from pandas import TimedeltaIndex
- other = Timestamp(other)
- if other is tslib.NaT:
- result = self._nat_new(box=False)
- # require tz compat
- elif not self._has_same_tz(other):
- raise TypeError("Timestamp subtraction must have the same "
- "timezones or no timezones")
+ if isinstance(other, DatetimeIndex):
+ # require tz compat
+ if not self._has_same_tz(other):
+ raise TypeError("DatetimeIndex subtraction must have the same "
+ "timezones or no timezones")
+ result = self._sub_datelike_dti(other)
+ elif isinstance(other, (tslib.Timestamp, datetime)):
+ other = Timestamp(other)
+ if other is tslib.NaT:
+ result = self._nat_new(box=False)
+ # require tz compat
+ elif not self._has_same_tz(other):
+ raise TypeError("Timestamp subtraction must have the same "
+ "timezones or no timezones")
+ else:
+ i8 = self.asi8
+ result = i8 - other.value
+ result = self._maybe_mask_results(result,
+ fill_value=tslib.iNaT)
else:
- i8 = self.asi8
- result = i8 - other.value
- result = self._maybe_mask_results(result, fill_value=tslib.iNaT)
+ raise TypeError("cannot subtract DatetimeIndex and {typ}"
+ .format(typ=type(other).__name__))
return TimedeltaIndex(result, name=self.name, copy=False)
+ def _sub_datelike_dti(self, other):
+ """subtraction of two DatetimeIndexes"""
+ if not len(self) == len(other):
+ raise ValueError("cannot add indices of unequal length")
+
+ self_i8 = self.asi8
+ other_i8 = other.asi8
+ new_values = self_i8 - other_i8
+ if self.hasnans or other.hasnans:
+ mask = (self._isnan) | (other._isnan)
+ new_values[mask] = tslib.iNaT
+ return new_values.view('i8')
+
def _maybe_update_attributes(self, attrs):
""" Update Index attributes (e.g. freq) depending on op """
freq = attrs.get('freq', None)
diff --git a/pandas/tseries/tests/test_base.py b/pandas/tseries/tests/test_base.py
index 96ff74c819624..8a86fcba32ecb 100644
--- a/pandas/tseries/tests/test_base.py
+++ b/pandas/tseries/tests/test_base.py
@@ -360,7 +360,7 @@ def test_resolution(self):
tz=tz)
self.assertEqual(idx.resolution, expected)
- def test_add_iadd(self):
+ def test_union(self):
for tz in self.tz:
# union
rng1 = pd.date_range('1/1/2000', freq='D', periods=5, tz=tz)
@@ -378,17 +378,12 @@ def test_add_iadd(self):
for rng, other, expected in [(rng1, other1, expected1),
(rng2, other2, expected2),
(rng3, other3, expected3)]:
- # GH9094
- with tm.assert_produces_warning(FutureWarning):
- result_add = rng + other
- result_union = rng.union(other)
- tm.assert_index_equal(result_add, expected)
+ result_union = rng.union(other)
tm.assert_index_equal(result_union, expected)
- # GH9094
- with tm.assert_produces_warning(FutureWarning):
- rng += other
- tm.assert_index_equal(rng, expected)
+
+ def test_add_iadd(self):
+ for tz in self.tz:
# offset
offsets = [pd.offsets.Hour(2), timedelta(hours=2),
@@ -421,7 +416,26 @@ def test_add_iadd(self):
with tm.assertRaisesRegexp(TypeError, msg):
Timestamp('2011-01-01') + idx
- def test_sub_isub(self):
+ def test_add_dti_dti(self):
+ # previously performed setop (deprecated in 0.16.0), now raises
+ # TypeError (GH14164)
+
+ dti = date_range('20130101', periods=3)
+ dti_tz = date_range('20130101', periods=3).tz_localize('US/Eastern')
+
+ with tm.assertRaises(TypeError):
+ dti + dti
+
+ with tm.assertRaises(TypeError):
+ dti_tz + dti_tz
+
+ with tm.assertRaises(TypeError):
+ dti_tz + dti
+
+ with tm.assertRaises(TypeError):
+ dti + dti_tz
+
+ def test_difference(self):
for tz in self.tz:
# diff
rng1 = pd.date_range('1/1/2000', freq='D', periods=5, tz=tz)
@@ -439,9 +453,11 @@ def test_sub_isub(self):
for rng, other, expected in [(rng1, other1, expected1),
(rng2, other2, expected2),
(rng3, other3, expected3)]:
- result_union = rng.difference(other)
+ result_diff = rng.difference(other)
+ tm.assert_index_equal(result_diff, expected)
- tm.assert_index_equal(result_union, expected)
+ def test_sub_isub(self):
+ for tz in self.tz:
# offset
offsets = [pd.offsets.Hour(2), timedelta(hours=2),
@@ -449,9 +465,10 @@ def test_sub_isub(self):
for delta in offsets:
rng = pd.date_range('2000-01-01', '2000-02-01', tz=tz)
- result = rng - delta
expected = pd.date_range('1999-12-31 22:00',
'2000-01-31 22:00', tz=tz)
+
+ result = rng - delta
tm.assert_index_equal(result, expected)
rng -= delta
tm.assert_index_equal(rng, expected)
@@ -466,6 +483,47 @@ def test_sub_isub(self):
rng -= 1
tm.assert_index_equal(rng, expected)
+ def test_sub_dti_dti(self):
+ # previously performed setop (deprecated in 0.16.0), now changed to
+ # return subtraction -> TimeDeltaIndex (GH ...)
+
+ dti = date_range('20130101', periods=3)
+ dti_tz = date_range('20130101', periods=3).tz_localize('US/Eastern')
+ dti_tz2 = date_range('20130101', periods=3).tz_localize('UTC')
+ expected = TimedeltaIndex([0, 0, 0])
+
+ result = dti - dti
+ tm.assert_index_equal(result, expected)
+
+ result = dti_tz - dti_tz
+ tm.assert_index_equal(result, expected)
+
+ with tm.assertRaises(TypeError):
+ dti_tz - dti
+
+ with tm.assertRaises(TypeError):
+ dti - dti_tz
+
+ with tm.assertRaises(TypeError):
+ dti_tz - dti_tz2
+
+ # isub
+ dti -= dti
+ tm.assert_index_equal(dti, expected)
+
+ # different length raises ValueError
+ dti1 = date_range('20130101', periods=3)
+ dti2 = date_range('20130101', periods=4)
+ with tm.assertRaises(ValueError):
+ dti1 - dti2
+
+ # NaN propagation
+ dti1 = DatetimeIndex(['2012-01-01', np.nan, '2012-01-03'])
+ dti2 = DatetimeIndex(['2012-01-02', '2012-01-03', np.nan])
+ expected = TimedeltaIndex(['1 days', np.nan, np.nan])
+ result = dti2 - dti1
+ tm.assert_index_equal(result, expected)
+
def test_sub_period(self):
# GH 13078
# not supported, check TypeError
@@ -1239,50 +1297,6 @@ def _check(result, expected):
['20121231', '20130101', '20130102'], tz='US/Eastern')
tm.assert_index_equal(result, expected)
- def test_dti_dti_deprecated_ops(self):
-
- # deprecated in 0.16.0 (GH9094)
- # change to return subtraction -> TimeDeltaIndex in 0.17.0
- # shoudl move to the appropriate sections above
-
- dti = date_range('20130101', periods=3)
- dti_tz = date_range('20130101', periods=3).tz_localize('US/Eastern')
-
- with tm.assert_produces_warning(FutureWarning):
- result = dti - dti
- expected = Index([])
- tm.assert_index_equal(result, expected)
-
- with tm.assert_produces_warning(FutureWarning):
- result = dti + dti
- expected = dti
- tm.assert_index_equal(result, expected)
-
- with tm.assert_produces_warning(FutureWarning):
- result = dti_tz - dti_tz
- expected = Index([])
- tm.assert_index_equal(result, expected)
-
- with tm.assert_produces_warning(FutureWarning):
- result = dti_tz + dti_tz
- expected = dti_tz
- tm.assert_index_equal(result, expected)
-
- with tm.assert_produces_warning(FutureWarning):
- result = dti_tz - dti
- expected = dti_tz
- tm.assert_index_equal(result, expected)
-
- with tm.assert_produces_warning(FutureWarning):
- result = dti - dti_tz
- expected = dti
- tm.assert_index_equal(result, expected)
-
- with tm.assert_produces_warning(FutureWarning):
- self.assertRaises(TypeError, lambda: dti_tz + dti)
- with tm.assert_produces_warning(FutureWarning):
- self.assertRaises(TypeError, lambda: dti + dti_tz)
-
def test_dti_tdi_numeric_ops(self):
# These are normally union/diff set-like ops
@@ -2005,7 +2019,7 @@ def test_resolution(self):
idx = pd.period_range(start='2013-04-01', periods=30, freq=freq)
self.assertEqual(idx.resolution, expected)
- def test_add_iadd(self):
+ def test_union(self):
# union
rng1 = pd.period_range('1/1/2000', freq='D', periods=5)
other1 = pd.period_range('1/6/2000', freq='D', periods=5)
@@ -2031,7 +2045,8 @@ def test_add_iadd(self):
rng5 = pd.PeriodIndex(['2000-01-01 09:01', '2000-01-01 09:03',
'2000-01-01 09:05'], freq='T')
other5 = pd.PeriodIndex(['2000-01-01 09:01', '2000-01-01 09:05'
- '2000-01-01 09:08'], freq='T')
+ '2000-01-01 09:08'],
+ freq='T')
expected5 = pd.PeriodIndex(['2000-01-01 09:01', '2000-01-01 09:03',
'2000-01-01 09:05', '2000-01-01 09:08'],
freq='T')
@@ -2052,20 +2067,19 @@ def test_add_iadd(self):
expected6),
(rng7, other7, expected7)]:
- # GH9094
- with tm.assert_produces_warning(FutureWarning):
- result_add = rng + other
-
result_union = rng.union(other)
-
- tm.assert_index_equal(result_add, expected)
tm.assert_index_equal(result_union, expected)
- # GH 6527
- # GH9094
- with tm.assert_produces_warning(FutureWarning):
- rng += other
- tm.assert_index_equal(rng, expected)
+ def test_add_iadd(self):
+ rng = pd.period_range('1/1/2000', freq='D', periods=5)
+ other = pd.period_range('1/6/2000', freq='D', periods=5)
+
+ # previously performed setop union, now raises TypeError (GH14164)
+ with tm.assertRaises(TypeError):
+ rng + other
+
+ with tm.assertRaises(TypeError):
+ rng += other
# offset
# DateOffset
@@ -2152,7 +2166,7 @@ def test_add_iadd(self):
rng += 1
tm.assert_index_equal(rng, expected)
- def test_sub_isub(self):
+ def test_difference(self):
# diff
rng1 = pd.period_range('1/1/2000', freq='D', periods=5)
other1 = pd.period_range('1/6/2000', freq='D', periods=5)
@@ -2194,6 +2208,19 @@ def test_sub_isub(self):
result_union = rng.difference(other)
tm.assert_index_equal(result_union, expected)
+ def test_sub_isub(self):
+
+ # previously performed setop, now raises TypeError (GH14164)
+ # TODO needs to wait on #13077 for decision on result type
+ rng = pd.period_range('1/1/2000', freq='D', periods=5)
+ other = pd.period_range('1/6/2000', freq='D', periods=5)
+
+ with tm.assertRaises(TypeError):
+ rng - other
+
+ with tm.assertRaises(TypeError):
+ rng -= other
+
# offset
# DateOffset
rng = pd.period_range('2014', '2024', freq='A')
| xref #13777, deprecations put in place in #9630
| https://api.github.com/repos/pandas-dev/pandas/pulls/14164 | 2016-09-06T15:53:21Z | 2016-09-07T13:11:04Z | 2016-09-07T13:11:04Z | 2016-09-07T13:24:30Z |
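The behavior change this PR lands — `-` between two `DatetimeIndex` objects now means element-wise subtraction rather than set difference — can be sketched as follows (assuming pandas 0.19 or later):

```python
import pandas as pd

left = pd.DatetimeIndex(["2016-01-01", "2016-01-02"])
right = pd.DatetimeIndex(["2016-01-02", "2016-01-03"])

# Element-wise subtraction now yields a TimedeltaIndex; the deprecated
# set-difference behavior would instead have returned
# DatetimeIndex(['2016-01-01']).
delta = left - right
print(delta)  # TimedeltaIndex(['-1 days', '-1 days'], ...)
```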
TST: Make encoded sep check more locale sensitive | diff --git a/pandas/io/parsers.py b/pandas/io/parsers.py
index 3bd8579d456d3..93c431531355a 100755
--- a/pandas/io/parsers.py
+++ b/pandas/io/parsers.py
@@ -800,17 +800,22 @@ def _clean_options(self, options, engine):
" different from '\s+' are"\
" interpreted as regex)"
engine = 'python'
-
- elif len(sep.encode(encoding)) > 1:
- if engine not in ('python', 'python-fwf'):
- fallback_reason = "the separator encoded in {encoding}"\
- " is > 1 char long, and the 'c' engine"\
- " does not support such separators".format(
- encoding=encoding)
- engine = 'python'
elif delim_whitespace:
if 'python' in engine:
result['delimiter'] = '\s+'
+ elif sep is not None:
+ encodeable = True
+ try:
+ if len(sep.encode(encoding)) > 1:
+ encodeable = False
+ except UnicodeDecodeError:
+ encodeable = False
+ if not encodeable and engine not in ('python', 'python-fwf'):
+ fallback_reason = "the separator encoded in {encoding}" \
+ " is > 1 char long, and the 'c' engine" \
+ " does not support such separators".format(
+ encoding=encoding)
+ engine = 'python'
if fallback_reason and engine_specified:
raise ValueError(fallback_reason)
diff --git a/pandas/io/tests/parser/test_unsupported.py b/pandas/io/tests/parser/test_unsupported.py
index 0bfb8b17349cf..ef8f7967193ff 100644
--- a/pandas/io/tests/parser/test_unsupported.py
+++ b/pandas/io/tests/parser/test_unsupported.py
@@ -60,10 +60,6 @@ def test_c_engine(self):
sep=None, delim_whitespace=False)
with tm.assertRaisesRegexp(ValueError, msg):
read_table(StringIO(data), engine='c', sep='\s')
-
- # GH 14120, skipping as failing when locale is set
- # with tm.assertRaisesRegexp(ValueError, msg):
- # read_table(StringIO(data), engine='c', sep='§')
with tm.assertRaisesRegexp(ValueError, msg):
read_table(StringIO(data), engine='c', skipfooter=1)
 | Follow-up to #14120 to make the `sep` check more locale-sensitive. Closes #14140.
| https://api.github.com/repos/pandas-dev/pandas/pulls/14161 | 2016-09-06T02:08:59Z | 2016-09-08T22:12:53Z | 2016-09-08T22:12:53Z | 2016-09-09T02:03:05Z |
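The refactored guard boils down to one question: does the separator still encode to a single byte under the file's encoding? The core check can be sketched on its own (a simplified sketch — the real code path also handles `sep=None`, the engine-fallback bookkeeping, and a `UnicodeDecodeError` case for Python 2 byte separators):

```python
def c_engine_can_handle(sep, encoding="utf-8"):
    # The C parser needs a single-byte separator; a sep that encodes to
    # more than one byte (or cannot be encoded at all) forces pandas to
    # fall back to the Python engine, as in the patched _clean_options.
    try:
        return len(sep.encode(encoding)) == 1
    except UnicodeEncodeError:
        return False

print(c_engine_can_handle(","))                  # True
print(c_engine_can_handle("\u00a7"))             # False: '§' is 2 bytes in UTF-8
print(c_engine_can_handle("\u00a7", "latin-1"))  # True: 1 byte in latin-1
```

This is why the original check only failed under certain locales: the same `sep` character can be one byte in one encoding and several bytes in another.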
Fix typo (change 'n' to 'k' in get_dummies documentation) | diff --git a/pandas/core/reshape.py b/pandas/core/reshape.py
index 4dec8b4106126..fa5d16bd85e98 100644
--- a/pandas/core/reshape.py
+++ b/pandas/core/reshape.py
@@ -984,7 +984,7 @@ def get_dummies(data, prefix=None, prefix_sep='_', dummy_na=False,
.. versionadded:: 0.16.1
drop_first : bool, default False
- Whether to get k-1 dummies out of n categorical levels by removing the
+ Whether to get k-1 dummies out of k categorical levels by removing the
first level.
.. versionadded:: 0.18.0
| Just changing an `n` to a `k`.
| https://api.github.com/repos/pandas-dev/pandas/pulls/14153 | 2016-09-05T12:19:02Z | 2016-09-05T14:16:31Z | 2016-09-05T14:16:31Z | 2016-09-05T14:16:35Z |
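The corrected wording is easy to verify: with `k` categorical levels, `drop_first=True` yields `k - 1` dummy columns (a minimal illustration):

```python
import pandas as pd

s = pd.Series(["a", "b", "c", "a"])  # k = 3 categorical levels

full = pd.get_dummies(s)                      # k columns: a, b, c
reduced = pd.get_dummies(s, drop_first=True)  # k - 1 columns: b, c

print(full.shape[1], reduced.shape[1])  # 3 2
```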
Add the steps to setup gbq integration testing to the contributing docs | diff --git a/.travis.yml b/.travis.yml
index 4d3908bc35de4..c6f6d8b81ae59 100644
--- a/.travis.yml
+++ b/.travis.yml
@@ -229,14 +229,8 @@ matrix:
- USE_CACHE=true
before_install:
- # gbq secure key
- - if [ -n "$encrypted_1d9d7b1f171b_iv" ]; then
- openssl aes-256-cbc -K $encrypted_1d9d7b1f171b_key
- -iv $encrypted_1d9d7b1f171b_iv -in ci/travis_gbq.json.enc
- -out ci/travis_gbq.json -d;
- export VALID_GBQ_CREDENTIALS=True;
- fi
- echo "before_install"
+ - source ci/travis_process_gbq_encryption.sh
- echo $VIRTUAL_ENV
- export PATH="$HOME/miniconda/bin:$PATH"
- df -h
diff --git a/ci/travis_encrypt_gbq.sh b/ci/travis_encrypt_gbq.sh
new file mode 100755
index 0000000000000..719db67f384e0
--- /dev/null
+++ b/ci/travis_encrypt_gbq.sh
@@ -0,0 +1,35 @@
+#!/bin/bash
+
+GBQ_JSON_FILE=$1
+GBQ_PROJECT_ID=$2
+
+if [[ $# -ne 2 ]]; then
+ echo -e "Too few arguments.\nUsage: ./travis_encrypt_gbq.sh "\
+ "<gbq-json-credentials-file> <gbq-project-id>"
+ exit 1
+fi
+
+if [[ $GBQ_JSON_FILE != *.json ]]; then
+ echo "ERROR: Expected *.json file"
+ exit 1
+fi
+
+if [[ ! -f $GBQ_JSON_FILE ]]; then
+ echo "ERROR: File $GBQ_JSON_FILE does not exist"
+ exit 1
+fi
+
+echo "Encrypting $GBQ_JSON_FILE..."
+read -d "\n" TRAVIS_KEY TRAVIS_IV <<<$(travis encrypt-file $GBQ_JSON_FILE \
+travis_gbq.json.enc -f | grep -o "\w*_iv\|\w*_key");
+
+echo "Adding your secure key and project id to travis_gbq_config.txt ..."
+echo -e "TRAVIS_IV_ENV=$TRAVIS_IV\nTRAVIS_KEY_ENV=$TRAVIS_KEY\n"\
+"GBQ_PROJECT_ID='$GBQ_PROJECT_ID'" > travis_gbq_config.txt
+
+echo "Done. Removing file $GBQ_JSON_FILE"
+rm $GBQ_JSON_FILE
+
+echo -e "Created encrypted credentials file travis_gbq.json.enc.\n"\
+ "NOTE: Do NOT commit the *.json file containing your unencrypted" \
+ "private key"
diff --git a/ci/travis_gbq_config.txt b/ci/travis_gbq_config.txt
new file mode 100644
index 0000000000000..3b68d62f177cc
--- /dev/null
+++ b/ci/travis_gbq_config.txt
@@ -0,0 +1,3 @@
+TRAVIS_IV_ENV=encrypted_1d9d7b1f171b_iv
+TRAVIS_KEY_ENV=encrypted_1d9d7b1f171b_key
+GBQ_PROJECT_ID='pandas-travis'
diff --git a/ci/travis_process_gbq_encryption.sh b/ci/travis_process_gbq_encryption.sh
new file mode 100755
index 0000000000000..7ff4c08f78e37
--- /dev/null
+++ b/ci/travis_process_gbq_encryption.sh
@@ -0,0 +1,11 @@
+#!/bin/bash
+
+source ci/travis_gbq_config.txt
+
+if [[ -n ${!TRAVIS_IV_ENV} ]]; then
+ openssl aes-256-cbc -K ${!TRAVIS_KEY_ENV} -iv ${!TRAVIS_IV_ENV} \
+ -in ci/travis_gbq.json.enc -out ci/travis_gbq.json -d;
+ export GBQ_PROJECT_ID=$GBQ_PROJECT_ID;
+ echo 'Successfully decrypted gbq credentials'
+fi
+
diff --git a/doc/source/contributing.rst b/doc/source/contributing.rst
index 54de4d86a48d9..7f336abcaa6d7 100644
--- a/doc/source/contributing.rst
+++ b/doc/source/contributing.rst
@@ -626,6 +626,44 @@ This will display stderr from the benchmarks, and use your local
Information on how to write a benchmark and how to use asv can be found in the
`asv documentation <http://asv.readthedocs.org/en/latest/writing_benchmarks.html>`_.
+.. _contributing.gbq_integration_tests:
+
+Running Google BigQuery Integration Tests
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+You will need to create a Google BigQuery private key in JSON format in
+order to run Google BigQuery integration tests on your local machine and
+on Travis-CI. The first step is to create a `service account
+<https://console.developers.google.com/iam-admin/serviceaccounts/>`__.
+
+Integration tests for ``pandas.io.gbq`` are skipped in pull requests because
+the credentials that are required for running Google BigQuery integration
+tests are `encrypted <https://docs.travis-ci.com/user/encrypting-files/>`__
+on Travis-CI and are only accessible from the pydata/pandas repository. The
+credentials won't be available on forks of pandas. Here are the steps to run
+gbq integration tests on a forked repository:
+
+#. First, complete all the steps in the `Encrypting Files Prerequisites
+ <https://docs.travis-ci.com/user/encrypting-files/>`__ section.
+#. Sign into `Travis <https://travis-ci.org/>`__ using your GitHub account.
+#. Enable your forked repository of pandas for testing in `Travis
+ <https://travis-ci.org/profile/>`__.
+#. Run the following command from terminal where the current working directory
+ is the ``ci`` folder::
+
+ ./travis_encrypt_gbq.sh <gbq-json-credentials-file> <gbq-project-id>
+
+#. Create a new branch from the branch used in your pull request. Commit the
+ encrypted file called ``travis_gbq.json.enc`` as well as the file
+ ``travis_gbq_config.txt``, in an otherwise empty commit. DO NOT commit the
+ ``*.json`` file which contains your unencrypted private key.
+#. Your branch should be tested automatically once it is pushed. You can check
+ the status by visiting your Travis branches page which exists at the
+ following location: https://travis-ci.org/your-user-name/pandas/branches .
+ Click on a build job for your branch. Expand the following line in the
+ build log: ``ci/print_skipped.py /tmp/nosetests.xml`` . Search for the
+ term ``test_gbq`` and confirm that gbq integration tests are not skipped.
+
Running the vbench performance test suite (phasing out)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -814,6 +852,11 @@ updated. Pushing them to GitHub again is done by::
This will automatically update your pull request with the latest code and restart the
Travis-CI tests.
+If your pull request is related to the ``pandas.io.gbq`` module, please see
+the section on :ref:`Running Google BigQuery Integration Tests
+<contributing.gbq_integration_tests>` to configure a Google BigQuery service
+account for your pull request on Travis-CI.
+
Delete your merged branch (optional)
------------------------------------
diff --git a/pandas/io/tests/test_gbq.py b/pandas/io/tests/test_gbq.py
index 7757950592da5..921fd824d6ffd 100644
--- a/pandas/io/tests/test_gbq.py
+++ b/pandas/io/tests/test_gbq.py
@@ -60,12 +60,12 @@ def _skip_if_no_private_key_contents():
def _in_travis_environment():
return 'TRAVIS_BUILD_DIR' in os.environ and \
- 'VALID_GBQ_CREDENTIALS' in os.environ
+ 'GBQ_PROJECT_ID' in os.environ
def _get_project_id():
if _in_travis_environment():
- return 'pandas-travis'
+ return os.environ.get('GBQ_PROJECT_ID')
else:
return PROJECT_ID
 | Pull requests from forked repositories cannot access the secure Google BigQuery credentials stored on Travis for pydata/pandas. For contributors to run Google BigQuery integration tests on Travis, they need to open a pull request on their own fork of pandas that includes their own secure Google BigQuery credentials.
I've updated the contributing documentation to include the steps required to run Google BigQuery integration tests on a forked repository of pandas.
| https://api.github.com/repos/pandas-dev/pandas/pulls/14144 | 2016-09-03T13:43:30Z | 2016-09-07T13:22:53Z | 2016-09-07T13:22:53Z | 2016-09-07T13:23:04Z |
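The decryption helper added above relies on bash indirect expansion: `travis_gbq_config.txt` stores the *names* of the secure Travis variables, and `${!VAR}` dereferences a name to reach its value. The mechanism in isolation (the variable names below are illustrative, not real Travis secrets):

```shell
#!/bin/bash
# TRAVIS_IV_ENV holds the *name* of the variable carrying the value;
# ${!TRAVIS_IV_ENV} dereferences that name (bash indirect expansion).
encrypted_abc123_iv="deadbeef"
TRAVIS_IV_ENV="encrypted_abc123_iv"

echo "${!TRAVIS_IV_ENV}"   # deadbeef

# On forks without credentials the secure variable is unset, so the
# [[ -n ... ]] guard skips decryption instead of failing the build.
unset encrypted_abc123_iv
if [[ -n "${!TRAVIS_IV_ENV}" ]]; then
  echo "would decrypt"
else
  echo "no credentials; skipping"
fi
```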
TST: fix blosc version | diff --git a/ci/requirements-2.7.pip b/ci/requirements-2.7.pip
index d16b932c8be4f..44e1695bf1a7f 100644
--- a/ci/requirements-2.7.pip
+++ b/ci/requirements-2.7.pip
@@ -1,4 +1,4 @@
-blosc
+blosc==1.4.1
httplib2
google-api-python-client==1.2
python-gflags==2.0
 | Something is wrong with the update to blosc=1.4.3 (the latest via pip), so pin the pip requirement to blosc==1.4.1 for now.
| https://api.github.com/repos/pandas-dev/pandas/pulls/14142 | 2016-09-02T23:48:14Z | 2016-09-02T23:52:39Z | 2016-09-02T23:52:39Z | 2016-09-02T23:52:39Z |
TST: sparse / dummy array comparisons on windows, xref #14140 | diff --git a/pandas/sparse/tests/test_list.py b/pandas/sparse/tests/test_list.py
index 0b933b4f9c6f2..b117685b6e968 100644
--- a/pandas/sparse/tests/test_list.py
+++ b/pandas/sparse/tests/test_list.py
@@ -60,8 +60,11 @@ def test_append_zero(self):
splist.append(arr[5])
splist.append(arr[6:])
+ # list always produces int64, but SA constructor
+ # is platform dtype aware
sparr = splist.to_array()
- tm.assert_sp_array_equal(sparr, SparseArray(arr, fill_value=0))
+ exp = SparseArray(arr, fill_value=0)
+ tm.assert_sp_array_equal(sparr, exp, check_dtype=False)
def test_consolidate(self):
with tm.assert_produces_warning(FutureWarning,
diff --git a/pandas/tests/test_reshape.py b/pandas/tests/test_reshape.py
index 413724d1a6177..80d1f5f76e5a9 100644
--- a/pandas/tests/test_reshape.py
+++ b/pandas/tests/test_reshape.py
@@ -323,7 +323,7 @@ def test_dataframe_dummies_prefix_str(self):
[3, 1, 0, 0, 1]],
columns=['C', 'bad_a', 'bad_b', 'bad_b', 'bad_c'],
dtype=np.uint8)
- expected = expected.astype({"C": np.int})
+ expected = expected.astype({"C": np.int64})
assert_frame_equal(result, expected)
def test_dataframe_dummies_subset(self):
diff --git a/pandas/util/testing.py b/pandas/util/testing.py
index d50a6c460ceb5..f5a93d1f17d00 100644
--- a/pandas/util/testing.py
+++ b/pandas/util/testing.py
@@ -1385,11 +1385,22 @@ def assert_panelnd_equal(left, right,
# Sparse
-def assert_sp_array_equal(left, right):
+def assert_sp_array_equal(left, right, check_dtype=True):
+ """Check that the left and right SparseArray are equal.
+
+ Parameters
+ ----------
+ left : SparseArray
+ right : SparseArray
+ check_dtype : bool, default True
+ Whether to check the data dtype is identical.
+ """
+
assertIsInstance(left, pd.SparseArray, '[SparseArray]')
assertIsInstance(right, pd.SparseArray, '[SparseArray]')
- assert_numpy_array_equal(left.sp_values, right.sp_values)
+ assert_numpy_array_equal(left.sp_values, right.sp_values,
+ check_dtype=check_dtype)
# SparseIndex comparison
assertIsInstance(left.sp_index, pd._sparse.SparseIndex, '[SparseIndex]')
@@ -1400,8 +1411,10 @@ def assert_sp_array_equal(left, right):
left.sp_index, right.sp_index)
assert_attr_equal('fill_value', left, right)
- assert_attr_equal('dtype', left, right)
- assert_numpy_array_equal(left.values, right.values)
+ if check_dtype:
+ assert_attr_equal('dtype', left, right)
+ assert_numpy_array_equal(left.values, right.values,
+ check_dtype=check_dtype)
def assert_sp_series_equal(left, right, check_dtype=True, exact_indices=True,
| partial on #14140
| https://api.github.com/repos/pandas-dev/pandas/pulls/14141 | 2016-09-02T23:35:24Z | 2016-09-03T00:00:37Z | 2016-09-03T00:00:37Z | 2016-09-03T00:00:37Z |
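The new `check_dtype` flag exists because the same integer values can land in different dtypes across platforms (the diff's comment notes that a list always produces int64, while the SparseArray constructor is platform-dtype aware, e.g. int32 on Windows). The distinction it relaxes can be shown with plain numpy (a sketch, not the pandas helper itself):

```python
import numpy as np

a = np.array([1, 2, 3], dtype=np.int64)  # e.g. what the list-backed path produces
b = np.array([1, 2, 3], dtype=np.int32)  # e.g. a platform-aware constructor on Windows

# Values agree even though the dtypes do not; passing check_dtype=False
# to assert_sp_array_equal compares only the values, like array_equal here.
print(np.array_equal(a, b))   # True
print(a.dtype == b.dtype)     # False
```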
API/DEPR: Remove +/- as setops for Index (GH8227) | diff --git a/doc/source/whatsnew/v0.19.0.txt b/doc/source/whatsnew/v0.19.0.txt
index a422e667e32a7..fd9446cc45c08 100644
--- a/doc/source/whatsnew/v0.19.0.txt
+++ b/doc/source/whatsnew/v0.19.0.txt
@@ -919,6 +919,43 @@ of ``int64`` (:issue:`13988`)
pi = pd.PeriodIndex(['2011-01', '2011-02'], freq='M')
pi.values
+
+.. _whatsnew_0190.api.setops:
+
+Index ``+`` / ``-`` no longer used for set operations
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Addition and subtraction of the base Index type (not the numeric subclasses)
+previously performed set operations (set union and difference). This
+behaviour was already deprecated since 0.15.0 (in favor using the specific
+``.union()`` and ``.difference()`` methods), and is now disabled. When
+possible, ``+`` and ``-`` are now used for element-wise operations, for
+example for concatenating strings (:issue:`8227`, :issue:`14127`).
+
+Previous Behavior:
+
+.. code-block:: ipython
+
+ In [1]: pd.Index(['a', 'b']) + pd.Index(['a', 'c'])
+ FutureWarning: using '+' to provide set union with Indexes is deprecated, use '|' or .union()
+ Out[1]: Index(['a', 'b', 'c'], dtype='object')
+
+The same operation will now perform element-wise addition:
+
+.. ipython:: python
+
+ pd.Index(['a', 'b']) + pd.Index(['a', 'c'])
+
+Note that numeric Index objects already performed element-wise operations.
+For example, the behaviour of adding two integer Indexes:
+
+.. ipython:: python
+
+ pd.Index([1, 2, 3]) + pd.Index([2, 3, 4])
+
+is unchanged. The base ``Index`` is now made consistent with this behaviour.
+
+
.. _whatsnew_0190.api.difference:
``Index.difference`` and ``.symmetric_difference`` changes
diff --git a/pandas/indexes/base.py b/pandas/indexes/base.py
index dac0e650cb923..d4ca18a6713b5 100644
--- a/pandas/indexes/base.py
+++ b/pandas/indexes/base.py
@@ -1739,28 +1739,16 @@ def argsort(self, *args, **kwargs):
return result.argsort(*args, **kwargs)
def __add__(self, other):
- if is_list_like(other):
- warnings.warn("using '+' to provide set union with Indexes is "
- "deprecated, use '|' or .union()", FutureWarning,
- stacklevel=2)
- if isinstance(other, Index):
- return self.union(other)
return Index(np.array(self) + other)
def __radd__(self, other):
- if is_list_like(other):
- warnings.warn("using '+' to provide set union with Indexes is "
- "deprecated, use '|' or .union()", FutureWarning,
- stacklevel=2)
return Index(other + np.array(self))
__iadd__ = __add__
def __sub__(self, other):
- warnings.warn("using '-' to provide set differences with Indexes is "
- "deprecated, use .difference()", FutureWarning,
- stacklevel=2)
- return self.difference(other)
+ raise TypeError("cannot perform __sub__ with this index type: "
+ "{typ}".format(typ=type(self)))
def __and__(self, other):
return self.intersection(other)
@@ -1990,7 +1978,8 @@ def symmetric_difference(self, other, result_name=None):
-----
``symmetric_difference`` contains elements that appear in either
``idx1`` or ``idx2`` but not both. Equivalent to the Index created by
- ``(idx1 - idx2) + (idx2 - idx1)`` with duplicates dropped.
+ ``idx1.difference(idx2) | idx2.difference(idx1)`` with duplicates
+ dropped.
Examples
--------
@@ -3333,8 +3322,8 @@ def _evaluate_compare(self, other):
cls.__ge__ = _make_compare(operator.ge)
@classmethod
- def _add_numericlike_set_methods_disabled(cls):
- """ add in the numeric set-like methods to disable """
+ def _add_numeric_methods_add_sub_disabled(cls):
+ """ add in the numeric add/sub methods to disable """
def _make_invalid_op(name):
def invalid_op(self, other=None):
@@ -3349,7 +3338,7 @@ def invalid_op(self, other=None):
@classmethod
def _add_numeric_methods_disabled(cls):
- """ add in numeric methods to disable """
+ """ add in numeric methods to disable other than add/sub """
def _make_invalid_op(name):
def invalid_op(self, other=None):
diff --git a/pandas/indexes/category.py b/pandas/indexes/category.py
index d4fc746c652ca..c1f5d47e1e04f 100644
--- a/pandas/indexes/category.py
+++ b/pandas/indexes/category.py
@@ -649,7 +649,7 @@ def _add_accessors(cls):
typ='method', overwrite=True)
-CategoricalIndex._add_numericlike_set_methods_disabled()
+CategoricalIndex._add_numeric_methods_add_sub_disabled()
CategoricalIndex._add_numeric_methods_disabled()
CategoricalIndex._add_logical_methods_disabled()
CategoricalIndex._add_comparison_methods()
diff --git a/pandas/indexes/multi.py b/pandas/indexes/multi.py
index f42410fcdf098..09c755b2c9792 100644
--- a/pandas/indexes/multi.py
+++ b/pandas/indexes/multi.py
@@ -2219,6 +2219,7 @@ def isin(self, values, level=None):
MultiIndex._add_numeric_methods_disabled()
+MultiIndex._add_numeric_methods_add_sub_disabled()
MultiIndex._add_logical_methods_disabled()
diff --git a/pandas/tests/indexes/test_base.py b/pandas/tests/indexes/test_base.py
index 0ef7e6bf3be97..7f68318d4d7d3 100644
--- a/pandas/tests/indexes/test_base.py
+++ b/pandas/tests/indexes/test_base.py
@@ -730,16 +730,6 @@ def test_union(self):
expected = Index(list('ab'), name='A')
tm.assert_index_equal(union, expected)
- def test_add(self):
-
- # - API change GH 8226
- with tm.assert_produces_warning():
- self.strIndex + self.strIndex
- with tm.assert_produces_warning():
- self.strIndex + self.strIndex.tolist()
- with tm.assert_produces_warning():
- self.strIndex.tolist() + self.strIndex
-
with tm.assert_produces_warning(RuntimeWarning):
firstCat = self.strIndex.union(self.dateIndex)
secondCat = self.strIndex.union(self.strIndex)
@@ -755,6 +745,13 @@ def test_add(self):
tm.assert_contains_all(self.strIndex, secondCat)
tm.assert_contains_all(self.dateIndex, firstCat)
+ def test_add(self):
+ idx = self.strIndex
+ expected = Index(self.strIndex.values * 2)
+ self.assert_index_equal(idx + idx, expected)
+ self.assert_index_equal(idx + idx.tolist(), expected)
+ self.assert_index_equal(idx.tolist() + idx, expected)
+
# test add and radd
idx = Index(list('abc'))
expected = Index(['a1', 'b1', 'c1'])
@@ -762,6 +759,13 @@ def test_add(self):
expected = Index(['1a', '1b', '1c'])
self.assert_index_equal('1' + idx, expected)
+ def test_sub(self):
+ idx = self.strIndex
+ self.assertRaises(TypeError, lambda: idx - 'a')
+ self.assertRaises(TypeError, lambda: idx - idx)
+ self.assertRaises(TypeError, lambda: idx - idx.tolist())
+ self.assertRaises(TypeError, lambda: idx.tolist() - idx)
+
def test_append_multiple(self):
index = Index(['a', 'b', 'c', 'd', 'e', 'f'])
diff --git a/pandas/tests/indexes/test_multi.py b/pandas/tests/indexes/test_multi.py
index 25de6c5091853..5248f0775d22f 100644
--- a/pandas/tests/indexes/test_multi.py
+++ b/pandas/tests/indexes/test_multi.py
@@ -1408,21 +1408,24 @@ def test_intersection(self):
# result = self.index & tuples
# self.assertTrue(result.equals(tuples))
- def test_difference(self):
+ def test_sub(self):
first = self.index
- result = first.difference(self.index[-3:])
- # - API change GH 8226
- with tm.assert_produces_warning():
+ # - now raises (previously was set op difference)
+ with tm.assertRaises(TypeError):
first - self.index[-3:]
- with tm.assert_produces_warning():
+ with tm.assertRaises(TypeError):
self.index[-3:] - first
- with tm.assert_produces_warning():
+ with tm.assertRaises(TypeError):
self.index[-3:] - first.tolist()
+ with tm.assertRaises(TypeError):
+ first.tolist() - self.index[-3:]
- self.assertRaises(TypeError, lambda: first.tolist() - self.index[-3:])
+ def test_difference(self):
+ first = self.index
+ result = first.difference(self.index[-3:])
expected = MultiIndex.from_tuples(sorted(self.index[:-3].values),
sortorder=0,
names=self.index.names)
| xref #13777, deprecations put in place in #8227
- [x] tests added / passed
- [x] passes `git diff upstream/master | flake8 --diff`
- [x] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/14127 | 2016-08-31T10:38:15Z | 2016-09-06T14:30:58Z | 2016-09-06T14:30:58Z | 2016-09-07T13:24:30Z |
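The diff in the row above removes the deprecated set-operation behavior of `-` on `Index` objects (the old `__sub__` that silently did `difference`) and makes it raise `TypeError`, steering users to the explicit methods instead. A minimal sketch of the post-change API, assuming a pandas version that includes this change:

```python
import pandas as pd

idx1 = pd.Index(["a", "b", "c"])
idx2 = pd.Index(["b", "c", "d"])

# Set difference must now be spelled explicitly
diff = idx1.difference(idx2)            # Index(['a'])

# symmetric_difference is equivalent to
# idx1.difference(idx2) | idx2.difference(idx1), duplicates dropped
sym = idx1.symmetric_difference(idx2)   # Index(['a', 'd'])

# '-' between non-numeric Indexes now raises instead of doing set difference
raised = False
try:
    idx1 - idx2
except TypeError:
    raised = True
```

This mirrors the new tests in the diff (`test_sub` asserting `TypeError` for `idx - idx`, `idx - idx.tolist()`, etc.), while `+` keeps its element-wise meaning for string indexes.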
TST/CLN: Tests parametrizations 3 | diff --git a/pandas/tests/plotting/frame/test_frame.py b/pandas/tests/plotting/frame/test_frame.py
index 45dc612148f40..7e0b8dc7282e4 100644
--- a/pandas/tests/plotting/frame/test_frame.py
+++ b/pandas/tests/plotting/frame/test_frame.py
@@ -1199,12 +1199,10 @@ def test_hist_df_orientation(self):
axes = df.plot.hist(rot=50, fontsize=8, orientation="horizontal")
_check_ticks_props(axes, xrot=0, yrot=50, ylabelsize=8)
- @pytest.mark.parametrize(
- "weights", [0.1 * np.ones(shape=(100,)), 0.1 * np.ones(shape=(100, 2))]
- )
- def test_hist_weights(self, weights):
+ @pytest.mark.parametrize("weight_shape", [(100,), (100, 2)])
+ def test_hist_weights(self, weight_shape):
# GH 33173
-
+ weights = 0.1 * np.ones(shape=weight_shape)
df = DataFrame(
dict(zip(["A", "B"], np.random.default_rng(2).standard_normal((2, 100))))
)
diff --git a/pandas/tests/plotting/frame/test_frame_color.py b/pandas/tests/plotting/frame/test_frame_color.py
index ff1edd323ef28..4f14f1e43cf29 100644
--- a/pandas/tests/plotting/frame/test_frame_color.py
+++ b/pandas/tests/plotting/frame/test_frame_color.py
@@ -30,11 +30,10 @@ def _check_colors_box(bp, box_c, whiskers_c, medians_c, caps_c="k", fliers_c=Non
class TestDataFrameColor:
- @pytest.mark.parametrize(
- "color", ["C0", "C1", "C2", "C3", "C4", "C5", "C6", "C7", "C8", "C9"]
- )
+ @pytest.mark.parametrize("color", list(range(10)))
def test_mpl2_color_cycle_str(self, color):
# GH 15516
+ color = f"C{color}"
df = DataFrame(
np.random.default_rng(2).standard_normal((10, 3)), columns=["a", "b", "c"]
)
diff --git a/pandas/tests/reductions/test_reductions.py b/pandas/tests/reductions/test_reductions.py
index f370d32d0caa9..20749c7ed90e8 100644
--- a/pandas/tests/reductions/test_reductions.py
+++ b/pandas/tests/reductions/test_reductions.py
@@ -1144,18 +1144,7 @@ def test_timedelta64_analytics(self):
expected = Timedelta("1 days")
assert result == expected
- @pytest.mark.parametrize(
- "test_input,error_type",
- [
- (Series([], dtype="float64"), ValueError),
- # For strings, or any Series with dtype 'O'
- (Series(["foo", "bar", "baz"]), TypeError),
- (Series([(1,), (2,)]), TypeError),
- # For mixed data types
- (Series(["foo", "foo", "bar", "bar", None, np.nan, "baz"]), TypeError),
- ],
- )
- def test_assert_idxminmax_empty_raises(self, test_input, error_type):
+ def test_assert_idxminmax_empty_raises(self):
"""
Cases where ``Series.argmax`` and related should raise an exception
"""
@@ -1294,13 +1283,14 @@ def test_minmax_nat_series(self, nat_ser):
@pytest.mark.parametrize(
"nat_df",
[
- DataFrame([NaT, NaT]),
- DataFrame([NaT, Timedelta("nat")]),
- DataFrame([Timedelta("nat"), Timedelta("nat")]),
+ [NaT, NaT],
+ [NaT, Timedelta("nat")],
+ [Timedelta("nat"), Timedelta("nat")],
],
)
def test_minmax_nat_dataframe(self, nat_df):
# GH#23282
+ nat_df = DataFrame(nat_df)
assert nat_df.min()[0] is NaT
assert nat_df.max()[0] is NaT
assert nat_df.min(skipna=False)[0] is NaT
@@ -1399,14 +1389,10 @@ class TestSeriesMode:
# were moved from a series-specific test file, _not_ that these tests are
# intended long-term to be series-specific
- @pytest.mark.parametrize(
- "dropna, expected",
- [(True, Series([], dtype=np.float64)), (False, Series([], dtype=np.float64))],
- )
- def test_mode_empty(self, dropna, expected):
+ def test_mode_empty(self, dropna):
s = Series([], dtype=np.float64)
result = s.mode(dropna)
- tm.assert_series_equal(result, expected)
+ tm.assert_series_equal(result, s)
@pytest.mark.parametrize(
"dropna, data, expected",
@@ -1619,23 +1605,24 @@ def test_mode_boolean_with_na(self):
[
(
[0, 1j, 1, 1, 1 + 1j, 1 + 2j],
- Series([1], dtype=np.complex128),
+ [1],
np.complex128,
),
(
[0, 1j, 1, 1, 1 + 1j, 1 + 2j],
- Series([1], dtype=np.complex64),
+ [1],
np.complex64,
),
(
[1 + 1j, 2j, 1 + 1j],
- Series([1 + 1j], dtype=np.complex128),
+ [1 + 1j],
np.complex128,
),
],
)
def test_single_mode_value_complex(self, array, expected, dtype):
result = Series(array, dtype=dtype).mode()
+ expected = Series(expected, dtype=dtype)
tm.assert_series_equal(result, expected)
@pytest.mark.parametrize(
@@ -1644,12 +1631,12 @@ def test_single_mode_value_complex(self, array, expected, dtype):
(
# no modes
[0, 1j, 1, 1 + 1j, 1 + 2j],
- Series([0j, 1j, 1 + 0j, 1 + 1j, 1 + 2j], dtype=np.complex128),
+ [0j, 1j, 1 + 0j, 1 + 1j, 1 + 2j],
np.complex128,
),
(
[1 + 1j, 2j, 1 + 1j, 2j, 3],
- Series([2j, 1 + 1j], dtype=np.complex64),
+ [2j, 1 + 1j],
np.complex64,
),
],
@@ -1659,4 +1646,5 @@ def test_multimode_complex(self, array, expected, dtype):
# mode tries to sort multimodal series.
# Complex numbers are sorted by their magnitude
result = Series(array, dtype=dtype).mode()
+ expected = Series(expected, dtype=dtype)
tm.assert_series_equal(result, expected)
diff --git a/pandas/tests/reshape/concat/test_series.py b/pandas/tests/reshape/concat/test_series.py
index c12b835cb61e1..bdc64fe826cc9 100644
--- a/pandas/tests/reshape/concat/test_series.py
+++ b/pandas/tests/reshape/concat/test_series.py
@@ -128,11 +128,10 @@ def test_concat_series_axis1_same_names_ignore_index(self):
tm.assert_index_equal(result.columns, expected, exact=True)
- @pytest.mark.parametrize(
- "s1name,s2name", [(np.int64(190), (43, 0)), (190, (43, 0))]
- )
- def test_concat_series_name_npscalar_tuple(self, s1name, s2name):
+ @pytest.mark.parametrize("s1name", [np.int64(190), 190])
+ def test_concat_series_name_npscalar_tuple(self, s1name):
# GH21015
+ s2name = (43, 0)
s1 = Series({"a": 1, "b": 2}, name=s1name)
s2 = Series({"c": 5, "d": 6}, name=s2name)
result = concat([s1, s2])
diff --git a/pandas/tests/reshape/merge/test_merge.py b/pandas/tests/reshape/merge/test_merge.py
index 72e6457e65e3c..b9b1224194295 100644
--- a/pandas/tests/reshape/merge/test_merge.py
+++ b/pandas/tests/reshape/merge/test_merge.py
@@ -1466,10 +1466,8 @@ def _check_merge(x, y):
class TestMergeDtypes:
- @pytest.mark.parametrize(
- "right_vals", [["foo", "bar"], Series(["foo", "bar"]).astype("category")]
- )
- def test_different(self, right_vals):
+ @pytest.mark.parametrize("dtype", [object, "category"])
+ def test_different(self, dtype):
left = DataFrame(
{
"A": ["foo", "bar"],
@@ -1480,6 +1478,7 @@ def test_different(self, right_vals):
"F": Series([1, 2], dtype="int32"),
}
)
+ right_vals = Series(["foo", "bar"], dtype=dtype)
right = DataFrame({"A": right_vals})
# GH 9780
@@ -2311,19 +2310,15 @@ def test_merge_suffix(col1, col2, kwargs, expected_cols):
[
(
"right",
- DataFrame(
- {"A": [100, 200, 300], "B1": [60, 70, np.nan], "B2": [600, 700, 800]}
- ),
+ {"A": [100, 200, 300], "B1": [60, 70, np.nan], "B2": [600, 700, 800]},
),
(
"outer",
- DataFrame(
- {
- "A": [1, 100, 200, 300],
- "B1": [80, 60, 70, np.nan],
- "B2": [np.nan, 600, 700, 800],
- }
- ),
+ {
+ "A": [1, 100, 200, 300],
+ "B1": [80, 60, 70, np.nan],
+ "B2": [np.nan, 600, 700, 800],
+ },
),
],
)
@@ -2331,6 +2326,7 @@ def test_merge_duplicate_suffix(how, expected):
left_df = DataFrame({"A": [100, 200, 1], "B": [60, 70, 80]})
right_df = DataFrame({"A": [100, 200, 300], "B": [600, 700, 800]})
result = merge(left_df, right_df, on="A", how=how, suffixes=("_x", "_x"))
+ expected = DataFrame(expected)
expected.columns = ["A", "B_x", "B_x"]
tm.assert_frame_equal(result, expected)
diff --git a/pandas/tests/reshape/merge/test_merge_asof.py b/pandas/tests/reshape/merge/test_merge_asof.py
index 00df8064d5190..b0efbc253c04e 100644
--- a/pandas/tests/reshape/merge/test_merge_asof.py
+++ b/pandas/tests/reshape/merge/test_merge_asof.py
@@ -3099,7 +3099,7 @@ def test_merge_groupby_multiple_column_with_categorical_column(self):
tm.assert_frame_equal(result, expected)
@pytest.mark.parametrize(
- "func", [lambda x: x, lambda x: to_datetime(x)], ids=["numeric", "datetime"]
+ "func", [lambda x: x, to_datetime], ids=["numeric", "datetime"]
)
@pytest.mark.parametrize("side", ["left", "right"])
def test_merge_on_nans(self, func, side):
diff --git a/pandas/tests/reshape/merge/test_merge_ordered.py b/pandas/tests/reshape/merge/test_merge_ordered.py
index abd61026b4e37..db71a732b44e3 100644
--- a/pandas/tests/reshape/merge/test_merge_ordered.py
+++ b/pandas/tests/reshape/merge/test_merge_ordered.py
@@ -135,54 +135,50 @@ def test_doc_example(self):
"left, right, on, left_by, right_by, expected",
[
(
- DataFrame({"G": ["g", "g"], "H": ["h", "h"], "T": [1, 3]}),
- DataFrame({"T": [2], "E": [1]}),
+ {"G": ["g", "g"], "H": ["h", "h"], "T": [1, 3]},
+ {"T": [2], "E": [1]},
["T"],
["G", "H"],
None,
- DataFrame(
- {
- "G": ["g"] * 3,
- "H": ["h"] * 3,
- "T": [1, 2, 3],
- "E": [np.nan, 1.0, np.nan],
- }
- ),
+ {
+ "G": ["g"] * 3,
+ "H": ["h"] * 3,
+ "T": [1, 2, 3],
+ "E": [np.nan, 1.0, np.nan],
+ },
),
(
- DataFrame({"G": ["g", "g"], "H": ["h", "h"], "T": [1, 3]}),
- DataFrame({"T": [2], "E": [1]}),
+ {"G": ["g", "g"], "H": ["h", "h"], "T": [1, 3]},
+ {"T": [2], "E": [1]},
"T",
["G", "H"],
None,
- DataFrame(
- {
- "G": ["g"] * 3,
- "H": ["h"] * 3,
- "T": [1, 2, 3],
- "E": [np.nan, 1.0, np.nan],
- }
- ),
+ {
+ "G": ["g"] * 3,
+ "H": ["h"] * 3,
+ "T": [1, 2, 3],
+ "E": [np.nan, 1.0, np.nan],
+ },
),
(
- DataFrame({"T": [2], "E": [1]}),
- DataFrame({"G": ["g", "g"], "H": ["h", "h"], "T": [1, 3]}),
+ {"T": [2], "E": [1]},
+ {"G": ["g", "g"], "H": ["h", "h"], "T": [1, 3]},
["T"],
None,
["G", "H"],
- DataFrame(
- {
- "T": [1, 2, 3],
- "E": [np.nan, 1.0, np.nan],
- "G": ["g"] * 3,
- "H": ["h"] * 3,
- }
- ),
+ {
+ "T": [1, 2, 3],
+ "E": [np.nan, 1.0, np.nan],
+ "G": ["g"] * 3,
+ "H": ["h"] * 3,
+ },
),
],
)
def test_list_type_by(self, left, right, on, left_by, right_by, expected):
# GH 35269
+ left = DataFrame(left)
+ right = DataFrame(right)
result = merge_ordered(
left=left,
right=right,
@@ -190,6 +186,7 @@ def test_list_type_by(self, left, right, on, left_by, right_by, expected):
left_by=left_by,
right_by=right_by,
)
+ expected = DataFrame(expected)
tm.assert_frame_equal(result, expected)
diff --git a/pandas/tests/reshape/test_from_dummies.py b/pandas/tests/reshape/test_from_dummies.py
index f9a03222c8057..ba71bb24e8a16 100644
--- a/pandas/tests/reshape/test_from_dummies.py
+++ b/pandas/tests/reshape/test_from_dummies.py
@@ -298,32 +298,32 @@ def test_no_prefix_string_cats_contains_get_dummies_NaN_column():
[
pytest.param(
"c",
- DataFrame({"": ["a", "b", "c"]}),
+ {"": ["a", "b", "c"]},
id="default_category is a str",
),
pytest.param(
1,
- DataFrame({"": ["a", "b", 1]}),
+ {"": ["a", "b", 1]},
id="default_category is a int",
),
pytest.param(
1.25,
- DataFrame({"": ["a", "b", 1.25]}),
+ {"": ["a", "b", 1.25]},
id="default_category is a float",
),
pytest.param(
0,
- DataFrame({"": ["a", "b", 0]}),
+ {"": ["a", "b", 0]},
id="default_category is a 0",
),
pytest.param(
False,
- DataFrame({"": ["a", "b", False]}),
+ {"": ["a", "b", False]},
id="default_category is a bool",
),
pytest.param(
(1, 2),
- DataFrame({"": ["a", "b", (1, 2)]}),
+ {"": ["a", "b", (1, 2)]},
id="default_category is a tuple",
),
],
@@ -333,6 +333,7 @@ def test_no_prefix_string_cats_default_category(
):
dummies = DataFrame({"a": [1, 0, 0], "b": [0, 1, 0]})
result = from_dummies(dummies, default_category=default_category)
+ expected = DataFrame(expected)
if using_infer_string:
expected[""] = expected[""].astype("string[pyarrow_numpy]")
tm.assert_frame_equal(result, expected)
@@ -366,32 +367,32 @@ def test_with_prefix_contains_get_dummies_NaN_column():
[
pytest.param(
"x",
- DataFrame({"col1": ["a", "b", "x"], "col2": ["x", "a", "c"]}),
+ {"col1": ["a", "b", "x"], "col2": ["x", "a", "c"]},
id="default_category is a str",
),
pytest.param(
0,
- DataFrame({"col1": ["a", "b", 0], "col2": [0, "a", "c"]}),
+ {"col1": ["a", "b", 0], "col2": [0, "a", "c"]},
id="default_category is a 0",
),
pytest.param(
False,
- DataFrame({"col1": ["a", "b", False], "col2": [False, "a", "c"]}),
+ {"col1": ["a", "b", False], "col2": [False, "a", "c"]},
id="default_category is a False",
),
pytest.param(
{"col2": 1, "col1": 2.5},
- DataFrame({"col1": ["a", "b", 2.5], "col2": [1, "a", "c"]}),
+ {"col1": ["a", "b", 2.5], "col2": [1, "a", "c"]},
id="default_category is a dict with int and float values",
),
pytest.param(
{"col2": None, "col1": False},
- DataFrame({"col1": ["a", "b", False], "col2": [None, "a", "c"]}),
+ {"col1": ["a", "b", False], "col2": [None, "a", "c"]},
id="default_category is a dict with bool and None values",
),
pytest.param(
{"col2": (1, 2), "col1": [1.25, False]},
- DataFrame({"col1": ["a", "b", [1.25, False]], "col2": [(1, 2), "a", "c"]}),
+ {"col1": ["a", "b", [1.25, False]], "col2": [(1, 2), "a", "c"]},
id="default_category is a dict with list and tuple values",
),
],
@@ -402,6 +403,7 @@ def test_with_prefix_default_category(
result = from_dummies(
dummies_with_unassigned, sep="_", default_category=default_category
)
+ expected = DataFrame(expected)
tm.assert_frame_equal(result, expected)
diff --git a/pandas/tests/reshape/test_get_dummies.py b/pandas/tests/reshape/test_get_dummies.py
index 31260e4dcb7d2..082d5f0ee81ab 100644
--- a/pandas/tests/reshape/test_get_dummies.py
+++ b/pandas/tests/reshape/test_get_dummies.py
@@ -453,19 +453,19 @@ def test_dataframe_dummies_with_categorical(self, df, sparse, dtype):
[
(
{"data": DataFrame({"ä": ["a"]})},
- DataFrame({"ä_a": [True]}),
+ "ä_a",
),
(
{"data": DataFrame({"x": ["ä"]})},
- DataFrame({"x_ä": [True]}),
+ "x_ä",
),
(
{"data": DataFrame({"x": ["a"]}), "prefix": "ä"},
- DataFrame({"ä_a": [True]}),
+ "ä_a",
),
(
{"data": DataFrame({"x": ["a"]}), "prefix_sep": "ä"},
- DataFrame({"xäa": [True]}),
+ "xäa",
),
],
)
@@ -473,6 +473,7 @@ def test_dataframe_dummies_unicode(self, get_dummies_kwargs, expected):
# GH22084 get_dummies incorrectly encodes unicode characters
# in dataframe column names
result = get_dummies(**get_dummies_kwargs)
+ expected = DataFrame({expected: [True]})
tm.assert_frame_equal(result, expected)
def test_get_dummies_basic_drop_first(self, sparse):
diff --git a/pandas/tests/reshape/test_melt.py b/pandas/tests/reshape/test_melt.py
index ff9f927597956..63367aee77f83 100644
--- a/pandas/tests/reshape/test_melt.py
+++ b/pandas/tests/reshape/test_melt.py
@@ -133,25 +133,21 @@ def test_vars_work_with_multiindex(self, df1):
["A"],
["B"],
0,
- DataFrame(
- {
- "A": {0: 1.067683, 1: -1.321405, 2: -0.807333},
- "CAP": {0: "B", 1: "B", 2: "B"},
- "value": {0: -1.110463, 1: 0.368915, 2: 0.08298},
- }
- ),
+ {
+ "A": {0: 1.067683, 1: -1.321405, 2: -0.807333},
+ "CAP": {0: "B", 1: "B", 2: "B"},
+ "value": {0: -1.110463, 1: 0.368915, 2: 0.08298},
+ },
),
(
["a"],
["b"],
1,
- DataFrame(
- {
- "a": {0: 1.067683, 1: -1.321405, 2: -0.807333},
- "low": {0: "b", 1: "b", 2: "b"},
- "value": {0: -1.110463, 1: 0.368915, 2: 0.08298},
- }
- ),
+ {
+ "a": {0: 1.067683, 1: -1.321405, 2: -0.807333},
+ "low": {0: "b", 1: "b", 2: "b"},
+ "value": {0: -1.110463, 1: 0.368915, 2: 0.08298},
+ },
),
],
)
@@ -159,6 +155,7 @@ def test_single_vars_work_with_multiindex(
self, id_vars, value_vars, col_level, expected, df1
):
result = df1.melt(id_vars, value_vars, col_level=col_level)
+ expected = DataFrame(expected)
tm.assert_frame_equal(result, expected)
@pytest.mark.parametrize(
@@ -287,13 +284,14 @@ def test_multiindex(self, df1):
@pytest.mark.parametrize(
"col",
[
- pd.Series(date_range("2010", periods=5, tz="US/Pacific")),
- pd.Series(["a", "b", "c", "a", "d"], dtype="category"),
- pd.Series([0, 1, 0, 0, 0]),
+ date_range("2010", periods=5, tz="US/Pacific"),
+ pd.Categorical(["a", "b", "c", "a", "d"]),
+ [0, 1, 0, 0, 0],
],
)
def test_pandas_dtypes(self, col):
# GH 15785
+ col = pd.Series(col)
df = DataFrame(
{"klass": range(5), "col": col, "attr1": [1, 0, 0, 0, 0], "attr2": col}
)
diff --git a/pandas/tests/reshape/test_pivot.py b/pandas/tests/reshape/test_pivot.py
index af156a1da87f2..fbc3d2b8a7c35 100644
--- a/pandas/tests/reshape/test_pivot.py
+++ b/pandas/tests/reshape/test_pivot.py
@@ -744,18 +744,11 @@ def test_pivot_periods_with_margins(self):
result = df.pivot_table(index="a", columns="b", values="x", margins=True)
tm.assert_frame_equal(expected, result)
- @pytest.mark.parametrize(
- "values",
- [
- ["baz", "zoo"],
- np.array(["baz", "zoo"]),
- Series(["baz", "zoo"]),
- Index(["baz", "zoo"]),
- ],
- )
+ @pytest.mark.parametrize("box", [list, np.array, Series, Index])
@pytest.mark.parametrize("method", [True, False])
- def test_pivot_with_list_like_values(self, values, method):
+ def test_pivot_with_list_like_values(self, box, method):
# issue #17160
+ values = box(["baz", "zoo"])
df = DataFrame(
{
"foo": ["one", "one", "one", "two", "two", "two"],
diff --git a/pandas/tests/reshape/test_qcut.py b/pandas/tests/reshape/test_qcut.py
index b5b19eef1106f..53af673e0f7b0 100644
--- a/pandas/tests/reshape/test_qcut.py
+++ b/pandas/tests/reshape/test_qcut.py
@@ -154,14 +154,15 @@ def test_qcut_wrong_length_labels(labels):
@pytest.mark.parametrize(
"labels, expected",
[
- (["a", "b", "c"], Categorical(["a", "b", "c"], ordered=True)),
- (list(range(3)), Categorical([0, 1, 2], ordered=True)),
+ (["a", "b", "c"], ["a", "b", "c"]),
+ (list(range(3)), [0, 1, 2]),
],
)
def test_qcut_list_like_labels(labels, expected):
# GH 13318
values = range(3)
result = qcut(values, 3, labels=labels)
+ expected = Categorical(expected, ordered=True)
tm.assert_categorical_equal(result, expected)
@@ -209,13 +210,14 @@ def test_single_quantile(data, start, end, length, labels):
@pytest.mark.parametrize(
"ser",
[
- Series(DatetimeIndex(["20180101", NaT, "20180103"])),
- Series(TimedeltaIndex(["0 days", NaT, "2 days"])),
+ DatetimeIndex(["20180101", NaT, "20180103"]),
+ TimedeltaIndex(["0 days", NaT, "2 days"]),
],
ids=lambda x: str(x.dtype),
)
def test_qcut_nat(ser, unit):
# see gh-19768
+ ser = Series(ser)
ser = ser.dt.as_unit(unit)
td = Timedelta(1, unit=unit).as_unit(unit)
diff --git a/pandas/tests/scalar/period/test_arithmetic.py b/pandas/tests/scalar/period/test_arithmetic.py
index 5dc0858de466c..97e486f9af060 100644
--- a/pandas/tests/scalar/period/test_arithmetic.py
+++ b/pandas/tests/scalar/period/test_arithmetic.py
@@ -476,10 +476,11 @@ def test_period_comparison_nat(self):
assert not left >= right
@pytest.mark.parametrize(
- "zerodim_arr, expected",
- ((np.array(0), False), (np.array(Period("2000-01", "M")), True)),
+ "scalar, expected",
+ ((0, False), (Period("2000-01", "M"), True)),
)
- def test_period_comparison_numpy_zerodim_arr(self, zerodim_arr, expected):
+ def test_period_comparison_numpy_zerodim_arr(self, scalar, expected):
+ zerodim_arr = np.array(scalar)
per = Period("2000-01", "M")
assert (per == zerodim_arr) is expected
diff --git a/pandas/tests/scalar/timedelta/methods/test_round.py b/pandas/tests/scalar/timedelta/methods/test_round.py
index 676b44a4d54f4..082c36999e06f 100644
--- a/pandas/tests/scalar/timedelta/methods/test_round.py
+++ b/pandas/tests/scalar/timedelta/methods/test_round.py
@@ -19,29 +19,31 @@ class TestTimedeltaRound:
# This first case has s1, s2 being the same as t1,t2 below
(
"ns",
- Timedelta("1 days 02:34:56.789123456"),
- Timedelta("-1 days 02:34:56.789123456"),
+ "1 days 02:34:56.789123456",
+ "-1 days 02:34:56.789123456",
),
(
"us",
- Timedelta("1 days 02:34:56.789123000"),
- Timedelta("-1 days 02:34:56.789123000"),
+ "1 days 02:34:56.789123000",
+ "-1 days 02:34:56.789123000",
),
(
"ms",
- Timedelta("1 days 02:34:56.789000000"),
- Timedelta("-1 days 02:34:56.789000000"),
+ "1 days 02:34:56.789000000",
+ "-1 days 02:34:56.789000000",
),
- ("s", Timedelta("1 days 02:34:57"), Timedelta("-1 days 02:34:57")),
- ("2s", Timedelta("1 days 02:34:56"), Timedelta("-1 days 02:34:56")),
- ("5s", Timedelta("1 days 02:34:55"), Timedelta("-1 days 02:34:55")),
- ("min", Timedelta("1 days 02:35:00"), Timedelta("-1 days 02:35:00")),
- ("12min", Timedelta("1 days 02:36:00"), Timedelta("-1 days 02:36:00")),
- ("h", Timedelta("1 days 03:00:00"), Timedelta("-1 days 03:00:00")),
- ("d", Timedelta("1 days"), Timedelta("-1 days")),
+ ("s", "1 days 02:34:57", "-1 days 02:34:57"),
+ ("2s", "1 days 02:34:56", "-1 days 02:34:56"),
+ ("5s", "1 days 02:34:55", "-1 days 02:34:55"),
+ ("min", "1 days 02:35:00", "-1 days 02:35:00"),
+ ("12min", "1 days 02:36:00", "-1 days 02:36:00"),
+ ("h", "1 days 03:00:00", "-1 days 03:00:00"),
+ ("d", "1 days", "-1 days"),
],
)
def test_round(self, freq, s1, s2):
+ s1 = Timedelta(s1)
+ s2 = Timedelta(s2)
t1 = Timedelta("1 days 02:34:56.789123456")
t2 = Timedelta("-1 days 02:34:56.789123456")
diff --git a/pandas/tests/scalar/timedelta/test_arithmetic.py b/pandas/tests/scalar/timedelta/test_arithmetic.py
index d2fa0f722ca6f..f3edaffdb315d 100644
--- a/pandas/tests/scalar/timedelta/test_arithmetic.py
+++ b/pandas/tests/scalar/timedelta/test_arithmetic.py
@@ -955,11 +955,12 @@ def test_rdivmod_invalid(self):
@pytest.mark.parametrize(
"arr",
[
- np.array([Timestamp("20130101 9:01"), Timestamp("20121230 9:02")]),
- np.array([Timestamp("2021-11-09 09:54:00"), Timedelta("1D")]),
+ [Timestamp("20130101 9:01"), Timestamp("20121230 9:02")],
+ [Timestamp("2021-11-09 09:54:00"), Timedelta("1D")],
],
)
def test_td_op_timedelta_timedeltalike_array(self, op, arr):
+ arr = np.array(arr)
msg = "unsupported operand type|cannot use operands with types"
with pytest.raises(TypeError, match=msg):
op(arr, Timedelta("1D"))
diff --git a/pandas/tests/scalar/timedelta/test_constructors.py b/pandas/tests/scalar/timedelta/test_constructors.py
index 4663f8cb71961..e680ca737b546 100644
--- a/pandas/tests/scalar/timedelta/test_constructors.py
+++ b/pandas/tests/scalar/timedelta/test_constructors.py
@@ -631,17 +631,16 @@ def test_timedelta_pass_td_and_kwargs_raises():
@pytest.mark.parametrize(
- "constructor, value, unit, expectation",
+ "constructor, value, unit",
[
- (Timedelta, "10s", "ms", (ValueError, "unit must not be specified")),
- (to_timedelta, "10s", "ms", (ValueError, "unit must not be specified")),
- (to_timedelta, ["1", 2, 3], "s", (ValueError, "unit must not be specified")),
+ (Timedelta, "10s", "ms"),
+ (to_timedelta, "10s", "ms"),
+ (to_timedelta, ["1", 2, 3], "s"),
],
)
-def test_string_with_unit(constructor, value, unit, expectation):
- exp, match = expectation
- with pytest.raises(exp, match=match):
- _ = constructor(value, unit=unit)
+def test_string_with_unit(constructor, value, unit):
+ with pytest.raises(ValueError, match="unit must not be specified"):
+ constructor(value, unit=unit)
@pytest.mark.parametrize(
diff --git a/pandas/tests/scalar/timestamp/methods/test_round.py b/pandas/tests/scalar/timestamp/methods/test_round.py
index 59c0fe8bbebfb..2fb0e1a8d3397 100644
--- a/pandas/tests/scalar/timestamp/methods/test_round.py
+++ b/pandas/tests/scalar/timestamp/methods/test_round.py
@@ -162,10 +162,6 @@ def test_floor(self, unit):
assert result._creso == dt._creso
@pytest.mark.parametrize("method", ["ceil", "round", "floor"])
- @pytest.mark.parametrize(
- "unit",
- ["ns", "us", "ms", "s"],
- )
def test_round_dst_border_ambiguous(self, method, unit):
# GH 18946 round near "fall back" DST
ts = Timestamp("2017-10-29 00:00:00", tz="UTC").tz_convert("Europe/Madrid")
@@ -197,10 +193,6 @@ def test_round_dst_border_ambiguous(self, method, unit):
["floor", "2018-03-11 03:01:00-0500", "2h"],
],
)
- @pytest.mark.parametrize(
- "unit",
- ["ns", "us", "ms", "s"],
- )
def test_round_dst_border_nonexistent(self, method, ts_str, freq, unit):
# GH 23324 round near "spring forward" DST
ts = Timestamp(ts_str, tz="America/Chicago").as_unit(unit)
diff --git a/pandas/tests/scalar/timestamp/test_constructors.py b/pandas/tests/scalar/timestamp/test_constructors.py
index f92e9145a2205..bbda9d3ee7dce 100644
--- a/pandas/tests/scalar/timestamp/test_constructors.py
+++ b/pandas/tests/scalar/timestamp/test_constructors.py
@@ -191,18 +191,17 @@ def test_timestamp_constructor_infer_fold_from_value(self, tz, ts_input, fold_ou
@pytest.mark.parametrize("tz", ["dateutil/Europe/London"])
@pytest.mark.parametrize(
- "ts_input,fold,value_out",
+ "fold,value_out",
[
- (datetime(2019, 10, 27, 1, 30, 0, 0), 0, 1572136200000000),
- (datetime(2019, 10, 27, 1, 30, 0, 0), 1, 1572139800000000),
+ (0, 1572136200000000),
+ (1, 1572139800000000),
],
)
- def test_timestamp_constructor_adjust_value_for_fold(
- self, tz, ts_input, fold, value_out
- ):
+ def test_timestamp_constructor_adjust_value_for_fold(self, tz, fold, value_out):
# Test for GH#25057
# Check that we adjust value for fold correctly
# based on timestamps since utc
+ ts_input = datetime(2019, 10, 27, 1, 30)
ts = Timestamp(ts_input, tz=tz, fold=fold)
result = ts._value
expected = value_out
diff --git a/pandas/tests/scalar/timestamp/test_timestamp.py b/pandas/tests/scalar/timestamp/test_timestamp.py
index fb844a3e43181..44a16e51f2c47 100644
--- a/pandas/tests/scalar/timestamp/test_timestamp.py
+++ b/pandas/tests/scalar/timestamp/test_timestamp.py
@@ -118,18 +118,16 @@ def test_is_end(self, end, tz):
assert getattr(ts, end)
# GH 12806
- @pytest.mark.parametrize(
- "data",
- [Timestamp("2017-08-28 23:00:00"), Timestamp("2017-08-28 23:00:00", tz="EST")],
- )
+ @pytest.mark.parametrize("tz", [None, "EST"])
# error: Unsupported operand types for + ("List[None]" and "List[str]")
@pytest.mark.parametrize(
"time_locale",
[None] + tm.get_locales(), # type: ignore[operator]
)
- def test_names(self, data, time_locale):
+ def test_names(self, tz, time_locale):
# GH 17354
# Test .day_name(), .month_name
+ data = Timestamp("2017-08-28 23:00:00", tz=tz)
if time_locale is None:
expected_day = "Monday"
expected_month = "August"
diff --git a/pandas/tests/series/accessors/test_dt_accessor.py b/pandas/tests/series/accessors/test_dt_accessor.py
index 911f5d7b28e3f..2365ff62b1680 100644
--- a/pandas/tests/series/accessors/test_dt_accessor.py
+++ b/pandas/tests/series/accessors/test_dt_accessor.py
@@ -697,15 +697,16 @@ def test_dt_accessor_api(self):
assert isinstance(ser.dt, DatetimeProperties)
@pytest.mark.parametrize(
- "ser",
+ "data",
[
- Series(np.arange(5)),
- Series(list("abcde")),
- Series(np.random.default_rng(2).standard_normal(5)),
+ np.arange(5),
+ list("abcde"),
+ np.random.default_rng(2).standard_normal(5),
],
)
- def test_dt_accessor_invalid(self, ser):
+ def test_dt_accessor_invalid(self, data):
# GH#9322 check that series with incorrect dtypes don't have attr
+ ser = Series(data)
with pytest.raises(AttributeError, match="only use .dt accessor"):
ser.dt
assert not hasattr(ser, "dt")
diff --git a/pandas/tests/series/indexing/test_getitem.py b/pandas/tests/series/indexing/test_getitem.py
index 596a225c288b8..01c775e492888 100644
--- a/pandas/tests/series/indexing/test_getitem.py
+++ b/pandas/tests/series/indexing/test_getitem.py
@@ -561,14 +561,15 @@ def test_getitem_generator(string_series):
@pytest.mark.parametrize(
- "series",
+ "data",
[
- Series([0, 1]),
- Series(date_range("2012-01-01", periods=2)),
- Series(date_range("2012-01-01", periods=2, tz="CET")),
+ [0, 1],
+ date_range("2012-01-01", periods=2),
+ date_range("2012-01-01", periods=2, tz="CET"),
],
)
-def test_getitem_ndim_deprecated(series):
+def test_getitem_ndim_deprecated(data):
+ series = Series(data)
with pytest.raises(ValueError, match="Multi-dimensional indexing"):
series[:, None]
diff --git a/pandas/tests/series/indexing/test_indexing.py b/pandas/tests/series/indexing/test_indexing.py
index c52e47a812183..449b73ecf32fe 100644
--- a/pandas/tests/series/indexing/test_indexing.py
+++ b/pandas/tests/series/indexing/test_indexing.py
@@ -117,19 +117,21 @@ def test_getitem_setitem_ellipsis(using_copy_on_write, warn_copy_on_write):
"result_1, duplicate_item, expected_1",
[
[
- Series({1: 12, 2: [1, 2, 2, 3]}),
- Series({1: 313}),
+ {1: 12, 2: [1, 2, 2, 3]},
+ {1: 313},
Series({1: 12}, dtype=object),
],
[
- Series({1: [1, 2, 3], 2: [1, 2, 2, 3]}),
- Series({1: [1, 2, 3]}),
+ {1: [1, 2, 3], 2: [1, 2, 2, 3]},
+ {1: [1, 2, 3]},
Series({1: [1, 2, 3]}),
],
],
)
def test_getitem_with_duplicates_indices(result_1, duplicate_item, expected_1):
# GH 17610
+ result_1 = Series(result_1)
+ duplicate_item = Series(duplicate_item)
result = result_1._append(duplicate_item)
expected = expected_1._append(duplicate_item)
tm.assert_series_equal(result[1], expected)
diff --git a/pandas/tests/series/indexing/test_setitem.py b/pandas/tests/series/indexing/test_setitem.py
index 23137f0975fb1..85ffb0f8fe647 100644
--- a/pandas/tests/series/indexing/test_setitem.py
+++ b/pandas/tests/series/indexing/test_setitem.py
@@ -1799,11 +1799,7 @@ def test_setitem_with_bool_indexer():
@pytest.mark.parametrize(
"item", [2.0, np.nan, np.finfo(float).max, np.finfo(float).min]
)
-# Test numpy arrays, lists and tuples as the input to be
-# broadcast
-@pytest.mark.parametrize(
- "box", [lambda x: np.array([x]), lambda x: [x], lambda x: (x,)]
-)
+@pytest.mark.parametrize("box", [np.array, list, tuple])
def test_setitem_bool_indexer_dont_broadcast_length1_values(size, mask, item, box):
# GH#44265
# see also tests.series.indexing.test_where.test_broadcast
@@ -1821,11 +1817,11 @@ def test_setitem_bool_indexer_dont_broadcast_length1_values(size, mask, item, bo
)
with pytest.raises(ValueError, match=msg):
# GH#44265
- ser[selection] = box(item)
+ ser[selection] = box([item])
else:
# In this corner case setting is equivalent to setting with the unboxed
# item
- ser[selection] = box(item)
+ ser[selection] = box([item])
expected = Series(np.arange(size, dtype=float))
expected[selection] = item
diff --git a/pandas/tests/series/indexing/test_where.py b/pandas/tests/series/indexing/test_where.py
index c978481ca9988..e1139ea75f48b 100644
--- a/pandas/tests/series/indexing/test_where.py
+++ b/pandas/tests/series/indexing/test_where.py
@@ -297,11 +297,7 @@ def test_where_setitem_invalid():
@pytest.mark.parametrize(
"item", [2.0, np.nan, np.finfo(float).max, np.finfo(float).min]
)
-# Test numpy arrays, lists and tuples as the input to be
-# broadcast
-@pytest.mark.parametrize(
- "box", [lambda x: np.array([x]), lambda x: [x], lambda x: (x,)]
-)
+@pytest.mark.parametrize("box", [np.array, list, tuple])
def test_broadcast(size, mask, item, box):
# GH#8801, GH#4195
selection = np.resize(mask, size)
@@ -320,11 +316,11 @@ def test_broadcast(size, mask, item, box):
tm.assert_series_equal(s, expected)
s = Series(data)
- result = s.where(~selection, box(item))
+ result = s.where(~selection, box([item]))
tm.assert_series_equal(result, expected)
s = Series(data)
- result = s.mask(selection, box(item))
+ result = s.mask(selection, box([item]))
tm.assert_series_equal(result, expected)
diff --git a/pandas/tests/series/methods/test_astype.py b/pandas/tests/series/methods/test_astype.py
index 4d2cd2ba963fd..2588f195aab7e 100644
--- a/pandas/tests/series/methods/test_astype.py
+++ b/pandas/tests/series/methods/test_astype.py
@@ -164,14 +164,15 @@ def test_astype_empty_constructor_equality(self, dtype):
@pytest.mark.parametrize("dtype", [str, np.str_])
@pytest.mark.parametrize(
- "series",
+ "data",
[
- Series([string.digits * 10, rand_str(63), rand_str(64), rand_str(1000)]),
- Series([string.digits * 10, rand_str(63), rand_str(64), np.nan, 1.0]),
+ [string.digits * 10, rand_str(63), rand_str(64), rand_str(1000)],
+ [string.digits * 10, rand_str(63), rand_str(64), np.nan, 1.0],
],
)
- def test_astype_str_map(self, dtype, series, using_infer_string):
+ def test_astype_str_map(self, dtype, data, using_infer_string):
# see GH#4405
+ series = Series(data)
result = series.astype(dtype)
expected = series.map(str)
if using_infer_string:
@@ -459,13 +460,9 @@ def test_astype_nan_to_bool(self):
expected = Series(True, dtype="bool")
tm.assert_series_equal(result, expected)
- @pytest.mark.parametrize(
- "dtype",
- tm.ALL_INT_EA_DTYPES + tm.FLOAT_EA_DTYPES,
- )
- def test_astype_ea_to_datetimetzdtype(self, dtype):
+ def test_astype_ea_to_datetimetzdtype(self, any_numeric_ea_dtype):
# GH37553
- ser = Series([4, 0, 9], dtype=dtype)
+ ser = Series([4, 0, 9], dtype=any_numeric_ea_dtype)
result = ser.astype(DatetimeTZDtype(tz="US/Pacific"))
expected = Series(
diff --git a/pandas/tests/series/methods/test_diff.py b/pandas/tests/series/methods/test_diff.py
index a46389087f87b..89c34aeb64206 100644
--- a/pandas/tests/series/methods/test_diff.py
+++ b/pandas/tests/series/methods/test_diff.py
@@ -74,13 +74,11 @@ def test_diff_dt64tz(self):
expected = Series(TimedeltaIndex(["NaT"] + ["1 days"] * 4), name="foo")
tm.assert_series_equal(result, expected)
- @pytest.mark.parametrize(
- "input,output,diff",
- [([False, True, True, False, False], [np.nan, True, False, True, False], 1)],
- )
- def test_diff_bool(self, input, output, diff):
+ def test_diff_bool(self):
# boolean series (test for fixing #17294)
- ser = Series(input)
+ data = [False, True, True, False, False]
+ output = [np.nan, True, False, True, False]
+ ser = Series(data)
result = ser.diff()
expected = Series(output)
tm.assert_series_equal(result, expected)
diff --git a/pandas/tests/series/methods/test_drop.py b/pandas/tests/series/methods/test_drop.py
index 5d9a469915cfb..d2a5a3324e886 100644
--- a/pandas/tests/series/methods/test_drop.py
+++ b/pandas/tests/series/methods/test_drop.py
@@ -31,17 +31,17 @@ def test_drop_unique_and_non_unique_index(
@pytest.mark.parametrize(
- "data, index, drop_labels, axis, error_type, error_desc",
+ "drop_labels, axis, error_type, error_desc",
[
# single string/tuple-like
- (range(3), list("abc"), "bc", 0, KeyError, "not found in axis"),
+ ("bc", 0, KeyError, "not found in axis"),
# bad axis
- (range(3), list("abc"), ("a",), 0, KeyError, "not found in axis"),
- (range(3), list("abc"), "one", "columns", ValueError, "No axis named columns"),
+ (("a",), 0, KeyError, "not found in axis"),
+ ("one", "columns", ValueError, "No axis named columns"),
],
)
-def test_drop_exception_raised(data, index, drop_labels, axis, error_type, error_desc):
- ser = Series(data, index=index)
+def test_drop_exception_raised(drop_labels, axis, error_type, error_desc):
+ ser = Series(range(3), index=list("abc"))
with pytest.raises(error_type, match=error_desc):
ser.drop(drop_labels, axis=axis)
diff --git a/pandas/tests/series/methods/test_drop_duplicates.py b/pandas/tests/series/methods/test_drop_duplicates.py
index 10b2e98586365..31ef8ff896bcc 100644
--- a/pandas/tests/series/methods/test_drop_duplicates.py
+++ b/pandas/tests/series/methods/test_drop_duplicates.py
@@ -34,14 +34,14 @@ def test_drop_duplicates(any_numpy_dtype, keep, expected):
@pytest.mark.parametrize(
"keep, expected",
[
- ("first", Series([False, False, True, True])),
- ("last", Series([True, True, False, False])),
- (False, Series([True, True, True, True])),
+ ("first", [False, False, True, True]),
+ ("last", [True, True, False, False]),
+ (False, [True, True, True, True]),
],
)
def test_drop_duplicates_bool(keep, expected):
tc = Series([True, False, True, False])
-
+ expected = Series(expected)
tm.assert_series_equal(tc.duplicated(keep=keep), expected)
tm.assert_series_equal(tc.drop_duplicates(keep=keep), tc[~expected])
sc = tc.copy()
diff --git a/pandas/tests/series/methods/test_duplicated.py b/pandas/tests/series/methods/test_duplicated.py
index e177b5275d855..f5a387a7f302e 100644
--- a/pandas/tests/series/methods/test_duplicated.py
+++ b/pandas/tests/series/methods/test_duplicated.py
@@ -12,30 +12,32 @@
@pytest.mark.parametrize(
"keep, expected",
[
- ("first", Series([False, False, True, False, True], name="name")),
- ("last", Series([True, True, False, False, False], name="name")),
- (False, Series([True, True, True, False, True], name="name")),
+ ("first", [False, False, True, False, True]),
+ ("last", [True, True, False, False, False]),
+ (False, [True, True, True, False, True]),
],
)
def test_duplicated_keep(keep, expected):
ser = Series(["a", "b", "b", "c", "a"], name="name")
result = ser.duplicated(keep=keep)
+ expected = Series(expected, name="name")
tm.assert_series_equal(result, expected)
@pytest.mark.parametrize(
"keep, expected",
[
- ("first", Series([False, False, True, False, True])),
- ("last", Series([True, True, False, False, False])),
- (False, Series([True, True, True, False, True])),
+ ("first", [False, False, True, False, True]),
+ ("last", [True, True, False, False, False]),
+ (False, [True, True, True, False, True]),
],
)
def test_duplicated_nan_none(keep, expected):
ser = Series([np.nan, 3, 3, None, np.nan], dtype=object)
result = ser.duplicated(keep=keep)
+ expected = Series(expected)
tm.assert_series_equal(result, expected)
diff --git a/pandas/tests/series/methods/test_explode.py b/pandas/tests/series/methods/test_explode.py
index 5a0188585ef30..15d615fc35081 100644
--- a/pandas/tests/series/methods/test_explode.py
+++ b/pandas/tests/series/methods/test_explode.py
@@ -74,11 +74,12 @@ def test_invert_array():
@pytest.mark.parametrize(
- "s", [pd.Series([1, 2, 3]), pd.Series(pd.date_range("2019", periods=3, tz="UTC"))]
+ "data", [[1, 2, 3], pd.date_range("2019", periods=3, tz="UTC")]
)
-def test_non_object_dtype(s):
- result = s.explode()
- tm.assert_series_equal(result, s)
+def test_non_object_dtype(data):
+ ser = pd.Series(data)
+ result = ser.explode()
+ tm.assert_series_equal(result, ser)
def test_typical_usecase():
diff --git a/pandas/tests/series/methods/test_fillna.py b/pandas/tests/series/methods/test_fillna.py
index f38e4a622cffa..a70d9f39ff488 100644
--- a/pandas/tests/series/methods/test_fillna.py
+++ b/pandas/tests/series/methods/test_fillna.py
@@ -786,13 +786,11 @@ def test_fillna_categorical(self, fill_value, expected_output):
@pytest.mark.parametrize(
"fill_value, expected_output",
[
- (Series(["a", "b", "c", "d", "e"]), ["a", "b", "b", "d", "e"]),
- (Series(["b", "d", "a", "d", "a"]), ["a", "d", "b", "d", "a"]),
+ (["a", "b", "c", "d", "e"], ["a", "b", "b", "d", "e"]),
+ (["b", "d", "a", "d", "a"], ["a", "d", "b", "d", "a"]),
(
- Series(
- Categorical(
- ["b", "d", "a", "d", "a"], categories=["b", "c", "d", "e", "a"]
- )
+ Categorical(
+ ["b", "d", "a", "d", "a"], categories=["b", "c", "d", "e", "a"]
),
["a", "d", "b", "d", "a"],
),
@@ -803,6 +801,7 @@ def test_fillna_categorical_with_new_categories(self, fill_value, expected_outpu
data = ["a", np.nan, "b", np.nan, np.nan]
ser = Series(Categorical(data, categories=["a", "b", "c", "d", "e"]))
exp = Series(Categorical(expected_output, categories=["a", "b", "c", "d", "e"]))
+ fill_value = Series(fill_value)
result = ser.fillna(fill_value)
tm.assert_series_equal(result, exp)
diff --git a/pandas/tests/series/methods/test_info.py b/pandas/tests/series/methods/test_info.py
index 29dd704f6efa9..bd1bc1781958c 100644
--- a/pandas/tests/series/methods/test_info.py
+++ b/pandas/tests/series/methods/test_info.py
@@ -141,19 +141,17 @@ def test_info_memory_usage_deep_pypy():
@pytest.mark.parametrize(
- "series, plus",
+ "index, plus",
[
- (Series(1, index=[1, 2, 3]), False),
- (Series(1, index=list("ABC")), True),
- (Series(1, index=MultiIndex.from_product([range(3), range(3)])), False),
- (
- Series(1, index=MultiIndex.from_product([range(3), ["foo", "bar"]])),
- True,
- ),
+ ([1, 2, 3], False),
+ (list("ABC"), True),
+ (MultiIndex.from_product([range(3), range(3)]), False),
+ (MultiIndex.from_product([range(3), ["foo", "bar"]]), True),
],
)
-def test_info_memory_usage_qualified(series, plus):
+def test_info_memory_usage_qualified(index, plus):
buf = StringIO()
+ series = Series(1, index=index)
series.info(buf=buf)
if plus:
assert "+" in buf.getvalue()
diff --git a/pandas/tests/series/methods/test_isin.py b/pandas/tests/series/methods/test_isin.py
index f94f67b8cc40a..937b85a547bcd 100644
--- a/pandas/tests/series/methods/test_isin.py
+++ b/pandas/tests/series/methods/test_isin.py
@@ -211,18 +211,11 @@ def test_isin_large_series_mixed_dtypes_and_nan(monkeypatch):
tm.assert_series_equal(result, expected)
-@pytest.mark.parametrize(
- "array,expected",
- [
- (
- [0, 1j, 1j, 1, 1 + 1j, 1 + 2j, 1 + 1j],
- Series([False, True, True, False, True, True, True], dtype=bool),
- )
- ],
-)
-def test_isin_complex_numbers(array, expected):
+def test_isin_complex_numbers():
# GH 17927
+ array = [0, 1j, 1j, 1, 1 + 1j, 1 + 2j, 1 + 1j]
result = Series(array).isin([1j, 1 + 1j, 1 + 2j])
+ expected = Series([False, True, True, False, True, True, True], dtype=bool)
tm.assert_series_equal(result, expected)
diff --git a/pandas/tests/series/methods/test_replace.py b/pandas/tests/series/methods/test_replace.py
index b0f4e233ba5eb..5266bbf7741d7 100644
--- a/pandas/tests/series/methods/test_replace.py
+++ b/pandas/tests/series/methods/test_replace.py
@@ -395,13 +395,13 @@ def test_replace_mixed_types_with_string(self):
@pytest.mark.parametrize(
"categorical, numeric",
[
- (pd.Categorical(["A"], categories=["A", "B"]), [1]),
- (pd.Categorical(["A", "B"], categories=["A", "B"]), [1, 2]),
+ (["A"], [1]),
+ (["A", "B"], [1, 2]),
],
)
def test_replace_categorical(self, categorical, numeric):
# GH 24971, GH#23305
- ser = pd.Series(categorical)
+ ser = pd.Series(pd.Categorical(categorical, categories=["A", "B"]))
msg = "Downcasting behavior in `replace`"
msg = "with CategoricalDtype is deprecated"
with tm.assert_produces_warning(FutureWarning, match=msg):
diff --git a/pandas/tests/series/methods/test_update.py b/pandas/tests/series/methods/test_update.py
index 3f18ae6c13880..191aa36ad5d41 100644
--- a/pandas/tests/series/methods/test_update.py
+++ b/pandas/tests/series/methods/test_update.py
@@ -76,21 +76,23 @@ def test_update_dtypes(self, other, dtype, expected, warn):
tm.assert_series_equal(ser, expected)
@pytest.mark.parametrize(
- "series, other, expected",
+ "values, other, expected",
[
# update by key
(
- Series({"a": 1, "b": 2, "c": 3, "d": 4}),
+ {"a": 1, "b": 2, "c": 3, "d": 4},
{"b": 5, "c": np.nan},
- Series({"a": 1, "b": 5, "c": 3, "d": 4}),
+ {"a": 1, "b": 5, "c": 3, "d": 4},
),
# update by position
- (Series([1, 2, 3, 4]), [np.nan, 5, 1], Series([1, 5, 1, 4])),
+ ([1, 2, 3, 4], [np.nan, 5, 1], [1, 5, 1, 4]),
],
)
- def test_update_from_non_series(self, series, other, expected):
+ def test_update_from_non_series(self, values, other, expected):
# GH 33215
+ series = Series(values)
series.update(other)
+ expected = Series(expected)
tm.assert_series_equal(series, expected)
@pytest.mark.parametrize(
diff --git a/pandas/tests/series/methods/test_value_counts.py b/pandas/tests/series/methods/test_value_counts.py
index 859010d9c79c6..7f882fa348b7e 100644
--- a/pandas/tests/series/methods/test_value_counts.py
+++ b/pandas/tests/series/methods/test_value_counts.py
@@ -225,30 +225,14 @@ def test_value_counts_bool_with_nan(self, ser, dropna, exp):
out = ser.value_counts(dropna=dropna)
tm.assert_series_equal(out, exp)
- @pytest.mark.parametrize(
- "input_array,expected",
- [
- (
- [1 + 1j, 1 + 1j, 1, 3j, 3j, 3j],
- Series(
- [3, 2, 1],
- index=Index([3j, 1 + 1j, 1], dtype=np.complex128),
- name="count",
- ),
- ),
- (
- np.array([1 + 1j, 1 + 1j, 1, 3j, 3j, 3j], dtype=np.complex64),
- Series(
- [3, 2, 1],
- index=Index([3j, 1 + 1j, 1], dtype=np.complex64),
- name="count",
- ),
- ),
- ],
- )
- def test_value_counts_complex_numbers(self, input_array, expected):
+ @pytest.mark.parametrize("dtype", [np.complex128, np.complex64])
+ def test_value_counts_complex_numbers(self, dtype):
# GH 17927
+ input_array = np.array([1 + 1j, 1 + 1j, 1, 3j, 3j, 3j], dtype=dtype)
result = Series(input_array).value_counts()
+ expected = Series(
+ [3, 2, 1], index=Index([3j, 1 + 1j, 1], dtype=dtype), name="count"
+ )
tm.assert_series_equal(result, expected)
def test_value_counts_masked(self):
diff --git a/pandas/tests/series/test_arithmetic.py b/pandas/tests/series/test_arithmetic.py
index b40e2e99dae2e..d71a515c85bd0 100644
--- a/pandas/tests/series/test_arithmetic.py
+++ b/pandas/tests/series/test_arithmetic.py
@@ -674,22 +674,12 @@ def test_ne(self):
tm.assert_numpy_array_equal(ts.index != 5, expected)
tm.assert_numpy_array_equal(~(ts.index == 5), expected)
- @pytest.mark.parametrize(
- "left, right",
- [
- (
- Series([1, 2, 3], index=list("ABC"), name="x"),
- Series([2, 2, 2], index=list("ABD"), name="x"),
- ),
- (
- Series([1, 2, 3], index=list("ABC"), name="x"),
- Series([2, 2, 2, 2], index=list("ABCD"), name="x"),
- ),
- ],
- )
- def test_comp_ops_df_compat(self, left, right, frame_or_series):
+ @pytest.mark.parametrize("right_data", [[2, 2, 2], [2, 2, 2, 2]])
+ def test_comp_ops_df_compat(self, right_data, frame_or_series):
# GH 1134
# GH 50083 to clarify that index and columns must be identically labeled
+ left = Series([1, 2, 3], index=list("ABC"), name="x")
+ right = Series(right_data, index=list("ABDC")[: len(right_data)], name="x")
if frame_or_series is not Series:
msg = (
rf"Can only compare identically-labeled \(both index and columns\) "
diff --git a/pandas/tests/series/test_constructors.py b/pandas/tests/series/test_constructors.py
index b802e92e4fcca..55ca1f98f6d6c 100644
--- a/pandas/tests/series/test_constructors.py
+++ b/pandas/tests/series/test_constructors.py
@@ -335,11 +335,11 @@ def test_constructor_index_dtype(self, dtype):
@pytest.mark.parametrize(
"input_vals",
[
- ([1, 2]),
- (["1", "2"]),
- (list(date_range("1/1/2011", periods=2, freq="h"))),
- (list(date_range("1/1/2011", periods=2, freq="h", tz="US/Eastern"))),
- ([Interval(left=0, right=5)]),
+ [1, 2],
+ ["1", "2"],
+ list(date_range("1/1/2011", periods=2, freq="h")),
+ list(date_range("1/1/2011", periods=2, freq="h", tz="US/Eastern")),
+ [Interval(left=0, right=5)],
],
)
def test_constructor_list_str(self, input_vals, string_dtype):
@@ -1806,15 +1806,10 @@ def test_constructor_datetimelike_scalar_to_string_dtype(
expected = Series(["M", "M", "M"], index=[1, 2, 3], dtype=nullable_string_dtype)
tm.assert_series_equal(result, expected)
- @pytest.mark.parametrize(
- "values",
- [
- [np.datetime64("2012-01-01"), np.datetime64("2013-01-01")],
- ["2012-01-01", "2013-01-01"],
- ],
- )
- def test_constructor_sparse_datetime64(self, values):
+ @pytest.mark.parametrize("box", [lambda x: x, np.datetime64])
+ def test_constructor_sparse_datetime64(self, box):
# https://github.com/pandas-dev/pandas/issues/35762
+ values = [box("2012-01-01"), box("2013-01-01")]
dtype = pd.SparseDtype("datetime64[ns]")
result = Series(values, dtype=dtype)
arr = pd.arrays.SparseArray(values, dtype=dtype)
diff --git a/pandas/tests/series/test_cumulative.py b/pandas/tests/series/test_cumulative.py
index e6f7b2a5e69e0..68d7fd8b90df2 100644
--- a/pandas/tests/series/test_cumulative.py
+++ b/pandas/tests/series/test_cumulative.py
@@ -94,8 +94,8 @@ def test_cummin_cummax_datetimelike(self, ts, method, skipna, exp_tdi):
@pytest.mark.parametrize(
"func, exp",
[
- ("cummin", pd.Period("2012-1-1", freq="D")),
- ("cummax", pd.Period("2012-1-2", freq="D")),
+ ("cummin", "2012-1-1"),
+ ("cummax", "2012-1-2"),
],
)
def test_cummin_cummax_period(self, func, exp):
@@ -108,6 +108,7 @@ def test_cummin_cummax_period(self, func, exp):
tm.assert_series_equal(result, expected)
result = getattr(ser, func)(skipna=True)
+ exp = pd.Period(exp, freq="D")
expected = pd.Series([pd.Period("2012-1-1", freq="D"), pd.NaT, exp])
tm.assert_series_equal(result, expected)
diff --git a/pandas/tests/series/test_logical_ops.py b/pandas/tests/series/test_logical_ops.py
index d9c94e871bd4b..755d26623cf98 100644
--- a/pandas/tests/series/test_logical_ops.py
+++ b/pandas/tests/series/test_logical_ops.py
@@ -345,9 +345,9 @@ def test_reversed_logical_op_with_index_returns_series(self, op):
@pytest.mark.parametrize(
"op, expected",
[
- (ops.rand_, Series([False, False])),
- (ops.ror_, Series([True, True])),
- (ops.rxor, Series([True, True])),
+ (ops.rand_, [False, False]),
+ (ops.ror_, [True, True]),
+ (ops.rxor, [True, True]),
],
)
def test_reverse_ops_with_index(self, op, expected):
@@ -358,6 +358,7 @@ def test_reverse_ops_with_index(self, op, expected):
idx = Index([False, True])
result = op(ser, idx)
+ expected = Series(expected)
tm.assert_series_equal(result, expected)
def test_logical_ops_label_based(self, using_infer_string):
| null | https://api.github.com/repos/pandas-dev/pandas/pulls/56745 | 2024-01-05T18:18:48Z | 2024-01-10T21:51:18Z | 2024-01-10T21:51:18Z | 2024-01-10T21:51:21Z |
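The diff in the record above repeatedly applies one refactoring pattern: parametrize tests over plain data (lists, dicts, ranges) and construct the `Series` inside the test body, instead of building pandas objects at collection time. A minimal sketch of that pattern, with a hypothetical `Container` class standing in for `pandas.Series` (not the actual pandas test code):

```python
import pytest


class Container:
    """Hypothetical stand-in for a heavier object like pandas.Series."""

    def __init__(self, data):
        self.data = list(data)


# Parametrize over cheap, plain data ...
@pytest.mark.parametrize("data", [[0, 1], [2, 3, 5]])
def test_roundtrip(data):
    # ... and build the heavy object per-test, not at collection time.
    obj = Container(data)
    assert obj.data == list(data)
```

Constructing objects inside the test keeps collection cheap and keeps the generated test ids readable (`[data0]` instead of a long `repr` of a `Series`).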
Backport PR #56677 on branch 2.2.x (Fix integral truediv and floordiv for pyarrow types with large divisor and avoid floating points for floordiv) | diff --git a/doc/source/whatsnew/v2.2.0.rst b/doc/source/whatsnew/v2.2.0.rst
index 4222de8ce324f..0b04a1d313a6d 100644
--- a/doc/source/whatsnew/v2.2.0.rst
+++ b/doc/source/whatsnew/v2.2.0.rst
@@ -786,6 +786,7 @@ Timezones
Numeric
^^^^^^^
- Bug in :func:`read_csv` with ``engine="pyarrow"`` causing rounding errors for large integers (:issue:`52505`)
+- Bug in :meth:`Series.__floordiv__` and :meth:`Series.__truediv__` for :class:`ArrowDtype` with integral dtypes raising for large divisors (:issue:`56706`)
- Bug in :meth:`Series.__floordiv__` for :class:`ArrowDtype` with integral dtypes raising for large values (:issue:`56645`)
- Bug in :meth:`Series.pow` not filling missing values correctly (:issue:`55512`)
diff --git a/pandas/core/arrays/arrow/array.py b/pandas/core/arrays/arrow/array.py
index 0bc01d2da330a..3858ce4cf0ea1 100644
--- a/pandas/core/arrays/arrow/array.py
+++ b/pandas/core/arrays/arrow/array.py
@@ -109,30 +109,50 @@
def cast_for_truediv(
arrow_array: pa.ChunkedArray, pa_object: pa.Array | pa.Scalar
- ) -> pa.ChunkedArray:
+ ) -> tuple[pa.ChunkedArray, pa.Array | pa.Scalar]:
# Ensure int / int -> float mirroring Python/Numpy behavior
# as pc.divide_checked(int, int) -> int
if pa.types.is_integer(arrow_array.type) and pa.types.is_integer(
pa_object.type
):
+ # GH: 56645.
# https://github.com/apache/arrow/issues/35563
- # Arrow does not allow safe casting large integral values to float64.
- # Intentionally not using arrow_array.cast because it could be a scalar
- # value in reflected case, and safe=False only added to
- # scalar cast in pyarrow 13.
- return pc.cast(arrow_array, pa.float64(), safe=False)
- return arrow_array
+ return pc.cast(arrow_array, pa.float64(), safe=False), pc.cast(
+ pa_object, pa.float64(), safe=False
+ )
+
+ return arrow_array, pa_object
def floordiv_compat(
left: pa.ChunkedArray | pa.Array | pa.Scalar,
right: pa.ChunkedArray | pa.Array | pa.Scalar,
) -> pa.ChunkedArray:
- # Ensure int // int -> int mirroring Python/Numpy behavior
- # as pc.floor(pc.divide_checked(int, int)) -> float
- converted_left = cast_for_truediv(left, right)
- result = pc.floor(pc.divide(converted_left, right))
+ # TODO: Replace with pyarrow floordiv kernel.
+ # https://github.com/apache/arrow/issues/39386
if pa.types.is_integer(left.type) and pa.types.is_integer(right.type):
+ divided = pc.divide_checked(left, right)
+ if pa.types.is_signed_integer(divided.type):
+ # GH 56676
+ has_remainder = pc.not_equal(pc.multiply(divided, right), left)
+ has_one_negative_operand = pc.less(
+ pc.bit_wise_xor(left, right),
+ pa.scalar(0, type=divided.type),
+ )
+ result = pc.if_else(
+ pc.and_(
+ has_remainder,
+ has_one_negative_operand,
+ ),
+ # GH: 55561
+ pc.subtract(divided, pa.scalar(1, type=divided.type)),
+ divided,
+ )
+ else:
+ result = divided
result = result.cast(left.type)
+ else:
+ divided = pc.divide(left, right)
+ result = pc.floor(divided)
return result
ARROW_ARITHMETIC_FUNCS = {
@@ -142,8 +162,8 @@ def floordiv_compat(
"rsub": lambda x, y: pc.subtract_checked(y, x),
"mul": pc.multiply_checked,
"rmul": lambda x, y: pc.multiply_checked(y, x),
- "truediv": lambda x, y: pc.divide(cast_for_truediv(x, y), y),
- "rtruediv": lambda x, y: pc.divide(y, cast_for_truediv(x, y)),
+ "truediv": lambda x, y: pc.divide(*cast_for_truediv(x, y)),
+ "rtruediv": lambda x, y: pc.divide(*cast_for_truediv(y, x)),
"floordiv": lambda x, y: floordiv_compat(x, y),
"rfloordiv": lambda x, y: floordiv_compat(y, x),
"mod": NotImplemented,
diff --git a/pandas/tests/extension/test_arrow.py b/pandas/tests/extension/test_arrow.py
index 6689fb34f2ae3..05a112e464677 100644
--- a/pandas/tests/extension/test_arrow.py
+++ b/pandas/tests/extension/test_arrow.py
@@ -3260,13 +3260,82 @@ def test_arrow_floordiv():
def test_arrow_floordiv_large_values():
- # GH 55561
+ # GH 56645
a = pd.Series([1425801600000000000], dtype="int64[pyarrow]")
expected = pd.Series([1425801600000], dtype="int64[pyarrow]")
result = a // 1_000_000
tm.assert_series_equal(result, expected)
+@pytest.mark.parametrize("dtype", ["int64[pyarrow]", "uint64[pyarrow]"])
+def test_arrow_floordiv_large_integral_result(dtype):
+ # GH 56676
+ a = pd.Series([18014398509481983], dtype=dtype)
+ result = a // 1
+ tm.assert_series_equal(result, a)
+
+
+@pytest.mark.parametrize("pa_type", tm.SIGNED_INT_PYARROW_DTYPES)
+def test_arrow_floordiv_larger_divisor(pa_type):
+ # GH 56676
+ dtype = ArrowDtype(pa_type)
+ a = pd.Series([-23], dtype=dtype)
+ result = a // 24
+ expected = pd.Series([-1], dtype=dtype)
+ tm.assert_series_equal(result, expected)
+
+
+@pytest.mark.parametrize("pa_type", tm.SIGNED_INT_PYARROW_DTYPES)
+def test_arrow_floordiv_integral_invalid(pa_type):
+ # GH 56676
+ min_value = np.iinfo(pa_type.to_pandas_dtype()).min
+ a = pd.Series([min_value], dtype=ArrowDtype(pa_type))
+ with pytest.raises(pa.lib.ArrowInvalid, match="overflow|not in range"):
+ a // -1
+ with pytest.raises(pa.lib.ArrowInvalid, match="divide by zero"):
+ a // 0
+
+
+@pytest.mark.parametrize("dtype", tm.FLOAT_PYARROW_DTYPES_STR_REPR)
+def test_arrow_floordiv_floating_0_divisor(dtype):
+ # GH 56676
+ a = pd.Series([2], dtype=dtype)
+ result = a // 0
+ expected = pd.Series([float("inf")], dtype=dtype)
+ tm.assert_series_equal(result, expected)
+
+
+@pytest.mark.parametrize("pa_type", tm.ALL_INT_PYARROW_DTYPES)
+def test_arrow_integral_floordiv_large_values(pa_type):
+ # GH 56676
+ max_value = np.iinfo(pa_type.to_pandas_dtype()).max
+ dtype = ArrowDtype(pa_type)
+ a = pd.Series([max_value], dtype=dtype)
+ b = pd.Series([1], dtype=dtype)
+ result = a // b
+ tm.assert_series_equal(result, a)
+
+
+@pytest.mark.parametrize("dtype", ["int64[pyarrow]", "uint64[pyarrow]"])
+def test_arrow_true_division_large_divisor(dtype):
+ # GH 56706
+ a = pd.Series([0], dtype=dtype)
+ b = pd.Series([18014398509481983], dtype=dtype)
+ expected = pd.Series([0], dtype="float64[pyarrow]")
+ result = a / b
+ tm.assert_series_equal(result, expected)
+
+
+@pytest.mark.parametrize("dtype", ["int64[pyarrow]", "uint64[pyarrow]"])
+def test_arrow_floor_division_large_divisor(dtype):
+ # GH 56706
+ a = pd.Series([0], dtype=dtype)
+ b = pd.Series([18014398509481983], dtype=dtype)
+ expected = pd.Series([0], dtype=dtype)
+ result = a // b
+ tm.assert_series_equal(result, expected)
+
+
def test_string_to_datetime_parsing_cast():
# GH 56266
string_dates = ["2020-01-01 04:30:00", "2020-01-02 00:00:00", "2020-01-03 00:00:00"]
| Backport PR #56677: Fix integral truediv and floordiv for pyarrow types with large divisor and avoid floating points for floordiv | https://api.github.com/repos/pandas-dev/pandas/pulls/56744 | 2024-01-05T18:09:24Z | 2024-01-05T18:56:14Z | 2024-01-05T18:56:14Z | 2024-01-05T18:56:14Z |
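The `floordiv_compat` change in the diff above emulates floor division on signed integers by correcting truncated division: subtract 1 when there is a remainder and the operands have opposite signs (the diff detects opposite signs with `bit_wise_xor`; the equivalent sign comparison is used here). A pure-Python sketch of that correction, not the actual pyarrow kernel:

```python
def truncated_div(left: int, right: int) -> int:
    """Integer division that rounds toward zero, like pc.divide_checked."""
    q = abs(left) // abs(right)
    return -q if (left < 0) != (right < 0) else q


def floordiv_from_truncated(left: int, right: int) -> int:
    """Rebuild floor division (round toward -infinity) from truncation."""
    divided = truncated_div(left, right)
    has_remainder = divided * right != left
    # In the diff this check is pc.less(pc.bit_wise_xor(left, right), 0):
    # the XOR of two two's-complement ints is negative iff signs differ.
    opposite_signs = (left < 0) != (right < 0)
    if has_remainder and opposite_signs:
        divided -= 1  # pull the truncated quotient down toward -infinity
    return divided
```

For example `-23 // 24` is `-1` under floor semantics, while truncated division gives `0`; the correction branch closes exactly that gap (the `test_arrow_floordiv_larger_divisor` case in the diff).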
DOC: Modified docstring of DataFrame.to_dict() to make the usage of orient='records' more clear | diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 6851955d693bc..8db437ccec389 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -2066,7 +2066,9 @@ def to_dict(
index : bool, default True
Whether to include the index item (and index_names item if `orient`
is 'tight') in the returned dictionary. Can only be ``False``
- when `orient` is 'split' or 'tight'.
+ when `orient` is 'split' or 'tight'. Note that when `orient` is
+ 'records', this parameter does not take effect (index item always
+ not included).
.. versionadded:: 2.0.0
| Modified documentation for parameter `index` as mentioned in #56483.
However, I don't think this should be the final solution to this issue. The current behavior remains counter-intuitive and will confuse users who haven't checked the doc.
closes #56483 | https://api.github.com/repos/pandas-dev/pandas/pulls/56743 | 2024-01-05T12:24:58Z | 2024-01-05T18:03:29Z | 2024-01-05T18:03:29Z | 2024-01-05T18:03:36Z |
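The behavior the doc note above describes can be illustrated without pandas: `orient='records'` builds one dict per row from the column values only, so the row labels (the index) have nowhere to appear. A minimal stdlib sketch of those semantics (not the pandas implementation):

```python
# Column-oriented data plus row labels, analogous to a small DataFrame.
columns = {"foo": ["one", "two"], "baz": [1, 2]}
row_labels = ["a", "b"]  # analogous to the index -- deliberately unused

# 'records' orientation: one dict per row, keys are column names.
records = [dict(zip(columns, row)) for row in zip(*columns.values())]
# -> [{'foo': 'one', 'baz': 1}, {'foo': 'two', 'baz': 2}]
```

Nothing in the construction consults `row_labels`, which is why `index=True` cannot take effect for this orientation.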
TYP: mostly DataFrame return overloads | diff --git a/pandas/core/base.py b/pandas/core/base.py
index a1484d9ad032b..492986faba4d3 100644
--- a/pandas/core/base.py
+++ b/pandas/core/base.py
@@ -8,6 +8,7 @@
from typing import (
TYPE_CHECKING,
Any,
+ Callable,
Generic,
Literal,
cast,
@@ -106,7 +107,7 @@ class PandasObject(DirNamesMixin):
_cache: dict[str, Any]
@property
- def _constructor(self):
+ def _constructor(self) -> Callable[..., Self]:
"""
Class constructor (for this class it's just `__class__`).
"""
@@ -802,7 +803,7 @@ def argmin(
# "int")
return result # type: ignore[return-value]
- def tolist(self):
+ def tolist(self) -> list:
"""
Return a list of the values.
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index e48e5d9023f33..3cce697120cd2 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -10674,7 +10674,7 @@ def round(
"""
from pandas.core.reshape.concat import concat
- def _dict_round(df: DataFrame, decimals):
+ def _dict_round(df: DataFrame, decimals) -> Iterator[Series]:
for col, vals in df.items():
try:
yield _series_round(vals, decimals[col])
@@ -11110,7 +11110,7 @@ def c(x):
# ----------------------------------------------------------------------
# ndarray-like stats methods
- def count(self, axis: Axis = 0, numeric_only: bool = False):
+ def count(self, axis: Axis = 0, numeric_only: bool = False) -> Series:
"""
Count non-NA cells for each column or row.
@@ -11356,9 +11356,42 @@ def _reduce_axis1(self, name: str, func, skipna: bool) -> Series:
res_ser = self._constructor_sliced(result, index=self.index, copy=False)
return res_ser
- @doc(make_doc("any", ndim=2))
# error: Signature of "any" incompatible with supertype "NDFrame"
- def any( # type: ignore[override]
+ @overload # type: ignore[override]
+ def any(
+ self,
+ *,
+ axis: Axis = ...,
+ bool_only: bool = ...,
+ skipna: bool = ...,
+ **kwargs,
+ ) -> Series:
+ ...
+
+ @overload
+ def any(
+ self,
+ *,
+ axis: None,
+ bool_only: bool = ...,
+ skipna: bool = ...,
+ **kwargs,
+ ) -> bool:
+ ...
+
+ @overload
+ def any(
+ self,
+ *,
+ axis: Axis | None,
+ bool_only: bool = ...,
+ skipna: bool = ...,
+ **kwargs,
+ ) -> Series | bool:
+ ...
+
+ @doc(make_doc("any", ndim=2))
+ def any(
self,
*,
axis: Axis | None = 0,
@@ -11373,6 +11406,39 @@ def any( # type: ignore[override]
result = result.__finalize__(self, method="any")
return result
+ @overload
+ def all(
+ self,
+ *,
+ axis: Axis = ...,
+ bool_only: bool = ...,
+ skipna: bool = ...,
+ **kwargs,
+ ) -> Series:
+ ...
+
+ @overload
+ def all(
+ self,
+ *,
+ axis: None,
+ bool_only: bool = ...,
+ skipna: bool = ...,
+ **kwargs,
+ ) -> bool:
+ ...
+
+ @overload
+ def all(
+ self,
+ *,
+ axis: Axis | None,
+ bool_only: bool = ...,
+ skipna: bool = ...,
+ **kwargs,
+ ) -> Series | bool:
+ ...
+
@doc(make_doc("all", ndim=2))
def all(
self,
@@ -11388,6 +11454,40 @@ def all(
result = result.__finalize__(self, method="all")
return result
+ # error: Signature of "min" incompatible with supertype "NDFrame"
+ @overload # type: ignore[override]
+ def min(
+ self,
+ *,
+ axis: Axis = ...,
+ skipna: bool = ...,
+ numeric_only: bool = ...,
+ **kwargs,
+ ) -> Series:
+ ...
+
+ @overload
+ def min(
+ self,
+ *,
+ axis: None,
+ skipna: bool = ...,
+ numeric_only: bool = ...,
+ **kwargs,
+ ) -> Any:
+ ...
+
+ @overload
+ def min(
+ self,
+ *,
+ axis: Axis | None,
+ skipna: bool = ...,
+ numeric_only: bool = ...,
+ **kwargs,
+ ) -> Series | Any:
+ ...
+
@doc(make_doc("min", ndim=2))
def min(
self,
@@ -11395,12 +11495,48 @@ def min(
skipna: bool = True,
numeric_only: bool = False,
**kwargs,
- ):
- result = super().min(axis, skipna, numeric_only, **kwargs)
+ ) -> Series | Any:
+ result = super().min(
+ axis=axis, skipna=skipna, numeric_only=numeric_only, **kwargs
+ )
if isinstance(result, Series):
result = result.__finalize__(self, method="min")
return result
+ # error: Signature of "max" incompatible with supertype "NDFrame"
+ @overload # type: ignore[override]
+ def max(
+ self,
+ *,
+ axis: Axis = ...,
+ skipna: bool = ...,
+ numeric_only: bool = ...,
+ **kwargs,
+ ) -> Series:
+ ...
+
+ @overload
+ def max(
+ self,
+ *,
+ axis: None,
+ skipna: bool = ...,
+ numeric_only: bool = ...,
+ **kwargs,
+ ) -> Any:
+ ...
+
+ @overload
+ def max(
+ self,
+ *,
+ axis: Axis | None,
+ skipna: bool = ...,
+ numeric_only: bool = ...,
+ **kwargs,
+ ) -> Series | Any:
+ ...
+
@doc(make_doc("max", ndim=2))
def max(
self,
@@ -11408,8 +11544,10 @@ def max(
skipna: bool = True,
numeric_only: bool = False,
**kwargs,
- ):
- result = super().max(axis, skipna, numeric_only, **kwargs)
+ ) -> Series | Any:
+ result = super().max(
+ axis=axis, skipna=skipna, numeric_only=numeric_only, **kwargs
+ )
if isinstance(result, Series):
result = result.__finalize__(self, method="max")
return result
@@ -11422,8 +11560,14 @@ def sum(
numeric_only: bool = False,
min_count: int = 0,
**kwargs,
- ):
- result = super().sum(axis, skipna, numeric_only, min_count, **kwargs)
+ ) -> Series:
+ result = super().sum(
+ axis=axis,
+ skipna=skipna,
+ numeric_only=numeric_only,
+ min_count=min_count,
+ **kwargs,
+ )
return result.__finalize__(self, method="sum")
@doc(make_doc("prod", ndim=2))
@@ -11434,10 +11578,50 @@ def prod(
numeric_only: bool = False,
min_count: int = 0,
**kwargs,
- ):
- result = super().prod(axis, skipna, numeric_only, min_count, **kwargs)
+ ) -> Series:
+ result = super().prod(
+ axis=axis,
+ skipna=skipna,
+ numeric_only=numeric_only,
+ min_count=min_count,
+ **kwargs,
+ )
return result.__finalize__(self, method="prod")
+ # error: Signature of "mean" incompatible with supertype "NDFrame"
+ @overload # type: ignore[override]
+ def mean(
+ self,
+ *,
+ axis: Axis = ...,
+ skipna: bool = ...,
+ numeric_only: bool = ...,
+ **kwargs,
+ ) -> Series:
+ ...
+
+ @overload
+ def mean(
+ self,
+ *,
+ axis: None,
+ skipna: bool = ...,
+ numeric_only: bool = ...,
+ **kwargs,
+ ) -> Any:
+ ...
+
+ @overload
+ def mean(
+ self,
+ *,
+ axis: Axis | None,
+ skipna: bool = ...,
+ numeric_only: bool = ...,
+ **kwargs,
+ ) -> Series | Any:
+ ...
+
@doc(make_doc("mean", ndim=2))
def mean(
self,
@@ -11445,12 +11629,48 @@ def mean(
skipna: bool = True,
numeric_only: bool = False,
**kwargs,
- ):
- result = super().mean(axis, skipna, numeric_only, **kwargs)
+ ) -> Series | Any:
+ result = super().mean(
+ axis=axis, skipna=skipna, numeric_only=numeric_only, **kwargs
+ )
if isinstance(result, Series):
result = result.__finalize__(self, method="mean")
return result
+ # error: Signature of "median" incompatible with supertype "NDFrame"
+ @overload # type: ignore[override]
+ def median(
+ self,
+ *,
+ axis: Axis = ...,
+ skipna: bool = ...,
+ numeric_only: bool = ...,
+ **kwargs,
+ ) -> Series:
+ ...
+
+ @overload
+ def median(
+ self,
+ *,
+ axis: None,
+ skipna: bool = ...,
+ numeric_only: bool = ...,
+ **kwargs,
+ ) -> Any:
+ ...
+
+ @overload
+ def median(
+ self,
+ *,
+ axis: Axis | None,
+ skipna: bool = ...,
+ numeric_only: bool = ...,
+ **kwargs,
+ ) -> Series | Any:
+ ...
+
@doc(make_doc("median", ndim=2))
def median(
self,
@@ -11458,12 +11678,51 @@ def median(
skipna: bool = True,
numeric_only: bool = False,
**kwargs,
- ):
- result = super().median(axis, skipna, numeric_only, **kwargs)
+ ) -> Series | Any:
+ result = super().median(
+ axis=axis, skipna=skipna, numeric_only=numeric_only, **kwargs
+ )
if isinstance(result, Series):
result = result.__finalize__(self, method="median")
return result
+ # error: Signature of "sem" incompatible with supertype "NDFrame"
+ @overload # type: ignore[override]
+ def sem(
+ self,
+ *,
+ axis: Axis = ...,
+ skipna: bool = ...,
+ ddof: int = ...,
+ numeric_only: bool = ...,
+ **kwargs,
+ ) -> Series:
+ ...
+
+ @overload
+ def sem(
+ self,
+ *,
+ axis: None,
+ skipna: bool = ...,
+ ddof: int = ...,
+ numeric_only: bool = ...,
+ **kwargs,
+ ) -> Any:
+ ...
+
+ @overload
+ def sem(
+ self,
+ *,
+ axis: Axis | None,
+ skipna: bool = ...,
+ ddof: int = ...,
+ numeric_only: bool = ...,
+ **kwargs,
+ ) -> Series | Any:
+ ...
+
@doc(make_doc("sem", ndim=2))
def sem(
self,
@@ -11472,12 +11731,51 @@ def sem(
ddof: int = 1,
numeric_only: bool = False,
**kwargs,
- ):
- result = super().sem(axis, skipna, ddof, numeric_only, **kwargs)
+ ) -> Series | Any:
+ result = super().sem(
+ axis=axis, skipna=skipna, ddof=ddof, numeric_only=numeric_only, **kwargs
+ )
if isinstance(result, Series):
result = result.__finalize__(self, method="sem")
return result
+ # error: Signature of "var" incompatible with supertype "NDFrame"
+ @overload # type: ignore[override]
+ def var(
+ self,
+ *,
+ axis: Axis = ...,
+ skipna: bool = ...,
+ ddof: int = ...,
+ numeric_only: bool = ...,
+ **kwargs,
+ ) -> Series:
+ ...
+
+ @overload
+ def var(
+ self,
+ *,
+ axis: None,
+ skipna: bool = ...,
+ ddof: int = ...,
+ numeric_only: bool = ...,
+ **kwargs,
+ ) -> Any:
+ ...
+
+ @overload
+ def var(
+ self,
+ *,
+ axis: Axis | None,
+ skipna: bool = ...,
+ ddof: int = ...,
+ numeric_only: bool = ...,
+ **kwargs,
+ ) -> Series | Any:
+ ...
+
@doc(make_doc("var", ndim=2))
def var(
self,
@@ -11486,12 +11784,51 @@ def var(
ddof: int = 1,
numeric_only: bool = False,
**kwargs,
- ):
- result = super().var(axis, skipna, ddof, numeric_only, **kwargs)
+ ) -> Series | Any:
+ result = super().var(
+ axis=axis, skipna=skipna, ddof=ddof, numeric_only=numeric_only, **kwargs
+ )
if isinstance(result, Series):
result = result.__finalize__(self, method="var")
return result
+ # error: Signature of "std" incompatible with supertype "NDFrame"
+ @overload # type: ignore[override]
+ def std(
+ self,
+ *,
+ axis: Axis = ...,
+ skipna: bool = ...,
+ ddof: int = ...,
+ numeric_only: bool = ...,
+ **kwargs,
+ ) -> Series:
+ ...
+
+ @overload
+ def std(
+ self,
+ *,
+ axis: None,
+ skipna: bool = ...,
+ ddof: int = ...,
+ numeric_only: bool = ...,
+ **kwargs,
+ ) -> Any:
+ ...
+
+ @overload
+ def std(
+ self,
+ *,
+ axis: Axis | None,
+ skipna: bool = ...,
+ ddof: int = ...,
+ numeric_only: bool = ...,
+ **kwargs,
+ ) -> Series | Any:
+ ...
+
@doc(make_doc("std", ndim=2))
def std(
self,
@@ -11500,12 +11837,48 @@ def std(
ddof: int = 1,
numeric_only: bool = False,
**kwargs,
- ):
- result = super().std(axis, skipna, ddof, numeric_only, **kwargs)
+ ) -> Series | Any:
+ result = super().std(
+ axis=axis, skipna=skipna, ddof=ddof, numeric_only=numeric_only, **kwargs
+ )
if isinstance(result, Series):
result = result.__finalize__(self, method="std")
return result
+ # error: Signature of "skew" incompatible with supertype "NDFrame"
+ @overload # type: ignore[override]
+ def skew(
+ self,
+ *,
+ axis: Axis = ...,
+ skipna: bool = ...,
+ numeric_only: bool = ...,
+ **kwargs,
+ ) -> Series:
+ ...
+
+ @overload
+ def skew(
+ self,
+ *,
+ axis: None,
+ skipna: bool = ...,
+ numeric_only: bool = ...,
+ **kwargs,
+ ) -> Any:
+ ...
+
+ @overload
+ def skew(
+ self,
+ *,
+ axis: Axis | None,
+ skipna: bool = ...,
+ numeric_only: bool = ...,
+ **kwargs,
+ ) -> Series | Any:
+ ...
+
@doc(make_doc("skew", ndim=2))
def skew(
self,
@@ -11513,12 +11886,48 @@ def skew(
skipna: bool = True,
numeric_only: bool = False,
**kwargs,
- ):
- result = super().skew(axis, skipna, numeric_only, **kwargs)
+ ) -> Series | Any:
+ result = super().skew(
+ axis=axis, skipna=skipna, numeric_only=numeric_only, **kwargs
+ )
if isinstance(result, Series):
result = result.__finalize__(self, method="skew")
return result
+ # error: Signature of "kurt" incompatible with supertype "NDFrame"
+ @overload # type: ignore[override]
+ def kurt(
+ self,
+ *,
+ axis: Axis = ...,
+ skipna: bool = ...,
+ numeric_only: bool = ...,
+ **kwargs,
+ ) -> Series:
+ ...
+
+ @overload
+ def kurt(
+ self,
+ *,
+ axis: None,
+ skipna: bool = ...,
+ numeric_only: bool = ...,
+ **kwargs,
+ ) -> Any:
+ ...
+
+ @overload
+ def kurt(
+ self,
+ *,
+ axis: Axis | None,
+ skipna: bool = ...,
+ numeric_only: bool = ...,
+ **kwargs,
+ ) -> Series | Any:
+ ...
+
@doc(make_doc("kurt", ndim=2))
def kurt(
self,
@@ -11526,13 +11935,16 @@ def kurt(
skipna: bool = True,
numeric_only: bool = False,
**kwargs,
- ):
- result = super().kurt(axis, skipna, numeric_only, **kwargs)
+ ) -> Series | Any:
+ result = super().kurt(
+ axis=axis, skipna=skipna, numeric_only=numeric_only, **kwargs
+ )
if isinstance(result, Series):
result = result.__finalize__(self, method="kurt")
return result
- kurtosis = kurt
+ # error: Incompatible types in assignment
+ kurtosis = kurt # type: ignore[assignment]
product = prod
@doc(make_doc("cummin", ndim=2))
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 3c71784ad81c4..f458e058c9755 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -11805,6 +11805,7 @@ def _logical_func(
def any(
self,
+ *,
axis: Axis | None = 0,
bool_only: bool_t = False,
skipna: bool_t = True,
@@ -11816,6 +11817,7 @@ def any(
def all(
self,
+ *,
axis: Axis = 0,
bool_only: bool_t = False,
skipna: bool_t = True,
@@ -11919,6 +11921,7 @@ def _stat_function_ddof(
def sem(
self,
+ *,
axis: Axis | None = 0,
skipna: bool_t = True,
ddof: int = 1,
@@ -11931,6 +11934,7 @@ def sem(
def var(
self,
+ *,
axis: Axis | None = 0,
skipna: bool_t = True,
ddof: int = 1,
@@ -11943,6 +11947,7 @@ def var(
def std(
self,
+ *,
axis: Axis | None = 0,
skipna: bool_t = True,
ddof: int = 1,
@@ -11974,6 +11979,7 @@ def _stat_function(
def min(
self,
+ *,
axis: Axis | None = 0,
skipna: bool_t = True,
numeric_only: bool_t = False,
@@ -11990,6 +11996,7 @@ def min(
def max(
self,
+ *,
axis: Axis | None = 0,
skipna: bool_t = True,
numeric_only: bool_t = False,
@@ -12006,6 +12013,7 @@ def max(
def mean(
self,
+ *,
axis: Axis | None = 0,
skipna: bool_t = True,
numeric_only: bool_t = False,
@@ -12017,6 +12025,7 @@ def mean(
def median(
self,
+ *,
axis: Axis | None = 0,
skipna: bool_t = True,
numeric_only: bool_t = False,
@@ -12028,6 +12037,7 @@ def median(
def skew(
self,
+ *,
axis: Axis | None = 0,
skipna: bool_t = True,
numeric_only: bool_t = False,
@@ -12039,6 +12049,7 @@ def skew(
def kurt(
self,
+ *,
axis: Axis | None = 0,
skipna: bool_t = True,
numeric_only: bool_t = False,
@@ -12091,6 +12102,7 @@ def _min_count_stat_function(
def sum(
self,
+ *,
axis: Axis | None = 0,
skipna: bool_t = True,
numeric_only: bool_t = False,
@@ -12103,6 +12115,7 @@ def sum(
def prod(
self,
+ *,
axis: Axis | None = 0,
skipna: bool_t = True,
numeric_only: bool_t = False,
diff --git a/pandas/core/reshape/encoding.py b/pandas/core/reshape/encoding.py
index fae5c082c72a0..fef32fca828a9 100644
--- a/pandas/core/reshape/encoding.py
+++ b/pandas/core/reshape/encoding.py
@@ -6,10 +6,7 @@
Iterable,
)
import itertools
-from typing import (
- TYPE_CHECKING,
- cast,
-)
+from typing import TYPE_CHECKING
import numpy as np
@@ -492,7 +489,7 @@ def from_dummies(
f"Received 'data' of type: {type(data).__name__}"
)
- col_isna_mask = cast(Series, data.isna().any())
+ col_isna_mask = data.isna().any()
if col_isna_mask.any():
raise ValueError(
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 641a44efbf286..8093f9aa70cba 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -6435,7 +6435,9 @@ def min(
numeric_only: bool = False,
**kwargs,
):
- return NDFrame.min(self, axis, skipna, numeric_only, **kwargs)
+ return NDFrame.min(
+ self, axis=axis, skipna=skipna, numeric_only=numeric_only, **kwargs
+ )
@doc(make_doc("max", ndim=1))
def max(
@@ -6445,7 +6447,9 @@ def max(
numeric_only: bool = False,
**kwargs,
):
- return NDFrame.max(self, axis, skipna, numeric_only, **kwargs)
+ return NDFrame.max(
+ self, axis=axis, skipna=skipna, numeric_only=numeric_only, **kwargs
+ )
@doc(make_doc("sum", ndim=1))
def sum(
@@ -6456,7 +6460,14 @@ def sum(
min_count: int = 0,
**kwargs,
):
- return NDFrame.sum(self, axis, skipna, numeric_only, min_count, **kwargs)
+ return NDFrame.sum(
+ self,
+ axis=axis,
+ skipna=skipna,
+ numeric_only=numeric_only,
+ min_count=min_count,
+ **kwargs,
+ )
@doc(make_doc("prod", ndim=1))
def prod(
@@ -6467,7 +6478,14 @@ def prod(
min_count: int = 0,
**kwargs,
):
- return NDFrame.prod(self, axis, skipna, numeric_only, min_count, **kwargs)
+ return NDFrame.prod(
+ self,
+ axis=axis,
+ skipna=skipna,
+ numeric_only=numeric_only,
+ min_count=min_count,
+ **kwargs,
+ )
@doc(make_doc("mean", ndim=1))
def mean(
@@ -6477,7 +6495,9 @@ def mean(
numeric_only: bool = False,
**kwargs,
):
- return NDFrame.mean(self, axis, skipna, numeric_only, **kwargs)
+ return NDFrame.mean(
+ self, axis=axis, skipna=skipna, numeric_only=numeric_only, **kwargs
+ )
@doc(make_doc("median", ndim=1))
def median(
@@ -6487,7 +6507,9 @@ def median(
numeric_only: bool = False,
**kwargs,
):
- return NDFrame.median(self, axis, skipna, numeric_only, **kwargs)
+ return NDFrame.median(
+ self, axis=axis, skipna=skipna, numeric_only=numeric_only, **kwargs
+ )
@doc(make_doc("sem", ndim=1))
def sem(
@@ -6498,7 +6520,14 @@ def sem(
numeric_only: bool = False,
**kwargs,
):
- return NDFrame.sem(self, axis, skipna, ddof, numeric_only, **kwargs)
+ return NDFrame.sem(
+ self,
+ axis=axis,
+ skipna=skipna,
+ ddof=ddof,
+ numeric_only=numeric_only,
+ **kwargs,
+ )
@doc(make_doc("var", ndim=1))
def var(
@@ -6509,7 +6538,14 @@ def var(
numeric_only: bool = False,
**kwargs,
):
- return NDFrame.var(self, axis, skipna, ddof, numeric_only, **kwargs)
+ return NDFrame.var(
+ self,
+ axis=axis,
+ skipna=skipna,
+ ddof=ddof,
+ numeric_only=numeric_only,
+ **kwargs,
+ )
@doc(make_doc("std", ndim=1))
def std(
@@ -6520,7 +6556,14 @@ def std(
numeric_only: bool = False,
**kwargs,
):
- return NDFrame.std(self, axis, skipna, ddof, numeric_only, **kwargs)
+ return NDFrame.std(
+ self,
+ axis=axis,
+ skipna=skipna,
+ ddof=ddof,
+ numeric_only=numeric_only,
+ **kwargs,
+ )
@doc(make_doc("skew", ndim=1))
def skew(
@@ -6530,7 +6573,9 @@ def skew(
numeric_only: bool = False,
**kwargs,
):
- return NDFrame.skew(self, axis, skipna, numeric_only, **kwargs)
+ return NDFrame.skew(
+ self, axis=axis, skipna=skipna, numeric_only=numeric_only, **kwargs
+ )
@doc(make_doc("kurt", ndim=1))
def kurt(
@@ -6540,7 +6585,9 @@ def kurt(
numeric_only: bool = False,
**kwargs,
):
- return NDFrame.kurt(self, axis, skipna, numeric_only, **kwargs)
+ return NDFrame.kurt(
+ self, axis=axis, skipna=skipna, numeric_only=numeric_only, **kwargs
+ )
kurtosis = kurt
product = prod
| - `DataFrame.any/all` return `bool` when `axis=None` and `Series` otherwise
- `DataFrame.sum/prod` currently always returns a `Series` (but will return anything when `axis=None` in the future)
- `DataFrame.mean/median/var/std/skew/kurt` return `Series` when `axis` is not `None` and potentially anything when `axis=None`
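A quick sketch of the first bullet (behavior as of pandas 2.x; the variable names here are illustrative only):

```python
import pandas as pd

df = pd.DataFrame({"a": [True, False], "b": [False, False]})

# Default axis=0: reduce each column, so the result is a Series of bools.
per_column = df.any()

# axis=None: reduce over both axes at once, so the result is a single bool.
overall = df.any(axis=None)
```

The overloads in this PR encode exactly this split: `axis: Axis` maps to `Series`, `axis: None` to the scalar case.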
I assume it is fine to make the arguments keyword-only directly in NDFrame as it is private(?), and Series and DataFrame override the methods in question. | https://api.github.com/repos/pandas-dev/pandas/pulls/56739 | 2024-01-05T01:56:12Z | 2024-02-13T01:22:05Z | 2024-02-13T01:22:05Z | 2024-04-07T10:59:30Z |
TST/CLN: Test parametrizations 2 | diff --git a/pandas/tests/indexes/base_class/test_setops.py b/pandas/tests/indexes/base_class/test_setops.py
index 3ef3f3ad4d3a2..49c6a91236db7 100644
--- a/pandas/tests/indexes/base_class/test_setops.py
+++ b/pandas/tests/indexes/base_class/test_setops.py
@@ -149,13 +149,13 @@ def test_intersection_str_dates(self, sort):
@pytest.mark.parametrize(
"index2,expected_arr",
- [(Index(["B", "D"]), ["B"]), (Index(["B", "D", "A"]), ["A", "B"])],
+ [(["B", "D"], ["B"]), (["B", "D", "A"], ["A", "B"])],
)
def test_intersection_non_monotonic_non_unique(self, index2, expected_arr, sort):
# non-monotonic non-unique
index1 = Index(["A", "B", "A", "C"])
expected = Index(expected_arr)
- result = index1.intersection(index2, sort=sort)
+ result = index1.intersection(Index(index2), sort=sort)
if sort is None:
expected = expected.sort_values()
tm.assert_index_equal(result, expected)
diff --git a/pandas/tests/indexes/datetimes/test_date_range.py b/pandas/tests/indexes/datetimes/test_date_range.py
index 019d434680661..024f37ee5b710 100644
--- a/pandas/tests/indexes/datetimes/test_date_range.py
+++ b/pandas/tests/indexes/datetimes/test_date_range.py
@@ -526,17 +526,13 @@ def test_range_tz_pytz(self):
@pytest.mark.parametrize(
"start, end",
[
- [
- Timestamp(datetime(2014, 3, 6), tz="US/Eastern"),
- Timestamp(datetime(2014, 3, 12), tz="US/Eastern"),
- ],
- [
- Timestamp(datetime(2013, 11, 1), tz="US/Eastern"),
- Timestamp(datetime(2013, 11, 6), tz="US/Eastern"),
- ],
+ [datetime(2014, 3, 6), datetime(2014, 3, 12)],
+ [datetime(2013, 11, 1), datetime(2013, 11, 6)],
],
)
def test_range_tz_dst_straddle_pytz(self, start, end):
+ start = Timestamp(start, tz="US/Eastern")
+ end = Timestamp(end, tz="US/Eastern")
dr = date_range(start, end, freq="D")
assert dr[0] == start
assert dr[-1] == end
diff --git a/pandas/tests/indexes/interval/test_interval.py b/pandas/tests/indexes/interval/test_interval.py
index 7391d39bdde7b..006a06e529971 100644
--- a/pandas/tests/indexes/interval/test_interval.py
+++ b/pandas/tests/indexes/interval/test_interval.py
@@ -789,23 +789,20 @@ def test_is_overlapping(self, start, shift, na_value, closed):
@pytest.mark.parametrize(
"tuples",
[
- list(zip(range(10), range(1, 11))),
- list(
- zip(
- date_range("20170101", periods=10),
- date_range("20170101", periods=10),
- )
+ zip(range(10), range(1, 11)),
+ zip(
+ date_range("20170101", periods=10),
+ date_range("20170101", periods=10),
),
- list(
- zip(
- timedelta_range("0 days", periods=10),
- timedelta_range("1 day", periods=10),
- )
+ zip(
+ timedelta_range("0 days", periods=10),
+ timedelta_range("1 day", periods=10),
),
],
)
def test_to_tuples(self, tuples):
# GH 18756
+ tuples = list(tuples)
idx = IntervalIndex.from_tuples(tuples)
result = idx.to_tuples()
expected = Index(com.asarray_tuplesafe(tuples))
diff --git a/pandas/tests/indexes/multi/test_duplicates.py b/pandas/tests/indexes/multi/test_duplicates.py
index 6c6d9022b1af3..1bbeedac3fb10 100644
--- a/pandas/tests/indexes/multi/test_duplicates.py
+++ b/pandas/tests/indexes/multi/test_duplicates.py
@@ -243,13 +243,14 @@ def f(a):
@pytest.mark.parametrize(
"keep, expected",
[
- ("first", np.array([False, False, False, True, True, False])),
- ("last", np.array([False, True, True, False, False, False])),
- (False, np.array([False, True, True, True, True, False])),
+ ("first", [False, False, False, True, True, False]),
+ ("last", [False, True, True, False, False, False]),
+ (False, [False, True, True, True, True, False]),
],
)
def test_duplicated(idx_dup, keep, expected):
result = idx_dup.duplicated(keep=keep)
+ expected = np.array(expected)
tm.assert_numpy_array_equal(result, expected)
@@ -319,14 +320,7 @@ def test_duplicated_drop_duplicates():
tm.assert_index_equal(idx.drop_duplicates(keep=False), expected)
-@pytest.mark.parametrize(
- "dtype",
- [
- np.complex64,
- np.complex128,
- ],
-)
-def test_duplicated_series_complex_numbers(dtype):
+def test_duplicated_series_complex_numbers(complex_dtype):
# GH 17927
expected = Series(
[False, False, False, True, False, False, False, True, False, True],
@@ -345,7 +339,7 @@ def test_duplicated_series_complex_numbers(dtype):
np.nan,
np.nan + np.nan * 1j,
],
- dtype=dtype,
+ dtype=complex_dtype,
).duplicated()
tm.assert_series_equal(result, expected)
diff --git a/pandas/tests/indexes/multi/test_indexing.py b/pandas/tests/indexes/multi/test_indexing.py
index 5e2d3c23da645..f426a3ee42566 100644
--- a/pandas/tests/indexes/multi/test_indexing.py
+++ b/pandas/tests/indexes/multi/test_indexing.py
@@ -560,27 +560,26 @@ def test_getitem_group_select(idx):
assert sorted_idx.get_loc("foo") == slice(0, 2)
-@pytest.mark.parametrize("ind1", [[True] * 5, Index([True] * 5)])
-@pytest.mark.parametrize(
- "ind2",
- [[True, False, True, False, False], Index([True, False, True, False, False])],
-)
-def test_getitem_bool_index_all(ind1, ind2):
+@pytest.mark.parametrize("box", [list, Index])
+def test_getitem_bool_index_all(box):
# GH#22533
+ ind1 = box([True] * 5)
idx = MultiIndex.from_tuples([(10, 1), (20, 2), (30, 3), (40, 4), (50, 5)])
tm.assert_index_equal(idx[ind1], idx)
+ ind2 = box([True, False, True, False, False])
expected = MultiIndex.from_tuples([(10, 1), (30, 3)])
tm.assert_index_equal(idx[ind2], expected)
-@pytest.mark.parametrize("ind1", [[True], Index([True])])
-@pytest.mark.parametrize("ind2", [[False], Index([False])])
-def test_getitem_bool_index_single(ind1, ind2):
+@pytest.mark.parametrize("box", [list, Index])
+def test_getitem_bool_index_single(box):
# GH#22533
+ ind1 = box([True])
idx = MultiIndex.from_tuples([(10, 1)])
tm.assert_index_equal(idx[ind1], idx)
+ ind2 = box([False])
expected = MultiIndex(
levels=[np.array([], dtype=np.int64), np.array([], dtype=np.int64)],
codes=[[], []],
diff --git a/pandas/tests/indexes/multi/test_isin.py b/pandas/tests/indexes/multi/test_isin.py
index 68fdf25359f1b..92ac2468d5993 100644
--- a/pandas/tests/indexes/multi/test_isin.py
+++ b/pandas/tests/indexes/multi/test_isin.py
@@ -75,15 +75,16 @@ def test_isin_level_kwarg():
@pytest.mark.parametrize(
"labels,expected,level",
[
- ([("b", np.nan)], np.array([False, False, True]), None),
- ([np.nan, "a"], np.array([True, True, False]), 0),
- (["d", np.nan], np.array([False, True, True]), 1),
+ ([("b", np.nan)], [False, False, True], None),
+ ([np.nan, "a"], [True, True, False], 0),
+ (["d", np.nan], [False, True, True], 1),
],
)
def test_isin_multi_index_with_missing_value(labels, expected, level):
# GH 19132
midx = MultiIndex.from_arrays([[np.nan, "a", "b"], ["c", "d", np.nan]])
result = midx.isin(labels, level=level)
+ expected = np.array(expected)
tm.assert_numpy_array_equal(result, expected)
diff --git a/pandas/tests/indexes/multi/test_join.py b/pandas/tests/indexes/multi/test_join.py
index edd0feaaa1159..3fb428fecea41 100644
--- a/pandas/tests/indexes/multi/test_join.py
+++ b/pandas/tests/indexes/multi/test_join.py
@@ -12,10 +12,9 @@
import pandas._testing as tm
-@pytest.mark.parametrize(
- "other", [Index(["three", "one", "two"]), Index(["one"]), Index(["one", "three"])]
-)
+@pytest.mark.parametrize("other", [["three", "one", "two"], ["one"], ["one", "three"]])
def test_join_level(idx, other, join_type):
+ other = Index(other)
join_index, lidx, ridx = other.join(
idx, how=join_type, level="second", return_indexers=True
)
diff --git a/pandas/tests/indexes/multi/test_setops.py b/pandas/tests/indexes/multi/test_setops.py
index 0abb56ecf9de7..9354984538c58 100644
--- a/pandas/tests/indexes/multi/test_setops.py
+++ b/pandas/tests/indexes/multi/test_setops.py
@@ -711,17 +711,11 @@ def test_intersection_lexsort_depth(levels1, levels2, codes1, codes2, names):
"a",
[pd.Categorical(["a", "b"], categories=["a", "b"]), ["a", "b"]],
)
-@pytest.mark.parametrize(
- "b",
- [
- pd.Categorical(["a", "b"], categories=["b", "a"], ordered=True),
- pd.Categorical(["a", "b"], categories=["b", "a"]),
- ],
-)
-def test_intersection_with_non_lex_sorted_categories(a, b):
+@pytest.mark.parametrize("b_ordered", [True, False])
+def test_intersection_with_non_lex_sorted_categories(a, b_ordered):
# GH#49974
other = ["1", "2"]
-
+ b = pd.Categorical(["a", "b"], categories=["b", "a"], ordered=b_ordered)
df1 = DataFrame({"x": a, "y": other})
df2 = DataFrame({"x": b, "y": other})
diff --git a/pandas/tests/indexes/numeric/test_indexing.py b/pandas/tests/indexes/numeric/test_indexing.py
index 29f8a0a5a5932..43adc09774914 100644
--- a/pandas/tests/indexes/numeric/test_indexing.py
+++ b/pandas/tests/indexes/numeric/test_indexing.py
@@ -110,16 +110,16 @@ def test_get_indexer(self):
@pytest.mark.parametrize(
"expected,method",
[
- (np.array([-1, 0, 0, 1, 1], dtype=np.intp), "pad"),
- (np.array([-1, 0, 0, 1, 1], dtype=np.intp), "ffill"),
- (np.array([0, 0, 1, 1, 2], dtype=np.intp), "backfill"),
- (np.array([0, 0, 1, 1, 2], dtype=np.intp), "bfill"),
+ ([-1, 0, 0, 1, 1], "pad"),
+ ([-1, 0, 0, 1, 1], "ffill"),
+ ([0, 0, 1, 1, 2], "backfill"),
+ ([0, 0, 1, 1, 2], "bfill"),
],
)
def test_get_indexer_methods(self, reverse, expected, method):
index1 = Index([1, 2, 3, 4, 5])
index2 = Index([2, 4, 6])
-
+ expected = np.array(expected, dtype=np.intp)
if reverse:
index1 = index1[::-1]
expected = expected[::-1]
@@ -166,12 +166,11 @@ def test_get_indexer_nearest(self, method, tolerance, indexer, expected):
@pytest.mark.parametrize("listtype", [list, tuple, Series, np.array])
@pytest.mark.parametrize(
"tolerance, expected",
- list(
- zip(
- [[0.3, 0.3, 0.1], [0.2, 0.1, 0.1], [0.1, 0.5, 0.5]],
- [[0, 2, -1], [0, -1, -1], [-1, 2, 9]],
- )
- ),
+ [
+ [[0.3, 0.3, 0.1], [0, 2, -1]],
+ [[0.2, 0.1, 0.1], [0, -1, -1]],
+ [[0.1, 0.5, 0.5], [-1, 2, 9]],
+ ],
)
def test_get_indexer_nearest_listlike_tolerance(
self, tolerance, expected, listtype
diff --git a/pandas/tests/indexes/numeric/test_setops.py b/pandas/tests/indexes/numeric/test_setops.py
index 102560852e8e4..e9e5a57dfe9e5 100644
--- a/pandas/tests/indexes/numeric/test_setops.py
+++ b/pandas/tests/indexes/numeric/test_setops.py
@@ -113,13 +113,14 @@ def test_intersection_uint64_outside_int64_range(self):
tm.assert_index_equal(result, expected)
@pytest.mark.parametrize(
- "index2,keeps_name",
+ "index2_name,keeps_name",
[
- (Index([4, 7, 6, 5, 3], name="index"), True),
- (Index([4, 7, 6, 5, 3], name="other"), False),
+ ("index", True),
+ ("other", False),
],
)
- def test_intersection_monotonic(self, index2, keeps_name, sort):
+ def test_intersection_monotonic(self, index2_name, keeps_name, sort):
+ index2 = Index([4, 7, 6, 5, 3], name=index2_name)
index1 = Index([5, 3, 2, 4, 1], name="index")
expected = Index([5, 3, 4])
diff --git a/pandas/tests/indexes/object/test_indexing.py b/pandas/tests/indexes/object/test_indexing.py
index ebf9dac715f8d..039836da75cd5 100644
--- a/pandas/tests/indexes/object/test_indexing.py
+++ b/pandas/tests/indexes/object/test_indexing.py
@@ -18,11 +18,12 @@ class TestGetIndexer:
@pytest.mark.parametrize(
"method,expected",
[
- ("pad", np.array([-1, 0, 1, 1], dtype=np.intp)),
- ("backfill", np.array([0, 0, 1, -1], dtype=np.intp)),
+ ("pad", [-1, 0, 1, 1]),
+ ("backfill", [0, 0, 1, -1]),
],
)
def test_get_indexer_strings(self, method, expected):
+ expected = np.array(expected, dtype=np.intp)
index = Index(["b", "c"])
actual = index.get_indexer(["a", "b", "c", "d"], method=method)
diff --git a/pandas/tests/indexes/test_base.py b/pandas/tests/indexes/test_base.py
index 158cba9dfdded..77ce687d51693 100644
--- a/pandas/tests/indexes/test_base.py
+++ b/pandas/tests/indexes/test_base.py
@@ -844,12 +844,14 @@ def test_is_monotonic_incomparable(self, attr):
@pytest.mark.parametrize(
"index,expected",
[
- (Index(["qux", "baz", "foo", "bar"]), np.array([False, False, True, True])),
- (Index([]), np.array([], dtype=bool)), # empty
+ (["qux", "baz", "foo", "bar"], [False, False, True, True]),
+ ([], []), # empty
],
)
def test_isin(self, values, index, expected):
+ index = Index(index)
result = index.isin(values)
+ expected = np.array(expected, dtype=bool)
tm.assert_numpy_array_equal(result, expected)
def test_isin_nan_common_object(
@@ -918,11 +920,12 @@ def test_isin_nan_common_float64(self, nulls_fixture, float_numpy_dtype):
@pytest.mark.parametrize(
"index",
[
- Index(["qux", "baz", "foo", "bar"]),
- Index([1.0, 2.0, 3.0, 4.0], dtype=np.float64),
+ ["qux", "baz", "foo", "bar"],
+ np.array([1.0, 2.0, 3.0, 4.0], dtype=np.float64),
],
)
def test_isin_level_kwarg(self, level, index):
+ index = Index(index)
values = index.tolist()[-2:] + ["nonexisting"]
expected = np.array([False, False, True, True])
@@ -1078,10 +1081,11 @@ def test_str_bool_series_indexing(self):
tm.assert_series_equal(result, expected)
@pytest.mark.parametrize(
- "index,expected", [(Index(list("abcd")), True), (Index(range(4)), False)]
+ "index,expected", [(list("abcd"), True), (range(4), False)]
)
def test_tab_completion(self, index, expected):
# GH 9910
+ index = Index(index)
result = "str" in dir(index)
assert result == expected
@@ -1164,15 +1168,11 @@ def test_reindex_preserves_type_if_target_is_empty_list_or_array(self, labels):
index = Index(list("abc"))
assert index.reindex(labels)[0].dtype.type == index.dtype.type
- @pytest.mark.parametrize(
- "labels,dtype",
- [
- (DatetimeIndex([]), np.datetime64),
- ],
- )
- def test_reindex_doesnt_preserve_type_if_target_is_empty_index(self, labels, dtype):
+ def test_reindex_doesnt_preserve_type_if_target_is_empty_index(self):
# GH7774
index = Index(list("abc"))
+ labels = DatetimeIndex([])
+ dtype = np.datetime64
assert index.reindex(labels)[0].dtype.type == dtype
def test_reindex_doesnt_preserve_type_if_target_is_empty_index_numeric(
diff --git a/pandas/tests/indexes/test_index_new.py b/pandas/tests/indexes/test_index_new.py
index 72641077c90fe..867d32e5c86a2 100644
--- a/pandas/tests/indexes/test_index_new.py
+++ b/pandas/tests/indexes/test_index_new.py
@@ -80,14 +80,10 @@ def test_construction_list_tuples_nan(self, na_value, vtype):
expected = MultiIndex.from_tuples(values)
tm.assert_index_equal(result, expected)
- @pytest.mark.parametrize(
- "dtype",
- [int, "int64", "int32", "int16", "int8", "uint64", "uint32", "uint16", "uint8"],
- )
- def test_constructor_int_dtype_float(self, dtype):
+ def test_constructor_int_dtype_float(self, any_int_numpy_dtype):
# GH#18400
- expected = Index([0, 1, 2, 3], dtype=dtype)
- result = Index([0.0, 1.0, 2.0, 3.0], dtype=dtype)
+ expected = Index([0, 1, 2, 3], dtype=any_int_numpy_dtype)
+ result = Index([0.0, 1.0, 2.0, 3.0], dtype=any_int_numpy_dtype)
tm.assert_index_equal(result, expected)
@pytest.mark.parametrize("cast_index", [True, False])
@@ -332,11 +328,12 @@ def test_constructor_dtypes_to_categorical(self, vals):
@pytest.mark.parametrize(
"vals",
[
- Index(np.array([np.datetime64("2011-01-01"), np.datetime64("2011-01-02")])),
- Index([datetime(2011, 1, 1), datetime(2011, 1, 2)]),
+ np.array([np.datetime64("2011-01-01"), np.datetime64("2011-01-02")]),
+ [datetime(2011, 1, 1), datetime(2011, 1, 2)],
],
)
def test_constructor_dtypes_to_datetime(self, cast_index, vals):
+ vals = Index(vals)
if cast_index:
index = Index(vals, dtype=object)
assert isinstance(index, Index)
diff --git a/pandas/tests/indexes/test_indexing.py b/pandas/tests/indexes/test_indexing.py
index 1ea47f636ac9b..e6716239cca5a 100644
--- a/pandas/tests/indexes/test_indexing.py
+++ b/pandas/tests/indexes/test_indexing.py
@@ -92,15 +92,16 @@ class TestContains:
@pytest.mark.parametrize(
"index,val",
[
- (Index([0, 1, 2]), 2),
- (Index([0, 1, "2"]), "2"),
- (Index([0, 1, 2, np.inf, 4]), 4),
- (Index([0, 1, 2, np.nan, 4]), 4),
- (Index([0, 1, 2, np.inf]), np.inf),
- (Index([0, 1, 2, np.nan]), np.nan),
+ ([0, 1, 2], 2),
+ ([0, 1, "2"], "2"),
+ ([0, 1, 2, np.inf, 4], 4),
+ ([0, 1, 2, np.nan, 4], 4),
+ ([0, 1, 2, np.inf], np.inf),
+ ([0, 1, 2, np.nan], np.nan),
],
)
def test_index_contains(self, index, val):
+ index = Index(index)
assert val in index
@pytest.mark.parametrize(
@@ -123,18 +124,16 @@ def test_index_contains(self, index, val):
def test_index_not_contains(self, index, val):
assert val not in index
- @pytest.mark.parametrize(
- "index,val", [(Index([0, 1, "2"]), 0), (Index([0, 1, "2"]), "2")]
- )
- def test_mixed_index_contains(self, index, val):
+ @pytest.mark.parametrize("val", [0, "2"])
+ def test_mixed_index_contains(self, val):
# GH#19860
+ index = Index([0, 1, "2"])
assert val in index
- @pytest.mark.parametrize(
- "index,val", [(Index([0, 1, "2"]), "1"), (Index([0, 1, "2"]), 2)]
- )
+ @pytest.mark.parametrize("val", ["1", 2])
def test_mixed_index_not_contains(self, index, val):
# GH#19860
+ index = Index([0, 1, "2"])
assert val not in index
def test_contains_with_float_index(self, any_real_numpy_dtype):
@@ -303,12 +302,10 @@ def test_putmask_with_wrong_mask(self, index):
index.putmask("foo", fill)
-@pytest.mark.parametrize(
- "idx", [Index([1, 2, 3]), Index([0.1, 0.2, 0.3]), Index(["a", "b", "c"])]
-)
+@pytest.mark.parametrize("idx", [[1, 2, 3], [0.1, 0.2, 0.3], ["a", "b", "c"]])
def test_getitem_deprecated_float(idx):
# https://github.com/pandas-dev/pandas/issues/34191
-
+ idx = Index(idx)
msg = "Indexing with a float is no longer supported"
with pytest.raises(IndexError, match=msg):
idx[1.0]
diff --git a/pandas/tests/indexes/test_setops.py b/pandas/tests/indexes/test_setops.py
index 8f4dd1c64236a..27b54ea66f0ac 100644
--- a/pandas/tests/indexes/test_setops.py
+++ b/pandas/tests/indexes/test_setops.py
@@ -721,14 +721,15 @@ def test_intersection(self, index, sort):
assert inter is first
@pytest.mark.parametrize(
- "index2,keeps_name",
+ "index2_name,keeps_name",
[
- (Index([3, 4, 5, 6, 7], name="index"), True), # preserve same name
- (Index([3, 4, 5, 6, 7], name="other"), False), # drop diff names
- (Index([3, 4, 5, 6, 7]), False),
+ ("index", True), # preserve same name
+ ("other", False), # drop diff names
+ (None, False),
],
)
- def test_intersection_name_preservation(self, index2, keeps_name, sort):
+ def test_intersection_name_preservation(self, index2_name, keeps_name, sort):
+ index2 = Index([3, 4, 5, 6, 7], name=index2_name)
index1 = Index([1, 2, 3, 4, 5], name="index")
expected = Index([3, 4, 5])
result = index1.intersection(index2, sort)
@@ -915,11 +916,13 @@ def test_symmetric_difference_mi(self, sort):
@pytest.mark.parametrize(
"index2,expected",
[
- (Index([0, 1, np.nan]), Index([2.0, 3.0, 0.0])),
- (Index([0, 1]), Index([np.nan, 2.0, 3.0, 0.0])),
+ ([0, 1, np.nan], [2.0, 3.0, 0.0]),
+ ([0, 1], [np.nan, 2.0, 3.0, 0.0]),
],
)
def test_symmetric_difference_missing(self, index2, expected, sort):
+ index2 = Index(index2)
+ expected = Index(expected)
# GH#13514 change: {nan} - {nan} == {}
# (GH#6444, sorting of nans, is no longer an issue)
index1 = Index([1, np.nan, 2, 3])
diff --git a/pandas/tests/indexing/interval/test_interval.py b/pandas/tests/indexing/interval/test_interval.py
index cabfee9aa040a..b72ef57475305 100644
--- a/pandas/tests/indexing/interval/test_interval.py
+++ b/pandas/tests/indexing/interval/test_interval.py
@@ -211,10 +211,7 @@ def test_mi_intervalindex_slicing_with_scalar(self):
tm.assert_series_equal(result, expected)
@pytest.mark.xfail(not IS64, reason="GH 23440")
- @pytest.mark.parametrize(
- "base",
- [101, 1010],
- )
+ @pytest.mark.parametrize("base", [101, 1010])
def test_reindex_behavior_with_interval_index(self, base):
# GH 51826
diff --git a/pandas/tests/indexing/test_loc.py b/pandas/tests/indexing/test_loc.py
index c897afaeeee0e..a9aeba0c199f9 100644
--- a/pandas/tests/indexing/test_loc.py
+++ b/pandas/tests/indexing/test_loc.py
@@ -1383,14 +1383,12 @@ def test_loc_getitem_timedelta_0seconds(self):
result = df.loc["0s":, :]
tm.assert_frame_equal(result, expected)
- @pytest.mark.parametrize(
- "val,expected", [(2**63 - 1, Series([1])), (2**63, Series([2]))]
- )
+ @pytest.mark.parametrize("val,expected", [(2**63 - 1, 1), (2**63, 2)])
def test_loc_getitem_uint64_scalar(self, val, expected):
# see GH#19399
df = DataFrame([1, 2], index=[2**63 - 1, 2**63])
result = df.loc[val]
-
+ expected = Series([expected])
expected.name = val
tm.assert_series_equal(result, expected)
@@ -2168,12 +2166,11 @@ def test_loc_setitem_with_expansion_nonunique_index(self, index):
)
tm.assert_frame_equal(df, expected)
- @pytest.mark.parametrize(
- "dtype", ["Int32", "Int64", "UInt32", "UInt64", "Float32", "Float64"]
- )
- def test_loc_setitem_with_expansion_preserves_nullable_int(self, dtype):
+ def test_loc_setitem_with_expansion_preserves_nullable_int(
+ self, any_numeric_ea_dtype
+ ):
# GH#42099
- ser = Series([0, 1, 2, 3], dtype=dtype)
+ ser = Series([0, 1, 2, 3], dtype=any_numeric_ea_dtype)
df = DataFrame({"data": ser})
result = DataFrame(index=df.index)
diff --git a/pandas/tests/io/excel/test_readers.py b/pandas/tests/io/excel/test_readers.py
index 15712f36da4ca..04a25317c8017 100644
--- a/pandas/tests/io/excel/test_readers.py
+++ b/pandas/tests/io/excel/test_readers.py
@@ -562,25 +562,21 @@ def test_reader_dtype(self, read_ext):
[
(
None,
- DataFrame(
- {
- "a": [1, 2, 3, 4],
- "b": [2.5, 3.5, 4.5, 5.5],
- "c": [1, 2, 3, 4],
- "d": [1.0, 2.0, np.nan, 4.0],
- }
- ),
+ {
+ "a": [1, 2, 3, 4],
+ "b": [2.5, 3.5, 4.5, 5.5],
+ "c": [1, 2, 3, 4],
+ "d": [1.0, 2.0, np.nan, 4.0],
+ },
),
(
{"a": "float64", "b": "float32", "c": str, "d": str},
- DataFrame(
- {
- "a": Series([1, 2, 3, 4], dtype="float64"),
- "b": Series([2.5, 3.5, 4.5, 5.5], dtype="float32"),
- "c": Series(["001", "002", "003", "004"], dtype=object),
- "d": Series(["1", "2", np.nan, "4"], dtype=object),
- }
- ),
+ {
+ "a": Series([1, 2, 3, 4], dtype="float64"),
+ "b": Series([2.5, 3.5, 4.5, 5.5], dtype="float32"),
+ "c": Series(["001", "002", "003", "004"], dtype=object),
+ "d": Series(["1", "2", np.nan, "4"], dtype=object),
+ },
),
],
)
@@ -589,6 +585,7 @@ def test_reader_dtype_str(self, read_ext, dtype, expected):
basename = "testdtype"
actual = pd.read_excel(basename + read_ext, dtype=dtype)
+ expected = DataFrame(expected)
tm.assert_frame_equal(actual, expected)
def test_dtype_backend(self, read_ext, dtype_backend, engine):
diff --git a/pandas/tests/io/excel/test_writers.py b/pandas/tests/io/excel/test_writers.py
index 8c003723c1c71..76a138a295bda 100644
--- a/pandas/tests/io/excel/test_writers.py
+++ b/pandas/tests/io/excel/test_writers.py
@@ -90,7 +90,7 @@ def set_engine(engine, ext):
class TestRoundTrip:
@pytest.mark.parametrize(
"header,expected",
- [(None, DataFrame([np.nan] * 4)), (0, DataFrame({"Unnamed: 0": [np.nan] * 3}))],
+ [(None, [np.nan] * 4), (0, {"Unnamed: 0": [np.nan] * 3})],
)
def test_read_one_empty_col_no_header(self, ext, header, expected):
# xref gh-12292
@@ -102,14 +102,14 @@ def test_read_one_empty_col_no_header(self, ext, header, expected):
result = pd.read_excel(
path, sheet_name=filename, usecols=[0], header=header
)
-
+ expected = DataFrame(expected)
tm.assert_frame_equal(result, expected)
@pytest.mark.parametrize(
- "header,expected",
- [(None, DataFrame([0] + [np.nan] * 4)), (0, DataFrame([np.nan] * 4))],
+ "header,expected_extra",
+ [(None, [0]), (0, [])],
)
- def test_read_one_empty_col_with_header(self, ext, header, expected):
+ def test_read_one_empty_col_with_header(self, ext, header, expected_extra):
filename = "with_header"
df = DataFrame([["", 1, 100], ["", 2, 200], ["", 3, 300], ["", 4, 400]])
@@ -118,7 +118,7 @@ def test_read_one_empty_col_with_header(self, ext, header, expected):
result = pd.read_excel(
path, sheet_name=filename, usecols=[0], header=header
)
-
+ expected = DataFrame(expected_extra + [np.nan] * 4)
tm.assert_frame_equal(result, expected)
def test_set_column_names_in_parameter(self, ext):
diff --git a/pandas/tests/io/formats/style/test_highlight.py b/pandas/tests/io/formats/style/test_highlight.py
index 3d59719010ee0..5d19e9c14d534 100644
--- a/pandas/tests/io/formats/style/test_highlight.py
+++ b/pandas/tests/io/formats/style/test_highlight.py
@@ -198,16 +198,17 @@ def test_highlight_quantile(styler, kwargs):
],
)
@pytest.mark.parametrize(
- "df",
+ "dtype",
[
- DataFrame([[0, 10], [20, 30]], dtype=int),
- DataFrame([[0, 10], [20, 30]], dtype=float),
- DataFrame([[0, 10], [20, 30]], dtype="datetime64[ns]"),
- DataFrame([[0, 10], [20, 30]], dtype=str),
- DataFrame([[0, 10], [20, 30]], dtype="timedelta64[ns]"),
+ int,
+ float,
+ "datetime64[ns]",
+ str,
+ "timedelta64[ns]",
],
)
-def test_all_highlight_dtypes(f, kwargs, df):
+def test_all_highlight_dtypes(f, kwargs, dtype):
+ df = DataFrame([[0, 10], [20, 30]], dtype=dtype)
if f == "highlight_quantile" and isinstance(df.iloc[0, 0], (str)):
return None # quantile incompatible with str
if f == "highlight_between":
diff --git a/pandas/tests/io/formats/style/test_matplotlib.py b/pandas/tests/io/formats/style/test_matplotlib.py
index fb7a77f1ddb27..ef7bfb11d81d8 100644
--- a/pandas/tests/io/formats/style/test_matplotlib.py
+++ b/pandas/tests/io/formats/style/test_matplotlib.py
@@ -260,15 +260,10 @@ def test_background_gradient_gmap_series_align(styler_blank, gmap, axis, exp_gma
assert expected.ctx == result.ctx
-@pytest.mark.parametrize(
- "gmap, axis",
- [
- (DataFrame([[1, 2], [2, 1]], columns=["A", "B"], index=["X", "Y"]), 1),
- (DataFrame([[1, 2], [2, 1]], columns=["A", "B"], index=["X", "Y"]), 0),
- ],
-)
-def test_background_gradient_gmap_wrong_dataframe(styler_blank, gmap, axis):
+@pytest.mark.parametrize("axis", [1, 0])
+def test_background_gradient_gmap_wrong_dataframe(styler_blank, axis):
# test giving a gmap in DataFrame but with wrong axis
+ gmap = DataFrame([[1, 2], [2, 1]], columns=["A", "B"], index=["X", "Y"])
msg = "'gmap' is a DataFrame but underlying data for operations is a Series"
with pytest.raises(ValueError, match=msg):
styler_blank.background_gradient(gmap=gmap, axis=axis)._compute()
@@ -321,10 +316,7 @@ def test_bar_color_raises(df):
df.style.bar(color="something", cmap="something else").to_html()
-@pytest.mark.parametrize(
- "plot_method",
- ["scatter", "hexbin"],
-)
+@pytest.mark.parametrize("plot_method", ["scatter", "hexbin"])
def test_pass_colormap_instance(df, plot_method):
# https://github.com/pandas-dev/pandas/issues/49374
cmap = mpl.colors.ListedColormap([[1, 1, 1], [0, 0, 0]])
diff --git a/pandas/tests/io/formats/style/test_to_latex.py b/pandas/tests/io/formats/style/test_to_latex.py
index 7f1443c3ee66b..eb221686dd165 100644
--- a/pandas/tests/io/formats/style/test_to_latex.py
+++ b/pandas/tests/io/formats/style/test_to_latex.py
@@ -1058,10 +1058,10 @@ def test_concat_chain():
@pytest.mark.parametrize(
- "df, expected",
+ "columns, expected",
[
(
- DataFrame(),
+ None,
dedent(
"""\
\\begin{tabular}{l}
@@ -1070,7 +1070,7 @@ def test_concat_chain():
),
),
(
- DataFrame(columns=["a", "b", "c"]),
+ ["a", "b", "c"],
dedent(
"""\
\\begin{tabular}{llll}
@@ -1084,7 +1084,8 @@ def test_concat_chain():
@pytest.mark.parametrize(
"clines", [None, "all;data", "all;index", "skip-last;data", "skip-last;index"]
)
-def test_empty_clines(df: DataFrame, expected: str, clines: str):
+def test_empty_clines(columns, expected: str, clines: str):
# GH 47203
+ df = DataFrame(columns=columns)
result = df.style.to_latex(clines=clines)
assert result == expected
diff --git a/pandas/tests/io/formats/test_to_html.py b/pandas/tests/io/formats/test_to_html.py
index 790ba92f70c40..e85b4cb29390e 100644
--- a/pandas/tests/io/formats/test_to_html.py
+++ b/pandas/tests/io/formats/test_to_html.py
@@ -136,13 +136,14 @@ def test_to_html_with_empty_string_label():
@pytest.mark.parametrize(
- "df,expected",
+ "df_data,expected",
[
- (DataFrame({"\u03c3": np.arange(10.0)}), "unicode_1"),
- (DataFrame({"A": ["\u03c3"]}), "unicode_2"),
+ ({"\u03c3": np.arange(10.0)}, "unicode_1"),
+ ({"A": ["\u03c3"]}, "unicode_2"),
],
)
-def test_to_html_unicode(df, expected, datapath):
+def test_to_html_unicode(df_data, expected, datapath):
+ df = DataFrame(df_data)
expected = expected_html(datapath, expected)
result = df.to_html()
assert result == expected
diff --git a/pandas/tests/io/formats/test_to_latex.py b/pandas/tests/io/formats/test_to_latex.py
index 4c8cd4b6a2b8e..304aff0002209 100644
--- a/pandas/tests/io/formats/test_to_latex.py
+++ b/pandas/tests/io/formats/test_to_latex.py
@@ -283,14 +283,15 @@ def test_to_latex_longtable_without_index(self):
assert result == expected
@pytest.mark.parametrize(
- "df, expected_number",
+ "df_data, expected_number",
[
- (DataFrame({"a": [1, 2]}), 1),
- (DataFrame({"a": [1, 2], "b": [3, 4]}), 2),
- (DataFrame({"a": [1, 2], "b": [3, 4], "c": [5, 6]}), 3),
+ ({"a": [1, 2]}, 1),
+ ({"a": [1, 2], "b": [3, 4]}, 2),
+ ({"a": [1, 2], "b": [3, 4], "c": [5, 6]}, 3),
],
)
- def test_to_latex_longtable_continued_on_next_page(self, df, expected_number):
+ def test_to_latex_longtable_continued_on_next_page(self, df_data, expected_number):
+ df = DataFrame(df_data)
result = df.to_latex(index=False, longtable=True)
assert rf"\multicolumn{{{expected_number}}}" in result
diff --git a/pandas/tests/io/json/test_json_table_schema.py b/pandas/tests/io/json/test_json_table_schema.py
index d5ea470af79d6..28b613fa1f6f6 100644
--- a/pandas/tests/io/json/test_json_table_schema.py
+++ b/pandas/tests/io/json/test_json_table_schema.py
@@ -165,11 +165,9 @@ def test_as_json_table_type_bool_data(self, bool_type):
def test_as_json_table_type_date_data(self, date_data):
assert as_json_table_type(date_data.dtype) == "datetime"
- @pytest.mark.parametrize(
- "str_data",
- [pd.Series(["a", "b"], dtype=object), pd.Index(["a", "b"], dtype=object)],
- )
- def test_as_json_table_type_string_data(self, str_data):
+ @pytest.mark.parametrize("klass", [pd.Series, pd.Index])
+ def test_as_json_table_type_string_data(self, klass):
+ str_data = klass(["a", "b"], dtype=object)
assert as_json_table_type(str_data.dtype) == "string"
@pytest.mark.parametrize(
@@ -700,20 +698,27 @@ class TestTableOrientReader:
},
],
)
- def test_read_json_table_orient(self, index_nm, vals, recwarn):
+ def test_read_json_table_orient(self, index_nm, vals):
df = DataFrame(vals, index=pd.Index(range(4), name=index_nm))
- out = df.to_json(orient="table")
+ out = StringIO(df.to_json(orient="table"))
result = pd.read_json(out, orient="table")
tm.assert_frame_equal(df, result)
- @pytest.mark.parametrize("index_nm", [None, "idx", "index"])
@pytest.mark.parametrize(
- "vals",
- [{"timedeltas": pd.timedelta_range("1h", periods=4, freq="min")}],
+ "index_nm",
+ [
+ None,
+ "idx",
+ pytest.param(
+ "index",
+ marks=pytest.mark.filterwarnings("ignore:Index name:UserWarning"),
+ ),
+ ],
)
- def test_read_json_table_orient_raises(self, index_nm, vals, recwarn):
+ def test_read_json_table_orient_raises(self, index_nm):
+ vals = {"timedeltas": pd.timedelta_range("1h", periods=4, freq="min")}
df = DataFrame(vals, index=pd.Index(range(4), name=index_nm))
- out = df.to_json(orient="table")
+ out = StringIO(df.to_json(orient="table"))
with pytest.raises(NotImplementedError, match="can not yet read "):
pd.read_json(out, orient="table")
@@ -744,14 +749,14 @@ def test_read_json_table_orient_raises(self, index_nm, vals, recwarn):
},
],
)
- def test_read_json_table_period_orient(self, index_nm, vals, recwarn):
+ def test_read_json_table_period_orient(self, index_nm, vals):
df = DataFrame(
vals,
index=pd.Index(
(pd.Period(f"2022Q{q}") for q in range(1, 5)), name=index_nm
),
)
- out = df.to_json(orient="table")
+ out = StringIO(df.to_json(orient="table"))
result = pd.read_json(out, orient="table")
tm.assert_frame_equal(df, result)
@@ -787,10 +792,10 @@ def test_read_json_table_period_orient(self, index_nm, vals, recwarn):
},
],
)
- def test_read_json_table_timezones_orient(self, idx, vals, recwarn):
+ def test_read_json_table_timezones_orient(self, idx, vals):
# GH 35973
df = DataFrame(vals, index=idx)
- out = df.to_json(orient="table")
+ out = StringIO(df.to_json(orient="table"))
result = pd.read_json(out, orient="table")
tm.assert_frame_equal(df, result)
@@ -861,12 +866,12 @@ def test_read_json_orient_table_old_schema_version(self):
tm.assert_frame_equal(expected, result)
@pytest.mark.parametrize("freq", ["M", "2M", "Q", "2Q", "Y", "2Y"])
- def test_read_json_table_orient_period_depr_freq(self, freq, recwarn):
+ def test_read_json_table_orient_period_depr_freq(self, freq):
# GH#9586
df = DataFrame(
{"ints": [1, 2]},
index=pd.PeriodIndex(["2020-01", "2021-06"], freq=freq),
)
- out = df.to_json(orient="table")
+ out = StringIO(df.to_json(orient="table"))
result = pd.read_json(out, orient="table")
tm.assert_frame_equal(df, result)
diff --git a/pandas/tests/io/json/test_json_table_schema_ext_dtype.py b/pandas/tests/io/json/test_json_table_schema_ext_dtype.py
index 015b27d0b3606..68c7a96920533 100644
--- a/pandas/tests/io/json/test_json_table_schema_ext_dtype.py
+++ b/pandas/tests/io/json/test_json_table_schema_ext_dtype.py
@@ -61,54 +61,33 @@ def test_build_table_schema(self):
class TestTableSchemaType:
- @pytest.mark.parametrize(
- "date_data",
- [
- DateArray([dt.date(2021, 10, 10)]),
- DateArray(dt.date(2021, 10, 10)),
- Series(DateArray(dt.date(2021, 10, 10))),
- ],
- )
- def test_as_json_table_type_ext_date_array_dtype(self, date_data):
+ @pytest.mark.parametrize("box", [lambda x: x, Series])
+ def test_as_json_table_type_ext_date_array_dtype(self, box):
+ date_data = box(DateArray([dt.date(2021, 10, 10)]))
assert as_json_table_type(date_data.dtype) == "any"
def test_as_json_table_type_ext_date_dtype(self):
assert as_json_table_type(DateDtype()) == "any"
- @pytest.mark.parametrize(
- "decimal_data",
- [
- DecimalArray([decimal.Decimal(10)]),
- Series(DecimalArray([decimal.Decimal(10)])),
- ],
- )
- def test_as_json_table_type_ext_decimal_array_dtype(self, decimal_data):
+ @pytest.mark.parametrize("box", [lambda x: x, Series])
+ def test_as_json_table_type_ext_decimal_array_dtype(self, box):
+ decimal_data = box(DecimalArray([decimal.Decimal(10)]))
assert as_json_table_type(decimal_data.dtype) == "number"
def test_as_json_table_type_ext_decimal_dtype(self):
assert as_json_table_type(DecimalDtype()) == "number"
- @pytest.mark.parametrize(
- "string_data",
- [
- array(["pandas"], dtype="string"),
- Series(array(["pandas"], dtype="string")),
- ],
- )
- def test_as_json_table_type_ext_string_array_dtype(self, string_data):
+ @pytest.mark.parametrize("box", [lambda x: x, Series])
+ def test_as_json_table_type_ext_string_array_dtype(self, box):
+ string_data = box(array(["pandas"], dtype="string"))
assert as_json_table_type(string_data.dtype) == "any"
def test_as_json_table_type_ext_string_dtype(self):
assert as_json_table_type(StringDtype()) == "any"
- @pytest.mark.parametrize(
- "integer_data",
- [
- array([10], dtype="Int64"),
- Series(array([10], dtype="Int64")),
- ],
- )
- def test_as_json_table_type_ext_integer_array_dtype(self, integer_data):
+ @pytest.mark.parametrize("box", [lambda x: x, Series])
+ def test_as_json_table_type_ext_integer_array_dtype(self, box):
+ integer_data = box(array([10], dtype="Int64"))
assert as_json_table_type(integer_data.dtype) == "integer"
def test_as_json_table_type_ext_integer_dtype(self):
diff --git a/pandas/tests/io/json/test_pandas.py b/pandas/tests/io/json/test_pandas.py
index 7254fd7cb345d..2a2b4053be565 100644
--- a/pandas/tests/io/json/test_pandas.py
+++ b/pandas/tests/io/json/test_pandas.py
@@ -1895,17 +1895,12 @@ def test_frame_int_overflow(self):
result = read_json(StringIO(encoded_json))
tm.assert_frame_equal(result, expected)
- @pytest.mark.parametrize(
- "dataframe,expected",
- [
- (
- DataFrame({"x": [1, 2, 3], "y": ["a", "b", "c"]}),
- '{"(0, \'x\')":1,"(0, \'y\')":"a","(1, \'x\')":2,'
- '"(1, \'y\')":"b","(2, \'x\')":3,"(2, \'y\')":"c"}',
- )
- ],
- )
- def test_json_multiindex(self, dataframe, expected):
+ def test_json_multiindex(self):
+ dataframe = DataFrame({"x": [1, 2, 3], "y": ["a", "b", "c"]})
+ expected = (
+ '{"(0, \'x\')":1,"(0, \'y\')":"a","(1, \'x\')":2,'
+ '"(1, \'y\')":"b","(2, \'x\')":3,"(2, \'y\')":"c"}'
+ )
series = dataframe.stack(future_stack=True)
result = series.to_json(orient="index")
assert result == expected
diff --git a/pandas/tests/io/json/test_ujson.py b/pandas/tests/io/json/test_ujson.py
index c7d2a5845b50e..ce7bb74240c53 100644
--- a/pandas/tests/io/json/test_ujson.py
+++ b/pandas/tests/io/json/test_ujson.py
@@ -177,10 +177,7 @@ def test_encode_dict_with_unicode_keys(self, unicode_key):
unicode_dict = {unicode_key: "value1"}
assert unicode_dict == ujson.ujson_loads(ujson.ujson_dumps(unicode_dict))
- @pytest.mark.parametrize(
- "double_input",
- [math.pi, -math.pi], # Should work with negatives too.
- )
+ @pytest.mark.parametrize("double_input", [math.pi, -math.pi])
def test_encode_double_conversion(self, double_input):
output = ujson.ujson_dumps(double_input)
assert round(double_input, 5) == round(json.loads(output), 5)
@@ -520,10 +517,7 @@ def test_decode_invalid_dict(self, invalid_dict):
with pytest.raises(ValueError, match=msg):
ujson.ujson_loads(invalid_dict)
- @pytest.mark.parametrize(
- "numeric_int_as_str",
- ["31337", "-31337"], # Should work with negatives.
- )
+ @pytest.mark.parametrize("numeric_int_as_str", ["31337", "-31337"])
def test_decode_numeric_int(self, numeric_int_as_str):
assert int(numeric_int_as_str) == ujson.ujson_loads(numeric_int_as_str)
diff --git a/pandas/tests/io/parser/common/test_index.py b/pandas/tests/io/parser/common/test_index.py
index 038c684c90c9e..7cdaac1a284cd 100644
--- a/pandas/tests/io/parser/common/test_index.py
+++ b/pandas/tests/io/parser/common/test_index.py
@@ -145,20 +145,21 @@ def test_multi_index_no_level_names_implicit(all_parsers):
@xfail_pyarrow # TypeError: an integer is required
@pytest.mark.parametrize(
- "data,expected,header",
+ "data,columns,header",
[
- ("a,b", DataFrame(columns=["a", "b"]), [0]),
+ ("a,b", ["a", "b"], [0]),
(
"a,b\nc,d",
- DataFrame(columns=MultiIndex.from_tuples([("a", "c"), ("b", "d")])),
+ MultiIndex.from_tuples([("a", "c"), ("b", "d")]),
[0, 1],
),
],
)
@pytest.mark.parametrize("round_trip", [True, False])
-def test_multi_index_blank_df(all_parsers, data, expected, header, round_trip):
+def test_multi_index_blank_df(all_parsers, data, columns, header, round_trip):
# see gh-14545
parser = all_parsers
+ expected = DataFrame(columns=columns)
data = expected.to_csv(index=False) if round_trip else data
result = parser.read_csv(StringIO(data), header=header)
diff --git a/pandas/tests/io/parser/common/test_ints.py b/pandas/tests/io/parser/common/test_ints.py
index a3167346c64ef..e77958b0e9acc 100644
--- a/pandas/tests/io/parser/common/test_ints.py
+++ b/pandas/tests/io/parser/common/test_ints.py
@@ -40,31 +40,29 @@ def test_int_conversion(all_parsers):
(
"A,B\nTrue,1\nFalse,2\nTrue,3",
{},
- DataFrame([[True, 1], [False, 2], [True, 3]], columns=["A", "B"]),
+ [[True, 1], [False, 2], [True, 3]],
),
(
"A,B\nYES,1\nno,2\nyes,3\nNo,3\nYes,3",
{"true_values": ["yes", "Yes", "YES"], "false_values": ["no", "NO", "No"]},
- DataFrame(
- [[True, 1], [False, 2], [True, 3], [False, 3], [True, 3]],
- columns=["A", "B"],
- ),
+ [[True, 1], [False, 2], [True, 3], [False, 3], [True, 3]],
),
(
"A,B\nTRUE,1\nFALSE,2\nTRUE,3",
{},
- DataFrame([[True, 1], [False, 2], [True, 3]], columns=["A", "B"]),
+ [[True, 1], [False, 2], [True, 3]],
),
(
"A,B\nfoo,bar\nbar,foo",
{"true_values": ["foo"], "false_values": ["bar"]},
- DataFrame([[True, False], [False, True]], columns=["A", "B"]),
+ [[True, False], [False, True]],
),
],
)
def test_parse_bool(all_parsers, data, kwargs, expected):
parser = all_parsers
result = parser.read_csv(StringIO(data), **kwargs)
+ expected = DataFrame(expected, columns=["A", "B"])
tm.assert_frame_equal(result, expected)
diff --git a/pandas/tests/io/parser/test_encoding.py b/pandas/tests/io/parser/test_encoding.py
index 155e52d76e895..f2d5c77121467 100644
--- a/pandas/tests/io/parser/test_encoding.py
+++ b/pandas/tests/io/parser/test_encoding.py
@@ -97,22 +97,22 @@ def test_unicode_encoding(all_parsers, csv_dir_path):
"data,kwargs,expected",
[
# Basic test
- ("a\n1", {}, DataFrame({"a": [1]})),
+ ("a\n1", {}, [1]),
# "Regular" quoting
- ('"a"\n1', {"quotechar": '"'}, DataFrame({"a": [1]})),
+ ('"a"\n1', {"quotechar": '"'}, [1]),
# Test in a data row instead of header
- ("b\n1", {"names": ["a"]}, DataFrame({"a": ["b", "1"]})),
+ ("b\n1", {"names": ["a"]}, ["b", "1"]),
# Test in empty data row with skipping
- ("\n1", {"names": ["a"], "skip_blank_lines": True}, DataFrame({"a": [1]})),
+ ("\n1", {"names": ["a"], "skip_blank_lines": True}, [1]),
# Test in empty data row without skipping
(
"\n1",
{"names": ["a"], "skip_blank_lines": False},
- DataFrame({"a": [np.nan, 1]}),
+ [np.nan, 1],
),
],
)
-def test_utf8_bom(all_parsers, data, kwargs, expected, request):
+def test_utf8_bom(all_parsers, data, kwargs, expected):
# see gh-4793
parser = all_parsers
bom = "\ufeff"
@@ -131,6 +131,7 @@ def _encode_data_with_bom(_data):
pytest.skip(reason="https://github.com/apache/arrow/issues/38676")
result = parser.read_csv(_encode_data_with_bom(data), encoding=utf8, **kwargs)
+ expected = DataFrame({"a": expected})
tm.assert_frame_equal(result, expected)
diff --git a/pandas/tests/io/parser/test_na_values.py b/pandas/tests/io/parser/test_na_values.py
index ca106fa772e82..6ebfc8f337c10 100644
--- a/pandas/tests/io/parser/test_na_values.py
+++ b/pandas/tests/io/parser/test_na_values.py
@@ -263,43 +263,35 @@ def test_na_value_dict_multi_index(all_parsers, index_col, expected):
[
(
{},
- DataFrame(
- {
- "A": ["a", "b", np.nan, "d", "e", np.nan, "g"],
- "B": [1, 2, 3, 4, 5, 6, 7],
- "C": ["one", "two", "three", np.nan, "five", np.nan, "seven"],
- }
- ),
+ {
+ "A": ["a", "b", np.nan, "d", "e", np.nan, "g"],
+ "B": [1, 2, 3, 4, 5, 6, 7],
+ "C": ["one", "two", "three", np.nan, "five", np.nan, "seven"],
+ },
),
(
{"na_values": {"A": [], "C": []}, "keep_default_na": False},
- DataFrame(
- {
- "A": ["a", "b", "", "d", "e", "nan", "g"],
- "B": [1, 2, 3, 4, 5, 6, 7],
- "C": ["one", "two", "three", "nan", "five", "", "seven"],
- }
- ),
+ {
+ "A": ["a", "b", "", "d", "e", "nan", "g"],
+ "B": [1, 2, 3, 4, 5, 6, 7],
+ "C": ["one", "two", "three", "nan", "five", "", "seven"],
+ },
),
(
{"na_values": ["a"], "keep_default_na": False},
- DataFrame(
- {
- "A": [np.nan, "b", "", "d", "e", "nan", "g"],
- "B": [1, 2, 3, 4, 5, 6, 7],
- "C": ["one", "two", "three", "nan", "five", "", "seven"],
- }
- ),
+ {
+ "A": [np.nan, "b", "", "d", "e", "nan", "g"],
+ "B": [1, 2, 3, 4, 5, 6, 7],
+ "C": ["one", "two", "three", "nan", "five", "", "seven"],
+ },
),
(
{"na_values": {"A": [], "C": []}},
- DataFrame(
- {
- "A": ["a", "b", np.nan, "d", "e", np.nan, "g"],
- "B": [1, 2, 3, 4, 5, 6, 7],
- "C": ["one", "two", "three", np.nan, "five", np.nan, "seven"],
- }
- ),
+ {
+ "A": ["a", "b", np.nan, "d", "e", np.nan, "g"],
+ "B": [1, 2, 3, 4, 5, 6, 7],
+ "C": ["one", "two", "three", np.nan, "five", np.nan, "seven"],
+ },
),
],
)
@@ -325,6 +317,7 @@ def test_na_values_keep_default(all_parsers, kwargs, expected, request):
request.applymarker(mark)
result = parser.read_csv(StringIO(data), **kwargs)
+ expected = DataFrame(expected)
tm.assert_frame_equal(result, expected)
@@ -561,10 +554,10 @@ def test_na_values_dict_col_index(all_parsers):
(
str(2**63) + "\n" + str(2**63 + 1),
{"na_values": [2**63]},
- DataFrame([str(2**63), str(2**63 + 1)]),
+ [str(2**63), str(2**63 + 1)],
),
- (str(2**63) + ",1" + "\n,2", {}, DataFrame([[str(2**63), 1], ["", 2]])),
- (str(2**63) + "\n1", {"na_values": [2**63]}, DataFrame([np.nan, 1])),
+ (str(2**63) + ",1" + "\n,2", {}, [[str(2**63), 1], ["", 2]]),
+ (str(2**63) + "\n1", {"na_values": [2**63]}, [np.nan, 1]),
],
)
def test_na_values_uint64(all_parsers, data, kwargs, expected, request):
@@ -581,6 +574,7 @@ def test_na_values_uint64(all_parsers, data, kwargs, expected, request):
request.applymarker(mark)
result = parser.read_csv(StringIO(data), header=None, **kwargs)
+ expected = DataFrame(expected)
tm.assert_frame_equal(result, expected)
diff --git a/pandas/tests/io/parser/test_parse_dates.py b/pandas/tests/io/parser/test_parse_dates.py
index 700dcde336cd1..9640d0dfe343f 100644
--- a/pandas/tests/io/parser/test_parse_dates.py
+++ b/pandas/tests/io/parser/test_parse_dates.py
@@ -1755,11 +1755,11 @@ def test_parse_date_column_with_empty_string(all_parsers):
[
(
"a\n135217135789158401\n1352171357E+5",
- DataFrame({"a": [135217135789158401, 135217135700000]}, dtype="float64"),
+ [135217135789158401, 135217135700000],
),
(
"a\n99999999999\n123456789012345\n1234E+0",
- DataFrame({"a": [99999999999, 123456789012345, 1234]}, dtype="float64"),
+ [99999999999, 123456789012345, 1234],
),
],
)
@@ -1772,6 +1772,7 @@ def test_parse_date_float(all_parsers, data, expected, parse_dates):
parser = all_parsers
result = parser.read_csv(StringIO(data), parse_dates=parse_dates)
+ expected = DataFrame({"a": expected}, dtype="float64")
tm.assert_frame_equal(result, expected)
diff --git a/pandas/tests/io/parser/test_python_parser_only.py b/pandas/tests/io/parser/test_python_parser_only.py
index dbd474c6ae0b9..dc3c527e82202 100644
--- a/pandas/tests/io/parser/test_python_parser_only.py
+++ b/pandas/tests/io/parser/test_python_parser_only.py
@@ -528,21 +528,17 @@ def test_no_thousand_convert_with_dot_for_non_numeric_cols(python_parser_only, d
[
(
{"a": str, "b": np.float64, "c": np.int64},
- DataFrame(
- {
- "b": [16000.1, 0, 23000],
- "c": [0, 4001, 131],
- }
- ),
+ {
+ "b": [16000.1, 0, 23000],
+ "c": [0, 4001, 131],
+ },
),
(
str,
- DataFrame(
- {
- "b": ["16,000.1", "0", "23,000"],
- "c": ["0", "4,001", "131"],
- }
- ),
+ {
+ "b": ["16,000.1", "0", "23,000"],
+ "c": ["0", "4,001", "131"],
+ },
),
],
)
@@ -560,5 +556,6 @@ def test_no_thousand_convert_for_non_numeric_cols(python_parser_only, dtype, exp
dtype=dtype,
thousands=",",
)
+ expected = DataFrame(expected)
expected.insert(0, "a", ["0000,7995", "3,03,001,00514", "4923,600,041"])
tm.assert_frame_equal(result, expected)
diff --git a/pandas/tests/io/parser/test_skiprows.py b/pandas/tests/io/parser/test_skiprows.py
index 0ca47ded7ba8a..3cd2351f84c7a 100644
--- a/pandas/tests/io/parser/test_skiprows.py
+++ b/pandas/tests/io/parser/test_skiprows.py
@@ -246,8 +246,8 @@ def test_skiprows_infield_quote(all_parsers):
@pytest.mark.parametrize(
"kwargs,expected",
[
- ({}, DataFrame({"1": [3, 5]})),
- ({"header": 0, "names": ["foo"]}, DataFrame({"foo": [3, 5]})),
+ ({}, "1"),
+ ({"header": 0, "names": ["foo"]}, "foo"),
],
)
def test_skip_rows_callable(all_parsers, kwargs, expected):
@@ -255,6 +255,7 @@ def test_skip_rows_callable(all_parsers, kwargs, expected):
data = "a\n1\n2\n3\n4\n5"
result = parser.read_csv(StringIO(data), skiprows=lambda x: x % 2 == 0, **kwargs)
+ expected = DataFrame({expected: [3, 5]})
tm.assert_frame_equal(result, expected)
diff --git a/pandas/tests/io/parser/usecols/test_usecols_basic.py b/pandas/tests/io/parser/usecols/test_usecols_basic.py
index 767fba666e417..214070b1ac5f2 100644
--- a/pandas/tests/io/parser/usecols/test_usecols_basic.py
+++ b/pandas/tests/io/parser/usecols/test_usecols_basic.py
@@ -389,20 +389,18 @@ def test_incomplete_first_row(all_parsers, usecols):
"19,29,39\n" * 2 + "10,20,30,40",
[0, 1, 2],
{"header": None},
- DataFrame([[19, 29, 39], [19, 29, 39], [10, 20, 30]]),
+ [[19, 29, 39], [19, 29, 39], [10, 20, 30]],
),
# see gh-9549
(
("A,B,C\n1,2,3\n3,4,5\n1,2,4,5,1,6\n1,2,3,,,1,\n1,2,3\n5,6,7"),
["A", "B", "C"],
{},
- DataFrame(
- {
- "A": [1, 3, 1, 1, 1, 5],
- "B": [2, 4, 2, 2, 2, 6],
- "C": [3, 5, 4, 3, 3, 7],
- }
- ),
+ {
+ "A": [1, 3, 1, 1, 1, 5],
+ "B": [2, 4, 2, 2, 2, 6],
+ "C": [3, 5, 4, 3, 3, 7],
+ },
),
],
)
@@ -410,6 +408,7 @@ def test_uneven_length_cols(all_parsers, data, usecols, kwargs, expected):
# see gh-8985
parser = all_parsers
result = parser.read_csv(StringIO(data), usecols=usecols, **kwargs)
+ expected = DataFrame(expected)
tm.assert_frame_equal(result, expected)
diff --git a/pandas/tests/io/pytables/test_categorical.py b/pandas/tests/io/pytables/test_categorical.py
index 58ebdfe7696b4..2ab9f1ac8be1c 100644
--- a/pandas/tests/io/pytables/test_categorical.py
+++ b/pandas/tests/io/pytables/test_categorical.py
@@ -190,25 +190,19 @@ def test_categorical_nan_only_columns(tmp_path, setup_path):
tm.assert_frame_equal(result, expected)
-@pytest.mark.parametrize(
- "where, df, expected",
- [
- ('col=="q"', DataFrame({"col": ["a", "b", "s"]}), DataFrame({"col": []})),
- ('col=="a"', DataFrame({"col": ["a", "b", "s"]}), DataFrame({"col": ["a"]})),
- ],
-)
-def test_convert_value(
- tmp_path, setup_path, where: str, df: DataFrame, expected: DataFrame
-):
+@pytest.mark.parametrize("where, expected", [["q", []], ["a", ["a"]]])
+def test_convert_value(tmp_path, setup_path, where: str, expected):
# GH39420
# Check that read_hdf with categorical columns can filter by where condition.
+ df = DataFrame({"col": ["a", "b", "s"]})
df.col = df.col.astype("category")
max_widths = {"col": 1}
categorical_values = sorted(df.col.unique())
+ expected = DataFrame({"col": expected})
expected.col = expected.col.astype("category")
expected.col = expected.col.cat.set_categories(categorical_values)
path = tmp_path / setup_path
df.to_hdf(path, key="df", format="table", min_itemsize=max_widths)
- result = read_hdf(path, where=where)
+ result = read_hdf(path, where=f'col=="{where}"')
tm.assert_frame_equal(result, expected)
diff --git a/pandas/tests/io/test_html.py b/pandas/tests/io/test_html.py
index 6a2d460232165..420d82c3af7e3 100644
--- a/pandas/tests/io/test_html.py
+++ b/pandas/tests/io/test_html.py
@@ -1334,8 +1334,8 @@ def test_to_html_borderless(self):
@pytest.mark.parametrize(
"displayed_only,exp0,exp1",
[
- (True, DataFrame(["foo"]), None),
- (False, DataFrame(["foo bar baz qux"]), DataFrame(["foo"])),
+ (True, ["foo"], None),
+ (False, ["foo bar baz qux"], DataFrame(["foo"])),
],
)
def test_displayed_only(self, displayed_only, exp0, exp1, flavor_read_html):
@@ -1360,6 +1360,7 @@ def test_displayed_only(self, displayed_only, exp0, exp1, flavor_read_html):
</body>
</html>"""
+ exp0 = DataFrame(exp0)
dfs = flavor_read_html(StringIO(data), displayed_only=displayed_only)
tm.assert_frame_equal(dfs[0], exp0)
diff --git a/pandas/tests/io/test_parquet.py b/pandas/tests/io/test_parquet.py
index a6967732cf702..83a962ec26a7e 100644
--- a/pandas/tests/io/test_parquet.py
+++ b/pandas/tests/io/test_parquet.py
@@ -827,13 +827,7 @@ def test_s3_roundtrip(self, df_compat, s3_public_bucket, pa, s3so):
)
@pytest.mark.single_cpu
- @pytest.mark.parametrize(
- "partition_col",
- [
- ["A"],
- [],
- ],
- )
+ @pytest.mark.parametrize("partition_col", [["A"], []])
def test_s3_roundtrip_for_dir(
self, df_compat, s3_public_bucket, pa, partition_col, s3so
):
diff --git a/pandas/tests/io/xml/test_to_xml.py b/pandas/tests/io/xml/test_to_xml.py
index 37251a58b0c11..a123f6dd52c08 100644
--- a/pandas/tests/io/xml/test_to_xml.py
+++ b/pandas/tests/io/xml/test_to_xml.py
@@ -298,10 +298,8 @@ def test_index_false_rename_row_root(xml_books, parser):
assert output == expected
-@pytest.mark.parametrize(
- "offset_index", [list(range(10, 13)), [str(i) for i in range(10, 13)]]
-)
-def test_index_false_with_offset_input_index(parser, offset_index, geom_df):
+@pytest.mark.parametrize("typ", [int, str])
+def test_index_false_with_offset_input_index(parser, typ, geom_df):
"""
Tests that the output does not contain the `<index>` field when the index of the
input Dataframe has an offset.
@@ -328,7 +326,7 @@ def test_index_false_with_offset_input_index(parser, offset_index, geom_df):
<sides>3.0</sides>
</row>
</data>"""
-
+ offset_index = [typ(i) for i in range(10, 13)]
offset_geom_df = geom_df.copy()
offset_geom_df.index = Index(offset_index)
output = offset_geom_df.to_xml(index=False, parser=parser)
| null | https://api.github.com/repos/pandas-dev/pandas/pulls/56738 | 2024-01-05T01:12:19Z | 2024-01-10T21:50:17Z | 2024-01-10T21:50:17Z | 2024-01-10T21:50:20Z |
TST/CLN: Test parametrizations | diff --git a/pandas/tests/arithmetic/test_datetime64.py b/pandas/tests/arithmetic/test_datetime64.py
index 75259cb7e2f05..2dafaf277be8f 100644
--- a/pandas/tests/arithmetic/test_datetime64.py
+++ b/pandas/tests/arithmetic/test_datetime64.py
@@ -182,12 +182,12 @@ class TestDatetime64SeriesComparison:
@pytest.mark.parametrize(
"op, expected",
[
- (operator.eq, Series([False, False, True])),
- (operator.ne, Series([True, True, False])),
- (operator.lt, Series([False, False, False])),
- (operator.gt, Series([False, False, False])),
- (operator.ge, Series([False, False, True])),
- (operator.le, Series([False, False, True])),
+ (operator.eq, [False, False, True]),
+ (operator.ne, [True, True, False]),
+ (operator.lt, [False, False, False]),
+ (operator.gt, [False, False, False]),
+ (operator.ge, [False, False, True]),
+ (operator.le, [False, False, True]),
],
)
def test_nat_comparisons(
@@ -210,7 +210,7 @@ def test_nat_comparisons(
result = op(left, right)
- tm.assert_series_equal(result, expected)
+ tm.assert_series_equal(result, Series(expected))
@pytest.mark.parametrize(
"data",
@@ -1485,11 +1485,10 @@ def test_dt64arr_add_sub_DateOffsets(
@pytest.mark.parametrize(
"other",
[
- np.array([pd.offsets.MonthEnd(), pd.offsets.Day(n=2)]),
- np.array([pd.offsets.DateOffset(years=1), pd.offsets.MonthEnd()]),
- np.array( # matching offsets
- [pd.offsets.DateOffset(years=1), pd.offsets.DateOffset(years=1)]
- ),
+ [pd.offsets.MonthEnd(), pd.offsets.Day(n=2)],
+ [pd.offsets.DateOffset(years=1), pd.offsets.MonthEnd()],
+ # matching offsets
+ [pd.offsets.DateOffset(years=1), pd.offsets.DateOffset(years=1)],
],
)
@pytest.mark.parametrize("op", [operator.add, roperator.radd, operator.sub])
@@ -1502,7 +1501,7 @@ def test_dt64arr_add_sub_offset_array(
tz = tz_naive_fixture
dti = date_range("2017-01-01", periods=2, tz=tz)
dtarr = tm.box_expected(dti, box_with_array)
-
+ other = np.array(other)
expected = DatetimeIndex([op(dti[n], other[n]) for n in range(len(dti))])
expected = tm.box_expected(expected, box_with_array).astype(object)
diff --git a/pandas/tests/arithmetic/test_timedelta64.py b/pandas/tests/arithmetic/test_timedelta64.py
index b2007209dd5b9..3e9508bd2f504 100644
--- a/pandas/tests/arithmetic/test_timedelta64.py
+++ b/pandas/tests/arithmetic/test_timedelta64.py
@@ -1960,19 +1960,20 @@ def test_td64arr_floordiv_numeric_scalar(self, box_with_array, two):
two // tdser
@pytest.mark.parametrize(
- "vector",
- [np.array([20, 30, 40]), Index([20, 30, 40]), Series([20, 30, 40])],
- ids=lambda x: type(x).__name__,
+ "klass",
+ [np.array, Index, Series],
+ ids=lambda x: x.__name__,
)
def test_td64arr_rmul_numeric_array(
self,
box_with_array,
- vector,
+ klass,
any_real_numpy_dtype,
):
# GH#4521
# divide/multiply by integers
+ vector = klass([20, 30, 40])
tdser = Series(["59 Days", "59 Days", "NaT"], dtype="m8[ns]")
vector = vector.astype(any_real_numpy_dtype)
@@ -1990,16 +1991,17 @@ def test_td64arr_rmul_numeric_array(
tm.assert_equal(result, expected)
@pytest.mark.parametrize(
- "vector",
- [np.array([20, 30, 40]), Index([20, 30, 40]), Series([20, 30, 40])],
- ids=lambda x: type(x).__name__,
+ "klass",
+ [np.array, Index, Series],
+ ids=lambda x: x.__name__,
)
def test_td64arr_div_numeric_array(
- self, box_with_array, vector, any_real_numpy_dtype
+ self, box_with_array, klass, any_real_numpy_dtype
):
# GH#4521
# divide/multiply by integers
+ vector = klass([20, 30, 40])
tdser = Series(["59 Days", "59 Days", "NaT"], dtype="m8[ns]")
vector = vector.astype(any_real_numpy_dtype)
diff --git a/pandas/tests/arrays/categorical/test_indexing.py b/pandas/tests/arrays/categorical/test_indexing.py
index 5e1c5c64fa660..33c55b2090bd6 100644
--- a/pandas/tests/arrays/categorical/test_indexing.py
+++ b/pandas/tests/arrays/categorical/test_indexing.py
@@ -51,12 +51,10 @@ def test_setitem(self):
tm.assert_categorical_equal(c, expected)
- @pytest.mark.parametrize(
- "other",
- [Categorical(["b", "a"]), Categorical(["b", "a"], categories=["b", "a"])],
- )
- def test_setitem_same_but_unordered(self, other):
+ @pytest.mark.parametrize("categories", [None, ["b", "a"]])
+ def test_setitem_same_but_unordered(self, categories):
# GH-24142
+ other = Categorical(["b", "a"], categories=categories)
target = Categorical(["a", "b"], categories=["a", "b"])
mask = np.array([True, False])
target[mask] = other[mask]
diff --git a/pandas/tests/arrays/categorical/test_operators.py b/pandas/tests/arrays/categorical/test_operators.py
index 4174d2adc810b..8778df832d4d7 100644
--- a/pandas/tests/arrays/categorical/test_operators.py
+++ b/pandas/tests/arrays/categorical/test_operators.py
@@ -307,29 +307,23 @@ def test_comparisons(self, data, reverse, base):
with pytest.raises(TypeError, match=msg):
a < cat_rev
- @pytest.mark.parametrize(
- "ctor",
- [
- lambda *args, **kwargs: Categorical(*args, **kwargs),
- lambda *args, **kwargs: Series(Categorical(*args, **kwargs)),
- ],
- )
- def test_unordered_different_order_equal(self, ctor):
+ @pytest.mark.parametrize("box", [lambda x: x, Series])
+ def test_unordered_different_order_equal(self, box):
# https://github.com/pandas-dev/pandas/issues/16014
- c1 = ctor(["a", "b"], categories=["a", "b"], ordered=False)
- c2 = ctor(["a", "b"], categories=["b", "a"], ordered=False)
+ c1 = box(Categorical(["a", "b"], categories=["a", "b"], ordered=False))
+ c2 = box(Categorical(["a", "b"], categories=["b", "a"], ordered=False))
assert (c1 == c2).all()
- c1 = ctor(["a", "b"], categories=["a", "b"], ordered=False)
- c2 = ctor(["b", "a"], categories=["b", "a"], ordered=False)
+ c1 = box(Categorical(["a", "b"], categories=["a", "b"], ordered=False))
+ c2 = box(Categorical(["b", "a"], categories=["b", "a"], ordered=False))
assert (c1 != c2).all()
- c1 = ctor(["a", "a"], categories=["a", "b"], ordered=False)
- c2 = ctor(["b", "b"], categories=["b", "a"], ordered=False)
+ c1 = box(Categorical(["a", "a"], categories=["a", "b"], ordered=False))
+ c2 = box(Categorical(["b", "b"], categories=["b", "a"], ordered=False))
assert (c1 != c2).all()
- c1 = ctor(["a", "a"], categories=["a", "b"], ordered=False)
- c2 = ctor(["a", "b"], categories=["b", "a"], ordered=False)
+ c1 = box(Categorical(["a", "a"], categories=["a", "b"], ordered=False))
+ c2 = box(Categorical(["a", "b"], categories=["b", "a"], ordered=False))
result = c1 == c2
tm.assert_numpy_array_equal(np.array(result), np.array([True, False]))
diff --git a/pandas/tests/arrays/sparse/test_dtype.py b/pandas/tests/arrays/sparse/test_dtype.py
index 234f4092421e5..6fcbfe96a3df7 100644
--- a/pandas/tests/arrays/sparse/test_dtype.py
+++ b/pandas/tests/arrays/sparse/test_dtype.py
@@ -99,15 +99,15 @@ def test_construct_from_string_raises():
@pytest.mark.parametrize(
"dtype, expected",
[
- (SparseDtype(int), True),
- (SparseDtype(float), True),
- (SparseDtype(bool), True),
- (SparseDtype(object), False),
- (SparseDtype(str), False),
+ (int, True),
+ (float, True),
+ (bool, True),
+ (object, False),
+ (str, False),
],
)
def test_is_numeric(dtype, expected):
- assert dtype._is_numeric is expected
+ assert SparseDtype(dtype)._is_numeric is expected
def test_str_uses_object():
diff --git a/pandas/tests/arrays/sparse/test_reductions.py b/pandas/tests/arrays/sparse/test_reductions.py
index f44423d5e635c..4171d1213a0dc 100644
--- a/pandas/tests/arrays/sparse/test_reductions.py
+++ b/pandas/tests/arrays/sparse/test_reductions.py
@@ -126,13 +126,13 @@ def test_sum(self):
@pytest.mark.parametrize(
"arr",
- [np.array([0, 1, np.nan, 1]), np.array([0, 1, 1])],
+ [[0, 1, np.nan, 1], [0, 1, 1]],
)
@pytest.mark.parametrize("fill_value", [0, 1, np.nan])
@pytest.mark.parametrize("min_count, expected", [(3, 2), (4, np.nan)])
def test_sum_min_count(self, arr, fill_value, min_count, expected):
# GH#25777
- sparray = SparseArray(arr, fill_value=fill_value)
+ sparray = SparseArray(np.array(arr), fill_value=fill_value)
result = sparray.sum(min_count=min_count)
if np.isnan(expected):
assert np.isnan(result)
@@ -296,11 +296,9 @@ def test_argmax_argmin(self, arr, argmax_expected, argmin_expected):
assert argmax_result == argmax_expected
assert argmin_result == argmin_expected
- @pytest.mark.parametrize(
- "arr,method",
- [(SparseArray([]), "argmax"), (SparseArray([]), "argmin")],
- )
- def test_empty_array(self, arr, method):
+ @pytest.mark.parametrize("method", ["argmax", "argmin"])
+ def test_empty_array(self, method):
msg = f"attempt to get {method} of an empty sequence"
+ arr = SparseArray([])
with pytest.raises(ValueError, match=msg):
- arr.argmax() if method == "argmax" else arr.argmin()
+ getattr(arr, method)()
diff --git a/pandas/tests/base/test_conversion.py b/pandas/tests/base/test_conversion.py
index fe0f1f1454a55..ad35742a7b337 100644
--- a/pandas/tests/base/test_conversion.py
+++ b/pandas/tests/base/test_conversion.py
@@ -499,22 +499,23 @@ def test_to_numpy_dataframe_na_value(data, dtype, na_value):
@pytest.mark.parametrize(
- "data, expected",
+ "data, expected_data",
[
(
{"a": pd.array([1, 2, None])},
- np.array([[1.0], [2.0], [np.nan]], dtype=float),
+ [[1.0], [2.0], [np.nan]],
),
(
{"a": [1, 2, 3], "b": [1, 2, 3]},
- np.array([[1, 1], [2, 2], [3, 3]], dtype=float),
+ [[1, 1], [2, 2], [3, 3]],
),
],
)
-def test_to_numpy_dataframe_single_block(data, expected):
+def test_to_numpy_dataframe_single_block(data, expected_data):
# https://github.com/pandas-dev/pandas/issues/33820
df = pd.DataFrame(data)
result = df.to_numpy(dtype=float, na_value=np.nan)
+ expected = np.array(expected_data, dtype=float)
tm.assert_numpy_array_equal(result, expected)
diff --git a/pandas/tests/computation/test_eval.py b/pandas/tests/computation/test_eval.py
index 7969e684f5b04..b69fb573987f9 100644
--- a/pandas/tests/computation/test_eval.py
+++ b/pandas/tests/computation/test_eval.py
@@ -522,14 +522,15 @@ def test_series_negate(self, engine, parser):
"lhs",
[
# Float
- DataFrame(np.random.default_rng(2).standard_normal((5, 2))),
+ np.random.default_rng(2).standard_normal((5, 2)),
# Int
- DataFrame(np.random.default_rng(2).integers(5, size=(5, 2))),
+ np.random.default_rng(2).integers(5, size=(5, 2)),
# bool doesn't work with numexpr but works elsewhere
- DataFrame(np.random.default_rng(2).standard_normal((5, 2)) > 0.5),
+ np.array([True, False, True, False, True], dtype=np.bool_),
],
)
def test_frame_pos(self, lhs, engine, parser):
+ lhs = DataFrame(lhs)
expr = "+lhs"
expect = lhs
@@ -540,14 +541,15 @@ def test_frame_pos(self, lhs, engine, parser):
"lhs",
[
# Float
- Series(np.random.default_rng(2).standard_normal(5)),
+ np.random.default_rng(2).standard_normal(5),
# Int
- Series(np.random.default_rng(2).integers(5, size=5)),
+ np.random.default_rng(2).integers(5, size=5),
# bool doesn't work with numexpr but works elsewhere
- Series(np.random.default_rng(2).standard_normal(5) > 0.5),
+ np.array([True, False, True, False, True], dtype=np.bool_),
],
)
def test_series_pos(self, lhs, engine, parser):
+ lhs = Series(lhs)
expr = "+lhs"
expect = lhs
diff --git a/pandas/tests/copy_view/index/test_datetimeindex.py b/pandas/tests/copy_view/index/test_datetimeindex.py
index b023297c9549d..5dd1f45a94ff3 100644
--- a/pandas/tests/copy_view/index/test_datetimeindex.py
+++ b/pandas/tests/copy_view/index/test_datetimeindex.py
@@ -13,17 +13,11 @@
)
-@pytest.mark.parametrize(
- "cons",
- [
- lambda x: DatetimeIndex(x),
- lambda x: DatetimeIndex(DatetimeIndex(x)),
- ],
-)
-def test_datetimeindex(using_copy_on_write, cons):
+@pytest.mark.parametrize("box", [lambda x: x, DatetimeIndex])
+def test_datetimeindex(using_copy_on_write, box):
dt = date_range("2019-12-31", periods=3, freq="D")
ser = Series(dt)
- idx = cons(ser)
+ idx = box(DatetimeIndex(ser))
expected = idx.copy(deep=True)
ser.iloc[0] = Timestamp("2020-12-31")
if using_copy_on_write:
diff --git a/pandas/tests/copy_view/index/test_periodindex.py b/pandas/tests/copy_view/index/test_periodindex.py
index b80ce1d3d838f..753304a1a8963 100644
--- a/pandas/tests/copy_view/index/test_periodindex.py
+++ b/pandas/tests/copy_view/index/test_periodindex.py
@@ -13,17 +13,11 @@
)
-@pytest.mark.parametrize(
- "cons",
- [
- lambda x: PeriodIndex(x),
- lambda x: PeriodIndex(PeriodIndex(x)),
- ],
-)
-def test_periodindex(using_copy_on_write, cons):
+@pytest.mark.parametrize("box", [lambda x: x, PeriodIndex])
+def test_periodindex(using_copy_on_write, box):
dt = period_range("2019-12-31", periods=3, freq="D")
ser = Series(dt)
- idx = cons(ser)
+ idx = box(PeriodIndex(ser))
expected = idx.copy(deep=True)
ser.iloc[0] = Period("2020-12-31")
if using_copy_on_write:
diff --git a/pandas/tests/dtypes/test_inference.py b/pandas/tests/dtypes/test_inference.py
index 5eeab778c184c..f1f5cb1620345 100644
--- a/pandas/tests/dtypes/test_inference.py
+++ b/pandas/tests/dtypes/test_inference.py
@@ -1078,15 +1078,15 @@ def test_integers(self):
@pytest.mark.parametrize(
"arr, skipna",
[
- (np.array([1, 2, np.nan, np.nan, 3], dtype="O"), False),
- (np.array([1, 2, np.nan, np.nan, 3], dtype="O"), True),
- (np.array([1, 2, 3, np.int64(4), np.int32(5), np.nan], dtype="O"), False),
- (np.array([1, 2, 3, np.int64(4), np.int32(5), np.nan], dtype="O"), True),
+ ([1, 2, np.nan, np.nan, 3], False),
+ ([1, 2, np.nan, np.nan, 3], True),
+ ([1, 2, 3, np.int64(4), np.int32(5), np.nan], False),
+ ([1, 2, 3, np.int64(4), np.int32(5), np.nan], True),
],
)
def test_integer_na(self, arr, skipna):
# GH 27392
- result = lib.infer_dtype(arr, skipna=skipna)
+ result = lib.infer_dtype(np.array(arr, dtype="O"), skipna=skipna)
expected = "integer" if skipna else "integer-na"
assert result == expected
@@ -1287,13 +1287,13 @@ def test_infer_dtype_mixed_integer(self):
@pytest.mark.parametrize(
"arr",
[
- np.array([Timestamp("2011-01-01"), Timestamp("2011-01-02")]),
- np.array([datetime(2011, 1, 1), datetime(2012, 2, 1)]),
- np.array([datetime(2011, 1, 1), Timestamp("2011-01-02")]),
+ [Timestamp("2011-01-01"), Timestamp("2011-01-02")],
+ [datetime(2011, 1, 1), datetime(2012, 2, 1)],
+ [datetime(2011, 1, 1), Timestamp("2011-01-02")],
],
)
def test_infer_dtype_datetime(self, arr):
- assert lib.infer_dtype(arr, skipna=True) == "datetime"
+ assert lib.infer_dtype(np.array(arr), skipna=True) == "datetime"
@pytest.mark.parametrize("na_value", [pd.NaT, np.nan])
@pytest.mark.parametrize(
@@ -1902,14 +1902,15 @@ def test_is_scalar_numpy_array_scalars(self):
@pytest.mark.parametrize(
"zerodim",
[
- np.array(1),
- np.array("foobar"),
- np.array(np.datetime64("2014-01-01")),
- np.array(np.timedelta64(1, "h")),
- np.array(np.datetime64("NaT")),
+ 1,
+ "foobar",
+ np.datetime64("2014-01-01"),
+ np.timedelta64(1, "h"),
+ np.datetime64("NaT"),
],
)
def test_is_scalar_numpy_zerodim_arrays(self, zerodim):
+ zerodim = np.array(zerodim)
assert not is_scalar(zerodim)
assert is_scalar(lib.item_from_zerodim(zerodim))
diff --git a/pandas/tests/frame/indexing/test_get.py b/pandas/tests/frame/indexing/test_get.py
index 5f2651eec683c..75bad0ec1f159 100644
--- a/pandas/tests/frame/indexing/test_get.py
+++ b/pandas/tests/frame/indexing/test_get.py
@@ -15,13 +15,13 @@ def test_get(self, float_frame):
)
@pytest.mark.parametrize(
- "df",
+ "columns, index",
[
- DataFrame(),
- DataFrame(columns=list("AB")),
- DataFrame(columns=list("AB"), index=range(3)),
+ [None, None],
+ [list("AB"), None],
+ [list("AB"), range(3)],
],
)
- def test_get_none(self, df):
+ def test_get_none(self, columns, index):
# see gh-5652
- assert df.get(None) is None
+ assert DataFrame(columns=columns, index=index).get(None) is None
diff --git a/pandas/tests/frame/methods/test_drop.py b/pandas/tests/frame/methods/test_drop.py
index 06cd51b43a0aa..a3ae3991522c2 100644
--- a/pandas/tests/frame/methods/test_drop.py
+++ b/pandas/tests/frame/methods/test_drop.py
@@ -232,15 +232,13 @@ def test_drop_api_equivalence(self):
with pytest.raises(ValueError, match=msg):
df.drop(axis=1)
- data = [[1, 2, 3], [1, 2, 3]]
-
@pytest.mark.parametrize(
"actual",
[
- DataFrame(data=data, index=["a", "a"]),
- DataFrame(data=data, index=["a", "b"]),
- DataFrame(data=data, index=["a", "b"]).set_index([0, 1]),
- DataFrame(data=data, index=["a", "a"]).set_index([0, 1]),
+ DataFrame([[1, 2, 3], [1, 2, 3]], index=["a", "a"]),
+ DataFrame([[1, 2, 3], [1, 2, 3]], index=["a", "b"]),
+ DataFrame([[1, 2, 3], [1, 2, 3]], index=["a", "b"]).set_index([0, 1]),
+ DataFrame([[1, 2, 3], [1, 2, 3]], index=["a", "a"]).set_index([0, 1]),
],
)
def test_raise_on_drop_duplicate_index(self, actual):
diff --git a/pandas/tests/frame/methods/test_filter.py b/pandas/tests/frame/methods/test_filter.py
index 382615aaef627..dc84e2adf1239 100644
--- a/pandas/tests/frame/methods/test_filter.py
+++ b/pandas/tests/frame/methods/test_filter.py
@@ -98,15 +98,16 @@ def test_filter_regex_search(self, float_frame):
tm.assert_frame_equal(result, exp)
@pytest.mark.parametrize(
- "name,expected",
+ "name,expected_data",
[
- ("a", DataFrame({"a": [1, 2]})),
- ("あ", DataFrame({"あ": [3, 4]})),
+ ("a", {"a": [1, 2]}),
+ ("あ", {"あ": [3, 4]}),
],
)
- def test_filter_unicode(self, name, expected):
+ def test_filter_unicode(self, name, expected_data):
# GH13101
df = DataFrame({"a": [1, 2], "あ": [3, 4]})
+ expected = DataFrame(expected_data)
tm.assert_frame_equal(df.filter(like=name), expected)
tm.assert_frame_equal(df.filter(regex=name), expected)
diff --git a/pandas/tests/frame/methods/test_reindex.py b/pandas/tests/frame/methods/test_reindex.py
index 2a889efe79064..da6d69f36f900 100644
--- a/pandas/tests/frame/methods/test_reindex.py
+++ b/pandas/tests/frame/methods/test_reindex.py
@@ -1219,13 +1219,7 @@ def test_reindex_empty_frame(self, kwargs):
expected = DataFrame({"a": [np.nan] * 3}, index=idx, dtype=object)
tm.assert_frame_equal(result, expected)
- @pytest.mark.parametrize(
- "src_idx",
- [
- Index([]),
- CategoricalIndex([]),
- ],
- )
+ @pytest.mark.parametrize("src_idx", [Index, CategoricalIndex])
@pytest.mark.parametrize(
"cat_idx",
[
@@ -1240,7 +1234,7 @@ def test_reindex_empty_frame(self, kwargs):
],
)
def test_reindex_empty(self, src_idx, cat_idx):
- df = DataFrame(columns=src_idx, index=["K"], dtype="f8")
+ df = DataFrame(columns=src_idx([]), index=["K"], dtype="f8")
result = df.reindex(columns=cat_idx)
expected = DataFrame(index=["K"], columns=cat_idx, dtype="f8")
@@ -1281,36 +1275,14 @@ def test_reindex_datetimelike_to_object(self, dtype):
assert res.iloc[-1, 1] is fv
tm.assert_frame_equal(res, expected)
- @pytest.mark.parametrize(
- "index_df,index_res,index_exp",
- [
- (
- CategoricalIndex([], categories=["A"]),
- Index(["A"]),
- Index(["A"]),
- ),
- (
- CategoricalIndex([], categories=["A"]),
- Index(["B"]),
- Index(["B"]),
- ),
- (
- CategoricalIndex([], categories=["A"]),
- CategoricalIndex(["A"]),
- CategoricalIndex(["A"]),
- ),
- (
- CategoricalIndex([], categories=["A"]),
- CategoricalIndex(["B"]),
- CategoricalIndex(["B"]),
- ),
- ],
- )
- def test_reindex_not_category(self, index_df, index_res, index_exp):
+ @pytest.mark.parametrize("klass", [Index, CategoricalIndex])
+ @pytest.mark.parametrize("data", ["A", "B"])
+ def test_reindex_not_category(self, klass, data):
# GH#28690
- df = DataFrame(index=index_df)
- result = df.reindex(index=index_res)
- expected = DataFrame(index=index_exp)
+ df = DataFrame(index=CategoricalIndex([], categories=["A"]))
+ idx = klass([data])
+ result = df.reindex(index=idx)
+ expected = DataFrame(index=idx)
tm.assert_frame_equal(result, expected)
def test_invalid_method(self):
diff --git a/pandas/tests/frame/test_constructors.py b/pandas/tests/frame/test_constructors.py
index d44de380d243a..6d52bf161f4fa 100644
--- a/pandas/tests/frame/test_constructors.py
+++ b/pandas/tests/frame/test_constructors.py
@@ -234,16 +234,9 @@ def test_empty_constructor(self, constructor):
assert len(result.columns) == 0
tm.assert_frame_equal(result, expected)
- @pytest.mark.parametrize(
- "constructor",
- [
- lambda: DataFrame({}),
- lambda: DataFrame(data={}),
- ],
- )
- def test_empty_constructor_object_index(self, constructor):
+ def test_empty_constructor_object_index(self):
expected = DataFrame(index=RangeIndex(0), columns=RangeIndex(0))
- result = constructor()
+ result = DataFrame({})
assert len(result.index) == 0
assert len(result.columns) == 0
tm.assert_frame_equal(result, expected, check_index_type=True)
diff --git a/pandas/tests/frame/test_stack_unstack.py b/pandas/tests/frame/test_stack_unstack.py
index 90524861ce311..1d8f50668cee2 100644
--- a/pandas/tests/frame/test_stack_unstack.py
+++ b/pandas/tests/frame/test_stack_unstack.py
@@ -1217,16 +1217,16 @@ def test_stack_preserve_categorical_dtype_values(self, future_stack):
)
@pytest.mark.filterwarnings("ignore:Downcasting object dtype arrays:FutureWarning")
@pytest.mark.parametrize(
- "index, columns",
+ "index",
[
- ([0, 0, 1, 1], MultiIndex.from_product([[1, 2], ["a", "b"]])),
- ([0, 0, 2, 3], MultiIndex.from_product([[1, 2], ["a", "b"]])),
- ([0, 1, 2, 3], MultiIndex.from_product([[1, 2], ["a", "b"]])),
+ [0, 0, 1, 1],
+ [0, 0, 2, 3],
+ [0, 1, 2, 3],
],
)
- def test_stack_multi_columns_non_unique_index(self, index, columns, future_stack):
+ def test_stack_multi_columns_non_unique_index(self, index, future_stack):
# GH-28301
-
+ columns = MultiIndex.from_product([[1, 2], ["a", "b"]])
df = DataFrame(index=index, columns=columns).fillna(1)
stacked = df.stack(future_stack=future_stack)
new_index = MultiIndex.from_tuples(stacked.index.to_numpy())
@@ -1720,11 +1720,10 @@ def test_stack(self, multiindex_year_month_day_dataframe_random_data, future_sta
tm.assert_equal(result, expected)
@pytest.mark.parametrize(
- "idx, columns, exp_idx",
+ "idx, exp_idx",
[
[
list("abab"),
- ["1st", "2nd", "1st"],
MultiIndex(
levels=[["a", "b"], ["1st", "2nd"]],
codes=[np.tile(np.arange(2).repeat(3), 2), np.tile([0, 1, 0], 4)],
@@ -1732,7 +1731,6 @@ def test_stack(self, multiindex_year_month_day_dataframe_random_data, future_sta
],
[
MultiIndex.from_tuples((("a", 2), ("b", 1), ("a", 1), ("b", 2))),
- ["1st", "2nd", "1st"],
MultiIndex(
levels=[["a", "b"], [1, 2], ["1st", "2nd"]],
codes=[
@@ -1744,12 +1742,12 @@ def test_stack(self, multiindex_year_month_day_dataframe_random_data, future_sta
],
],
)
- def test_stack_duplicate_index(self, idx, columns, exp_idx, future_stack):
+ def test_stack_duplicate_index(self, idx, exp_idx, future_stack):
# GH10417
df = DataFrame(
np.arange(12).reshape(4, 3),
index=idx,
- columns=columns,
+ columns=["1st", "2nd", "1st"],
)
if future_stack:
msg = "Columns with duplicate values are not supported in stack"
diff --git a/pandas/tests/frame/test_unary.py b/pandas/tests/frame/test_unary.py
index 850c92013694f..e89175ceff0c1 100644
--- a/pandas/tests/frame/test_unary.py
+++ b/pandas/tests/frame/test_unary.py
@@ -13,17 +13,16 @@ class TestDataFrameUnaryOperators:
# __pos__, __neg__, __invert__
@pytest.mark.parametrize(
- "df,expected",
+ "df_data,expected_data",
[
- (pd.DataFrame({"a": [-1, 1]}), pd.DataFrame({"a": [1, -1]})),
- (pd.DataFrame({"a": [False, True]}), pd.DataFrame({"a": [True, False]})),
- (
- pd.DataFrame({"a": pd.Series(pd.to_timedelta([-1, 1]))}),
- pd.DataFrame({"a": pd.Series(pd.to_timedelta([1, -1]))}),
- ),
+ ([-1, 1], [1, -1]),
+ ([False, True], [True, False]),
+ (pd.to_timedelta([-1, 1]), pd.to_timedelta([1, -1])),
],
)
- def test_neg_numeric(self, df, expected):
+ def test_neg_numeric(self, df_data, expected_data):
+ df = pd.DataFrame({"a": df_data})
+ expected = pd.DataFrame({"a": expected_data})
tm.assert_frame_equal(-df, expected)
tm.assert_series_equal(-df["a"], expected["a"])
@@ -42,13 +41,14 @@ def test_neg_object(self, df, expected):
tm.assert_series_equal(-df["a"], expected["a"])
@pytest.mark.parametrize(
- "df",
+ "df_data",
[
- pd.DataFrame({"a": ["a", "b"]}),
- pd.DataFrame({"a": pd.to_datetime(["2017-01-22", "1970-01-01"])}),
+ ["a", "b"],
+ pd.to_datetime(["2017-01-22", "1970-01-01"]),
],
)
- def test_neg_raises(self, df, using_infer_string):
+ def test_neg_raises(self, df_data, using_infer_string):
+ df = pd.DataFrame({"a": df_data})
msg = (
"bad operand type for unary -: 'str'|"
r"bad operand type for unary -: 'DatetimeArray'"
@@ -102,44 +102,36 @@ def test_invert_empty_not_input(self):
assert df is not result
@pytest.mark.parametrize(
- "df",
+ "df_data",
[
- pd.DataFrame({"a": [-1, 1]}),
- pd.DataFrame({"a": [False, True]}),
- pd.DataFrame({"a": pd.Series(pd.to_timedelta([-1, 1]))}),
+ [-1, 1],
+ [False, True],
+ pd.to_timedelta([-1, 1]),
],
)
- def test_pos_numeric(self, df):
+ def test_pos_numeric(self, df_data):
# GH#16073
+ df = pd.DataFrame({"a": df_data})
tm.assert_frame_equal(+df, df)
tm.assert_series_equal(+df["a"], df["a"])
@pytest.mark.parametrize(
- "df",
+ "df_data",
[
- pd.DataFrame({"a": np.array([-1, 2], dtype=object)}),
- pd.DataFrame({"a": [Decimal("-1.0"), Decimal("2.0")]}),
+ np.array([-1, 2], dtype=object),
+ [Decimal("-1.0"), Decimal("2.0")],
],
)
- def test_pos_object(self, df):
+ def test_pos_object(self, df_data):
# GH#21380
+ df = pd.DataFrame({"a": df_data})
tm.assert_frame_equal(+df, df)
tm.assert_series_equal(+df["a"], df["a"])
- @pytest.mark.parametrize(
- "df",
- [
- pytest.param(
- pd.DataFrame({"a": ["a", "b"]}),
- # filterwarnings removable once min numpy version is 1.25
- marks=[
- pytest.mark.filterwarnings("ignore:Applying:DeprecationWarning")
- ],
- ),
- ],
- )
- def test_pos_object_raises(self, df):
+ @pytest.mark.filterwarnings("ignore:Applying:DeprecationWarning")
+ def test_pos_object_raises(self):
# GH#21380
+ df = pd.DataFrame({"a": ["a", "b"]})
if np_version_gte1p25:
with pytest.raises(
TypeError, match=r"^bad operand type for unary \+: \'str\'$"
@@ -148,10 +140,8 @@ def test_pos_object_raises(self, df):
else:
tm.assert_series_equal(+df["a"], df["a"])
- @pytest.mark.parametrize(
- "df", [pd.DataFrame({"a": pd.to_datetime(["2017-01-22", "1970-01-01"])})]
- )
- def test_pos_raises(self, df):
+ def test_pos_raises(self):
+ df = pd.DataFrame({"a": pd.to_datetime(["2017-01-22", "1970-01-01"])})
msg = r"bad operand type for unary \+: 'DatetimeArray'"
with pytest.raises(TypeError, match=msg):
(+df)
diff --git a/pandas/tests/generic/test_duplicate_labels.py b/pandas/tests/generic/test_duplicate_labels.py
index f54db07824daf..07f76810cbfc8 100644
--- a/pandas/tests/generic/test_duplicate_labels.py
+++ b/pandas/tests/generic/test_duplicate_labels.py
@@ -45,12 +45,11 @@ def test_preserved_series(self, func):
s = pd.Series([0, 1], index=["a", "b"]).set_flags(allows_duplicate_labels=False)
assert func(s).flags.allows_duplicate_labels is False
- @pytest.mark.parametrize(
- "other", [pd.Series(0, index=["a", "b", "c"]), pd.Series(0, index=["a", "b"])]
- )
+ @pytest.mark.parametrize("index", [["a", "b", "c"], ["a", "b"]])
# TODO: frame
@not_implemented
- def test_align(self, other):
+ def test_align(self, index):
+ other = pd.Series(0, index=index)
s = pd.Series([0, 1], index=["a", "b"]).set_flags(allows_duplicate_labels=False)
a, b = s.align(other)
assert a.flags.allows_duplicate_labels is False
@@ -298,23 +297,15 @@ def test_getitem_raises(self, getter, target):
with pytest.raises(pd.errors.DuplicateLabelError, match=msg):
getter(target)
- @pytest.mark.parametrize(
- "objs, kwargs",
- [
- (
- [
- pd.Series(1, index=[0, 1], name="a"),
- pd.Series(2, index=[0, 1], name="a"),
- ],
- {"axis": 1},
- )
- ],
- )
- def test_concat_raises(self, objs, kwargs):
+ def test_concat_raises(self):
+ objs = [
+ pd.Series(1, index=[0, 1], name="a"),
+ pd.Series(2, index=[0, 1], name="a"),
+ ]
objs = [x.set_flags(allows_duplicate_labels=False) for x in objs]
msg = "Index has duplicates."
with pytest.raises(pd.errors.DuplicateLabelError, match=msg):
- pd.concat(objs, **kwargs)
+ pd.concat(objs, axis=1)
@not_implemented
def test_merge_raises(self):
diff --git a/pandas/tests/groupby/aggregate/test_other.py b/pandas/tests/groupby/aggregate/test_other.py
index 0596193c137e1..f5818d95020aa 100644
--- a/pandas/tests/groupby/aggregate/test_other.py
+++ b/pandas/tests/groupby/aggregate/test_other.py
@@ -540,46 +540,44 @@ def test_sum_uint64_overflow():
@pytest.mark.parametrize(
- "structure, expected",
+ "structure, cast_as",
[
- (tuple, DataFrame({"C": {(1, 1): (1, 1, 1), (3, 4): (3, 4, 4)}})),
- (list, DataFrame({"C": {(1, 1): [1, 1, 1], (3, 4): [3, 4, 4]}})),
- (
- lambda x: tuple(x),
- DataFrame({"C": {(1, 1): (1, 1, 1), (3, 4): (3, 4, 4)}}),
- ),
- (
- lambda x: list(x),
- DataFrame({"C": {(1, 1): [1, 1, 1], (3, 4): [3, 4, 4]}}),
- ),
+ (tuple, tuple),
+ (list, list),
+ (lambda x: tuple(x), tuple),
+ (lambda x: list(x), list),
],
)
-def test_agg_structs_dataframe(structure, expected):
+def test_agg_structs_dataframe(structure, cast_as):
df = DataFrame(
{"A": [1, 1, 1, 3, 3, 3], "B": [1, 1, 1, 4, 4, 4], "C": [1, 1, 1, 3, 4, 4]}
)
result = df.groupby(["A", "B"]).aggregate(structure)
+ expected = DataFrame(
+ {"C": {(1, 1): cast_as([1, 1, 1]), (3, 4): cast_as([3, 4, 4])}}
+ )
expected.index.names = ["A", "B"]
tm.assert_frame_equal(result, expected)
@pytest.mark.parametrize(
- "structure, expected",
+ "structure, cast_as",
[
- (tuple, Series([(1, 1, 1), (3, 4, 4)], index=[1, 3], name="C")),
- (list, Series([[1, 1, 1], [3, 4, 4]], index=[1, 3], name="C")),
- (lambda x: tuple(x), Series([(1, 1, 1), (3, 4, 4)], index=[1, 3], name="C")),
- (lambda x: list(x), Series([[1, 1, 1], [3, 4, 4]], index=[1, 3], name="C")),
+ (tuple, tuple),
+ (list, list),
+ (lambda x: tuple(x), tuple),
+ (lambda x: list(x), list),
],
)
-def test_agg_structs_series(structure, expected):
+def test_agg_structs_series(structure, cast_as):
# Issue #18079
df = DataFrame(
{"A": [1, 1, 1, 3, 3, 3], "B": [1, 1, 1, 4, 4, 4], "C": [1, 1, 1, 3, 4, 4]}
)
result = df.groupby("A")["C"].aggregate(structure)
+ expected = Series([cast_as([1, 1, 1]), cast_as([3, 4, 4])], index=[1, 3], name="C")
expected.index.name = "A"
tm.assert_series_equal(result, expected)
diff --git a/pandas/tests/groupby/test_bin_groupby.py b/pandas/tests/groupby/test_bin_groupby.py
index ac5374597585a..07d52308e308a 100644
--- a/pandas/tests/groupby/test_bin_groupby.py
+++ b/pandas/tests/groupby/test_bin_groupby.py
@@ -41,24 +41,27 @@ def test_mgr_locs_updated(func):
"binner,closed,expected",
[
(
- np.array([0, 3, 6, 9], dtype=np.int64),
+ [0, 3, 6, 9],
"left",
- np.array([2, 5, 6], dtype=np.int64),
+ [2, 5, 6],
),
(
- np.array([0, 3, 6, 9], dtype=np.int64),
+ [0, 3, 6, 9],
"right",
- np.array([3, 6, 6], dtype=np.int64),
+ [3, 6, 6],
),
- (np.array([0, 3, 6], dtype=np.int64), "left", np.array([2, 5], dtype=np.int64)),
+ ([0, 3, 6], "left", [2, 5]),
(
- np.array([0, 3, 6], dtype=np.int64),
+ [0, 3, 6],
"right",
- np.array([3, 6], dtype=np.int64),
+ [3, 6],
),
],
)
def test_generate_bins(binner, closed, expected):
values = np.array([1, 2, 3, 4, 5, 6], dtype=np.int64)
- result = lib.generate_bins_dt64(values, binner, closed=closed)
+ result = lib.generate_bins_dt64(
+ values, np.array(binner, dtype=np.int64), closed=closed
+ )
+ expected = np.array(expected, dtype=np.int64)
tm.assert_numpy_array_equal(result, expected)
diff --git a/pandas/tests/groupby/test_groupby.py b/pandas/tests/groupby/test_groupby.py
index 038f59f8ea80f..14c5c21d41772 100644
--- a/pandas/tests/groupby/test_groupby.py
+++ b/pandas/tests/groupby/test_groupby.py
@@ -350,12 +350,9 @@ def test_basic_regression():
grouped.mean()
-@pytest.mark.parametrize(
- "dtype", ["float64", "float32", "int64", "int32", "int16", "int8"]
-)
-def test_with_na_groups(dtype):
+def test_with_na_groups(any_real_numpy_dtype):
index = Index(np.arange(10))
- values = Series(np.ones(10), index, dtype=dtype)
+ values = Series(np.ones(10), index, dtype=any_real_numpy_dtype)
labels = Series(
[np.nan, "foo", "bar", "bar", np.nan, np.nan, "bar", "bar", np.nan, "foo"],
index=index,
| Moves similar parametrization setup to test bodies | https://api.github.com/repos/pandas-dev/pandas/pulls/56737 | 2024-01-04T22:52:46Z | 2024-01-06T15:11:31Z | 2024-01-06T15:11:31Z | 2024-01-08T00:06:11Z |
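The diffs in the row above replace per-test `@pytest.mark.parametrize` dtype lists with pandas' shared fixtures such as `any_real_numpy_dtype`. A minimal, hypothetical sketch of how such a parametrized fixture works (the names and dtype list here are illustrative, not pandas' actual conftest):

```python
import numpy as np
import pytest

# Illustrative dtype list; pandas' real any_real_numpy_dtype fixture covers
# every signed/unsigned integer and float NumPy dtype.
REAL_DTYPES = ["int8", "int16", "int32", "int64", "float32", "float64"]


@pytest.fixture(params=REAL_DTYPES)
def any_real_dtype(request):
    # A test requesting this fixture runs once per dtype, replacing a
    # per-test @pytest.mark.parametrize("dtype", [...]) decorator.
    return request.param


def test_ones_keep_dtype(any_real_dtype):
    values = np.ones(3, dtype=any_real_dtype)
    assert values.dtype == np.dtype(any_real_dtype)
```

Moving the dtype list into one shared fixture is exactly the cleanup the PR performs: each test body keeps only the logic, and the dtype matrix lives in a single place.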
fix: add pytest-qt deps to dockerfile | diff --git a/Dockerfile b/Dockerfile
index 7230dcab20f6e..c697f0c1c66c7 100644
--- a/Dockerfile
+++ b/Dockerfile
@@ -5,7 +5,8 @@ RUN apt-get update && apt-get -y upgrade
RUN apt-get install -y build-essential
# hdf5 needed for pytables installation
-RUN apt-get install -y libhdf5-dev
+# libgles2-mesa needed for pytest-qt
+RUN apt-get install -y libhdf5-dev libgles2-mesa-dev
RUN python -m pip install --upgrade pip
RUN python -m pip install \
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/56731 | 2024-01-04T04:21:10Z | 2024-01-04T16:13:43Z | 2024-01-04T16:13:43Z | 2024-01-04T16:13:50Z |
Backport PR #56543 on branch 2.2.x (DOC: Update docstring for read_excel) | diff --git a/doc/source/user_guide/io.rst b/doc/source/user_guide/io.rst
index 6148086452d54..b3ad23e0d4104 100644
--- a/doc/source/user_guide/io.rst
+++ b/doc/source/user_guide/io.rst
@@ -3471,20 +3471,15 @@ saving a ``DataFrame`` to Excel. Generally the semantics are
similar to working with :ref:`csv<io.read_csv_table>` data.
See the :ref:`cookbook<cookbook.excel>` for some advanced strategies.
-.. warning::
-
- The `xlrd <https://xlrd.readthedocs.io/en/latest/>`__ package is now only for reading
- old-style ``.xls`` files.
+.. note::
- Before pandas 1.3.0, the default argument ``engine=None`` to :func:`~pandas.read_excel`
- would result in using the ``xlrd`` engine in many cases, including new
- Excel 2007+ (``.xlsx``) files. pandas will now default to using the
- `openpyxl <https://openpyxl.readthedocs.io/en/stable/>`__ engine.
+ When ``engine=None``, the following logic will be used to determine the engine:
- It is strongly encouraged to install ``openpyxl`` to read Excel 2007+
- (``.xlsx``) files.
- **Please do not report issues when using ``xlrd`` to read ``.xlsx`` files.**
- This is no longer supported, switch to using ``openpyxl`` instead.
+ - If ``path_or_buffer`` is an OpenDocument format (.odf, .ods, .odt),
+ then `odf <https://pypi.org/project/odfpy/>`_ will be used.
+ - Otherwise if ``path_or_buffer`` is an xls format, ``xlrd`` will be used.
+ - Otherwise if ``path_or_buffer`` is in xlsb format, ``pyxlsb`` will be used.
+ - Otherwise ``openpyxl`` will be used.
.. _io.excel_reader:
diff --git a/pandas/io/excel/_base.py b/pandas/io/excel/_base.py
index bce890c6f73b0..786f719337b84 100644
--- a/pandas/io/excel/_base.py
+++ b/pandas/io/excel/_base.py
@@ -160,36 +160,24 @@
If converters are specified, they will be applied INSTEAD
of dtype conversion.
If you use ``None``, it will infer the dtype of each column based on the data.
-engine : str, default None
+engine : {{'openpyxl', 'calamine', 'odf', 'pyxlsb', 'xlrd'}}, default None
If io is not a buffer or path, this must be set to identify io.
- Supported engines: "xlrd", "openpyxl", "odf", "pyxlsb", "calamine".
Engine compatibility :
- - ``xlr`` supports old-style Excel files (.xls).
- ``openpyxl`` supports newer Excel file formats.
- - ``odf`` supports OpenDocument file formats (.odf, .ods, .odt).
- - ``pyxlsb`` supports Binary Excel files.
- ``calamine`` supports Excel (.xls, .xlsx, .xlsm, .xlsb)
and OpenDocument (.ods) file formats.
+ - ``odf`` supports OpenDocument file formats (.odf, .ods, .odt).
+ - ``pyxlsb`` supports Binary Excel files.
+ - ``xlrd`` supports old-style Excel files (.xls).
- .. versionchanged:: 1.2.0
- The engine `xlrd <https://xlrd.readthedocs.io/en/latest/>`_
- now only supports old-style ``.xls`` files.
- When ``engine=None``, the following logic will be
- used to determine the engine:
-
- - If ``path_or_buffer`` is an OpenDocument format (.odf, .ods, .odt),
- then `odf <https://pypi.org/project/odfpy/>`_ will be used.
- - Otherwise if ``path_or_buffer`` is an xls format,
- ``xlrd`` will be used.
- - Otherwise if ``path_or_buffer`` is in xlsb format,
- ``pyxlsb`` will be used.
-
- .. versionadded:: 1.3.0
- - Otherwise ``openpyxl`` will be used.
-
- .. versionchanged:: 1.3.0
+ When ``engine=None``, the following logic will be used to determine the engine:
+ - If ``path_or_buffer`` is an OpenDocument format (.odf, .ods, .odt),
+ then `odf <https://pypi.org/project/odfpy/>`_ will be used.
+ - Otherwise if ``path_or_buffer`` is an xls format, ``xlrd`` will be used.
+ - Otherwise if ``path_or_buffer`` is in xlsb format, ``pyxlsb`` will be used.
+ - Otherwise ``openpyxl`` will be used.
converters : dict, default None
Dict of functions for converting values in certain columns. Keys can
either be integers or column labels, values are functions that take one
| Backport PR #56543: DOC: Update docstring for read_excel | https://api.github.com/repos/pandas-dev/pandas/pulls/56730 | 2024-01-04T03:30:29Z | 2024-01-04T08:21:26Z | 2024-01-04T08:21:26Z | 2024-01-04T08:21:27Z |
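The engine-selection rules documented in the row above can be sketched as a small dispatch on file extension (a hypothetical helper for illustration, not pandas' internal code):

```python
from pathlib import Path


def pick_engine(path: str) -> str:
    # Mirrors the documented default logic of read_excel(engine=None):
    # OpenDocument -> odf, legacy .xls -> xlrd, binary .xlsb -> pyxlsb,
    # everything else -> openpyxl.
    ext = Path(path).suffix.lower()
    if ext in {".odf", ".ods", ".odt"}:
        return "odf"
    if ext == ".xls":
        return "xlrd"
    if ext == ".xlsb":
        return "pyxlsb"
    return "openpyxl"
```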
TST/CLN: Reuse more fixtures | diff --git a/pandas/tests/base/test_misc.py b/pandas/tests/base/test_misc.py
index 65e234e799353..f6a4396ca5be0 100644
--- a/pandas/tests/base/test_misc.py
+++ b/pandas/tests/base/test_misc.py
@@ -17,7 +17,6 @@
Index,
Series,
)
-import pandas._testing as tm
def test_isnull_notnull_docstrings():
@@ -130,9 +129,13 @@ def test_memory_usage_components_series(series_with_simple_index):
assert total_usage == non_index_usage + index_usage
-@pytest.mark.parametrize("dtype", tm.NARROW_NP_DTYPES)
-def test_memory_usage_components_narrow_series(dtype):
- series = Series(range(5), dtype=dtype, index=[f"i-{i}" for i in range(5)], name="a")
+def test_memory_usage_components_narrow_series(any_real_numpy_dtype):
+ series = Series(
+ range(5),
+ dtype=any_real_numpy_dtype,
+ index=[f"i-{i}" for i in range(5)],
+ name="a",
+ )
total_usage = series.memory_usage(index=True)
non_index_usage = series.memory_usage(index=False)
index_usage = series.index.memory_usage()
diff --git a/pandas/tests/indexing/test_iloc.py b/pandas/tests/indexing/test_iloc.py
index f36ddff223a9a..f9c6939654ea1 100644
--- a/pandas/tests/indexing/test_iloc.py
+++ b/pandas/tests/indexing/test_iloc.py
@@ -75,8 +75,7 @@ class TestiLocBaseIndependent:
np.asarray([0, 1, 2]),
],
)
- @pytest.mark.parametrize("indexer", [tm.loc, tm.iloc])
- def test_iloc_setitem_fullcol_categorical(self, indexer, key):
+ def test_iloc_setitem_fullcol_categorical(self, indexer_li, key):
frame = DataFrame({0: range(3)}, dtype=object)
cat = Categorical(["alpha", "beta", "gamma"])
@@ -86,7 +85,7 @@ def test_iloc_setitem_fullcol_categorical(self, indexer, key):
df = frame.copy()
orig_vals = df.values
- indexer(df)[key, 0] = cat
+ indexer_li(df)[key, 0] = cat
expected = DataFrame({0: cat}).astype(object)
assert np.shares_memory(df[0].values, orig_vals)
@@ -102,7 +101,7 @@ def test_iloc_setitem_fullcol_categorical(self, indexer, key):
# we retain the object dtype.
frame = DataFrame({0: np.array([0, 1, 2], dtype=object), 1: range(3)})
df = frame.copy()
- indexer(df)[key, 0] = cat
+ indexer_li(df)[key, 0] = cat
expected = DataFrame({0: Series(cat.astype(object), dtype=object), 1: range(3)})
tm.assert_frame_equal(df, expected)
@@ -985,8 +984,7 @@ def test_iloc_setitem_empty_frame_raises_with_3d_ndarray(self):
with pytest.raises(ValueError, match=msg):
obj.iloc[nd3] = 0
- @pytest.mark.parametrize("indexer", [tm.loc, tm.iloc])
- def test_iloc_getitem_read_only_values(self, indexer):
+ def test_iloc_getitem_read_only_values(self, indexer_li):
# GH#10043 this is fundamentally a test for iloc, but test loc while
# we're here
rw_array = np.eye(10)
@@ -996,10 +994,12 @@ def test_iloc_getitem_read_only_values(self, indexer):
ro_array.setflags(write=False)
ro_df = DataFrame(ro_array)
- tm.assert_frame_equal(indexer(rw_df)[[1, 2, 3]], indexer(ro_df)[[1, 2, 3]])
- tm.assert_frame_equal(indexer(rw_df)[[1]], indexer(ro_df)[[1]])
- tm.assert_series_equal(indexer(rw_df)[1], indexer(ro_df)[1])
- tm.assert_frame_equal(indexer(rw_df)[1:3], indexer(ro_df)[1:3])
+ tm.assert_frame_equal(
+ indexer_li(rw_df)[[1, 2, 3]], indexer_li(ro_df)[[1, 2, 3]]
+ )
+ tm.assert_frame_equal(indexer_li(rw_df)[[1]], indexer_li(ro_df)[[1]])
+ tm.assert_series_equal(indexer_li(rw_df)[1], indexer_li(ro_df)[1])
+ tm.assert_frame_equal(indexer_li(rw_df)[1:3], indexer_li(ro_df)[1:3])
def test_iloc_getitem_readonly_key(self):
# GH#17192 iloc with read-only array raising TypeError
diff --git a/pandas/tests/io/parser/dtypes/test_dtypes_basic.py b/pandas/tests/io/parser/dtypes/test_dtypes_basic.py
index ce02e752fb90b..70fd0b02cc79d 100644
--- a/pandas/tests/io/parser/dtypes/test_dtypes_basic.py
+++ b/pandas/tests/io/parser/dtypes/test_dtypes_basic.py
@@ -132,15 +132,12 @@ def test_dtype_with_converters(all_parsers):
tm.assert_frame_equal(result, expected)
-@pytest.mark.parametrize(
- "dtype", list(np.typecodes["AllInteger"] + np.typecodes["Float"])
-)
-def test_numeric_dtype(all_parsers, dtype):
+def test_numeric_dtype(all_parsers, any_real_numpy_dtype):
data = "0\n1"
parser = all_parsers
- expected = DataFrame([0, 1], dtype=dtype)
+ expected = DataFrame([0, 1], dtype=any_real_numpy_dtype)
- result = parser.read_csv(StringIO(data), header=None, dtype=dtype)
+ result = parser.read_csv(StringIO(data), header=None, dtype=any_real_numpy_dtype)
tm.assert_frame_equal(expected, result)
diff --git a/pandas/tests/reductions/test_reductions.py b/pandas/tests/reductions/test_reductions.py
index fcb5b65e59402..4cdd50d70d078 100644
--- a/pandas/tests/reductions/test_reductions.py
+++ b/pandas/tests/reductions/test_reductions.py
@@ -1417,13 +1417,10 @@ def test_mode_empty(self, dropna, expected):
(False, [1, 1, 1, 2, 3, 3, 3], [1, 3]),
],
)
- @pytest.mark.parametrize(
- "dt", list(np.typecodes["AllInteger"] + np.typecodes["Float"])
- )
- def test_mode_numerical(self, dropna, data, expected, dt):
- s = Series(data, dtype=dt)
+ def test_mode_numerical(self, dropna, data, expected, any_real_numpy_dtype):
+ s = Series(data, dtype=any_real_numpy_dtype)
result = s.mode(dropna)
- expected = Series(expected, dtype=dt)
+ expected = Series(expected, dtype=any_real_numpy_dtype)
tm.assert_series_equal(result, expected)
@pytest.mark.parametrize("dropna, expected", [(True, [1.0]), (False, [1, np.nan])])
diff --git a/pandas/tests/reshape/merge/test_merge.py b/pandas/tests/reshape/merge/test_merge.py
index 21a38c43f4294..3b37ffa7baa82 100644
--- a/pandas/tests/reshape/merge/test_merge.py
+++ b/pandas/tests/reshape/merge/test_merge.py
@@ -1488,12 +1488,9 @@ def test_different(self, right_vals):
result = merge(left, right, on="A")
assert is_object_dtype(result.A.dtype)
- @pytest.mark.parametrize(
- "d1", [np.int64, np.int32, np.intc, np.int16, np.int8, np.uint8]
- )
@pytest.mark.parametrize("d2", [np.int64, np.float64, np.float32, np.float16])
- def test_join_multi_dtypes(self, d1, d2):
- dtype1 = np.dtype(d1)
+ def test_join_multi_dtypes(self, any_int_numpy_dtype, d2):
+ dtype1 = np.dtype(any_int_numpy_dtype)
dtype2 = np.dtype(d2)
left = DataFrame(
diff --git a/pandas/tests/scalar/timestamp/test_formats.py b/pandas/tests/scalar/timestamp/test_formats.py
index d7160597ea6d6..6a578b0a9eb09 100644
--- a/pandas/tests/scalar/timestamp/test_formats.py
+++ b/pandas/tests/scalar/timestamp/test_formats.py
@@ -88,9 +88,9 @@ def test_isoformat(ts, timespec, expected_iso):
class TestTimestampRendering:
- timezones = ["UTC", "Asia/Tokyo", "US/Eastern", "dateutil/US/Pacific"]
-
- @pytest.mark.parametrize("tz", timezones)
+ @pytest.mark.parametrize(
+ "tz", ["UTC", "Asia/Tokyo", "US/Eastern", "dateutil/US/Pacific"]
+ )
@pytest.mark.parametrize("freq", ["D", "M", "S", "N"])
@pytest.mark.parametrize(
"date", ["2014-03-07", "2014-01-01 09:00", "2014-01-01 00:00:00.000000001"]
diff --git a/pandas/tests/series/methods/test_astype.py b/pandas/tests/series/methods/test_astype.py
index 46f55fff91e41..4d2cd2ba963fd 100644
--- a/pandas/tests/series/methods/test_astype.py
+++ b/pandas/tests/series/methods/test_astype.py
@@ -342,10 +342,9 @@ def test_astype_ignores_errors_for_extension_dtypes(self, data, dtype, errors):
with pytest.raises((ValueError, TypeError), match=msg):
ser.astype(float, errors=errors)
- @pytest.mark.parametrize("dtype", [np.float16, np.float32, np.float64])
- def test_astype_from_float_to_str(self, dtype):
+ def test_astype_from_float_to_str(self, any_float_dtype):
# https://github.com/pandas-dev/pandas/issues/36451
- ser = Series([0.1], dtype=dtype)
+ ser = Series([0.1], dtype=any_float_dtype)
result = ser.astype(str)
expected = Series(["0.1"], dtype=object)
tm.assert_series_equal(result, expected)
@@ -374,21 +373,19 @@ def test_astype(self, dtype):
assert as_typed.name == ser.name
@pytest.mark.parametrize("value", [np.nan, np.inf])
- @pytest.mark.parametrize("dtype", [np.int32, np.int64])
- def test_astype_cast_nan_inf_int(self, dtype, value):
+ def test_astype_cast_nan_inf_int(self, any_int_numpy_dtype, value):
# gh-14265: check NaN and inf raise error when converting to int
msg = "Cannot convert non-finite values \\(NA or inf\\) to integer"
ser = Series([value])
with pytest.raises(ValueError, match=msg):
- ser.astype(dtype)
+ ser.astype(any_int_numpy_dtype)
- @pytest.mark.parametrize("dtype", [int, np.int8, np.int64])
- def test_astype_cast_object_int_fail(self, dtype):
+ def test_astype_cast_object_int_fail(self, any_int_numpy_dtype):
arr = Series(["car", "house", "tree", "1"])
msg = r"invalid literal for int\(\) with base 10: 'car'"
with pytest.raises(ValueError, match=msg):
- arr.astype(dtype)
+ arr.astype(any_int_numpy_dtype)
def test_astype_float_to_uint_negatives_raise(
self, float_numpy_dtype, any_unsigned_int_numpy_dtype
diff --git a/pandas/tests/series/methods/test_compare.py b/pandas/tests/series/methods/test_compare.py
index fe2016a245ec7..304045e46702b 100644
--- a/pandas/tests/series/methods/test_compare.py
+++ b/pandas/tests/series/methods/test_compare.py
@@ -5,15 +5,14 @@
import pandas._testing as tm
-@pytest.mark.parametrize("align_axis", [0, 1, "index", "columns"])
-def test_compare_axis(align_axis):
+def test_compare_axis(axis):
# GH#30429
s1 = pd.Series(["a", "b", "c"])
s2 = pd.Series(["x", "b", "z"])
- result = s1.compare(s2, align_axis=align_axis)
+ result = s1.compare(s2, align_axis=axis)
- if align_axis in (1, "columns"):
+ if axis in (1, "columns"):
indices = pd.Index([0, 2])
columns = pd.Index(["self", "other"])
expected = pd.DataFrame(
diff --git a/pandas/tests/series/methods/test_cov_corr.py b/pandas/tests/series/methods/test_cov_corr.py
index a369145b4e884..bd60265582652 100644
--- a/pandas/tests/series/methods/test_cov_corr.py
+++ b/pandas/tests/series/methods/test_cov_corr.py
@@ -56,11 +56,10 @@ def test_cov_ddof(self, test_ddof, dtype):
class TestSeriesCorr:
- @pytest.mark.parametrize("dtype", ["float64", "Float64"])
- def test_corr(self, datetime_series, dtype):
+ def test_corr(self, datetime_series, any_float_dtype):
stats = pytest.importorskip("scipy.stats")
- datetime_series = datetime_series.astype(dtype)
+ datetime_series = datetime_series.astype(any_float_dtype)
# full overlap
tm.assert_almost_equal(datetime_series.corr(datetime_series), 1)
diff --git a/pandas/tests/series/methods/test_fillna.py b/pandas/tests/series/methods/test_fillna.py
index 293259661cd9a..f38e4a622cffa 100644
--- a/pandas/tests/series/methods/test_fillna.py
+++ b/pandas/tests/series/methods/test_fillna.py
@@ -838,12 +838,11 @@ def test_fillna_categorical_raises(self):
ser.fillna(DataFrame({1: ["a"], 3: ["b"]}))
@pytest.mark.parametrize("dtype", [float, "float32", "float64"])
- @pytest.mark.parametrize("fill_type", tm.ALL_REAL_NUMPY_DTYPES)
@pytest.mark.parametrize("scalar", [True, False])
- def test_fillna_float_casting(self, dtype, fill_type, scalar):
+ def test_fillna_float_casting(self, dtype, any_real_numpy_dtype, scalar):
# GH-43424
ser = Series([np.nan, 1.2], dtype=dtype)
- fill_values = Series([2, 2], dtype=fill_type)
+ fill_values = Series([2, 2], dtype=any_real_numpy_dtype)
if scalar:
fill_values = fill_values.dtype.type(2)
diff --git a/pandas/tests/test_algos.py b/pandas/tests/test_algos.py
index 02cd7b77c9b7d..de0338b39d91a 100644
--- a/pandas/tests/test_algos.py
+++ b/pandas/tests/test_algos.py
@@ -1795,11 +1795,10 @@ def test_scipy_compat(self, arr):
exp[mask] = np.nan
tm.assert_almost_equal(result, exp)
- @pytest.mark.parametrize("dtype", np.typecodes["AllInteger"])
- def test_basic(self, writable, dtype):
+ def test_basic(self, writable, any_int_numpy_dtype):
exp = np.array([1, 2], dtype=np.float64)
- data = np.array([1, 100], dtype=dtype)
+ data = np.array([1, 100], dtype=any_int_numpy_dtype)
data.setflags(write=writable)
ser = Series(data)
result = algos.rank(ser)
@@ -1836,8 +1835,7 @@ def test_no_mode(self):
exp = Series([], dtype=np.float64, index=Index([], dtype=int))
tm.assert_numpy_array_equal(algos.mode(np.array([])), exp.values)
- @pytest.mark.parametrize("dt", np.typecodes["AllInteger"] + np.typecodes["Float"])
- def test_mode_single(self, dt):
+ def test_mode_single(self, any_real_numpy_dtype):
# GH 15714
exp_single = [1]
data_single = [1]
@@ -1845,13 +1843,13 @@ def test_mode_single(self, dt):
exp_multi = [1]
data_multi = [1, 1]
- ser = Series(data_single, dtype=dt)
- exp = Series(exp_single, dtype=dt)
+ ser = Series(data_single, dtype=any_real_numpy_dtype)
+ exp = Series(exp_single, dtype=any_real_numpy_dtype)
tm.assert_numpy_array_equal(algos.mode(ser.values), exp.values)
tm.assert_series_equal(ser.mode(), exp)
- ser = Series(data_multi, dtype=dt)
- exp = Series(exp_multi, dtype=dt)
+ ser = Series(data_multi, dtype=any_real_numpy_dtype)
+ exp = Series(exp_multi, dtype=any_real_numpy_dtype)
tm.assert_numpy_array_equal(algos.mode(ser.values), exp.values)
tm.assert_series_equal(ser.mode(), exp)
@@ -1862,21 +1860,20 @@ def test_mode_obj_int(self):
exp = Series(["a", "b", "c"], dtype=object)
tm.assert_numpy_array_equal(algos.mode(exp.values), exp.values)
- @pytest.mark.parametrize("dt", np.typecodes["AllInteger"] + np.typecodes["Float"])
- def test_number_mode(self, dt):
+ def test_number_mode(self, any_real_numpy_dtype):
exp_single = [1]
data_single = [1] * 5 + [2] * 3
exp_multi = [1, 3]
data_multi = [1] * 5 + [2] * 3 + [3] * 5
- ser = Series(data_single, dtype=dt)
- exp = Series(exp_single, dtype=dt)
+ ser = Series(data_single, dtype=any_real_numpy_dtype)
+ exp = Series(exp_single, dtype=any_real_numpy_dtype)
tm.assert_numpy_array_equal(algos.mode(ser.values), exp.values)
tm.assert_series_equal(ser.mode(), exp)
- ser = Series(data_multi, dtype=dt)
- exp = Series(exp_multi, dtype=dt)
+ ser = Series(data_multi, dtype=any_real_numpy_dtype)
+ exp = Series(exp_multi, dtype=any_real_numpy_dtype)
tm.assert_numpy_array_equal(algos.mode(ser.values), exp.values)
tm.assert_series_equal(ser.mode(), exp)
diff --git a/pandas/tests/util/test_assert_extension_array_equal.py b/pandas/tests/util/test_assert_extension_array_equal.py
index 674e9307d8bb9..5d82ae9af0e95 100644
--- a/pandas/tests/util/test_assert_extension_array_equal.py
+++ b/pandas/tests/util/test_assert_extension_array_equal.py
@@ -108,11 +108,10 @@ def test_assert_extension_array_equal_non_extension_array(side):
tm.assert_extension_array_equal(*args)
-@pytest.mark.parametrize("right_dtype", ["Int32", "int64"])
-def test_assert_extension_array_equal_ignore_dtype_mismatch(right_dtype):
+def test_assert_extension_array_equal_ignore_dtype_mismatch(any_int_dtype):
# https://github.com/pandas-dev/pandas/issues/35715
left = array([1, 2, 3], dtype="Int64")
- right = array([1, 2, 3], dtype=right_dtype)
+ right = array([1, 2, 3], dtype=any_int_dtype)
tm.assert_extension_array_equal(left, right, check_dtype=False)
diff --git a/pandas/tests/util/test_assert_series_equal.py b/pandas/tests/util/test_assert_series_equal.py
index c4ffc197298f0..70efa4293c46d 100644
--- a/pandas/tests/util/test_assert_series_equal.py
+++ b/pandas/tests/util/test_assert_series_equal.py
@@ -106,12 +106,11 @@ def test_series_not_equal_metadata_mismatch(kwargs):
@pytest.mark.parametrize("data1,data2", [(0.12345, 0.12346), (0.1235, 0.1236)])
-@pytest.mark.parametrize("dtype", ["float32", "float64", "Float32"])
@pytest.mark.parametrize("decimals", [0, 1, 2, 3, 5, 10])
-def test_less_precise(data1, data2, dtype, decimals):
+def test_less_precise(data1, data2, any_float_dtype, decimals):
rtol = 10**-decimals
- s1 = Series([data1], dtype=dtype)
- s2 = Series([data2], dtype=dtype)
+ s1 = Series([data1], dtype=any_float_dtype)
+ s2 = Series([data2], dtype=any_float_dtype)
if decimals in (5, 10) or (decimals >= 3 and abs(data1 - data2) >= 0.0005):
msg = "Series values are different"
diff --git a/pandas/tests/window/test_rolling_functions.py b/pandas/tests/window/test_rolling_functions.py
index 5906ff52db098..f77a98ae9a7d9 100644
--- a/pandas/tests/window/test_rolling_functions.py
+++ b/pandas/tests/window/test_rolling_functions.py
@@ -471,20 +471,19 @@ def test_rolling_median_memory_error():
).median()
-@pytest.mark.parametrize(
- "data_type",
- [np.dtype(f"f{width}") for width in [4, 8]]
- + [np.dtype(f"{sign}{width}") for width in [1, 2, 4, 8] for sign in "ui"],
-)
-def test_rolling_min_max_numeric_types(data_type):
+def test_rolling_min_max_numeric_types(any_real_numpy_dtype):
# GH12373
# Just testing that these don't throw exceptions and that
# the return type is float64. Other tests will cover quantitative
# correctness
- result = DataFrame(np.arange(20, dtype=data_type)).rolling(window=5).max()
+ result = (
+ DataFrame(np.arange(20, dtype=any_real_numpy_dtype)).rolling(window=5).max()
+ )
assert result.dtypes[0] == np.dtype("f8")
- result = DataFrame(np.arange(20, dtype=data_type)).rolling(window=5).min()
+ result = (
+ DataFrame(np.arange(20, dtype=any_real_numpy_dtype)).rolling(window=5).min()
+ )
assert result.dtypes[0] == np.dtype("f8")
| null | https://api.github.com/repos/pandas-dev/pandas/pulls/56726 | 2024-01-04T00:37:34Z | 2024-01-04T19:52:12Z | 2024-01-04T19:52:12Z | 2024-01-04T22:52:55Z |
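Several tests in the row above swap `@pytest.mark.parametrize("indexer", [tm.loc, tm.iloc])` for the shared `indexer_li` fixture, whose values are callables that return an object's `.loc` or `.iloc` accessor. A standalone sketch of that pattern (the `Frame` class here is a stand-in, not pandas):

```python
import operator


class Frame:
    """Stand-in object exposing loc/iloc-style accessors."""

    def __init__(self, data):
        self.loc = {"kind": "label", "data": data}
        self.iloc = {"kind": "position", "data": data}


# Each fixture param is a callable that, given an object, returns one indexer,
# so a single test body exercises both label- and position-based access.
INDEXERS = [operator.attrgetter("loc"), operator.attrgetter("iloc")]

frame = Frame([1, 2, 3])
kinds = [indexer(frame)["kind"] for indexer in INDEXERS]
```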
Backport PR #56721 on branch 2.2.x (DOC: Fixup read_csv docstring) | diff --git a/pandas/io/parsers/readers.py b/pandas/io/parsers/readers.py
index a9b41b45aba2f..e26e7e7470461 100644
--- a/pandas/io/parsers/readers.py
+++ b/pandas/io/parsers/readers.py
@@ -396,7 +396,7 @@
- Callable, function with signature
as described in `pyarrow documentation
<https://arrow.apache.org/docs/python/generated/pyarrow.csv.ParseOptions.html
- #pyarrow.csv.ParseOptions.invalid_row_handler>_` when ``engine='pyarrow'``
+ #pyarrow.csv.ParseOptions.invalid_row_handler>`_ when ``engine='pyarrow'``
delim_whitespace : bool, default False
Specifies whether or not whitespace (e.g. ``' '`` or ``'\\t'``) will be
| Backport PR #56721: DOC: Fixup read_csv docstring | https://api.github.com/repos/pandas-dev/pandas/pulls/56725 | 2024-01-03T23:31:50Z | 2024-01-03T23:33:36Z | 2024-01-03T23:33:36Z | 2024-01-03T23:33:37Z |
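The one-character docstring fix in the row above corrects reST (not Markdown) external-link syntax: the trailing underscore belongs after the closing backtick, as in `` `text <url>`_ ``, not inside it. A trivial check of the two forms:

```python
# Hypothetical URL used purely for illustration.
broken = "`pyarrow documentation <https://example.invalid>_`"
fixed = "`pyarrow documentation <https://example.invalid>`_"

# A well-formed reST external reference ends with the closing backtick
# followed by a single underscore.
is_valid_ref = fixed.endswith(">`_")
```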
TST: Don't ignore tolerance for integer series | diff --git a/pandas/_testing/asserters.py b/pandas/_testing/asserters.py
index d0f38c85868d4..3de982498e996 100644
--- a/pandas/_testing/asserters.py
+++ b/pandas/_testing/asserters.py
@@ -10,6 +10,7 @@
import numpy as np
+from pandas._libs import lib
from pandas._libs.missing import is_matching_na
from pandas._libs.sparse import SparseIndex
import pandas._libs.testing as _testing
@@ -698,9 +699,9 @@ def assert_extension_array_equal(
right,
check_dtype: bool | Literal["equiv"] = True,
index_values=None,
- check_exact: bool = False,
- rtol: float = 1.0e-5,
- atol: float = 1.0e-8,
+ check_exact: bool | lib.NoDefault = lib.no_default,
+ rtol: float | lib.NoDefault = lib.no_default,
+ atol: float | lib.NoDefault = lib.no_default,
obj: str = "ExtensionArray",
) -> None:
"""
@@ -715,7 +716,12 @@ def assert_extension_array_equal(
index_values : Index | numpy.ndarray, default None
Optional index (shared by both left and right), used in output.
check_exact : bool, default False
- Whether to compare number exactly. Only takes effect for float dtypes.
+ Whether to compare number exactly.
+
+ .. versionchanged:: 2.2.0
+
+ Defaults to True for integer dtypes if none of
+ ``check_exact``, ``rtol`` and ``atol`` are specified.
rtol : float, default 1e-5
Relative tolerance. Only used when check_exact is False.
atol : float, default 1e-8
@@ -739,6 +745,23 @@ def assert_extension_array_equal(
>>> b, c = a.array, a.array
>>> tm.assert_extension_array_equal(b, c)
"""
+ if (
+ check_exact is lib.no_default
+ and rtol is lib.no_default
+ and atol is lib.no_default
+ ):
+ check_exact = (
+ is_numeric_dtype(left.dtype)
+ and not is_float_dtype(left.dtype)
+ or is_numeric_dtype(right.dtype)
+ and not is_float_dtype(right.dtype)
+ )
+ elif check_exact is lib.no_default:
+ check_exact = False
+
+ rtol = rtol if rtol is not lib.no_default else 1.0e-5
+ atol = atol if atol is not lib.no_default else 1.0e-8
+
assert isinstance(left, ExtensionArray), "left is not an ExtensionArray"
assert isinstance(right, ExtensionArray), "right is not an ExtensionArray"
if check_dtype:
@@ -784,10 +807,7 @@ def assert_extension_array_equal(
left_valid = left[~left_na].to_numpy(dtype=object)
right_valid = right[~right_na].to_numpy(dtype=object)
- if check_exact or (
- (is_numeric_dtype(left.dtype) and not is_float_dtype(left.dtype))
- or (is_numeric_dtype(right.dtype) and not is_float_dtype(right.dtype))
- ):
+ if check_exact:
assert_numpy_array_equal(
left_valid, right_valid, obj=obj, index_values=index_values
)
@@ -811,14 +831,14 @@ def assert_series_equal(
check_index_type: bool | Literal["equiv"] = "equiv",
check_series_type: bool = True,
check_names: bool = True,
- check_exact: bool = False,
+ check_exact: bool | lib.NoDefault = lib.no_default,
check_datetimelike_compat: bool = False,
check_categorical: bool = True,
check_category_order: bool = True,
check_freq: bool = True,
check_flags: bool = True,
- rtol: float = 1.0e-5,
- atol: float = 1.0e-8,
+ rtol: float | lib.NoDefault = lib.no_default,
+ atol: float | lib.NoDefault = lib.no_default,
obj: str = "Series",
*,
check_index: bool = True,
@@ -841,7 +861,12 @@ def assert_series_equal(
check_names : bool, default True
Whether to check the Series and Index names attribute.
check_exact : bool, default False
- Whether to compare number exactly. Only takes effect for float dtypes.
+ Whether to compare number exactly.
+
+ .. versionchanged:: 2.2.0
+
+ Defaults to True for integer dtypes if none of
+ ``check_exact``, ``rtol`` and ``atol`` are specified.
check_datetimelike_compat : bool, default False
Compare datetime-like which is comparable ignoring dtype.
check_categorical : bool, default True
@@ -877,6 +902,22 @@ def assert_series_equal(
>>> tm.assert_series_equal(a, b)
"""
__tracebackhide__ = True
+ if (
+ check_exact is lib.no_default
+ and rtol is lib.no_default
+ and atol is lib.no_default
+ ):
+ check_exact = (
+ is_numeric_dtype(left.dtype)
+ and not is_float_dtype(left.dtype)
+ or is_numeric_dtype(right.dtype)
+ and not is_float_dtype(right.dtype)
+ )
+ elif check_exact is lib.no_default:
+ check_exact = False
+
+ rtol = rtol if rtol is not lib.no_default else 1.0e-5
+ atol = atol if atol is not lib.no_default else 1.0e-8
if not check_index and check_like:
raise ValueError("check_like must be False if check_index is False")
@@ -931,10 +972,7 @@ def assert_series_equal(
pass
else:
assert_attr_equal("dtype", left, right, obj=f"Attributes of {obj}")
- if check_exact or (
- (is_numeric_dtype(left.dtype) and not is_float_dtype(left.dtype))
- or (is_numeric_dtype(right.dtype) and not is_float_dtype(right.dtype))
- ):
+ if check_exact:
left_values = left._values
right_values = right._values
# Only check exact if dtype is numeric
@@ -1061,14 +1099,14 @@ def assert_frame_equal(
check_frame_type: bool = True,
check_names: bool = True,
by_blocks: bool = False,
- check_exact: bool = False,
+ check_exact: bool | lib.NoDefault = lib.no_default,
check_datetimelike_compat: bool = False,
check_categorical: bool = True,
check_like: bool = False,
check_freq: bool = True,
check_flags: bool = True,
- rtol: float = 1.0e-5,
- atol: float = 1.0e-8,
+ rtol: float | lib.NoDefault = lib.no_default,
+ atol: float | lib.NoDefault = lib.no_default,
obj: str = "DataFrame",
) -> None:
"""
@@ -1103,7 +1141,12 @@ def assert_frame_equal(
Specify how to compare internal data. If False, compare by columns.
If True, compare by blocks.
check_exact : bool, default False
- Whether to compare number exactly. Only takes effect for float dtypes.
+ Whether to compare number exactly.
+
+ .. versionchanged:: 2.2.0
+
+ Defaults to True for integer dtypes if none of
+ ``check_exact``, ``rtol`` and ``atol`` are specified.
check_datetimelike_compat : bool, default False
Compare datetime-like which is comparable ignoring dtype.
check_categorical : bool, default True
@@ -1158,6 +1201,9 @@ def assert_frame_equal(
>>> assert_frame_equal(df1, df2, check_dtype=False)
"""
__tracebackhide__ = True
+ _rtol = rtol if rtol is not lib.no_default else 1.0e-5
+ _atol = atol if atol is not lib.no_default else 1.0e-8
+ _check_exact = check_exact if check_exact is not lib.no_default else False
# instance validation
_check_isinstance(left, right, DataFrame)
@@ -1181,11 +1227,11 @@ def assert_frame_equal(
right.index,
exact=check_index_type,
check_names=check_names,
- check_exact=check_exact,
+ check_exact=_check_exact,
check_categorical=check_categorical,
check_order=not check_like,
- rtol=rtol,
- atol=atol,
+ rtol=_rtol,
+ atol=_atol,
obj=f"{obj}.index",
)
@@ -1195,11 +1241,11 @@ def assert_frame_equal(
right.columns,
exact=check_column_type,
check_names=check_names,
- check_exact=check_exact,
+ check_exact=_check_exact,
check_categorical=check_categorical,
check_order=not check_like,
- rtol=rtol,
- atol=atol,
+ rtol=_rtol,
+ atol=_atol,
obj=f"{obj}.columns",
)
diff --git a/pandas/tests/util/test_assert_series_equal.py b/pandas/tests/util/test_assert_series_equal.py
index c4ffc197298f0..784a0347cf92b 100644
--- a/pandas/tests/util/test_assert_series_equal.py
+++ b/pandas/tests/util/test_assert_series_equal.py
@@ -462,3 +462,15 @@ def test_ea_and_numpy_no_dtype_check(val, check_exact, dtype):
left = Series([1, 2, val], dtype=dtype)
right = Series(pd.array([1, 2, val]))
tm.assert_series_equal(left, right, check_dtype=False, check_exact=check_exact)
+
+
+def test_assert_series_equal_int_tol():
+ # GH#56646
+ left = Series([81, 18, 121, 38, 74, 72, 81, 81, 146, 81, 81, 170, 74, 74])
+ right = Series([72, 9, 72, 72, 72, 72, 72, 72, 72, 72, 72, 72, 72, 72])
+ tm.assert_series_equal(left, right, rtol=1.5)
+
+ tm.assert_frame_equal(left.to_frame(), right.to_frame(), rtol=1.5)
+ tm.assert_extension_array_equal(
+ left.astype("Int64").values, right.astype("Int64").values, rtol=1.5
+ )
- [ ] closes #56646
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
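
For reference, the tolerance rule that applies once integer comparisons fall back to the approximate path is the `numpy.isclose` one, `|a - b| <= atol + rtol * |b|`. The sketch below is a plain-Python reimplementation (not the actual pandas code path) that checks the values from `test_assert_series_equal_int_tol` against that rule:

```python
def is_close(a, b, rtol=1.0e-5, atol=1.0e-8):
    # Same closeness rule as numpy.isclose, with the default rtol/atol
    # values this PR falls back to: |a - b| <= atol + rtol * |b|
    return abs(a - b) <= atol + rtol * abs(b)

left = [81, 18, 121, 38, 74, 72, 81, 81, 146, 81, 81, 170, 74, 74]
right = [72, 9, 72, 72, 72, 72, 72, 72, 72, 72, 72, 72, 72, 72]

# Under the defaults the two integer series are not approximately equal,
# but with rtol=1.5 every element pair falls within tolerance, matching
# the expectation in the new test.
assert not all(is_close(a, b) for a, b in zip(left, right))
assert all(is_close(a, b, rtol=1.5) for a, b in zip(left, right))
```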
cc @lithomas1 | https://api.github.com/repos/pandas-dev/pandas/pulls/56724 | 2024-01-03T23:00:57Z | 2024-01-08T21:40:05Z | 2024-01-08T21:40:05Z | 2024-01-08T21:40:08Z |
Backport PR #56672 on branch 2.2.x (BUG: dictionary type astype categorical using dictionary as categories) | diff --git a/doc/source/whatsnew/v2.2.0.rst b/doc/source/whatsnew/v2.2.0.rst
index 043646457f604..7df6e7d0e3166 100644
--- a/doc/source/whatsnew/v2.2.0.rst
+++ b/doc/source/whatsnew/v2.2.0.rst
@@ -740,6 +740,7 @@ Categorical
^^^^^^^^^^^
- :meth:`Categorical.isin` raising ``InvalidIndexError`` for categorical containing overlapping :class:`Interval` values (:issue:`34974`)
- Bug in :meth:`CategoricalDtype.__eq__` returning ``False`` for unordered categorical data with mixed types (:issue:`55468`)
+- Bug when casting ``pa.dictionary`` to :class:`CategoricalDtype` using a ``pa.DictionaryArray`` as categories (:issue:`56672`)
Datetimelike
^^^^^^^^^^^^
diff --git a/pandas/core/arrays/categorical.py b/pandas/core/arrays/categorical.py
index 065a942cae768..b87c5375856dc 100644
--- a/pandas/core/arrays/categorical.py
+++ b/pandas/core/arrays/categorical.py
@@ -44,7 +44,9 @@
pandas_dtype,
)
from pandas.core.dtypes.dtypes import (
+ ArrowDtype,
CategoricalDtype,
+ CategoricalDtypeType,
ExtensionDtype,
)
from pandas.core.dtypes.generic import (
@@ -443,24 +445,32 @@ def __init__(
values = arr
if dtype.categories is None:
- if not isinstance(values, ABCIndex):
- # in particular RangeIndex xref test_index_equal_range_categories
- values = sanitize_array(values, None)
- try:
- codes, categories = factorize(values, sort=True)
- except TypeError as err:
- codes, categories = factorize(values, sort=False)
- if dtype.ordered:
- # raise, as we don't have a sortable data structure and so
- # the user should give us one by specifying categories
- raise TypeError(
- "'values' is not ordered, please "
- "explicitly specify the categories order "
- "by passing in a categories argument."
- ) from err
-
- # we're inferring from values
- dtype = CategoricalDtype(categories, dtype.ordered)
+ if isinstance(values.dtype, ArrowDtype) and issubclass(
+ values.dtype.type, CategoricalDtypeType
+ ):
+ arr = values._pa_array.combine_chunks()
+ categories = arr.dictionary.to_pandas(types_mapper=ArrowDtype)
+ codes = arr.indices.to_numpy()
+ dtype = CategoricalDtype(categories, values.dtype.pyarrow_dtype.ordered)
+ else:
+ if not isinstance(values, ABCIndex):
+ # in particular RangeIndex xref test_index_equal_range_categories
+ values = sanitize_array(values, None)
+ try:
+ codes, categories = factorize(values, sort=True)
+ except TypeError as err:
+ codes, categories = factorize(values, sort=False)
+ if dtype.ordered:
+ # raise, as we don't have a sortable data structure and so
+ # the user should give us one by specifying categories
+ raise TypeError(
+ "'values' is not ordered, please "
+ "explicitly specify the categories order "
+ "by passing in a categories argument."
+ ) from err
+
+ # we're inferring from values
+ dtype = CategoricalDtype(categories, dtype.ordered)
elif isinstance(values.dtype, CategoricalDtype):
old_codes = extract_array(values)._codes
diff --git a/pandas/tests/extension/test_arrow.py b/pandas/tests/extension/test_arrow.py
index e709e6fcfe456..6689fb34f2ae3 100644
--- a/pandas/tests/extension/test_arrow.py
+++ b/pandas/tests/extension/test_arrow.py
@@ -3234,6 +3234,22 @@ def test_factorize_chunked_dictionary():
tm.assert_index_equal(res_uniques, exp_uniques)
+def test_dictionary_astype_categorical():
+ # GH#56672
+ arrs = [
+ pa.array(np.array(["a", "x", "c", "a"])).dictionary_encode(),
+ pa.array(np.array(["a", "d", "c"])).dictionary_encode(),
+ ]
+ ser = pd.Series(ArrowExtensionArray(pa.chunked_array(arrs)))
+ result = ser.astype("category")
+ categories = pd.Index(["a", "x", "c", "d"], dtype=ArrowDtype(pa.string()))
+ expected = pd.Series(
+ ["a", "x", "c", "a", "a", "d", "c"],
+ dtype=pd.CategoricalDtype(categories=categories),
+ )
+ tm.assert_series_equal(result, expected)
+
+
def test_arrow_floordiv():
# GH 55561
a = pd.Series([-7], dtype="int64[pyarrow]")
| Backport PR #56672: BUG: dictionary type astype categorical using dictionary as categories | https://api.github.com/repos/pandas-dev/pandas/pulls/56723 | 2024-01-03T22:49:08Z | 2024-01-03T23:48:10Z | 2024-01-03T23:48:10Z | 2024-01-03T23:48:11Z |
DOC: Fixup read_csv docstring | diff --git a/pandas/io/parsers/readers.py b/pandas/io/parsers/readers.py
index a9b41b45aba2f..e26e7e7470461 100644
--- a/pandas/io/parsers/readers.py
+++ b/pandas/io/parsers/readers.py
@@ -396,7 +396,7 @@
- Callable, function with signature
as described in `pyarrow documentation
<https://arrow.apache.org/docs/python/generated/pyarrow.csv.ParseOptions.html
- #pyarrow.csv.ParseOptions.invalid_row_handler>_` when ``engine='pyarrow'``
+ #pyarrow.csv.ParseOptions.invalid_row_handler>`_ when ``engine='pyarrow'``
delim_whitespace : bool, default False
Specifies whether or not whitespace (e.g. ``' '`` or ``'\\t'``) will be
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/56721 | 2024-01-03T22:35:06Z | 2024-01-03T23:31:42Z | 2024-01-03T23:31:42Z | 2024-01-04T08:21:53Z |
Backport PR #56616 on branch 2.2.x (BUG: Add limit_area to EA ffill/bfill) | diff --git a/doc/source/whatsnew/v2.2.0.rst b/doc/source/whatsnew/v2.2.0.rst
index 043646457f604..75ba7c9f72c1b 100644
--- a/doc/source/whatsnew/v2.2.0.rst
+++ b/doc/source/whatsnew/v2.2.0.rst
@@ -321,7 +321,7 @@ Other enhancements
- :meth:`DataFrame.apply` now allows the usage of numba (via ``engine="numba"``) to JIT compile the passed function, allowing for potential speedups (:issue:`54666`)
- :meth:`ExtensionArray._explode` interface method added to allow extension type implementations of the ``explode`` method (:issue:`54833`)
- :meth:`ExtensionArray.duplicated` added to allow extension type implementations of the ``duplicated`` method (:issue:`55255`)
-- :meth:`Series.ffill`, :meth:`Series.bfill`, :meth:`DataFrame.ffill`, and :meth:`DataFrame.bfill` have gained the argument ``limit_area`` (:issue:`56492`)
+- :meth:`Series.ffill`, :meth:`Series.bfill`, :meth:`DataFrame.ffill`, and :meth:`DataFrame.bfill` have gained the argument ``limit_area``; 3rd party :class:`.ExtensionArray` authors need to add this argument to the method ``_pad_or_backfill`` (:issue:`56492`)
- Allow passing ``read_only``, ``data_only`` and ``keep_links`` arguments to openpyxl using ``engine_kwargs`` of :func:`read_excel` (:issue:`55027`)
- Implement masked algorithms for :meth:`Series.value_counts` (:issue:`54984`)
- Implemented :meth:`Series.dt` methods and attributes for :class:`ArrowDtype` with ``pyarrow.duration`` type (:issue:`52284`)
diff --git a/pandas/core/arrays/_mixins.py b/pandas/core/arrays/_mixins.py
index 9ece12cf51a7b..0da121c36644a 100644
--- a/pandas/core/arrays/_mixins.py
+++ b/pandas/core/arrays/_mixins.py
@@ -305,7 +305,12 @@ def _fill_mask_inplace(
func(self._ndarray.T, limit=limit, mask=mask.T)
def _pad_or_backfill(
- self, *, method: FillnaOptions, limit: int | None = None, copy: bool = True
+ self,
+ *,
+ method: FillnaOptions,
+ limit: int | None = None,
+ limit_area: Literal["inside", "outside"] | None = None,
+ copy: bool = True,
) -> Self:
mask = self.isna()
if mask.any():
@@ -315,7 +320,7 @@ def _pad_or_backfill(
npvalues = self._ndarray.T
if copy:
npvalues = npvalues.copy()
- func(npvalues, limit=limit, mask=mask.T)
+ func(npvalues, limit=limit, limit_area=limit_area, mask=mask.T)
npvalues = npvalues.T
if copy:
diff --git a/pandas/core/arrays/arrow/array.py b/pandas/core/arrays/arrow/array.py
index 5427cee55dfb1..0bc01d2da330a 100644
--- a/pandas/core/arrays/arrow/array.py
+++ b/pandas/core/arrays/arrow/array.py
@@ -1005,13 +1005,18 @@ def dropna(self) -> Self:
return type(self)(pc.drop_null(self._pa_array))
def _pad_or_backfill(
- self, *, method: FillnaOptions, limit: int | None = None, copy: bool = True
+ self,
+ *,
+ method: FillnaOptions,
+ limit: int | None = None,
+ limit_area: Literal["inside", "outside"] | None = None,
+ copy: bool = True,
) -> Self:
if not self._hasna:
# TODO(CoW): Not necessary anymore when CoW is the default
return self.copy()
- if limit is None:
+ if limit is None and limit_area is None:
method = missing.clean_fill_method(method)
try:
if method == "pad":
@@ -1027,7 +1032,9 @@ def _pad_or_backfill(
# TODO(3.0): after EA.fillna 'method' deprecation is enforced, we can remove
# this method entirely.
- return super()._pad_or_backfill(method=method, limit=limit, copy=copy)
+ return super()._pad_or_backfill(
+ method=method, limit=limit, limit_area=limit_area, copy=copy
+ )
@doc(ExtensionArray.fillna)
def fillna(
diff --git a/pandas/core/arrays/base.py b/pandas/core/arrays/base.py
index 59c6d911cfaef..ea0e2e54e3339 100644
--- a/pandas/core/arrays/base.py
+++ b/pandas/core/arrays/base.py
@@ -70,6 +70,7 @@
unique,
)
from pandas.core.array_algos.quantile import quantile_with_mask
+from pandas.core.missing import _fill_limit_area_1d
from pandas.core.sorting import (
nargminmax,
nargsort,
@@ -954,7 +955,12 @@ def interpolate(
)
def _pad_or_backfill(
- self, *, method: FillnaOptions, limit: int | None = None, copy: bool = True
+ self,
+ *,
+ method: FillnaOptions,
+ limit: int | None = None,
+ limit_area: Literal["inside", "outside"] | None = None,
+ copy: bool = True,
) -> Self:
"""
Pad or backfill values, used by Series/DataFrame ffill and bfill.
@@ -1012,6 +1018,12 @@ def _pad_or_backfill(
DeprecationWarning,
stacklevel=find_stack_level(),
)
+ if limit_area is not None:
+ raise NotImplementedError(
+ f"{type(self).__name__} does not implement limit_area "
+ "(added in pandas 2.2). 3rd-party ExtensionArray authors "
+ "need to add this argument to _pad_or_backfill."
+ )
return self.fillna(method=method, limit=limit)
mask = self.isna()
@@ -1021,6 +1033,8 @@ def _pad_or_backfill(
meth = missing.clean_fill_method(method)
npmask = np.asarray(mask)
+ if limit_area is not None and not npmask.all():
+ _fill_limit_area_1d(npmask, limit_area)
if meth == "pad":
indexer = libalgos.get_fill_indexer(npmask, limit=limit)
return self.take(indexer, allow_fill=True)
diff --git a/pandas/core/arrays/interval.py b/pandas/core/arrays/interval.py
index a19b304529383..904c87c68e211 100644
--- a/pandas/core/arrays/interval.py
+++ b/pandas/core/arrays/interval.py
@@ -890,11 +890,18 @@ def max(self, *, axis: AxisInt | None = None, skipna: bool = True) -> IntervalOr
return obj[indexer]
def _pad_or_backfill( # pylint: disable=useless-parent-delegation
- self, *, method: FillnaOptions, limit: int | None = None, copy: bool = True
+ self,
+ *,
+ method: FillnaOptions,
+ limit: int | None = None,
+ limit_area: Literal["inside", "outside"] | None = None,
+ copy: bool = True,
) -> Self:
# TODO(3.0): after EA.fillna 'method' deprecation is enforced, we can remove
# this method entirely.
- return super()._pad_or_backfill(method=method, limit=limit, copy=copy)
+ return super()._pad_or_backfill(
+ method=method, limit=limit, limit_area=limit_area, copy=copy
+ )
def fillna(
self, value=None, method=None, limit: int | None = None, copy: bool = True
diff --git a/pandas/core/arrays/masked.py b/pandas/core/arrays/masked.py
index 03c09c5b2fd18..fc092ef6eb463 100644
--- a/pandas/core/arrays/masked.py
+++ b/pandas/core/arrays/masked.py
@@ -192,7 +192,12 @@ def __getitem__(self, item: PositionalIndexer) -> Self | Any:
return self._simple_new(self._data[item], newmask)
def _pad_or_backfill(
- self, *, method: FillnaOptions, limit: int | None = None, copy: bool = True
+ self,
+ *,
+ method: FillnaOptions,
+ limit: int | None = None,
+ limit_area: Literal["inside", "outside"] | None = None,
+ copy: bool = True,
) -> Self:
mask = self._mask
@@ -204,7 +209,21 @@ def _pad_or_backfill(
if copy:
npvalues = npvalues.copy()
new_mask = new_mask.copy()
+ elif limit_area is not None:
+ mask = mask.copy()
func(npvalues, limit=limit, mask=new_mask)
+
+ if limit_area is not None and not mask.all():
+ mask = mask.T
+ neg_mask = ~mask
+ first = neg_mask.argmax()
+ last = len(neg_mask) - neg_mask[::-1].argmax() - 1
+ if limit_area == "inside":
+ new_mask[:first] |= mask[:first]
+ new_mask[last + 1 :] |= mask[last + 1 :]
+ elif limit_area == "outside":
+ new_mask[first + 1 : last] |= mask[first + 1 : last]
+
if copy:
return self._simple_new(npvalues.T, new_mask.T)
else:
diff --git a/pandas/core/arrays/period.py b/pandas/core/arrays/period.py
index 2930b979bfe78..28f25d38b2363 100644
--- a/pandas/core/arrays/period.py
+++ b/pandas/core/arrays/period.py
@@ -810,12 +810,19 @@ def searchsorted(
return m8arr.searchsorted(npvalue, side=side, sorter=sorter)
def _pad_or_backfill(
- self, *, method: FillnaOptions, limit: int | None = None, copy: bool = True
+ self,
+ *,
+ method: FillnaOptions,
+ limit: int | None = None,
+ limit_area: Literal["inside", "outside"] | None = None,
+ copy: bool = True,
) -> Self:
# view as dt64 so we get treated as timelike in core.missing,
# similar to dtl._period_dispatch
dta = self.view("M8[ns]")
- result = dta._pad_or_backfill(method=method, limit=limit, copy=copy)
+ result = dta._pad_or_backfill(
+ method=method, limit=limit, limit_area=limit_area, copy=copy
+ )
if copy:
return cast("Self", result.view(self.dtype))
else:
diff --git a/pandas/core/arrays/sparse/array.py b/pandas/core/arrays/sparse/array.py
index 5db77db2a9c66..98d84d899094b 100644
--- a/pandas/core/arrays/sparse/array.py
+++ b/pandas/core/arrays/sparse/array.py
@@ -716,11 +716,18 @@ def isna(self) -> Self: # type: ignore[override]
return type(self)(mask, fill_value=False, dtype=dtype)
def _pad_or_backfill( # pylint: disable=useless-parent-delegation
- self, *, method: FillnaOptions, limit: int | None = None, copy: bool = True
+ self,
+ *,
+ method: FillnaOptions,
+ limit: int | None = None,
+ limit_area: Literal["inside", "outside"] | None = None,
+ copy: bool = True,
) -> Self:
# TODO(3.0): We can remove this method once deprecation for fillna method
# keyword is enforced.
- return super()._pad_or_backfill(method=method, limit=limit, copy=copy)
+ return super()._pad_or_backfill(
+ method=method, limit=limit, limit_area=limit_area, copy=copy
+ )
def fillna(
self,
diff --git a/pandas/core/internals/blocks.py b/pandas/core/internals/blocks.py
index 20eff9315bc80..fa409f00e9ff6 100644
--- a/pandas/core/internals/blocks.py
+++ b/pandas/core/internals/blocks.py
@@ -1,6 +1,7 @@
from __future__ import annotations
from functools import wraps
+import inspect
import re
from typing import (
TYPE_CHECKING,
@@ -2256,11 +2257,21 @@ def pad_or_backfill(
) -> list[Block]:
values = self.values
+ kwargs: dict[str, Any] = {"method": method, "limit": limit}
+ if "limit_area" in inspect.signature(values._pad_or_backfill).parameters:
+ kwargs["limit_area"] = limit_area
+ elif limit_area is not None:
+ raise NotImplementedError(
+ f"{type(values).__name__} does not implement limit_area "
+ "(added in pandas 2.2). 3rd-party ExtensionArray authors "
+ "need to add this argument to _pad_or_backfill."
+ )
+
if values.ndim == 2 and axis == 1:
# NDArrayBackedExtensionArray.fillna assumes axis=0
- new_values = values.T._pad_or_backfill(method=method, limit=limit).T
+ new_values = values.T._pad_or_backfill(**kwargs).T
else:
- new_values = values._pad_or_backfill(method=method, limit=limit)
+ new_values = values._pad_or_backfill(**kwargs)
return [self.make_block_same_class(new_values)]
diff --git a/pandas/core/missing.py b/pandas/core/missing.py
index d275445983b6f..5dd9aaf5fbb4a 100644
--- a/pandas/core/missing.py
+++ b/pandas/core/missing.py
@@ -3,10 +3,7 @@
"""
from __future__ import annotations
-from functools import (
- partial,
- wraps,
-)
+from functools import wraps
from typing import (
TYPE_CHECKING,
Any,
@@ -823,6 +820,7 @@ def _interpolate_with_limit_area(
values,
method=method,
limit=limit,
+ limit_area=limit_area,
)
if limit_area == "inside":
@@ -863,27 +861,6 @@ def pad_or_backfill_inplace(
-----
Modifies values in-place.
"""
- if limit_area is not None:
- np.apply_along_axis(
- # error: Argument 1 to "apply_along_axis" has incompatible type
- # "partial[None]"; expected
- # "Callable[..., Union[_SupportsArray[dtype[<nothing>]],
- # Sequence[_SupportsArray[dtype[<nothing>]]],
- # Sequence[Sequence[_SupportsArray[dtype[<nothing>]]]],
- # Sequence[Sequence[Sequence[_SupportsArray[dtype[<nothing>]]]]],
- # Sequence[Sequence[Sequence[Sequence[_
- # SupportsArray[dtype[<nothing>]]]]]]]]"
- partial( # type: ignore[arg-type]
- _interpolate_with_limit_area,
- method=method,
- limit=limit,
- limit_area=limit_area,
- ),
- axis,
- values,
- )
- return
-
transf = (lambda x: x) if axis == 0 else (lambda x: x.T)
# reshape a 1 dim if needed
@@ -897,8 +874,7 @@ def pad_or_backfill_inplace(
func = get_fill_func(method, ndim=2)
# _pad_2d and _backfill_2d both modify tvalues inplace
- func(tvalues, limit=limit)
- return
+ func(tvalues, limit=limit, limit_area=limit_area)
def _fillna_prep(
@@ -909,7 +885,6 @@ def _fillna_prep(
if mask is None:
mask = isna(values)
- mask = mask.view(np.uint8)
return mask
@@ -919,16 +894,23 @@ def _datetimelike_compat(func: F) -> F:
"""
@wraps(func)
- def new_func(values, limit: int | None = None, mask=None):
+ def new_func(
+ values,
+ limit: int | None = None,
+ limit_area: Literal["inside", "outside"] | None = None,
+ mask=None,
+ ):
if needs_i8_conversion(values.dtype):
if mask is None:
# This needs to occur before casting to int64
mask = isna(values)
- result, mask = func(values.view("i8"), limit=limit, mask=mask)
+ result, mask = func(
+ values.view("i8"), limit=limit, limit_area=limit_area, mask=mask
+ )
return result.view(values.dtype), mask
- return func(values, limit=limit, mask=mask)
+ return func(values, limit=limit, limit_area=limit_area, mask=mask)
return cast(F, new_func)
@@ -937,9 +919,12 @@ def new_func(values, limit: int | None = None, mask=None):
def _pad_1d(
values: np.ndarray,
limit: int | None = None,
+ limit_area: Literal["inside", "outside"] | None = None,
mask: npt.NDArray[np.bool_] | None = None,
) -> tuple[np.ndarray, npt.NDArray[np.bool_]]:
mask = _fillna_prep(values, mask)
+ if limit_area is not None and not mask.all():
+ _fill_limit_area_1d(mask, limit_area)
algos.pad_inplace(values, mask, limit=limit)
return values, mask
@@ -948,9 +933,12 @@ def _pad_1d(
def _backfill_1d(
values: np.ndarray,
limit: int | None = None,
+ limit_area: Literal["inside", "outside"] | None = None,
mask: npt.NDArray[np.bool_] | None = None,
) -> tuple[np.ndarray, npt.NDArray[np.bool_]]:
mask = _fillna_prep(values, mask)
+ if limit_area is not None and not mask.all():
+ _fill_limit_area_1d(mask, limit_area)
algos.backfill_inplace(values, mask, limit=limit)
return values, mask
@@ -959,9 +947,12 @@ def _backfill_1d(
def _pad_2d(
values: np.ndarray,
limit: int | None = None,
+ limit_area: Literal["inside", "outside"] | None = None,
mask: npt.NDArray[np.bool_] | None = None,
):
mask = _fillna_prep(values, mask)
+ if limit_area is not None:
+ _fill_limit_area_2d(mask, limit_area)
if values.size:
algos.pad_2d_inplace(values, mask, limit=limit)
@@ -973,9 +964,14 @@ def _pad_2d(
@_datetimelike_compat
def _backfill_2d(
- values, limit: int | None = None, mask: npt.NDArray[np.bool_] | None = None
+ values,
+ limit: int | None = None,
+ limit_area: Literal["inside", "outside"] | None = None,
+ mask: npt.NDArray[np.bool_] | None = None,
):
mask = _fillna_prep(values, mask)
+ if limit_area is not None:
+ _fill_limit_area_2d(mask, limit_area)
if values.size:
algos.backfill_2d_inplace(values, mask, limit=limit)
@@ -985,6 +981,63 @@ def _backfill_2d(
return values, mask
+def _fill_limit_area_1d(
+ mask: npt.NDArray[np.bool_], limit_area: Literal["outside", "inside"]
+) -> None:
+ """Prepare 1d mask for ffill/bfill with limit_area.
+
+ Caller is responsible for checking at least one value of mask is False.
+ When called, mask will no longer faithfully represent whether
+ the corresponding values are NA or not.
+
+ Parameters
+ ----------
+ mask : np.ndarray[bool, ndim=1]
+ Mask representing NA values when filling.
+ limit_area : { "outside", "inside" }
+ Whether to limit filling to outside or inside the outermost non-NA value.
+ """
+ neg_mask = ~mask
+ first = neg_mask.argmax()
+ last = len(neg_mask) - neg_mask[::-1].argmax() - 1
+ if limit_area == "inside":
+ mask[:first] = False
+ mask[last + 1 :] = False
+ elif limit_area == "outside":
+ mask[first + 1 : last] = False
+
+
+def _fill_limit_area_2d(
+ mask: npt.NDArray[np.bool_], limit_area: Literal["outside", "inside"]
+) -> None:
+ """Prepare 2d mask for ffill/bfill with limit_area.
+
+ When called, mask will no longer faithfully represent whether
+ the corresponding values are NA or not.
+
+ Parameters
+ ----------
+ mask : np.ndarray[bool, ndim=2]
+ Mask representing NA values when filling.
+ limit_area : { "outside", "inside" }
+ Whether to limit filling to outside or inside the outermost non-NA value.
+ """
+ neg_mask = ~mask.T
+ if limit_area == "outside":
+ # Identify inside
+ la_mask = (
+ np.maximum.accumulate(neg_mask, axis=0)
+ & np.maximum.accumulate(neg_mask[::-1], axis=0)[::-1]
+ )
+ else:
+ # Identify outside
+ la_mask = (
+ ~np.maximum.accumulate(neg_mask, axis=0)
+ | ~np.maximum.accumulate(neg_mask[::-1], axis=0)[::-1]
+ )
+ mask[la_mask.T] = False
+
+
_fill_methods = {"pad": _pad_1d, "backfill": _backfill_1d}
diff --git a/pandas/tests/extension/base/missing.py b/pandas/tests/extension/base/missing.py
index ffb7a24b4b390..dbd6682c12123 100644
--- a/pandas/tests/extension/base/missing.py
+++ b/pandas/tests/extension/base/missing.py
@@ -77,6 +77,28 @@ def test_fillna_limit_pad(self, data_missing):
expected = pd.Series(data_missing.take([1, 1, 1, 0, 1]))
tm.assert_series_equal(result, expected)
+ @pytest.mark.parametrize(
+ "limit_area, input_ilocs, expected_ilocs",
+ [
+ ("outside", [1, 0, 0, 0, 1], [1, 0, 0, 0, 1]),
+ ("outside", [1, 0, 1, 0, 1], [1, 0, 1, 0, 1]),
+ ("outside", [0, 1, 1, 1, 0], [0, 1, 1, 1, 1]),
+ ("outside", [0, 1, 0, 1, 0], [0, 1, 0, 1, 1]),
+ ("inside", [1, 0, 0, 0, 1], [1, 1, 1, 1, 1]),
+ ("inside", [1, 0, 1, 0, 1], [1, 1, 1, 1, 1]),
+ ("inside", [0, 1, 1, 1, 0], [0, 1, 1, 1, 0]),
+ ("inside", [0, 1, 0, 1, 0], [0, 1, 1, 1, 0]),
+ ],
+ )
+ def test_ffill_limit_area(
+ self, data_missing, limit_area, input_ilocs, expected_ilocs
+ ):
+ # GH#56616
+ arr = data_missing.take(input_ilocs)
+ result = pd.Series(arr).ffill(limit_area=limit_area)
+ expected = pd.Series(data_missing.take(expected_ilocs))
+ tm.assert_series_equal(result, expected)
+
@pytest.mark.filterwarnings(
"ignore:Series.fillna with 'method' is deprecated:FutureWarning"
)
diff --git a/pandas/tests/extension/decimal/test_decimal.py b/pandas/tests/extension/decimal/test_decimal.py
index b3c57ee49a724..9907e345ada63 100644
--- a/pandas/tests/extension/decimal/test_decimal.py
+++ b/pandas/tests/extension/decimal/test_decimal.py
@@ -156,6 +156,36 @@ def test_fillna_limit_pad(self, data_missing):
):
super().test_fillna_limit_pad(data_missing)
+ @pytest.mark.parametrize(
+ "limit_area, input_ilocs, expected_ilocs",
+ [
+ ("outside", [1, 0, 0, 0, 1], [1, 0, 0, 0, 1]),
+ ("outside", [1, 0, 1, 0, 1], [1, 0, 1, 0, 1]),
+ ("outside", [0, 1, 1, 1, 0], [0, 1, 1, 1, 1]),
+ ("outside", [0, 1, 0, 1, 0], [0, 1, 0, 1, 1]),
+ ("inside", [1, 0, 0, 0, 1], [1, 1, 1, 1, 1]),
+ ("inside", [1, 0, 1, 0, 1], [1, 1, 1, 1, 1]),
+ ("inside", [0, 1, 1, 1, 0], [0, 1, 1, 1, 0]),
+ ("inside", [0, 1, 0, 1, 0], [0, 1, 1, 1, 0]),
+ ],
+ )
+ def test_ffill_limit_area(
+ self, data_missing, limit_area, input_ilocs, expected_ilocs
+ ):
+ # GH#56616
+ msg = "ExtensionArray.fillna 'method' keyword is deprecated"
+ with tm.assert_produces_warning(
+ DeprecationWarning,
+ match=msg,
+ check_stacklevel=False,
+ raise_on_extra_warnings=False,
+ ):
+ msg = "DecimalArray does not implement limit_area"
+ with pytest.raises(NotImplementedError, match=msg):
+ super().test_ffill_limit_area(
+ data_missing, limit_area, input_ilocs, expected_ilocs
+ )
+
def test_fillna_limit_backfill(self, data_missing):
msg = "Series.fillna with 'method' is deprecated"
with tm.assert_produces_warning(
diff --git a/pandas/tests/extension/json/array.py b/pandas/tests/extension/json/array.py
index d3d9dcc4a4712..31f44f886add7 100644
--- a/pandas/tests/extension/json/array.py
+++ b/pandas/tests/extension/json/array.py
@@ -235,6 +235,10 @@ def _values_for_argsort(self):
frozen = [tuple(x.items()) for x in self]
return construct_1d_object_array_from_listlike(frozen)
+ def _pad_or_backfill(self, *, method, limit=None, copy=True):
+ # GH#56616 - test EA method without limit_area argument
+ return super()._pad_or_backfill(method=method, limit=limit, copy=copy)
+
def make_data():
# TODO: Use a regular dict. See _NDFrameIndexer._setitem_with_indexer
diff --git a/pandas/tests/extension/json/test_json.py b/pandas/tests/extension/json/test_json.py
index 7686bc5abb44c..a18edac9aef93 100644
--- a/pandas/tests/extension/json/test_json.py
+++ b/pandas/tests/extension/json/test_json.py
@@ -149,6 +149,29 @@ def test_fillna_frame(self):
"""We treat dictionaries as a mapping in fillna, not a scalar."""
super().test_fillna_frame()
+ @pytest.mark.parametrize(
+ "limit_area, input_ilocs, expected_ilocs",
+ [
+ ("outside", [1, 0, 0, 0, 1], [1, 0, 0, 0, 1]),
+ ("outside", [1, 0, 1, 0, 1], [1, 0, 1, 0, 1]),
+ ("outside", [0, 1, 1, 1, 0], [0, 1, 1, 1, 1]),
+ ("outside", [0, 1, 0, 1, 0], [0, 1, 0, 1, 1]),
+ ("inside", [1, 0, 0, 0, 1], [1, 1, 1, 1, 1]),
+ ("inside", [1, 0, 1, 0, 1], [1, 1, 1, 1, 1]),
+ ("inside", [0, 1, 1, 1, 0], [0, 1, 1, 1, 0]),
+ ("inside", [0, 1, 0, 1, 0], [0, 1, 1, 1, 0]),
+ ],
+ )
+ def test_ffill_limit_area(
+ self, data_missing, limit_area, input_ilocs, expected_ilocs
+ ):
+ # GH#56616
+ msg = "JSONArray does not implement limit_area"
+ with pytest.raises(NotImplementedError, match=msg):
+ super().test_ffill_limit_area(
+ data_missing, limit_area, input_ilocs, expected_ilocs
+ )
+
@unhashable
def test_value_counts(self, all_data, dropna):
super().test_value_counts(all_data, dropna)
diff --git a/pandas/tests/frame/methods/test_fillna.py b/pandas/tests/frame/methods/test_fillna.py
index 6757669351c5c..89c50a8c21e1c 100644
--- a/pandas/tests/frame/methods/test_fillna.py
+++ b/pandas/tests/frame/methods/test_fillna.py
@@ -862,41 +862,29 @@ def test_pad_backfill_deprecated(func):
@pytest.mark.parametrize(
"data, expected_data, method, kwargs",
(
- pytest.param(
+ (
[np.nan, np.nan, 3, np.nan, np.nan, np.nan, 7, np.nan, np.nan],
[np.nan, np.nan, 3.0, 3.0, 3.0, 3.0, 7.0, np.nan, np.nan],
"ffill",
{"limit_area": "inside"},
- marks=pytest.mark.xfail(
- reason="GH#41813 - limit_area applied to the wrong axis"
- ),
),
- pytest.param(
+ (
[np.nan, np.nan, 3, np.nan, np.nan, np.nan, 7, np.nan, np.nan],
[np.nan, np.nan, 3.0, 3.0, np.nan, np.nan, 7.0, np.nan, np.nan],
"ffill",
{"limit_area": "inside", "limit": 1},
- marks=pytest.mark.xfail(
- reason="GH#41813 - limit_area applied to the wrong axis"
- ),
),
- pytest.param(
+ (
[np.nan, np.nan, 3, np.nan, np.nan, np.nan, 7, np.nan, np.nan],
[np.nan, np.nan, 3.0, np.nan, np.nan, np.nan, 7.0, 7.0, 7.0],
"ffill",
{"limit_area": "outside"},
- marks=pytest.mark.xfail(
- reason="GH#41813 - limit_area applied to the wrong axis"
- ),
),
- pytest.param(
+ (
[np.nan, np.nan, 3, np.nan, np.nan, np.nan, 7, np.nan, np.nan],
[np.nan, np.nan, 3.0, np.nan, np.nan, np.nan, 7.0, 7.0, np.nan],
"ffill",
{"limit_area": "outside", "limit": 1},
- marks=pytest.mark.xfail(
- reason="GH#41813 - limit_area applied to the wrong axis"
- ),
),
(
[np.nan, np.nan, np.nan, np.nan, np.nan, np.nan, np.nan],
@@ -910,41 +898,29 @@ def test_pad_backfill_deprecated(func):
"ffill",
{"limit_area": "outside", "limit": 1},
),
- pytest.param(
+ (
[np.nan, np.nan, 3, np.nan, np.nan, np.nan, 7, np.nan, np.nan],
[np.nan, np.nan, 3.0, 7.0, 7.0, 7.0, 7.0, np.nan, np.nan],
"bfill",
{"limit_area": "inside"},
- marks=pytest.mark.xfail(
- reason="GH#41813 - limit_area applied to the wrong axis"
- ),
),
- pytest.param(
+ (
[np.nan, np.nan, 3, np.nan, np.nan, np.nan, 7, np.nan, np.nan],
[np.nan, np.nan, 3.0, np.nan, np.nan, 7.0, 7.0, np.nan, np.nan],
"bfill",
{"limit_area": "inside", "limit": 1},
- marks=pytest.mark.xfail(
- reason="GH#41813 - limit_area applied to the wrong axis"
- ),
),
- pytest.param(
+ (
[np.nan, np.nan, 3, np.nan, np.nan, np.nan, 7, np.nan, np.nan],
[3.0, 3.0, 3.0, np.nan, np.nan, np.nan, 7.0, np.nan, np.nan],
"bfill",
{"limit_area": "outside"},
- marks=pytest.mark.xfail(
- reason="GH#41813 - limit_area applied to the wrong axis"
- ),
),
- pytest.param(
+ (
[np.nan, np.nan, 3, np.nan, np.nan, np.nan, 7, np.nan, np.nan],
[np.nan, 3.0, 3.0, np.nan, np.nan, np.nan, 7.0, np.nan, np.nan],
"bfill",
{"limit_area": "outside", "limit": 1},
- marks=pytest.mark.xfail(
- reason="GH#41813 - limit_area applied to the wrong axis"
- ),
),
),
)
diff --git a/scripts/validate_unwanted_patterns.py b/scripts/validate_unwanted_patterns.py
index 89b67ddd9f5b6..0d724779abfda 100755
--- a/scripts/validate_unwanted_patterns.py
+++ b/scripts/validate_unwanted_patterns.py
@@ -58,6 +58,7 @@
"_iLocIndexer",
# TODO(3.0): GH#55043 - remove upon removal of ArrayManager
"_get_option",
+ "_fill_limit_area_1d",
}
| Backport PR #56616: BUG: Add limit_area to EA ffill/bfill | https://api.github.com/repos/pandas-dev/pandas/pulls/56720 | 2024-01-03T22:14:40Z | 2024-01-03T23:31:08Z | 2024-01-03T23:31:08Z | 2024-01-03T23:31:08Z |
Backport PR #56699 on branch 2.2.x (DOC: Corrected typo in warning on coerce) | diff --git a/doc/source/whatsnew/v2.1.0.rst b/doc/source/whatsnew/v2.1.0.rst
index 51b4c4f297b07..d4eb5742ef928 100644
--- a/doc/source/whatsnew/v2.1.0.rst
+++ b/doc/source/whatsnew/v2.1.0.rst
@@ -432,7 +432,7 @@ In a future version, these will raise an error and you should cast to a common d
In [3]: ser[0] = 'not an int64'
FutureWarning:
- Setting an item of incompatible dtype is deprecated and will raise in a future error of pandas.
+ Setting an item of incompatible dtype is deprecated and will raise an error in a future version of pandas.
Value 'not an int64' has dtype incompatible with int64, please explicitly cast to a compatible dtype first.
In [4]: ser
diff --git a/pandas/core/internals/blocks.py b/pandas/core/internals/blocks.py
index 20eff9315bc80..b7af545bd523e 100644
--- a/pandas/core/internals/blocks.py
+++ b/pandas/core/internals/blocks.py
@@ -512,7 +512,7 @@ def coerce_to_target_dtype(self, other, warn_on_upcast: bool = False) -> Block:
if warn_on_upcast:
warnings.warn(
f"Setting an item of incompatible dtype is deprecated "
- "and will raise in a future error of pandas. "
+ "and will raise an error in a future version of pandas. "
f"Value '{other}' has dtype incompatible with {self.values.dtype}, "
"please explicitly cast to a compatible dtype first.",
FutureWarning,
| Backport PR #56699: DOC: Corrected typo in warning on coerce | https://api.github.com/repos/pandas-dev/pandas/pulls/56719 | 2024-01-03T21:38:13Z | 2024-01-03T22:16:56Z | 2024-01-03T22:16:56Z | 2024-01-03T22:16:56Z |
Backport PR #56691 on branch 2.2.x (Bug pyarrow implementation of str.fullmatch matches partial string. issue #56652) | diff --git a/doc/source/whatsnew/v2.2.0.rst b/doc/source/whatsnew/v2.2.0.rst
index 15e98cbb2a4d7..043646457f604 100644
--- a/doc/source/whatsnew/v2.2.0.rst
+++ b/doc/source/whatsnew/v2.2.0.rst
@@ -805,6 +805,7 @@ Strings
- Bug in :meth:`Series.str.replace` when ``n < 0`` for :class:`ArrowDtype` with ``pyarrow.string`` (:issue:`56404`)
- Bug in :meth:`Series.str.startswith` and :meth:`Series.str.endswith` with arguments of type ``tuple[str, ...]`` for :class:`ArrowDtype` with ``pyarrow.string`` dtype (:issue:`56579`)
- Bug in :meth:`Series.str.startswith` and :meth:`Series.str.endswith` with arguments of type ``tuple[str, ...]`` for ``string[pyarrow]`` (:issue:`54942`)
+- Bug in :meth:`str.fullmatch` when ``dtype=pandas.ArrowDtype(pyarrow.string())`` allows partial matches when regex ends in literal ``\$`` (:issue:`56652`)
- Bug in comparison operations for ``dtype="string[pyarrow_numpy]"`` raising if dtypes can't be compared (:issue:`56008`)
Interval
diff --git a/pandas/core/arrays/arrow/array.py b/pandas/core/arrays/arrow/array.py
index b1164301e6d79..5427cee55dfb1 100644
--- a/pandas/core/arrays/arrow/array.py
+++ b/pandas/core/arrays/arrow/array.py
@@ -2277,7 +2277,7 @@ def _str_match(
def _str_fullmatch(
self, pat, case: bool = True, flags: int = 0, na: Scalar | None = None
):
- if not pat.endswith("$") or pat.endswith("//$"):
+ if not pat.endswith("$") or pat.endswith("\\$"):
pat = f"{pat}$"
return self._str_match(pat, case, flags, na)
diff --git a/pandas/core/arrays/string_arrow.py b/pandas/core/arrays/string_arrow.py
index d5a76811a12e6..e8f614ff855c0 100644
--- a/pandas/core/arrays/string_arrow.py
+++ b/pandas/core/arrays/string_arrow.py
@@ -433,7 +433,7 @@ def _str_match(
def _str_fullmatch(
self, pat, case: bool = True, flags: int = 0, na: Scalar | None = None
):
- if not pat.endswith("$") or pat.endswith("//$"):
+ if not pat.endswith("$") or pat.endswith("\\$"):
pat = f"{pat}$"
return self._str_match(pat, case, flags, na)
diff --git a/pandas/tests/extension/test_arrow.py b/pandas/tests/extension/test_arrow.py
index ed1b7b199a16f..e709e6fcfe456 100644
--- a/pandas/tests/extension/test_arrow.py
+++ b/pandas/tests/extension/test_arrow.py
@@ -1903,16 +1903,21 @@ def test_str_match(pat, case, na, exp):
@pytest.mark.parametrize(
"pat, case, na, exp",
[
- ["abc", False, None, [True, None]],
- ["Abc", True, None, [False, None]],
- ["bc", True, None, [False, None]],
- ["ab", False, True, [True, True]],
- ["a[a-z]{2}", False, None, [True, None]],
- ["A[a-z]{1}", True, None, [False, None]],
+ ["abc", False, None, [True, True, False, None]],
+ ["Abc", True, None, [False, False, False, None]],
+ ["bc", True, None, [False, False, False, None]],
+ ["ab", False, None, [True, True, False, None]],
+ ["a[a-z]{2}", False, None, [True, True, False, None]],
+ ["A[a-z]{1}", True, None, [False, False, False, None]],
+ # GH Issue: #56652
+ ["abc$", False, None, [True, False, False, None]],
+ ["abc\\$", False, None, [False, True, False, None]],
+ ["Abc$", True, None, [False, False, False, None]],
+ ["Abc\\$", True, None, [False, False, False, None]],
],
)
def test_str_fullmatch(pat, case, na, exp):
- ser = pd.Series(["abc", None], dtype=ArrowDtype(pa.string()))
+ ser = pd.Series(["abc", "abc$", "$abc", None], dtype=ArrowDtype(pa.string()))
result = ser.str.match(pat, case=case, na=na)
expected = pd.Series(exp, dtype=ArrowDtype(pa.bool_()))
tm.assert_series_equal(result, expected)
diff --git a/pandas/tests/strings/test_find_replace.py b/pandas/tests/strings/test_find_replace.py
index 3f58c6d703f8f..cd4707ac405de 100644
--- a/pandas/tests/strings/test_find_replace.py
+++ b/pandas/tests/strings/test_find_replace.py
@@ -730,6 +730,15 @@ def test_fullmatch(any_string_dtype):
tm.assert_series_equal(result, expected)
+def test_fullmatch_dollar_literal(any_string_dtype):
+ # GH 56652
+ ser = Series(["foo", "foo$foo", np.nan, "foo$"], dtype=any_string_dtype)
+ result = ser.str.fullmatch("foo\\$")
+ expected_dtype = "object" if any_string_dtype in object_pyarrow_numpy else "boolean"
+ expected = Series([False, False, np.nan, True], dtype=expected_dtype)
+ tm.assert_series_equal(result, expected)
+
+
def test_fullmatch_na_kwarg(any_string_dtype):
ser = Series(
["fooBAD__barBAD", "BAD_BADleroybrown", np.nan, "foo"], dtype=any_string_dtype
| xref https://github.com/pandas-dev/pandas/pull/56691 | https://api.github.com/repos/pandas-dev/pandas/pulls/56715 | 2024-01-03T18:51:14Z | 2024-01-03T21:38:27Z | 2024-01-03T21:38:27Z | 2024-01-03T21:38:32Z |
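The one-character change in this PR swaps a mistyped ``//$`` check for ``\\$``: a pattern ending in an *escaped* dollar sign matches a literal ``$`` character and still needs the end anchor appended. A standalone illustration of the corrected condition using plain `re` (the helper name here is made up for the example; pandas routes this through pyarrow instead):

```python
import re

def anchor_for_fullmatch(pat: str) -> str:
    # Append "$" unless the pattern is already anchored; a trailing
    # escaped dollar ("\$") is a literal character, not an anchor,
    # so it still gets one. Same condition as the fixed code above.
    if not pat.endswith("$") or pat.endswith("\\$"):
        pat = f"{pat}$"
    return pat

# "abc\$" should match exactly "abc$", not any "abc$..." prefix:
pat = anchor_for_fullmatch("abc\\$")           # r"abc\$$"
assert re.match(pat, "abc$") is not None       # full match
assert re.match(pat, "abc$tail") is None       # partial match rejected
```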
TST/CLN: Reuse more existing fixtures | diff --git a/pandas/tests/arrays/test_timedeltas.py b/pandas/tests/arrays/test_timedeltas.py
index a3f15467feb14..bcc52f197ee51 100644
--- a/pandas/tests/arrays/test_timedeltas.py
+++ b/pandas/tests/arrays/test_timedeltas.py
@@ -194,18 +194,17 @@ def test_add_timedeltaarraylike(self, tda):
class TestTimedeltaArray:
- @pytest.mark.parametrize("dtype", [int, np.int32, np.int64, "uint32", "uint64"])
- def test_astype_int(self, dtype):
+ def test_astype_int(self, any_int_numpy_dtype):
arr = TimedeltaArray._from_sequence(
[Timedelta("1h"), Timedelta("2h")], dtype="m8[ns]"
)
- if np.dtype(dtype) != np.int64:
+ if np.dtype(any_int_numpy_dtype) != np.int64:
with pytest.raises(TypeError, match=r"Do obj.astype\('int64'\)"):
- arr.astype(dtype)
+ arr.astype(any_int_numpy_dtype)
return
- result = arr.astype(dtype)
+ result = arr.astype(any_int_numpy_dtype)
expected = arr._ndarray.view("i8")
tm.assert_numpy_array_equal(result, expected)
diff --git a/pandas/tests/computation/test_eval.py b/pandas/tests/computation/test_eval.py
index 17630f14b08c7..ed3ea1b0bd0dc 100644
--- a/pandas/tests/computation/test_eval.py
+++ b/pandas/tests/computation/test_eval.py
@@ -606,11 +606,10 @@ def test_unary_in_array(self):
)
tm.assert_numpy_array_equal(result, expected)
- @pytest.mark.parametrize("dtype", [np.float32, np.float64])
@pytest.mark.parametrize("expr", ["x < -0.1", "-5 > x"])
- def test_float_comparison_bin_op(self, dtype, expr):
+ def test_float_comparison_bin_op(self, float_numpy_dtype, expr):
# GH 16363
- df = DataFrame({"x": np.array([0], dtype=dtype)})
+ df = DataFrame({"x": np.array([0], dtype=float_numpy_dtype)})
res = df.eval(expr)
assert res.values == np.array([False])
@@ -747,15 +746,16 @@ class TestTypeCasting:
@pytest.mark.parametrize("op", ["+", "-", "*", "**", "/"])
# maybe someday... numexpr has too many upcasting rules now
# chain(*(np.core.sctypes[x] for x in ['uint', 'int', 'float']))
- @pytest.mark.parametrize("dt", [np.float32, np.float64])
@pytest.mark.parametrize("left_right", [("df", "3"), ("3", "df")])
- def test_binop_typecasting(self, engine, parser, op, dt, left_right):
- df = DataFrame(np.random.default_rng(2).standard_normal((5, 3)), dtype=dt)
+ def test_binop_typecasting(self, engine, parser, op, float_numpy_dtype, left_right):
+ df = DataFrame(
+ np.random.default_rng(2).standard_normal((5, 3)), dtype=float_numpy_dtype
+ )
left, right = left_right
s = f"{left} {op} {right}"
res = pd.eval(s, engine=engine, parser=parser)
- assert df.values.dtype == dt
- assert res.values.dtype == dt
+ assert df.values.dtype == float_numpy_dtype
+ assert res.values.dtype == float_numpy_dtype
tm.assert_frame_equal(res, eval(s))
diff --git a/pandas/tests/frame/indexing/test_indexing.py b/pandas/tests/frame/indexing/test_indexing.py
index 1b83c048411a8..a1868919be685 100644
--- a/pandas/tests/frame/indexing/test_indexing.py
+++ b/pandas/tests/frame/indexing/test_indexing.py
@@ -1682,16 +1682,15 @@ def exp_single_cats_value(self):
)
return exp_single_cats_value
- @pytest.mark.parametrize("indexer", [tm.loc, tm.iloc])
- def test_loc_iloc_setitem_list_of_lists(self, orig, indexer):
+ def test_loc_iloc_setitem_list_of_lists(self, orig, indexer_li):
# - assign multiple rows (mixed values) -> exp_multi_row
df = orig.copy()
key = slice(2, 4)
- if indexer is tm.loc:
+ if indexer_li is tm.loc:
key = slice("j", "k")
- indexer(df)[key, :] = [["b", 2], ["b", 2]]
+ indexer_li(df)[key, :] = [["b", 2], ["b", 2]]
cats2 = Categorical(["a", "a", "b", "b", "a", "a", "a"], categories=["a", "b"])
idx2 = Index(["h", "i", "j", "k", "l", "m", "n"])
@@ -1701,7 +1700,7 @@ def test_loc_iloc_setitem_list_of_lists(self, orig, indexer):
df = orig.copy()
with pytest.raises(TypeError, match=msg1):
- indexer(df)[key, :] = [["c", 2], ["c", 2]]
+ indexer_li(df)[key, :] = [["c", 2], ["c", 2]]
@pytest.mark.parametrize("indexer", [tm.loc, tm.iloc, tm.at, tm.iat])
def test_loc_iloc_at_iat_setitem_single_value_in_categories(
@@ -1722,32 +1721,30 @@ def test_loc_iloc_at_iat_setitem_single_value_in_categories(
with pytest.raises(TypeError, match=msg1):
indexer(df)[key] = "c"
- @pytest.mark.parametrize("indexer", [tm.loc, tm.iloc])
def test_loc_iloc_setitem_mask_single_value_in_categories(
- self, orig, exp_single_cats_value, indexer
+ self, orig, exp_single_cats_value, indexer_li
):
# mask with single True
df = orig.copy()
mask = df.index == "j"
key = 0
- if indexer is tm.loc:
+ if indexer_li is tm.loc:
key = df.columns[key]
- indexer(df)[mask, key] = "b"
+ indexer_li(df)[mask, key] = "b"
tm.assert_frame_equal(df, exp_single_cats_value)
- @pytest.mark.parametrize("indexer", [tm.loc, tm.iloc])
- def test_loc_iloc_setitem_full_row_non_categorical_rhs(self, orig, indexer):
+ def test_loc_iloc_setitem_full_row_non_categorical_rhs(self, orig, indexer_li):
# - assign a complete row (mixed values) -> exp_single_row
df = orig.copy()
key = 2
- if indexer is tm.loc:
+ if indexer_li is tm.loc:
key = df.index[2]
# not categorical dtype, but "b" _is_ among the categories for df["cat"]
- indexer(df)[key, :] = ["b", 2]
+ indexer_li(df)[key, :] = ["b", 2]
cats1 = Categorical(["a", "a", "b", "a", "a", "a", "a"], categories=["a", "b"])
idx1 = Index(["h", "i", "j", "k", "l", "m", "n"])
values1 = [1, 1, 2, 1, 1, 1, 1]
@@ -1756,23 +1753,22 @@ def test_loc_iloc_setitem_full_row_non_categorical_rhs(self, orig, indexer):
# "c" is not among the categories for df["cat"]
with pytest.raises(TypeError, match=msg1):
- indexer(df)[key, :] = ["c", 2]
+ indexer_li(df)[key, :] = ["c", 2]
- @pytest.mark.parametrize("indexer", [tm.loc, tm.iloc])
def test_loc_iloc_setitem_partial_col_categorical_rhs(
- self, orig, exp_parts_cats_col, indexer
+ self, orig, exp_parts_cats_col, indexer_li
):
# assign a part of a column with dtype == categorical ->
# exp_parts_cats_col
df = orig.copy()
key = (slice(2, 4), 0)
- if indexer is tm.loc:
+ if indexer_li is tm.loc:
key = (slice("j", "k"), df.columns[0])
# same categories as we currently have in df["cats"]
compat = Categorical(["b", "b"], categories=["a", "b"])
- indexer(df)[key] = compat
+ indexer_li(df)[key] = compat
tm.assert_frame_equal(df, exp_parts_cats_col)
# categories do not match df["cat"]'s, but "b" is among them
@@ -1780,32 +1776,31 @@ def test_loc_iloc_setitem_partial_col_categorical_rhs(
with pytest.raises(TypeError, match=msg2):
# different categories but holdable values
# -> not sure if this should fail or pass
- indexer(df)[key] = semi_compat
+ indexer_li(df)[key] = semi_compat
# categories do not match df["cat"]'s, and "c" is not among them
incompat = Categorical(list("cc"), categories=list("abc"))
with pytest.raises(TypeError, match=msg2):
# different values
- indexer(df)[key] = incompat
+ indexer_li(df)[key] = incompat
- @pytest.mark.parametrize("indexer", [tm.loc, tm.iloc])
def test_loc_iloc_setitem_non_categorical_rhs(
- self, orig, exp_parts_cats_col, indexer
+ self, orig, exp_parts_cats_col, indexer_li
):
# assign a part of a column with dtype != categorical -> exp_parts_cats_col
df = orig.copy()
key = (slice(2, 4), 0)
- if indexer is tm.loc:
+ if indexer_li is tm.loc:
key = (slice("j", "k"), df.columns[0])
# "b" is among the categories for df["cat"]
- indexer(df)[key] = ["b", "b"]
+ indexer_li(df)[key] = ["b", "b"]
tm.assert_frame_equal(df, exp_parts_cats_col)
# "c" not part of the categories
with pytest.raises(TypeError, match=msg1):
- indexer(df)[key] = ["c", "c"]
+ indexer_li(df)[key] = ["c", "c"]
@pytest.mark.parametrize("indexer", [tm.getitem, tm.loc, tm.iloc])
def test_getitem_preserve_object_index_with_dates(self, indexer):
diff --git a/pandas/tests/frame/indexing/test_setitem.py b/pandas/tests/frame/indexing/test_setitem.py
index 0e0f8cf61d3d7..3f13718cfc77a 100644
--- a/pandas/tests/frame/indexing/test_setitem.py
+++ b/pandas/tests/frame/indexing/test_setitem.py
@@ -1000,13 +1000,12 @@ def test_setitem_slice_position(self):
expected = DataFrame(arr)
tm.assert_frame_equal(df, expected)
- @pytest.mark.parametrize("indexer", [tm.setitem, tm.iloc])
@pytest.mark.parametrize("box", [Series, np.array, list, pd.array])
@pytest.mark.parametrize("n", [1, 2, 3])
- def test_setitem_slice_indexer_broadcasting_rhs(self, n, box, indexer):
+ def test_setitem_slice_indexer_broadcasting_rhs(self, n, box, indexer_si):
# GH#40440
df = DataFrame([[1, 3, 5]] + [[2, 4, 6]] * n, columns=["a", "b", "c"])
- indexer(df)[1:] = box([10, 11, 12])
+ indexer_si(df)[1:] = box([10, 11, 12])
expected = DataFrame([[1, 3, 5]] + [[10, 11, 12]] * n, columns=["a", "b", "c"])
tm.assert_frame_equal(df, expected)
@@ -1019,15 +1018,14 @@ def test_setitem_list_indexer_broadcasting_rhs(self, n, box):
expected = DataFrame([[1, 3, 5]] + [[10, 11, 12]] * n, columns=["a", "b", "c"])
tm.assert_frame_equal(df, expected)
- @pytest.mark.parametrize("indexer", [tm.setitem, tm.iloc])
@pytest.mark.parametrize("box", [Series, np.array, list, pd.array])
@pytest.mark.parametrize("n", [1, 2, 3])
- def test_setitem_slice_broadcasting_rhs_mixed_dtypes(self, n, box, indexer):
+ def test_setitem_slice_broadcasting_rhs_mixed_dtypes(self, n, box, indexer_si):
# GH#40440
df = DataFrame(
[[1, 3, 5], ["x", "y", "z"]] + [[2, 4, 6]] * n, columns=["a", "b", "c"]
)
- indexer(df)[1:] = box([10, 11, 12])
+ indexer_si(df)[1:] = box([10, 11, 12])
expected = DataFrame(
[[1, 3, 5]] + [[10, 11, 12]] * (n + 1),
columns=["a", "b", "c"],
@@ -1105,13 +1103,12 @@ def test_setitem_loc_only_false_indexer_dtype_changed(self, box):
df.loc[indexer, ["b"]] = 9
tm.assert_frame_equal(df, expected)
- @pytest.mark.parametrize("indexer", [tm.setitem, tm.loc])
- def test_setitem_boolean_mask_aligning(self, indexer):
+ def test_setitem_boolean_mask_aligning(self, indexer_sl):
# GH#39931
df = DataFrame({"a": [1, 4, 2, 3], "b": [5, 6, 7, 8]})
expected = df.copy()
mask = df["a"] >= 3
- indexer(df)[mask] = indexer(df)[mask].sort_values("a")
+ indexer_sl(df)[mask] = indexer_sl(df)[mask].sort_values("a")
tm.assert_frame_equal(df, expected)
def test_setitem_mask_categorical(self):
diff --git a/pandas/tests/frame/methods/test_astype.py b/pandas/tests/frame/methods/test_astype.py
index b73c759518b0e..eab8dbd2787f7 100644
--- a/pandas/tests/frame/methods/test_astype.py
+++ b/pandas/tests/frame/methods/test_astype.py
@@ -134,9 +134,8 @@ def test_astype_with_view_mixed_float(self, mixed_float_frame):
tf.astype(np.int64)
tf.astype(np.float32)
- @pytest.mark.parametrize("dtype", [np.int32, np.int64])
@pytest.mark.parametrize("val", [np.nan, np.inf])
- def test_astype_cast_nan_inf_int(self, val, dtype):
+ def test_astype_cast_nan_inf_int(self, val, any_int_numpy_dtype):
# see GH#14265
#
# Check NaN and inf --> raise error when converting to int.
@@ -144,7 +143,7 @@ def test_astype_cast_nan_inf_int(self, val, dtype):
df = DataFrame([val])
with pytest.raises(ValueError, match=msg):
- df.astype(dtype)
+ df.astype(any_int_numpy_dtype)
def test_astype_str(self):
# see GH#9757
@@ -323,9 +322,9 @@ def test_astype_categoricaldtype_class_raises(self, cls):
with pytest.raises(TypeError, match=xpr):
df["A"].astype(cls)
- @pytest.mark.parametrize("dtype", ["Int64", "Int32", "Int16"])
- def test_astype_extension_dtypes(self, dtype):
+ def test_astype_extension_dtypes(self, any_int_ea_dtype):
# GH#22578
+ dtype = any_int_ea_dtype
df = DataFrame([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]], columns=["a", "b"])
expected1 = DataFrame(
@@ -348,9 +347,9 @@ def test_astype_extension_dtypes(self, dtype):
tm.assert_frame_equal(df.astype(dtype), expected1)
tm.assert_frame_equal(df.astype("int64").astype(dtype), expected1)
- @pytest.mark.parametrize("dtype", ["Int64", "Int32", "Int16"])
- def test_astype_extension_dtypes_1d(self, dtype):
+ def test_astype_extension_dtypes_1d(self, any_int_ea_dtype):
# GH#22578
+ dtype = any_int_ea_dtype
df = DataFrame({"a": [1.0, 2.0, 3.0]})
expected1 = DataFrame({"a": pd.array([1, 2, 3], dtype=dtype)})
@@ -433,14 +432,13 @@ def test_astype_from_datetimelike_to_object(self, dtype, unit):
else:
assert result.iloc[0, 0] == Timedelta(1, unit=unit)
- @pytest.mark.parametrize("arr_dtype", [np.int64, np.float64])
@pytest.mark.parametrize("dtype", ["M8", "m8"])
@pytest.mark.parametrize("unit", ["ns", "us", "ms", "s", "h", "m", "D"])
- def test_astype_to_datetimelike_unit(self, arr_dtype, dtype, unit):
+ def test_astype_to_datetimelike_unit(self, any_real_numpy_dtype, dtype, unit):
# tests all units from numeric origination
# GH#19223 / GH#12425
dtype = f"{dtype}[{unit}]"
- arr = np.array([[1, 2, 3]], dtype=arr_dtype)
+ arr = np.array([[1, 2, 3]], dtype=any_real_numpy_dtype)
df = DataFrame(arr)
result = df.astype(dtype)
expected = DataFrame(arr.astype(dtype))
diff --git a/pandas/tests/frame/test_arithmetic.py b/pandas/tests/frame/test_arithmetic.py
index be6ed91973e80..d33a7cdcf21c3 100644
--- a/pandas/tests/frame/test_arithmetic.py
+++ b/pandas/tests/frame/test_arithmetic.py
@@ -304,8 +304,7 @@ def test_df_string_comparison(self):
class TestFrameFlexComparisons:
# TODO: test_bool_flex_frame needs a better name
- @pytest.mark.parametrize("op", ["eq", "ne", "gt", "lt", "ge", "le"])
- def test_bool_flex_frame(self, op):
+ def test_bool_flex_frame(self, comparison_op):
data = np.random.default_rng(2).standard_normal((5, 3))
other_data = np.random.default_rng(2).standard_normal((5, 3))
df = DataFrame(data)
@@ -315,8 +314,8 @@ def test_bool_flex_frame(self, op):
# DataFrame
assert df.eq(df).values.all()
assert not df.ne(df).values.any()
- f = getattr(df, op)
- o = getattr(operator, op)
+ f = getattr(df, comparison_op.__name__)
+ o = comparison_op
# No NAs
tm.assert_frame_equal(f(other), o(df, other))
# Unaligned
@@ -459,25 +458,23 @@ def test_flex_comparison_nat(self):
result = df.ne(pd.NaT)
assert result.iloc[0, 0].item() is True
- @pytest.mark.parametrize("opname", ["eq", "ne", "gt", "lt", "ge", "le"])
- def test_df_flex_cmp_constant_return_types(self, opname):
+ def test_df_flex_cmp_constant_return_types(self, comparison_op):
# GH 15077, non-empty DataFrame
df = DataFrame({"x": [1, 2, 3], "y": [1.0, 2.0, 3.0]})
const = 2
- result = getattr(df, opname)(const).dtypes.value_counts()
+ result = getattr(df, comparison_op.__name__)(const).dtypes.value_counts()
tm.assert_series_equal(
result, Series([2], index=[np.dtype(bool)], name="count")
)
- @pytest.mark.parametrize("opname", ["eq", "ne", "gt", "lt", "ge", "le"])
- def test_df_flex_cmp_constant_return_types_empty(self, opname):
+ def test_df_flex_cmp_constant_return_types_empty(self, comparison_op):
# GH 15077 empty DataFrame
df = DataFrame({"x": [1, 2, 3], "y": [1.0, 2.0, 3.0]})
const = 2
empty = df.iloc[:0]
- result = getattr(empty, opname)(const).dtypes.value_counts()
+ result = getattr(empty, comparison_op.__name__)(const).dtypes.value_counts()
tm.assert_series_equal(
result, Series([2], index=[np.dtype(bool)], name="count")
)
@@ -664,11 +661,12 @@ def test_arith_flex_series(self, simple_frame):
tm.assert_frame_equal(df.div(row), df / row)
tm.assert_frame_equal(df.div(col, axis=0), (df.T / col).T)
- @pytest.mark.parametrize("dtype", ["int64", "float64"])
- def test_arith_flex_series_broadcasting(self, dtype):
+ def test_arith_flex_series_broadcasting(self, any_real_numpy_dtype):
# broadcasting issue in GH 7325
- df = DataFrame(np.arange(3 * 2).reshape((3, 2)), dtype=dtype)
+ df = DataFrame(np.arange(3 * 2).reshape((3, 2)), dtype=any_real_numpy_dtype)
expected = DataFrame([[np.nan, np.inf], [1.0, 1.5], [1.0, 1.25]])
+ if any_real_numpy_dtype == "float32":
+ expected = expected.astype(any_real_numpy_dtype)
result = df.div(df[0], axis="index")
tm.assert_frame_equal(result, expected)
diff --git a/pandas/tests/groupby/transform/test_transform.py b/pandas/tests/groupby/transform/test_transform.py
index f7a4233b3ddc9..134a585651d72 100644
--- a/pandas/tests/groupby/transform/test_transform.py
+++ b/pandas/tests/groupby/transform/test_transform.py
@@ -706,7 +706,6 @@ def test_cython_transform_series(op, args, targop):
@pytest.mark.parametrize("op", ["cumprod", "cumsum"])
-@pytest.mark.parametrize("skipna", [False, True])
@pytest.mark.parametrize(
"input, exp",
[
| null | https://api.github.com/repos/pandas-dev/pandas/pulls/56709 | 2024-01-03T02:44:37Z | 2024-01-03T19:16:11Z | 2024-01-03T19:16:11Z | 2024-01-03T19:18:47Z |
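The fixture-reuse diffs above replace per-test `@pytest.mark.parametrize` lists with shared `conftest.py` fixtures. A minimal sketch of the pattern (the fixture name mirrors pandas' real `comparison_op`, but this standalone version is illustrative, not pandas code):

```python
import operator
import pytest

# A params fixture runs every requesting test once per parameter,
# replacing local lists such as
#   @pytest.mark.parametrize("op", ["eq", "ne", "gt", "lt", "ge", "le"])
@pytest.fixture(params=[operator.eq, operator.ne, operator.gt,
                        operator.lt, operator.ge, operator.le])
def comparison_op(request):
    return request.param

# Tests recover the flex-method name from the operator itself, e.g.
# getattr(df, comparison_op.__name__) yields df.eq, df.ne, ...
assert operator.eq.__name__ == "eq"
assert operator.le.__name__ == "le"
```

Yielding the operator function rather than a string lets the same fixture serve both `getattr(df, op.__name__)(other)` and direct `op(df, other)` calls, as in the `test_bool_flex_frame` rewrite.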
TST/CLN: Use more shared fixtures | diff --git a/pandas/tests/arithmetic/test_numeric.py b/pandas/tests/arithmetic/test_numeric.py
index 121bfb78fe5c8..1b8ad1922b9d2 100644
--- a/pandas/tests/arithmetic/test_numeric.py
+++ b/pandas/tests/arithmetic/test_numeric.py
@@ -1116,11 +1116,10 @@ def test_ufunc_compat(self, holder, dtype):
tm.assert_equal(result, expected)
# TODO: add more dtypes
- @pytest.mark.parametrize("holder", [Index, Series])
@pytest.mark.parametrize("dtype", [np.int64, np.uint64, np.float64])
- def test_ufunc_coercions(self, holder, dtype):
- idx = holder([1, 2, 3, 4, 5], dtype=dtype, name="x")
- box = Series if holder is Series else Index
+ def test_ufunc_coercions(self, index_or_series, dtype):
+ idx = index_or_series([1, 2, 3, 4, 5], dtype=dtype, name="x")
+ box = index_or_series
result = np.sqrt(idx)
assert result.dtype == "f8" and isinstance(result, box)
diff --git a/pandas/tests/copy_view/test_constructors.py b/pandas/tests/copy_view/test_constructors.py
index c325e49e8156e..cbd0e6899bfc9 100644
--- a/pandas/tests/copy_view/test_constructors.py
+++ b/pandas/tests/copy_view/test_constructors.py
@@ -283,14 +283,13 @@ def test_dataframe_from_dict_of_series_with_reindex(dtype):
assert np.shares_memory(arr_before, arr_after)
-@pytest.mark.parametrize("cons", [Series, Index])
@pytest.mark.parametrize(
"data, dtype", [([1, 2], None), ([1, 2], "int64"), (["a", "b"], None)]
)
def test_dataframe_from_series_or_index(
- using_copy_on_write, warn_copy_on_write, data, dtype, cons
+ using_copy_on_write, warn_copy_on_write, data, dtype, index_or_series
):
- obj = cons(data, dtype=dtype)
+ obj = index_or_series(data, dtype=dtype)
obj_orig = obj.copy()
df = DataFrame(obj, dtype=dtype)
assert np.shares_memory(get_array(obj), get_array(df, 0))
@@ -303,9 +302,10 @@ def test_dataframe_from_series_or_index(
tm.assert_equal(obj, obj_orig)
-@pytest.mark.parametrize("cons", [Series, Index])
-def test_dataframe_from_series_or_index_different_dtype(using_copy_on_write, cons):
- obj = cons([1, 2], dtype="int64")
+def test_dataframe_from_series_or_index_different_dtype(
+ using_copy_on_write, index_or_series
+):
+ obj = index_or_series([1, 2], dtype="int64")
df = DataFrame(obj, dtype="int32")
assert not np.shares_memory(get_array(obj), get_array(df, 0))
if using_copy_on_write:
diff --git a/pandas/tests/dtypes/test_inference.py b/pandas/tests/dtypes/test_inference.py
index e2ef83c243957..475473218f712 100644
--- a/pandas/tests/dtypes/test_inference.py
+++ b/pandas/tests/dtypes/test_inference.py
@@ -1701,19 +1701,19 @@ def test_interval_mismatched_subtype(self):
arr = np.array([first, flt_interval], dtype=object)
assert lib.infer_dtype(arr, skipna=False) == "interval"
- @pytest.mark.parametrize("klass", [pd.array, Series])
@pytest.mark.parametrize("data", [["a", "b", "c"], ["a", "b", pd.NA]])
- def test_string_dtype(self, data, skipna, klass, nullable_string_dtype):
+ def test_string_dtype(
+ self, data, skipna, index_or_series_or_array, nullable_string_dtype
+ ):
# StringArray
- val = klass(data, dtype=nullable_string_dtype)
+ val = index_or_series_or_array(data, dtype=nullable_string_dtype)
inferred = lib.infer_dtype(val, skipna=skipna)
assert inferred == "string"
- @pytest.mark.parametrize("klass", [pd.array, Series])
@pytest.mark.parametrize("data", [[True, False, True], [True, False, pd.NA]])
- def test_boolean_dtype(self, data, skipna, klass):
+ def test_boolean_dtype(self, data, skipna, index_or_series_or_array):
# BooleanArray
- val = klass(data, dtype="boolean")
+ val = index_or_series_or_array(data, dtype="boolean")
inferred = lib.infer_dtype(val, skipna=skipna)
assert inferred == "boolean"
diff --git a/pandas/tests/extension/decimal/test_decimal.py b/pandas/tests/extension/decimal/test_decimal.py
index 49f29b2194cae..a46663ef606f9 100644
--- a/pandas/tests/extension/decimal/test_decimal.py
+++ b/pandas/tests/extension/decimal/test_decimal.py
@@ -302,8 +302,7 @@ def test_dataframe_constructor_with_dtype():
tm.assert_frame_equal(result, expected)
-@pytest.mark.parametrize("frame", [True, False])
-def test_astype_dispatches(frame):
+def test_astype_dispatches(frame_or_series):
# This is a dtype-specific test that ensures Series[decimal].astype
# gets all the way through to ExtensionArray.astype
# Designing a reliable smoke test that works for arbitrary data types
@@ -312,12 +311,11 @@ def test_astype_dispatches(frame):
ctx = decimal.Context()
ctx.prec = 5
- if frame:
- data = data.to_frame()
+ data = frame_or_series(data)
result = data.astype(DecimalDtype(ctx))
- if frame:
+ if frame_or_series is pd.DataFrame:
result = result["a"]
assert result.dtype.context.prec == ctx.prec
diff --git a/pandas/tests/frame/methods/test_set_index.py b/pandas/tests/frame/methods/test_set_index.py
index 5724f79b82578..024af66ec0844 100644
--- a/pandas/tests/frame/methods/test_set_index.py
+++ b/pandas/tests/frame/methods/test_set_index.py
@@ -577,8 +577,8 @@ def test_set_index_raise_keys(self, frame_of_index_cols, drop, append):
@pytest.mark.parametrize("append", [True, False])
@pytest.mark.parametrize("drop", [True, False])
- @pytest.mark.parametrize("box", [set], ids=["set"])
- def test_set_index_raise_on_type(self, frame_of_index_cols, box, drop, append):
+ def test_set_index_raise_on_type(self, frame_of_index_cols, drop, append):
+ box = set
df = frame_of_index_cols
msg = 'The parameter "keys" may be a column key, .*'
diff --git a/pandas/tests/groupby/aggregate/test_numba.py b/pandas/tests/groupby/aggregate/test_numba.py
index 89404a9bd09a3..964a80f8f3310 100644
--- a/pandas/tests/groupby/aggregate/test_numba.py
+++ b/pandas/tests/groupby/aggregate/test_numba.py
@@ -52,8 +52,7 @@ def incorrect_function(values, index):
@pytest.mark.filterwarnings("ignore")
# Filter warnings when parallel=True and the function can't be parallelized by Numba
@pytest.mark.parametrize("jit", [True, False])
-@pytest.mark.parametrize("pandas_obj", ["Series", "DataFrame"])
-def test_numba_vs_cython(jit, pandas_obj, nogil, parallel, nopython, as_index):
+def test_numba_vs_cython(jit, frame_or_series, nogil, parallel, nopython, as_index):
pytest.importorskip("numba")
def func_numba(values, index):
@@ -70,7 +69,7 @@ def func_numba(values, index):
)
engine_kwargs = {"nogil": nogil, "parallel": parallel, "nopython": nopython}
grouped = data.groupby(0, as_index=as_index)
- if pandas_obj == "Series":
+ if frame_or_series is Series:
grouped = grouped[1]
result = grouped.agg(func_numba, engine="numba", engine_kwargs=engine_kwargs)
@@ -82,8 +81,7 @@ def func_numba(values, index):
@pytest.mark.filterwarnings("ignore")
# Filter warnings when parallel=True and the function can't be parallelized by Numba
@pytest.mark.parametrize("jit", [True, False])
-@pytest.mark.parametrize("pandas_obj", ["Series", "DataFrame"])
-def test_cache(jit, pandas_obj, nogil, parallel, nopython):
+def test_cache(jit, frame_or_series, nogil, parallel, nopython):
# Test that the functions are cached correctly if we switch functions
pytest.importorskip("numba")
@@ -104,7 +102,7 @@ def func_2(values, index):
)
engine_kwargs = {"nogil": nogil, "parallel": parallel, "nopython": nopython}
grouped = data.groupby(0)
- if pandas_obj == "Series":
+ if frame_or_series is Series:
grouped = grouped[1]
result = grouped.agg(func_1, engine="numba", engine_kwargs=engine_kwargs)
diff --git a/pandas/tests/groupby/transform/test_numba.py b/pandas/tests/groupby/transform/test_numba.py
index af11dae0aabfe..b75113d3f4e14 100644
--- a/pandas/tests/groupby/transform/test_numba.py
+++ b/pandas/tests/groupby/transform/test_numba.py
@@ -50,8 +50,7 @@ def incorrect_function(values, index):
@pytest.mark.filterwarnings("ignore")
# Filter warnings when parallel=True and the function can't be parallelized by Numba
@pytest.mark.parametrize("jit", [True, False])
-@pytest.mark.parametrize("pandas_obj", ["Series", "DataFrame"])
-def test_numba_vs_cython(jit, pandas_obj, nogil, parallel, nopython, as_index):
+def test_numba_vs_cython(jit, frame_or_series, nogil, parallel, nopython, as_index):
pytest.importorskip("numba")
def func(values, index):
@@ -68,7 +67,7 @@ def func(values, index):
)
engine_kwargs = {"nogil": nogil, "parallel": parallel, "nopython": nopython}
grouped = data.groupby(0, as_index=as_index)
- if pandas_obj == "Series":
+ if frame_or_series is Series:
grouped = grouped[1]
result = grouped.transform(func, engine="numba", engine_kwargs=engine_kwargs)
@@ -80,8 +79,7 @@ def func(values, index):
@pytest.mark.filterwarnings("ignore")
# Filter warnings when parallel=True and the function can't be parallelized by Numba
@pytest.mark.parametrize("jit", [True, False])
-@pytest.mark.parametrize("pandas_obj", ["Series", "DataFrame"])
-def test_cache(jit, pandas_obj, nogil, parallel, nopython):
+def test_cache(jit, frame_or_series, nogil, parallel, nopython):
# Test that the functions are cached correctly if we switch functions
pytest.importorskip("numba")
@@ -102,7 +100,7 @@ def func_2(values, index):
)
engine_kwargs = {"nogil": nogil, "parallel": parallel, "nopython": nopython}
grouped = data.groupby(0)
- if pandas_obj == "Series":
+ if frame_or_series is Series:
grouped = grouped[1]
result = grouped.transform(func_1, engine="numba", engine_kwargs=engine_kwargs)
diff --git a/pandas/tests/indexing/test_iloc.py b/pandas/tests/indexing/test_iloc.py
index e0898a636474c..f36ddff223a9a 100644
--- a/pandas/tests/indexing/test_iloc.py
+++ b/pandas/tests/indexing/test_iloc.py
@@ -106,8 +106,9 @@ def test_iloc_setitem_fullcol_categorical(self, indexer, key):
expected = DataFrame({0: Series(cat.astype(object), dtype=object), 1: range(3)})
tm.assert_frame_equal(df, expected)
- @pytest.mark.parametrize("box", [array, Series])
- def test_iloc_setitem_ea_inplace(self, frame_or_series, box, using_copy_on_write):
+ def test_iloc_setitem_ea_inplace(
+ self, frame_or_series, index_or_series_or_array, using_copy_on_write
+ ):
# GH#38952 Case with not setting a full column
# IntegerArray without NAs
arr = array([1, 2, 3, 4])
@@ -119,9 +120,9 @@ def test_iloc_setitem_ea_inplace(self, frame_or_series, box, using_copy_on_write
values = obj._mgr.arrays[0]
if frame_or_series is Series:
- obj.iloc[:2] = box(arr[2:])
+ obj.iloc[:2] = index_or_series_or_array(arr[2:])
else:
- obj.iloc[:2, 0] = box(arr[2:])
+ obj.iloc[:2, 0] = index_or_series_or_array(arr[2:])
expected = frame_or_series(np.array([3, 4, 3, 4], dtype="i8"))
tm.assert_equal(obj, expected)
diff --git a/pandas/tests/reductions/test_stat_reductions.py b/pandas/tests/reductions/test_stat_reductions.py
index 8fbb78737474c..a6aaeba1dc3a8 100644
--- a/pandas/tests/reductions/test_stat_reductions.py
+++ b/pandas/tests/reductions/test_stat_reductions.py
@@ -16,8 +16,7 @@
class TestDatetimeLikeStatReductions:
- @pytest.mark.parametrize("box", [Series, pd.Index, pd.array])
- def test_dt64_mean(self, tz_naive_fixture, box):
+ def test_dt64_mean(self, tz_naive_fixture, index_or_series_or_array):
tz = tz_naive_fixture
dti = date_range("2001-01-01", periods=11, tz=tz)
@@ -25,20 +24,19 @@ def test_dt64_mean(self, tz_naive_fixture, box):
dti = dti.take([4, 1, 3, 10, 9, 7, 8, 5, 0, 2, 6])
dtarr = dti._data
- obj = box(dtarr)
+ obj = index_or_series_or_array(dtarr)
assert obj.mean() == pd.Timestamp("2001-01-06", tz=tz)
assert obj.mean(skipna=False) == pd.Timestamp("2001-01-06", tz=tz)
# dtarr[-2] will be the first date 2001-01-1
dtarr[-2] = pd.NaT
- obj = box(dtarr)
+ obj = index_or_series_or_array(dtarr)
assert obj.mean() == pd.Timestamp("2001-01-06 07:12:00", tz=tz)
assert obj.mean(skipna=False) is pd.NaT
- @pytest.mark.parametrize("box", [Series, pd.Index, pd.array])
@pytest.mark.parametrize("freq", ["s", "h", "D", "W", "B"])
- def test_period_mean(self, box, freq):
+ def test_period_mean(self, index_or_series_or_array, freq):
# GH#24757
dti = date_range("2001-01-01", periods=11)
# shuffle so that we are not just working with monotone-increasing
@@ -48,7 +46,7 @@ def test_period_mean(self, box, freq):
msg = r"PeriodDtype\[B\] is deprecated"
with tm.assert_produces_warning(warn, match=msg):
parr = dti._data.to_period(freq)
- obj = box(parr)
+ obj = index_or_series_or_array(parr)
with pytest.raises(TypeError, match="ambiguous"):
obj.mean()
with pytest.raises(TypeError, match="ambiguous"):
@@ -62,13 +60,12 @@ def test_period_mean(self, box, freq):
with pytest.raises(TypeError, match="ambiguous"):
obj.mean(skipna=True)
- @pytest.mark.parametrize("box", [Series, pd.Index, pd.array])
- def test_td64_mean(self, box):
+ def test_td64_mean(self, index_or_series_or_array):
m8values = np.array([0, 3, -2, -7, 1, 2, -1, 3, 5, -2, 4], "m8[D]")
tdi = pd.TimedeltaIndex(m8values).as_unit("ns")
tdarr = tdi._data
- obj = box(tdarr, copy=False)
+ obj = index_or_series_or_array(tdarr, copy=False)
result = obj.mean()
expected = np.array(tdarr).mean()
diff --git a/pandas/tests/resample/test_base.py b/pandas/tests/resample/test_base.py
index f20518c7be98a..6ba2ac0104e75 100644
--- a/pandas/tests/resample/test_base.py
+++ b/pandas/tests/resample/test_base.py
@@ -30,9 +30,8 @@
date_range(datetime(2005, 1, 1), datetime(2005, 1, 10), freq="D"),
],
)
-@pytest.mark.parametrize("klass", [DataFrame, Series])
-def test_asfreq(klass, index, freq):
- obj = klass(range(len(index)), index=index)
+def test_asfreq(frame_or_series, index, freq):
+ obj = frame_or_series(range(len(index)), index=index)
idx_range = date_range if isinstance(index, DatetimeIndex) else timedelta_range
result = obj.resample(freq).asfreq()
diff --git a/pandas/tests/resample/test_period_index.py b/pandas/tests/resample/test_period_index.py
index cf77238a553d0..3be11e0a5ad2f 100644
--- a/pandas/tests/resample/test_period_index.py
+++ b/pandas/tests/resample/test_period_index.py
@@ -59,12 +59,11 @@ def _simple_period_range_series(start, end, freq="D"):
class TestPeriodIndex:
@pytest.mark.parametrize("freq", ["2D", "1h", "2h"])
@pytest.mark.parametrize("kind", ["period", None, "timestamp"])
- @pytest.mark.parametrize("klass", [DataFrame, Series])
- def test_asfreq(self, klass, freq, kind):
+ def test_asfreq(self, frame_or_series, freq, kind):
# GH 12884, 15944
# make sure .asfreq() returns PeriodIndex (except kind='timestamp')
- obj = klass(range(5), index=period_range("2020-01-01", periods=5))
+ obj = frame_or_series(range(5), index=period_range("2020-01-01", periods=5))
if kind == "timestamp":
expected = obj.to_timestamp().resample(freq).asfreq()
else:
@@ -1007,12 +1006,11 @@ def test_resample_t_l_deprecated(self):
offsets.BusinessHour(2),
],
)
- @pytest.mark.parametrize("klass", [DataFrame, Series])
- def test_asfreq_invalid_period_freq(self, offset, klass):
+ def test_asfreq_invalid_period_freq(self, offset, frame_or_series):
# GH#9586
msg = f"Invalid offset: '{offset.base}' for converting time series "
- obj = klass(range(5), index=period_range("2020-01-01", periods=5))
+ obj = frame_or_series(range(5), index=period_range("2020-01-01", periods=5))
with pytest.raises(ValueError, match=msg):
obj.asfreq(freq=offset)
@@ -1027,12 +1025,11 @@ def test_asfreq_invalid_period_freq(self, offset, klass):
("2Y-MAR", "2YE-MAR"),
],
)
-@pytest.mark.parametrize("klass", [DataFrame, Series])
-def test_resample_frequency_ME_QE_YE_error_message(klass, freq, freq_depr):
+def test_resample_frequency_ME_QE_YE_error_message(frame_or_series, freq, freq_depr):
# GH#9586
msg = f"for Period, please use '{freq[1:]}' instead of '{freq_depr[1:]}'"
- obj = klass(range(5), index=period_range("2020-01-01", periods=5))
+ obj = frame_or_series(range(5), index=period_range("2020-01-01", periods=5))
with pytest.raises(ValueError, match=msg):
obj.resample(freq_depr)
@@ -1057,11 +1054,10 @@ def test_corner_cases_period(simple_period_range_series):
"2BYE-MAR",
],
)
-@pytest.mark.parametrize("klass", [DataFrame, Series])
-def test_resample_frequency_invalid_freq(klass, freq_depr):
+def test_resample_frequency_invalid_freq(frame_or_series, freq_depr):
# GH#9586
msg = f"Invalid frequency: {freq_depr[1:]}"
- obj = klass(range(5), index=period_range("2020-01-01", periods=5))
+ obj = frame_or_series(range(5), index=period_range("2020-01-01", periods=5))
with pytest.raises(ValueError, match=msg):
obj.resample(freq_depr)
diff --git a/pandas/tests/reshape/concat/test_concat.py b/pandas/tests/reshape/concat/test_concat.py
index 5ec95cbf24b39..d8bc7974b4139 100644
--- a/pandas/tests/reshape/concat/test_concat.py
+++ b/pandas/tests/reshape/concat/test_concat.py
@@ -545,14 +545,13 @@ def test_concat_no_unnecessary_upcast(float_numpy_dtype, frame_or_series):
assert x.values.dtype == dt
-@pytest.mark.parametrize("pdt", [Series, DataFrame])
-def test_concat_will_upcast(pdt, any_signed_int_numpy_dtype):
+def test_concat_will_upcast(frame_or_series, any_signed_int_numpy_dtype):
dt = any_signed_int_numpy_dtype
- dims = pdt().ndim
+ dims = frame_or_series().ndim
dfs = [
- pdt(np.array([1], dtype=dt, ndmin=dims)),
- pdt(np.array([np.nan], ndmin=dims)),
- pdt(np.array([5], dtype=dt, ndmin=dims)),
+ frame_or_series(np.array([1], dtype=dt, ndmin=dims)),
+ frame_or_series(np.array([np.nan], ndmin=dims)),
+ frame_or_series(np.array([5], dtype=dt, ndmin=dims)),
]
x = concat(dfs)
assert x.values.dtype == "float64"
diff --git a/pandas/tests/series/methods/test_view.py b/pandas/tests/series/methods/test_view.py
index 7e0ac372cd443..9d1478cd9f689 100644
--- a/pandas/tests/series/methods/test_view.py
+++ b/pandas/tests/series/methods/test_view.py
@@ -2,9 +2,7 @@
import pytest
from pandas import (
- Index,
Series,
- array,
date_range,
)
import pandas._testing as tm
@@ -47,11 +45,10 @@ def test_view_tz(self):
@pytest.mark.parametrize(
"second", ["m8[ns]", "M8[ns]", "M8[ns, US/Central]", "period[D]"]
)
- @pytest.mark.parametrize("box", [Series, Index, array])
- def test_view_between_datetimelike(self, first, second, box):
+ def test_view_between_datetimelike(self, first, second, index_or_series_or_array):
dti = date_range("2016-01-01", periods=3)
- orig = box(dti)
+ orig = index_or_series_or_array(dti)
obj = orig.view(first)
assert obj.dtype == first
tm.assert_numpy_array_equal(np.asarray(obj.view("i8")), dti.asi8)
diff --git a/pandas/tests/test_expressions.py b/pandas/tests/test_expressions.py
index dfec99f0786eb..71994d186163e 100644
--- a/pandas/tests/test_expressions.py
+++ b/pandas/tests/test_expressions.py
@@ -6,11 +6,7 @@
from pandas import option_context
import pandas._testing as tm
-from pandas.core.api import (
- DataFrame,
- Index,
- Series,
-)
+from pandas.core.api import DataFrame
from pandas.core.computation import expressions as expr
@@ -433,16 +429,15 @@ def test_frame_series_axis(self, axis, arith, _frame, monkeypatch):
"__rfloordiv__",
],
)
- @pytest.mark.parametrize("box", [DataFrame, Series, Index])
@pytest.mark.parametrize("scalar", [-5, 5])
def test_python_semantics_with_numexpr_installed(
- self, op, box, scalar, monkeypatch
+ self, op, box_with_array, scalar, monkeypatch
):
# https://github.com/pandas-dev/pandas/issues/36047
with monkeypatch.context() as m:
m.setattr(expr, "_MIN_ELEMENTS", 0)
data = np.arange(-50, 50)
- obj = box(data)
+ obj = box_with_array(data)
method = getattr(obj, op)
result = method(scalar)
@@ -454,7 +449,7 @@ def test_python_semantics_with_numexpr_installed(
# compare result element-wise with Python
for i, elem in enumerate(data):
- if box == DataFrame:
+ if box_with_array == DataFrame:
scalar_result = result.iloc[i, 0]
else:
scalar_result = result[i]
diff --git a/pandas/tests/tools/test_to_datetime.py b/pandas/tests/tools/test_to_datetime.py
index cb94427ae8961..4a012f34ddc3b 100644
--- a/pandas/tests/tools/test_to_datetime.py
+++ b/pandas/tests/tools/test_to_datetime.py
@@ -988,14 +988,13 @@ def test_to_datetime_dtarr(self, tz):
# Doesn't work on Windows since tzpath not set correctly
@td.skip_if_windows
- @pytest.mark.parametrize("arg_class", [Series, Index])
@pytest.mark.parametrize("utc", [True, False])
@pytest.mark.parametrize("tz", [None, "US/Central"])
- def test_to_datetime_arrow(self, tz, utc, arg_class):
+ def test_to_datetime_arrow(self, tz, utc, index_or_series):
pa = pytest.importorskip("pyarrow")
dti = date_range("1965-04-03", periods=19, freq="2W", tz=tz)
- dti = arg_class(dti)
+ dti = index_or_series(dti)
dti_arrow = dti.astype(pd.ArrowDtype(pa.timestamp(unit="ns", tz=tz)))
@@ -1003,11 +1002,11 @@ def test_to_datetime_arrow(self, tz, utc, arg_class):
expected = to_datetime(dti, utc=utc).astype(
pd.ArrowDtype(pa.timestamp(unit="ns", tz=tz if not utc else "UTC"))
)
- if not utc and arg_class is not Series:
+ if not utc and index_or_series is not Series:
# Doesn't hold for utc=True, since that will astype
# to_datetime also returns a new object for series
assert result is dti_arrow
- if arg_class is Series:
+ if index_or_series is Series:
tm.assert_series_equal(result, expected)
else:
tm.assert_index_equal(result, expected)
diff --git a/pandas/tests/util/test_assert_frame_equal.py b/pandas/tests/util/test_assert_frame_equal.py
index 79132591b15b3..e244615ea4629 100644
--- a/pandas/tests/util/test_assert_frame_equal.py
+++ b/pandas/tests/util/test_assert_frame_equal.py
@@ -10,11 +10,6 @@ def by_blocks_fixture(request):
return request.param
-@pytest.fixture(params=["DataFrame", "Series"])
-def obj_fixture(request):
- return request.param
-
-
def _assert_frame_equal_both(a, b, **kwargs):
"""
Check that two DataFrame equal.
@@ -35,16 +30,20 @@ def _assert_frame_equal_both(a, b, **kwargs):
@pytest.mark.parametrize("check_like", [True, False])
-def test_frame_equal_row_order_mismatch(check_like, obj_fixture):
+def test_frame_equal_row_order_mismatch(check_like, frame_or_series):
df1 = DataFrame({"A": [1, 2, 3], "B": [4, 5, 6]}, index=["a", "b", "c"])
df2 = DataFrame({"A": [3, 2, 1], "B": [6, 5, 4]}, index=["c", "b", "a"])
if not check_like: # Do not ignore row-column orderings.
- msg = f"{obj_fixture}.index are different"
+ msg = f"{frame_or_series.__name__}.index are different"
with pytest.raises(AssertionError, match=msg):
- tm.assert_frame_equal(df1, df2, check_like=check_like, obj=obj_fixture)
+ tm.assert_frame_equal(
+ df1, df2, check_like=check_like, obj=frame_or_series.__name__
+ )
else:
- _assert_frame_equal_both(df1, df2, check_like=check_like, obj=obj_fixture)
+ _assert_frame_equal_both(
+ df1, df2, check_like=check_like, obj=frame_or_series.__name__
+ )
@pytest.mark.parametrize(
@@ -54,11 +53,11 @@ def test_frame_equal_row_order_mismatch(check_like, obj_fixture):
(DataFrame({"A": [1, 2, 3], "B": [4, 5, 6]}), DataFrame({"A": [1, 2, 3]})),
],
)
-def test_frame_equal_shape_mismatch(df1, df2, obj_fixture):
- msg = f"{obj_fixture} are different"
+def test_frame_equal_shape_mismatch(df1, df2, frame_or_series):
+ msg = f"{frame_or_series.__name__} are different"
with pytest.raises(AssertionError, match=msg):
- tm.assert_frame_equal(df1, df2, obj=obj_fixture)
+ tm.assert_frame_equal(df1, df2, obj=frame_or_series.__name__)
@pytest.mark.parametrize(
@@ -109,14 +108,14 @@ def test_empty_dtypes(check_dtype):
@pytest.mark.parametrize("check_like", [True, False])
-def test_frame_equal_index_mismatch(check_like, obj_fixture, using_infer_string):
+def test_frame_equal_index_mismatch(check_like, frame_or_series, using_infer_string):
if using_infer_string:
dtype = "string"
else:
dtype = "object"
- msg = f"""{obj_fixture}\\.index are different
+ msg = f"""{frame_or_series.__name__}\\.index are different
-{obj_fixture}\\.index values are different \\(33\\.33333 %\\)
+{frame_or_series.__name__}\\.index values are different \\(33\\.33333 %\\)
\\[left\\]: Index\\(\\['a', 'b', 'c'\\], dtype='{dtype}'\\)
\\[right\\]: Index\\(\\['a', 'b', 'd'\\], dtype='{dtype}'\\)
At positional index 2, first diff: c != d"""
@@ -125,18 +124,20 @@ def test_frame_equal_index_mismatch(check_like, obj_fixture, using_infer_string)
df2 = DataFrame({"A": [1, 2, 3], "B": [4, 5, 6]}, index=["a", "b", "d"])
with pytest.raises(AssertionError, match=msg):
- tm.assert_frame_equal(df1, df2, check_like=check_like, obj=obj_fixture)
+ tm.assert_frame_equal(
+ df1, df2, check_like=check_like, obj=frame_or_series.__name__
+ )
@pytest.mark.parametrize("check_like", [True, False])
-def test_frame_equal_columns_mismatch(check_like, obj_fixture, using_infer_string):
+def test_frame_equal_columns_mismatch(check_like, frame_or_series, using_infer_string):
if using_infer_string:
dtype = "string"
else:
dtype = "object"
- msg = f"""{obj_fixture}\\.columns are different
+ msg = f"""{frame_or_series.__name__}\\.columns are different
-{obj_fixture}\\.columns values are different \\(50\\.0 %\\)
+{frame_or_series.__name__}\\.columns values are different \\(50\\.0 %\\)
\\[left\\]: Index\\(\\['A', 'B'\\], dtype='{dtype}'\\)
\\[right\\]: Index\\(\\['A', 'b'\\], dtype='{dtype}'\\)"""
@@ -144,11 +145,13 @@ def test_frame_equal_columns_mismatch(check_like, obj_fixture, using_infer_strin
df2 = DataFrame({"A": [1, 2, 3], "b": [4, 5, 6]}, index=["a", "b", "c"])
with pytest.raises(AssertionError, match=msg):
- tm.assert_frame_equal(df1, df2, check_like=check_like, obj=obj_fixture)
+ tm.assert_frame_equal(
+ df1, df2, check_like=check_like, obj=frame_or_series.__name__
+ )
-def test_frame_equal_block_mismatch(by_blocks_fixture, obj_fixture):
- obj = obj_fixture
+def test_frame_equal_block_mismatch(by_blocks_fixture, frame_or_series):
+ obj = frame_or_series.__name__
msg = f"""{obj}\\.iloc\\[:, 1\\] \\(column name="B"\\) are different
{obj}\\.iloc\\[:, 1\\] \\(column name="B"\\) values are different \\(33\\.33333 %\\)
@@ -160,7 +163,7 @@ def test_frame_equal_block_mismatch(by_blocks_fixture, obj_fixture):
df2 = DataFrame({"A": [1, 2, 3], "B": [4, 5, 7]})
with pytest.raises(AssertionError, match=msg):
- tm.assert_frame_equal(df1, df2, by_blocks=by_blocks_fixture, obj=obj_fixture)
+ tm.assert_frame_equal(df1, df2, by_blocks=by_blocks_fixture, obj=obj)
@pytest.mark.parametrize(
@@ -188,14 +191,16 @@ def test_frame_equal_block_mismatch(by_blocks_fixture, obj_fixture):
),
],
)
-def test_frame_equal_unicode(df1, df2, msg, by_blocks_fixture, obj_fixture):
+def test_frame_equal_unicode(df1, df2, msg, by_blocks_fixture, frame_or_series):
# see gh-20503
#
# Test ensures that `tm.assert_frame_equals` raises the right exception
# when comparing DataFrames containing differing unicode objects.
- msg = msg.format(obj=obj_fixture)
+ msg = msg.format(obj=frame_or_series.__name__)
with pytest.raises(AssertionError, match=msg):
- tm.assert_frame_equal(df1, df2, by_blocks=by_blocks_fixture, obj=obj_fixture)
+ tm.assert_frame_equal(
+ df1, df2, by_blocks=by_blocks_fixture, obj=frame_or_series.__name__
+ )
def test_assert_frame_equal_extension_dtype_mismatch():
| null | https://api.github.com/repos/pandas-dev/pandas/pulls/56708 | 2024-01-03T01:34:01Z | 2024-01-03T19:12:25Z | 2024-01-03T19:12:25Z | 2024-01-03T19:18:33Z |
STY: Use ruff instead of black for formatting | diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml
index 4b02ad7cf886f..6033bda99e8c8 100644
--- a/.pre-commit-config.yaml
+++ b/.pre-commit-config.yaml
@@ -18,11 +18,6 @@ ci:
# manual stage hooks
skip: [pylint, pyright, mypy]
repos:
-- repo: https://github.com/hauntsaninja/black-pre-commit-mirror
- # black compiled with mypyc
- rev: 23.11.0
- hooks:
- - id: black
- repo: https://github.com/astral-sh/ruff-pre-commit
rev: v0.1.6
hooks:
@@ -35,6 +30,9 @@ repos:
files: ^pandas
exclude: ^pandas/tests
args: [--select, "ANN001,ANN2", --fix-only, --exit-non-zero-on-fix]
+ - id: ruff-format
+ # TODO: "." not needed in ruff 0.1.8
+ args: ["."]
- repo: https://github.com/jendrikseipp/vulture
rev: 'v2.10'
hooks:
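For reference, the `.pre-commit-config.yaml` hunk above amounts to replacing the mirrored black hook with ruff's own formatter hook. A minimal sketch of the resulting hooks section, assembled from the lines in the hunk (the `rev` pin is whatever the repo used at the time and will drift):

```yaml
repos:
- repo: https://github.com/astral-sh/ruff-pre-commit
  rev: v0.1.6
  hooks:
  - id: ruff
  - id: ruff-format
    # TODO: "." not needed in ruff 0.1.8
    args: ["."]
```

Running `pre-commit run ruff-format --all-files` then applies the same formatting the CI check enforces.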
diff --git a/asv_bench/benchmarks/indexing.py b/asv_bench/benchmarks/indexing.py
index 9ad1f5b31016d..86da26bead64d 100644
--- a/asv_bench/benchmarks/indexing.py
+++ b/asv_bench/benchmarks/indexing.py
@@ -84,9 +84,7 @@ def time_loc_slice(self, index, index_structure):
class NumericMaskedIndexing:
monotonic_list = list(range(10**6))
- non_monotonic_list = (
- list(range(50)) + [54, 53, 52, 51] + list(range(55, 10**6 - 1))
- )
+ non_monotonic_list = list(range(50)) + [54, 53, 52, 51] + list(range(55, 10**6 - 1))
params = [
("Int64", "UInt64", "Float64"),
diff --git a/asv_bench/benchmarks/io/style.py b/asv_bench/benchmarks/io/style.py
index af9eef337e78e..24fd8a0d20aba 100644
--- a/asv_bench/benchmarks/io/style.py
+++ b/asv_bench/benchmarks/io/style.py
@@ -76,7 +76,8 @@ def _style_format(self):
# apply a formatting function
# subset is flexible but hinders vectorised solutions
self.st = self.df.style.format(
- "{:,.3f}", subset=IndexSlice["row_1":f"row_{ir}", "float_1":f"float_{ic}"]
+ "{:,.3f}",
+ subset=IndexSlice["row_1" : f"row_{ir}", "float_1" : f"float_{ic}"],
)
def _style_apply_format_hide(self):
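The `asv_bench/benchmarks/io/style.py` hunk above shows one place where ruff-format diverges visibly from black: following PEP 8, it treats the slice colon like a binary operator, so simple bounds stay tight while expression bounds get a space on each side (`"row_1" : f"row_{ir}"`). A tiny standalone illustration of the same rule (the names here are made up):

```python
# ruff-format's slice-spacing rule, per PEP 8: the colon in a slice acts
# like a binary operator and gets equal spacing on both sides when the
# bounds are non-trivial expressions.
data = list(range(10))
ir = 3

# simple names or literals as bounds: no spaces around the colon
simple = data[1:ir]

# expressions as bounds: formatted as `data[ir + 1 : ir * 2]`
complex_bounds = data[ir + 1 : ir * 2]

print(simple, complex_bounds)
```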
diff --git a/doc/source/development/contributing_codebase.rst b/doc/source/development/contributing_codebase.rst
index be8249cd3a287..7e0b9c3200d3b 100644
--- a/doc/source/development/contributing_codebase.rst
+++ b/doc/source/development/contributing_codebase.rst
@@ -38,7 +38,7 @@ Pre-commit
----------
Additionally, :ref:`Continuous Integration <contributing.ci>` will run code formatting checks
-like ``black``, ``ruff``,
+like ``ruff``,
``isort``, and ``clang-format`` and more using `pre-commit hooks <https://pre-commit.com/>`_.
Any warnings from these checks will cause the :ref:`Continuous Integration <contributing.ci>` to fail; therefore,
it is helpful to run the check yourself before submitting code. This
diff --git a/pandas/_libs/hashtable.pyi b/pandas/_libs/hashtable.pyi
index 3bb957812f0ed..3725bfa3362d9 100644
--- a/pandas/_libs/hashtable.pyi
+++ b/pandas/_libs/hashtable.pyi
@@ -196,7 +196,7 @@ class HashTable:
*,
return_inverse: Literal[True],
mask: None = ...,
- ) -> tuple[np.ndarray, npt.NDArray[np.intp],]: ... # np.ndarray[subclass-specific]
+ ) -> tuple[np.ndarray, npt.NDArray[np.intp]]: ... # np.ndarray[subclass-specific]
@overload
def unique(
self,
@@ -204,7 +204,10 @@ class HashTable:
*,
return_inverse: Literal[False] = ...,
mask: npt.NDArray[np.bool_],
- ) -> tuple[np.ndarray, npt.NDArray[np.bool_],]: ... # np.ndarray[subclass-specific]
+ ) -> tuple[
+ np.ndarray,
+ npt.NDArray[np.bool_],
+ ]: ... # np.ndarray[subclass-specific]
def factorize(
self,
values: np.ndarray, # np.ndarray[subclass-specific]
diff --git a/pandas/_libs/lib.pyi b/pandas/_libs/lib.pyi
index b9fd970e68f5b..32ecd264262d6 100644
--- a/pandas/_libs/lib.pyi
+++ b/pandas/_libs/lib.pyi
@@ -179,7 +179,8 @@ def indices_fast(
sorted_labels: list[npt.NDArray[np.int64]],
) -> dict[Hashable, npt.NDArray[np.intp]]: ...
def generate_slices(
- labels: np.ndarray, ngroups: int # const intp_t[:]
+ labels: np.ndarray,
+ ngroups: int, # const intp_t[:]
) -> tuple[npt.NDArray[np.int64], npt.NDArray[np.int64]]: ...
def count_level_2d(
mask: np.ndarray, # ndarray[uint8_t, ndim=2, cast=True],
@@ -209,5 +210,6 @@ def get_reverse_indexer(
def is_bool_list(obj: list) -> bool: ...
def dtypes_all_equal(types: list[DtypeObj]) -> bool: ...
def is_range_indexer(
- left: np.ndarray, n: int # np.ndarray[np.int64, ndim=1]
+ left: np.ndarray,
+ n: int, # np.ndarray[np.int64, ndim=1]
) -> bool: ...
diff --git a/pandas/_testing/_hypothesis.py b/pandas/_testing/_hypothesis.py
index 084ca9c306d19..f9f653f636c4c 100644
--- a/pandas/_testing/_hypothesis.py
+++ b/pandas/_testing/_hypothesis.py
@@ -54,12 +54,8 @@
DATETIME_NO_TZ = st.datetimes()
DATETIME_JAN_1_1900_OPTIONAL_TZ = st.datetimes(
- min_value=pd.Timestamp(
- 1900, 1, 1
- ).to_pydatetime(), # pyright: ignore[reportGeneralTypeIssues]
- max_value=pd.Timestamp(
- 1900, 1, 1
- ).to_pydatetime(), # pyright: ignore[reportGeneralTypeIssues]
+ min_value=pd.Timestamp(1900, 1, 1).to_pydatetime(), # pyright: ignore[reportGeneralTypeIssues]
+ max_value=pd.Timestamp(1900, 1, 1).to_pydatetime(), # pyright: ignore[reportGeneralTypeIssues]
timezones=st.one_of(st.none(), dateutil_timezones(), pytz_timezones()),
)
diff --git a/pandas/core/apply.py b/pandas/core/apply.py
index 784e11415ade6..5b2293aeebbe7 100644
--- a/pandas/core/apply.py
+++ b/pandas/core/apply.py
@@ -1010,7 +1010,8 @@ def wrapper(*args, **kwargs):
# [..., Any] | str] | dict[Hashable,Callable[..., Any] | str |
# list[Callable[..., Any] | str]]"; expected "Hashable"
nb_looper = generate_apply_looper(
- self.func, **engine_kwargs # type: ignore[arg-type]
+ self.func, # type: ignore[arg-type]
+ **engine_kwargs,
)
result = nb_looper(self.values, self.axis)
# If we made the result 2-D, squeeze it back to 1-D
diff --git a/pandas/core/arrays/_arrow_string_mixins.py b/pandas/core/arrays/_arrow_string_mixins.py
index bfff19a123a08..06c74290bd82e 100644
--- a/pandas/core/arrays/_arrow_string_mixins.py
+++ b/pandas/core/arrays/_arrow_string_mixins.py
@@ -58,7 +58,8 @@ def _str_get(self, i: int) -> Self:
self._pa_array, start=start, stop=stop, step=step
)
null_value = pa.scalar(
- None, type=self._pa_array.type # type: ignore[attr-defined]
+ None,
+ type=self._pa_array.type, # type: ignore[attr-defined]
)
result = pc.if_else(not_out_of_bounds, selected, null_value)
return type(self)(result)
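Several hunks like the one above move a trailing `# type: ignore[...]` onto the line of the specific argument rather than the call's last line. The reason is mechanical: mypy matches suppression comments to the physical line they end, so when a formatter reflows a call, the comment has to travel with the offending argument. A standalone sketch of the placement (the suppression here is purely illustrative — plain Python ignores comments, so this runs as-is):

```python
# mypy associates `# type: ignore[...]` with the physical line it sits on,
# so after ruff-format reflows a call, the comment must stay on the line
# of the argument that triggers the error.
def combine(a: int, b: int) -> int:
    return a + b

result = combine(
    1,  # type: ignore[arg-type]  # hypothetical suppression, shown for placement
    2,
)
print(result)
```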
diff --git a/pandas/core/arrays/_mixins.py b/pandas/core/arrays/_mixins.py
index 0da121c36644a..560845d375b56 100644
--- a/pandas/core/arrays/_mixins.py
+++ b/pandas/core/arrays/_mixins.py
@@ -347,7 +347,9 @@ def fillna(
# error: Argument 2 to "check_value_size" has incompatible type
# "ExtensionArray"; expected "ndarray"
value = missing.check_value_size(
- value, mask, len(self) # type: ignore[arg-type]
+ value,
+ mask, # type: ignore[arg-type]
+ len(self),
)
if mask.any():
diff --git a/pandas/core/arrays/arrow/array.py b/pandas/core/arrays/arrow/array.py
index f90a4691ec263..1d0f5c60de64f 100644
--- a/pandas/core/arrays/arrow/array.py
+++ b/pandas/core/arrays/arrow/array.py
@@ -2855,7 +2855,8 @@ def _dt_tz_localize(
"shift_backward": "earliest",
"shift_forward": "latest",
}.get(
- nonexistent, None # type: ignore[arg-type]
+ nonexistent, # type: ignore[arg-type]
+ None,
)
if nonexistent_pa is None:
raise NotImplementedError(f"{nonexistent=} is not supported")
diff --git a/pandas/core/arrays/base.py b/pandas/core/arrays/base.py
index 5f3d66d17a9bc..58264f2aef6f3 100644
--- a/pandas/core/arrays/base.py
+++ b/pandas/core/arrays/base.py
@@ -1121,7 +1121,9 @@ def fillna(
# error: Argument 2 to "check_value_size" has incompatible type
# "ExtensionArray"; expected "ndarray"
value = missing.check_value_size(
- value, mask, len(self) # type: ignore[arg-type]
+ value,
+ mask, # type: ignore[arg-type]
+ len(self),
)
if mask.any():
@@ -1490,9 +1492,7 @@ def factorize(
uniques_ea = self._from_factorized(uniques, self)
return codes, uniques_ea
- _extension_array_shared_docs[
- "repeat"
- ] = """
+ _extension_array_shared_docs["repeat"] = """
Repeat elements of a %(klass)s.
Returns a new %(klass)s where each element of the current %(klass)s
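The `_shared_docs["repeat"] = """` change above (and the similar ones later in `interval.py`, `base.py`, `frame.py`, and `generic.py`) is pure formatting: black used to split the dict subscript across three lines, while ruff-format keeps the key on the assignment line. A self-contained sketch of the underlying shared-docstring pattern, with illustrative names (pandas fills these templates with %-style substitution before attaching them to methods):

```python
# Module-level registry of docstring templates, keyed by method name.
# ruff-format writes the subscripted assignment on a single line.
_shared_docs: dict[str, str] = {}

_shared_docs["repeat"] = """
Repeat elements of a %(klass)s.
"""


class Demo:
    pass


# The template is rendered per class before being used as a docstring.
doc = _shared_docs["repeat"] % {"klass": Demo.__name__}
print(doc.strip())
```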
diff --git a/pandas/core/arrays/datetimelike.py b/pandas/core/arrays/datetimelike.py
index 4668db8d75cd7..44049f73b792b 100644
--- a/pandas/core/arrays/datetimelike.py
+++ b/pandas/core/arrays/datetimelike.py
@@ -1236,7 +1236,9 @@ def _add_timedeltalike(self, other: Timedelta | TimedeltaArray) -> Self:
# error: Unexpected keyword argument "freq" for "_simple_new" of "NDArrayBacked"
return type(self)._simple_new(
- res_values, dtype=self.dtype, freq=new_freq # type: ignore[call-arg]
+ res_values,
+ dtype=self.dtype,
+ freq=new_freq, # type: ignore[call-arg]
)
@final
@@ -1256,7 +1258,9 @@ def _add_nat(self) -> Self:
result = result.view(self._ndarray.dtype) # preserve reso
# error: Unexpected keyword argument "freq" for "_simple_new" of "NDArrayBacked"
return type(self)._simple_new(
- result, dtype=self.dtype, freq=None # type: ignore[call-arg]
+ result,
+ dtype=self.dtype,
+ freq=None, # type: ignore[call-arg]
)
@final
@@ -2162,7 +2166,9 @@ def as_unit(self, unit: str, round_ok: bool = True) -> Self:
# error: Unexpected keyword argument "freq" for "_simple_new" of
# "NDArrayBacked" [call-arg]
return type(self)._simple_new(
- new_values, dtype=new_dtype, freq=self.freq # type: ignore[call-arg]
+ new_values,
+ dtype=new_dtype,
+ freq=self.freq, # type: ignore[call-arg]
)
# TODO: annotate other as DatetimeArray | TimedeltaArray | Timestamp | Timedelta
diff --git a/pandas/core/arrays/interval.py b/pandas/core/arrays/interval.py
index febea079527e6..96ee728d6dcb7 100644
--- a/pandas/core/arrays/interval.py
+++ b/pandas/core/arrays/interval.py
@@ -122,9 +122,7 @@
}
-_interval_shared_docs[
- "class"
-] = """
+_interval_shared_docs["class"] = """
%(summary)s
Parameters
@@ -1489,9 +1487,7 @@ def set_closed(self, closed: IntervalClosedType) -> Self:
dtype = IntervalDtype(left.dtype, closed=closed)
return self._simple_new(left, right, dtype=dtype)
- _interval_shared_docs[
- "is_non_overlapping_monotonic"
- ] = """
+ _interval_shared_docs["is_non_overlapping_monotonic"] = """
Return a boolean whether the %(klass)s is non-overlapping and monotonic.
Non-overlapping means (no Intervals share points), and monotonic means
diff --git a/pandas/core/arrays/masked.py b/pandas/core/arrays/masked.py
index be7895fdb0275..9ce19ced2b356 100644
--- a/pandas/core/arrays/masked.py
+++ b/pandas/core/arrays/masked.py
@@ -1089,7 +1089,8 @@ def value_counts(self, dropna: bool = True) -> Series:
arr = IntegerArray(value_counts, mask)
index = Index(
self.dtype.construct_array_type()(
- keys, mask_index # type: ignore[arg-type]
+ keys, # type: ignore[arg-type]
+ mask_index,
)
)
return Series(arr, index=index, name="count", copy=False)
diff --git a/pandas/core/arrays/sparse/array.py b/pandas/core/arrays/sparse/array.py
index b2d2e82c7a81f..fafeedc01b02b 100644
--- a/pandas/core/arrays/sparse/array.py
+++ b/pandas/core/arrays/sparse/array.py
@@ -454,7 +454,8 @@ def __init__(
# error: Argument "dtype" to "asarray" has incompatible type
# "Union[ExtensionDtype, dtype[Any], None]"; expected "None"
sparse_values = np.asarray(
- data.sp_values, dtype=dtype # type: ignore[arg-type]
+ data.sp_values,
+ dtype=dtype, # type: ignore[arg-type]
)
elif sparse_index is None:
data = extract_array(data, extract_numpy=True)
diff --git a/pandas/core/base.py b/pandas/core/base.py
index e98f1157572bb..490daa656f603 100644
--- a/pandas/core/base.py
+++ b/pandas/core/base.py
@@ -1207,9 +1207,7 @@ def factorize(
uniques = Index(uniques)
return codes, uniques
- _shared_docs[
- "searchsorted"
- ] = """
+ _shared_docs["searchsorted"] = """
Find indices where elements should be inserted to maintain order.
Find the indices into a sorted {klass} `self` such that, if the
diff --git a/pandas/core/computation/expr.py b/pandas/core/computation/expr.py
index b5861fbaebe9c..f0aa7363d2644 100644
--- a/pandas/core/computation/expr.py
+++ b/pandas/core/computation/expr.py
@@ -695,8 +695,7 @@ def visit_Call(self, node, side=None, **kwargs):
if not isinstance(key, ast.keyword):
# error: "expr" has no attribute "id"
raise ValueError(
- "keyword error in function call "
- f"'{node.func.id}'" # type: ignore[attr-defined]
+ "keyword error in function call " f"'{node.func.id}'" # type: ignore[attr-defined]
)
if key.arg:
diff --git a/pandas/core/dtypes/cast.py b/pandas/core/dtypes/cast.py
index 9a1ec2330a326..5a0867d0251e8 100644
--- a/pandas/core/dtypes/cast.py
+++ b/pandas/core/dtypes/cast.py
@@ -589,7 +589,9 @@ def maybe_promote(dtype: np.dtype, fill_value=np.nan):
# error: Argument 3 to "__call__" of "_lru_cache_wrapper" has incompatible type
# "Type[Any]"; expected "Hashable" [arg-type]
dtype, fill_value = _maybe_promote_cached(
- dtype, fill_value, type(fill_value) # type: ignore[arg-type]
+ dtype,
+ fill_value,
+ type(fill_value), # type: ignore[arg-type]
)
except TypeError:
# if fill_value is not hashable (required for caching)
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 73b5804d8c168..6851955d693bc 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -1405,9 +1405,7 @@ def style(self) -> Styler:
return Styler(self)
- _shared_docs[
- "items"
- ] = r"""
+ _shared_docs["items"] = r"""
Iterate over (column name, Series) pairs.
Iterates over the DataFrame columns, returning a tuple with
@@ -2030,8 +2028,7 @@ def to_dict(
orient: Literal[
"dict", "list", "series", "split", "tight", "records", "index"
] = "dict",
- into: type[MutableMappingT]
- | MutableMappingT = dict, # type: ignore[assignment]
+ into: type[MutableMappingT] | MutableMappingT = dict, # type: ignore[assignment]
index: bool = True,
) -> MutableMappingT | list[MutableMappingT]:
"""
@@ -9137,9 +9134,7 @@ def groupby(
dropna=dropna,
)
- _shared_docs[
- "pivot"
- ] = """
+ _shared_docs["pivot"] = """
Return reshaped DataFrame organized by given index / column values.
Reshape data (produce a "pivot" table) based on column values. Uses
@@ -9283,9 +9278,7 @@ def pivot(
return pivot(self, index=index, columns=columns, values=values)
- _shared_docs[
- "pivot_table"
- ] = """
+ _shared_docs["pivot_table"] = """
Create a spreadsheet-style pivot table as a DataFrame.
The levels in the pivot table will be stored in MultiIndex objects
@@ -12529,7 +12522,7 @@ def _to_dict_of_blocks(self):
mgr = cast(BlockManager, mgr_to_mgr(mgr, "block"))
return {
k: self._constructor_from_mgr(v, axes=v.axes).__finalize__(self)
- for k, v, in mgr.to_dict().items()
+ for k, v in mgr.to_dict().items()
}
@property
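The `for k, v, in mgr.to_dict().items()` change in the hunk above is a drive-by cleanup: a trailing comma in a for-loop's unpacking target is legal Python (it is still a two-element tuple target) but reads like a typo, so the formatting pass dropped it. A minimal demonstration that both spellings behave identically:

```python
# A stray trailing comma in the unpacking target is valid but misleading;
# `for k, v, in ...` and `for k, v in ...` bind the same names.
d = {"a": 1}

for k, v, in d.items():  # note the extra comma before `in`
    pass

print(k, v)
```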
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index c06703660f82d..b37f22339fcfd 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -8006,8 +8006,6 @@ def replace(
if items:
keys, values = zip(*items)
else:
- # error: Incompatible types in assignment (expression has type
- # "list[Never]", variable has type "tuple[Any, ...]")
keys, values = ([], []) # type: ignore[assignment]
are_mappings = [is_dict_like(v) for v in values]
@@ -8825,15 +8823,11 @@ def _clip_with_scalar(self, lower, upper, inplace: bool_t = False):
if lower is not None:
cond = mask | (self >= lower)
- result = result.where(
- cond, lower, inplace=inplace
- ) # type: ignore[assignment]
+ result = result.where(cond, lower, inplace=inplace) # type: ignore[assignment]
if upper is not None:
cond = mask | (self <= upper)
result = self if inplace else result
- result = result.where(
- cond, upper, inplace=inplace
- ) # type: ignore[assignment]
+ result = result.where(cond, upper, inplace=inplace) # type: ignore[assignment]
return result
@@ -12242,7 +12236,12 @@ def _accum_func(
if axis == 1:
return self.T._accum_func(
- name, func, axis=0, skipna=skipna, *args, **kwargs # noqa: B026
+ name,
+ func,
+ axis=0,
+ skipna=skipna,
+ *args, # noqa: B026
+ **kwargs,
).T
def block_accum_func(blk_values):
@@ -12720,14 +12719,16 @@ def __imul__(self, other) -> Self:
def __itruediv__(self, other) -> Self:
# error: Unsupported left operand type for / ("Type[NDFrame]")
return self._inplace_method(
- other, type(self).__truediv__ # type: ignore[operator]
+ other,
+ type(self).__truediv__, # type: ignore[operator]
)
@final
def __ifloordiv__(self, other) -> Self:
# error: Unsupported left operand type for // ("Type[NDFrame]")
return self._inplace_method(
- other, type(self).__floordiv__ # type: ignore[operator]
+ other,
+ type(self).__floordiv__, # type: ignore[operator]
)
@final
@@ -13495,9 +13496,7 @@ def last_valid_index(self) -> Hashable | None:
Series([], dtype: bool)
"""
-_shared_docs[
- "stat_func_example"
-] = """
+_shared_docs["stat_func_example"] = """
Examples
--------
diff --git a/pandas/core/groupby/generic.py b/pandas/core/groupby/generic.py
index f2e314046fb74..9598bc0db02cc 100644
--- a/pandas/core/groupby/generic.py
+++ b/pandas/core/groupby/generic.py
@@ -848,7 +848,10 @@ def value_counts(
# "List[ndarray[Any, Any]]"; expected "List[Union[Union[ExtensionArray,
# ndarray[Any, Any]], Index, Series]]
_, idx = get_join_indexers(
- left, right, sort=False, how="left" # type: ignore[arg-type]
+ left, # type: ignore[arg-type]
+ right, # type: ignore[arg-type]
+ sort=False,
+ how="left",
)
if idx is not None:
out = np.where(idx != -1, out[idx], 0)
diff --git a/pandas/core/groupby/groupby.py b/pandas/core/groupby/groupby.py
index 089e15afd465b..c9beaee55d608 100644
--- a/pandas/core/groupby/groupby.py
+++ b/pandas/core/groupby/groupby.py
@@ -5187,7 +5187,10 @@ def shift(
period = cast(int, period)
if freq is not None or axis != 0:
f = lambda x: x.shift(
- period, freq, axis, fill_value # pylint: disable=cell-var-from-loop
+ period, # pylint: disable=cell-var-from-loop
+ freq,
+ axis,
+ fill_value,
)
shifted = self._python_apply_general(
f, self._selected_obj, is_transform=True
diff --git a/pandas/core/groupby/grouper.py b/pandas/core/groupby/grouper.py
index e2224caad9e84..e68c393f8f707 100644
--- a/pandas/core/groupby/grouper.py
+++ b/pandas/core/groupby/grouper.py
@@ -1008,7 +1008,8 @@ def is_in_obj(gpr) -> bool:
return False
if isinstance(gpr, Series) and isinstance(obj_gpr_column, Series):
return gpr._mgr.references_same_values( # type: ignore[union-attr]
- obj_gpr_column._mgr, 0 # type: ignore[arg-type]
+ obj_gpr_column._mgr, # type: ignore[arg-type]
+ 0,
)
return False
try:
diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index 166d6946beacf..74c1f165ac06c 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -1107,9 +1107,7 @@ def astype(self, dtype, copy: bool = True):
result._references.add_index_reference(result)
return result
- _index_shared_docs[
- "take"
- ] = """
+ _index_shared_docs["take"] = """
Return a new %(klass)s of the values selected by the indices.
For internal compatibility with numpy arrays.
@@ -1196,9 +1194,7 @@ def _maybe_disallow_fill(self, allow_fill: bool, fill_value, indices) -> bool:
allow_fill = False
return allow_fill
- _index_shared_docs[
- "repeat"
- ] = """
+ _index_shared_docs["repeat"] = """
Repeat elements of a %(klass)s.
Returns a new %(klass)s where each element of the current %(klass)s
@@ -5807,7 +5803,8 @@ def asof_locs(
# types "Union[ExtensionArray, ndarray[Any, Any]]", "str"
# TODO: will be fixed when ExtensionArray.searchsorted() is fixed
locs = self._values[mask].searchsorted(
- where._values, side="right" # type: ignore[call-overload]
+ where._values,
+ side="right", # type: ignore[call-overload]
)
locs = np.where(locs > 0, locs - 1, 0)
@@ -6069,9 +6066,7 @@ def _should_fallback_to_positional(self) -> bool:
"complex",
}
- _index_shared_docs[
- "get_indexer_non_unique"
- ] = """
+ _index_shared_docs["get_indexer_non_unique"] = """
Compute indexer and mask for new index given the current index.
The indexer should be then used as an input to ndarray.take to align the
diff --git a/pandas/core/indexes/interval.py b/pandas/core/indexes/interval.py
index 4fcdb87974511..b62c19bef74be 100644
--- a/pandas/core/indexes/interval.py
+++ b/pandas/core/indexes/interval.py
@@ -555,9 +555,7 @@ def _maybe_convert_i8(self, key):
right = self._maybe_convert_i8(key.right)
constructor = Interval if scalar else IntervalIndex.from_arrays
# error: "object" not callable
- return constructor(
- left, right, closed=self.closed
- ) # type: ignore[operator]
+ return constructor(left, right, closed=self.closed) # type: ignore[operator]
if scalar:
# Timestamp/Timedelta
diff --git a/pandas/core/indexes/multi.py b/pandas/core/indexes/multi.py
index 1352238eb60ec..5242706e0ce23 100644
--- a/pandas/core/indexes/multi.py
+++ b/pandas/core/indexes/multi.py
@@ -2673,7 +2673,8 @@ def sortlevel(
# error: Item "Hashable" of "Union[Hashable, Sequence[Hashable]]" has
# no attribute "__iter__" (not iterable)
level = [
- self._get_level_number(lev) for lev in level # type: ignore[union-attr]
+ self._get_level_number(lev)
+ for lev in level # type: ignore[union-attr]
]
sortorder = None
@@ -4056,8 +4057,6 @@ def sparsify_labels(label_list, start: int = 0, sentinel: object = ""):
for i, (p, t) in enumerate(zip(prev, cur)):
if i == k - 1:
sparse_cur.append(t)
- # error: Argument 1 to "append" of "list" has incompatible
- # type "list[Any]"; expected "tuple[Any, ...]"
result.append(sparse_cur) # type: ignore[arg-type]
break
@@ -4065,8 +4064,6 @@ def sparsify_labels(label_list, start: int = 0, sentinel: object = ""):
sparse_cur.append(sentinel)
else:
sparse_cur.extend(cur[i:])
- # error: Argument 1 to "append" of "list" has incompatible
- # type "list[Any]"; expected "tuple[Any, ...]"
result.append(sparse_cur) # type: ignore[arg-type]
break
diff --git a/pandas/core/internals/blocks.py b/pandas/core/internals/blocks.py
index 06fd9ebe47eae..8a54cb2d7a189 100644
--- a/pandas/core/internals/blocks.py
+++ b/pandas/core/internals/blocks.py
@@ -1843,7 +1843,8 @@ def shift(self, periods: int, fill_value: Any = None) -> list[Block]:
# error: Argument 1 to "np_can_hold_element" has incompatible type
# "Union[dtype[Any], ExtensionDtype]"; expected "dtype[Any]"
casted = np_can_hold_element(
- self.dtype, fill_value # type: ignore[arg-type]
+ self.dtype, # type: ignore[arg-type]
+ fill_value,
)
except LossySetitemError:
nb = self.coerce_to_target_dtype(fill_value)
diff --git a/pandas/core/internals/concat.py b/pandas/core/internals/concat.py
index b2d463a8c6c26..4445627732a9b 100644
--- a/pandas/core/internals/concat.py
+++ b/pandas/core/internals/concat.py
@@ -118,7 +118,9 @@ def concatenate_managers(
# type "List[BlockManager]"; expected "List[Union[ArrayManager,
# SingleArrayManager, BlockManager, SingleBlockManager]]"
return _concatenate_array_managers(
- mgrs, axes, concat_axis # type: ignore[arg-type]
+ mgrs, # type: ignore[arg-type]
+ axes,
+ concat_axis,
)
# Assertions disabled for performance
@@ -474,9 +476,7 @@ def _concatenate_join_units(join_units: list[JoinUnit], copy: bool) -> ArrayLike
# error: No overload variant of "__getitem__" of "ExtensionArray" matches
# argument type "Tuple[int, slice]"
to_concat = [
- t
- if is_1d_only_ea_dtype(t.dtype)
- else t[0, :] # type: ignore[call-overload]
+ t if is_1d_only_ea_dtype(t.dtype) else t[0, :] # type: ignore[call-overload]
for t in to_concat
]
concat_values = concat_compat(to_concat, axis=0, ea_compat_axis=True)
diff --git a/pandas/core/reshape/pivot.py b/pandas/core/reshape/pivot.py
index b2a915589cba7..ea74c17917279 100644
--- a/pandas/core/reshape/pivot.py
+++ b/pandas/core/reshape/pivot.py
@@ -535,7 +535,8 @@ def pivot(
# error: Unsupported operand types for + ("List[Any]" and "ExtensionArray")
# error: Unsupported left operand type for + ("ExtensionArray")
indexed = data.set_index(
- cols + columns_listlike, append=append # type: ignore[operator]
+ cols + columns_listlike, # type: ignore[operator]
+ append=append,
)
else:
index_list: list[Index] | list[Series]
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 487f57b7390a8..1f9ac8511476e 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -2023,8 +2023,7 @@ def to_dict(self, *, into: type[dict] = ...) -> dict:
)
def to_dict(
self,
- into: type[MutableMappingT]
- | MutableMappingT = dict, # type: ignore[assignment]
+ into: type[MutableMappingT] | MutableMappingT = dict, # type: ignore[assignment]
) -> MutableMappingT:
"""
Convert Series to {label -> value} dict or dict-like object.
diff --git a/pandas/core/shared_docs.py b/pandas/core/shared_docs.py
index 25f7e7e9f832b..3369df5da4cba 100644
--- a/pandas/core/shared_docs.py
+++ b/pandas/core/shared_docs.py
@@ -2,9 +2,7 @@
_shared_docs: dict[str, str] = {}
-_shared_docs[
- "aggregate"
-] = """
+_shared_docs["aggregate"] = """
Aggregate using one or more operations over the specified axis.
Parameters
@@ -53,9 +51,7 @@
A passed user-defined-function will be passed a Series for evaluation.
{examples}"""
-_shared_docs[
- "compare"
-] = """
+_shared_docs["compare"] = """
Compare to another {klass} and show the differences.
Parameters
@@ -85,9 +81,7 @@
.. versionadded:: 1.5.0
"""
-_shared_docs[
- "groupby"
-] = """
+_shared_docs["groupby"] = """
Group %(klass)s using a mapper or by a Series of columns.
A groupby operation involves some combination of splitting the
@@ -195,9 +189,7 @@
iterating through groups, selecting a group, aggregation, and more.
"""
-_shared_docs[
- "melt"
-] = """
+_shared_docs["melt"] = """
Unpivot a DataFrame from wide to long format, optionally leaving identifiers set.
This function is useful to massage a DataFrame into a format where one
@@ -311,9 +303,7 @@
2 c B E 5
"""
-_shared_docs[
- "transform"
-] = """
+_shared_docs["transform"] = """
Call ``func`` on self producing a {klass} with the same axis shape as self.
Parameters
@@ -438,9 +428,7 @@
6 2 n 4
"""
-_shared_docs[
- "storage_options"
-] = """storage_options : dict, optional
+_shared_docs["storage_options"] = """storage_options : dict, optional
Extra options that make sense for a particular storage connection, e.g.
host, port, username, password, etc. For HTTP(S) URLs the key-value pairs
are forwarded to ``urllib.request.Request`` as header options. For other
@@ -450,9 +438,7 @@
<https://pandas.pydata.org/docs/user_guide/io.html?
highlight=storage_options#reading-writing-remote-files>`_."""
-_shared_docs[
- "compression_options"
-] = """compression : str or dict, default 'infer'
+_shared_docs["compression_options"] = """compression : str or dict, default 'infer'
For on-the-fly compression of the output data. If 'infer' and '%s' is
path-like, then detect compression from the following extensions: '.gz',
'.bz2', '.zip', '.xz', '.zst', '.tar', '.tar.gz', '.tar.xz' or '.tar.bz2'
@@ -471,9 +457,7 @@
.. versionadded:: 1.5.0
Added support for `.tar` files."""
-_shared_docs[
- "decompression_options"
-] = """compression : str or dict, default 'infer'
+_shared_docs["decompression_options"] = """compression : str or dict, default 'infer'
For on-the-fly decompression of on-disk data. If 'infer' and '%s' is
path-like, then detect compression from the following extensions: '.gz',
'.bz2', '.zip', '.xz', '.zst', '.tar', '.tar.gz', '.tar.xz' or '.tar.bz2'
@@ -493,9 +477,7 @@
.. versionadded:: 1.5.0
Added support for `.tar` files."""
-_shared_docs[
- "replace"
-] = """
+_shared_docs["replace"] = """
Replace values given in `to_replace` with `value`.
Values of the {klass} are replaced with other values dynamically.
@@ -817,9 +799,7 @@
4 4 e e
"""
-_shared_docs[
- "idxmin"
-] = """
+_shared_docs["idxmin"] = """
Return index of first occurrence of minimum over requested axis.
NA/null values are excluded.
@@ -884,9 +864,7 @@
dtype: object
"""
-_shared_docs[
- "idxmax"
-] = """
+_shared_docs["idxmax"] = """
Return index of first occurrence of maximum over requested axis.
NA/null values are excluded.
diff --git a/pandas/core/strings/accessor.py b/pandas/core/strings/accessor.py
index 1b7d632c0fa80..7c6dca3bad7d9 100644
--- a/pandas/core/strings/accessor.py
+++ b/pandas/core/strings/accessor.py
@@ -707,9 +707,7 @@ def cat(
out = res_ser.__finalize__(self._orig, method="str_cat")
return out
- _shared_docs[
- "str_split"
- ] = r"""
+ _shared_docs["str_split"] = r"""
Split strings around given separator/delimiter.
Splits the string in the Series/Index from the %(side)s,
@@ -946,9 +944,7 @@ def rsplit(self, pat=None, *, n=-1, expand: bool = False):
result, expand=expand, returns_string=expand, dtype=dtype
)
- _shared_docs[
- "str_partition"
- ] = """
+ _shared_docs["str_partition"] = """
Split the string at the %(side)s occurrence of `sep`.
This method splits the string at the %(side)s occurrence of `sep`,
@@ -1686,9 +1682,7 @@ def pad(
result = self._data.array._str_pad(width, side=side, fillchar=fillchar)
return self._wrap_result(result)
- _shared_docs[
- "str_pad"
- ] = """
+ _shared_docs["str_pad"] = """
Pad %(side)s side of strings in the Series/Index.
Equivalent to :meth:`str.%(method)s`.
@@ -2036,9 +2030,7 @@ def encode(self, encoding, errors: str = "strict"):
result = self._data.array._str_encode(encoding, errors)
return self._wrap_result(result, returns_string=False)
- _shared_docs[
- "str_strip"
- ] = r"""
+ _shared_docs["str_strip"] = r"""
Remove %(position)s characters.
Strip whitespaces (including newlines) or a set of specified characters
@@ -2143,9 +2135,7 @@ def rstrip(self, to_strip=None):
result = self._data.array._str_rstrip(to_strip)
return self._wrap_result(result)
- _shared_docs[
- "str_removefix"
- ] = r"""
+ _shared_docs["str_removefix"] = r"""
Remove a %(side)s from an object series.
If the %(side)s is not present, the original string will be returned.
@@ -2852,9 +2842,7 @@ def extractall(self, pat, flags: int = 0) -> DataFrame:
# TODO: dispatch
return str_extractall(self._orig, pat, flags)
- _shared_docs[
- "find"
- ] = """
+ _shared_docs["find"] = """
Return %(side)s indexes in each strings in the Series/Index.
Each of returned indexes corresponds to the position where the
@@ -2960,9 +2948,7 @@ def normalize(self, form):
result = self._data.array._str_normalize(form)
return self._wrap_result(result)
- _shared_docs[
- "index"
- ] = """
+ _shared_docs["index"] = """
Return %(side)s indexes in each string in Series/Index.
Each of the returned indexes corresponds to the position where the
@@ -3094,9 +3080,7 @@ def len(self):
result = self._data.array._str_len()
return self._wrap_result(result, returns_string=False)
- _shared_docs[
- "casemethods"
- ] = """
+ _shared_docs["casemethods"] = """
Convert strings in the Series/Index to %(type)s.
%(version)s
Equivalent to :meth:`str.%(method)s`.
@@ -3224,9 +3208,7 @@ def casefold(self):
result = self._data.array._str_casefold()
return self._wrap_result(result)
- _shared_docs[
- "ismethods"
- ] = """
+ _shared_docs["ismethods"] = """
Check whether all characters in each string are %(type)s.
This is equivalent to running the Python string method
diff --git a/pandas/io/common.py b/pandas/io/common.py
index 57bc6c1379d77..576bf7215f363 100644
--- a/pandas/io/common.py
+++ b/pandas/io/common.py
@@ -792,7 +792,9 @@ def get_handle(
# "Union[str, BaseBuffer]"; expected "Union[Union[str, PathLike[str]],
# ReadBuffer[bytes], WriteBuffer[bytes]]"
handle = _BytesZipFile(
- handle, ioargs.mode, **compression_args # type: ignore[arg-type]
+ handle, # type: ignore[arg-type]
+ ioargs.mode,
+ **compression_args,
)
if handle.buffer.mode == "r":
handles.append(handle)
@@ -817,7 +819,8 @@ def get_handle(
# type "BaseBuffer"; expected "Union[ReadBuffer[bytes],
# WriteBuffer[bytes], None]"
handle = _BytesTarFile(
- fileobj=handle, **compression_args # type: ignore[arg-type]
+ fileobj=handle, # type: ignore[arg-type]
+ **compression_args,
)
assert isinstance(handle, _BytesTarFile)
if "r" in handle.buffer.mode:
@@ -841,7 +844,9 @@ def get_handle(
# BaseBuffer]"; expected "Optional[Union[Union[str, bytes, PathLike[str],
# PathLike[bytes]], IO[bytes]], None]"
handle = get_lzma_file()(
- handle, ioargs.mode, **compression_args # type: ignore[arg-type]
+ handle, # type: ignore[arg-type]
+ ioargs.mode,
+ **compression_args,
)
# Zstd Compression
@@ -1137,7 +1142,9 @@ def _maybe_memory_map(
# expected "BaseBuffer"
wrapped = _IOWrapper(
mmap.mmap(
- handle.fileno(), 0, access=mmap.ACCESS_READ # type: ignore[arg-type]
+ handle.fileno(),
+ 0,
+ access=mmap.ACCESS_READ, # type: ignore[arg-type]
)
)
finally:
diff --git a/pandas/io/excel/_calamine.py b/pandas/io/excel/_calamine.py
index 4f65acf1aa40e..1f721c65982d4 100644
--- a/pandas/io/excel/_calamine.py
+++ b/pandas/io/excel/_calamine.py
@@ -75,7 +75,8 @@ def load_workbook(
from python_calamine import load_workbook
return load_workbook(
- filepath_or_buffer, **engine_kwargs # type: ignore[arg-type]
+ filepath_or_buffer, # type: ignore[arg-type]
+ **engine_kwargs,
)
@property
diff --git a/pandas/io/pytables.py b/pandas/io/pytables.py
index c30238e412450..36c9b66f2bd47 100644
--- a/pandas/io/pytables.py
+++ b/pandas/io/pytables.py
@@ -2170,9 +2170,7 @@ def convert(
# "Union[Type[Index], Type[DatetimeIndex]]")
factory = lambda x, **kwds: PeriodIndex.from_ordinals( # type: ignore[assignment]
x, freq=kwds.get("freq", None)
- )._rename(
- kwds["name"]
- )
+ )._rename(kwds["name"])
# making an Index instance could throw a number of different errors
try:
@@ -3181,7 +3179,9 @@ def write_array(
# error: Item "ExtensionArray" of "Union[Any, ExtensionArray]" has no
# attribute "asi8"
self._handle.create_array(
- self.group, key, value.asi8 # type: ignore[union-attr]
+ self.group,
+ key,
+ value.asi8, # type: ignore[union-attr]
)
node = getattr(self.group, key)
diff --git a/pandas/plotting/_matplotlib/core.py b/pandas/plotting/_matplotlib/core.py
index 2979903edf360..136056e3ff428 100644
--- a/pandas/plotting/_matplotlib/core.py
+++ b/pandas/plotting/_matplotlib/core.py
@@ -616,8 +616,7 @@ def result(self):
# error: Argument 1 to "len" has incompatible type "Union[bool,
# Tuple[Any, ...], List[Any], ndarray[Any, Any]]"; expected "Sized"
all_sec = (
- is_list_like(self.secondary_y)
- and len(self.secondary_y) == self.nseries # type: ignore[arg-type]
+ is_list_like(self.secondary_y) and len(self.secondary_y) == self.nseries # type: ignore[arg-type]
)
if sec_true or all_sec:
# if all data is plotted on secondary, return right axes
diff --git a/pandas/plotting/_matplotlib/hist.py b/pandas/plotting/_matplotlib/hist.py
index 898abc9b78e3f..ab2dc20ccbd02 100644
--- a/pandas/plotting/_matplotlib/hist.py
+++ b/pandas/plotting/_matplotlib/hist.py
@@ -202,17 +202,13 @@ def _post_plot_logic(self, ax: Axes, data) -> None:
# error: Argument 1 to "set_xlabel" of "_AxesBase" has incompatible
# type "Hashable"; expected "str"
ax.set_xlabel(
- "Frequency"
- if self.xlabel is None
- else self.xlabel # type: ignore[arg-type]
+ "Frequency" if self.xlabel is None else self.xlabel # type: ignore[arg-type]
)
ax.set_ylabel(self.ylabel) # type: ignore[arg-type]
else:
ax.set_xlabel(self.xlabel) # type: ignore[arg-type]
ax.set_ylabel(
- "Frequency"
- if self.ylabel is None
- else self.ylabel # type: ignore[arg-type]
+ "Frequency" if self.ylabel is None else self.ylabel # type: ignore[arg-type]
)
@property
diff --git a/pandas/plotting/_matplotlib/timeseries.py b/pandas/plotting/_matplotlib/timeseries.py
index bf1c0f6346f02..067bcf0b01ccb 100644
--- a/pandas/plotting/_matplotlib/timeseries.py
+++ b/pandas/plotting/_matplotlib/timeseries.py
@@ -250,9 +250,7 @@ def use_dynamic_x(ax: Axes, data: DataFrame | Series) -> bool:
if isinstance(data.index, ABCDatetimeIndex):
# error: "BaseOffset" has no attribute "_period_dtype_code"
freq_str = OFFSET_TO_PERIOD_FREQSTR.get(freq_str, freq_str)
- base = to_offset(
- freq_str, is_period=True
- )._period_dtype_code # type: ignore[attr-defined]
+ base = to_offset(freq_str, is_period=True)._period_dtype_code # type: ignore[attr-defined]
x = data.index
if base <= FreqGroup.FR_DAY.value:
return x[:1].is_normalized
diff --git a/pandas/tests/dtypes/test_inference.py b/pandas/tests/dtypes/test_inference.py
index 475473218f712..5eeab778c184c 100644
--- a/pandas/tests/dtypes/test_inference.py
+++ b/pandas/tests/dtypes/test_inference.py
@@ -656,9 +656,7 @@ def test_convert_numeric_uint64_nan_values(
arr = np.array([2**63, 2**63 + 1], dtype=object)
na_values = {2**63}
- expected = (
- np.array([np.nan, 2**63 + 1], dtype=float) if coerce else arr.copy()
- )
+ expected = np.array([np.nan, 2**63 + 1], dtype=float) if coerce else arr.copy()
result = lib.maybe_convert_numeric(
arr,
na_values,
diff --git a/pandas/tests/dtypes/test_missing.py b/pandas/tests/dtypes/test_missing.py
index e1f8d8eca2537..7105755df6f88 100644
--- a/pandas/tests/dtypes/test_missing.py
+++ b/pandas/tests/dtypes/test_missing.py
@@ -843,7 +843,8 @@ def test_empty_like(self):
class TestLibMissing:
@pytest.mark.parametrize("func", [libmissing.checknull, isna])
@pytest.mark.parametrize(
- "value", na_vals + sometimes_na_vals # type: ignore[operator]
+ "value",
+ na_vals + sometimes_na_vals, # type: ignore[operator]
)
def test_checknull_na_vals(self, func, value):
assert func(value)
@@ -864,7 +865,8 @@ def test_checknull_never_na_vals(self, func, value):
assert not func(value)
@pytest.mark.parametrize(
- "value", na_vals + sometimes_na_vals # type: ignore[operator]
+ "value",
+ na_vals + sometimes_na_vals, # type: ignore[operator]
)
def test_checknull_old_na_vals(self, value):
assert libmissing.checknull(value, inf_as_na=True)
diff --git a/pandas/tests/frame/methods/test_first_and_last.py b/pandas/tests/frame/methods/test_first_and_last.py
index 212e56442ee07..2170cf254fbe6 100644
--- a/pandas/tests/frame/methods/test_first_and_last.py
+++ b/pandas/tests/frame/methods/test_first_and_last.py
@@ -61,17 +61,13 @@ def test_first_last_raises(self, frame_or_series):
msg = "'first' only supports a DatetimeIndex index"
with tm.assert_produces_warning(
FutureWarning, match=deprecated_msg
- ), pytest.raises(
- TypeError, match=msg
- ): # index is not a DatetimeIndex
+ ), pytest.raises(TypeError, match=msg): # index is not a DatetimeIndex
obj.first("1D")
msg = "'last' only supports a DatetimeIndex index"
with tm.assert_produces_warning(
FutureWarning, match=last_deprecated_msg
- ), pytest.raises(
- TypeError, match=msg
- ): # index is not a DatetimeIndex
+ ), pytest.raises(TypeError, match=msg): # index is not a DatetimeIndex
obj.last("1D")
def test_last_subset(self, frame_or_series):
diff --git a/pandas/tests/frame/methods/test_rank.py b/pandas/tests/frame/methods/test_rank.py
index 1d0931f5982b7..79aabbcc83bbf 100644
--- a/pandas/tests/frame/methods/test_rank.py
+++ b/pandas/tests/frame/methods/test_rank.py
@@ -319,9 +319,7 @@ def test_rank_pct_true(self, rank_method, exp):
@pytest.mark.single_cpu
def test_pct_max_many_rows(self):
# GH 18271
- df = DataFrame(
- {"A": np.arange(2**24 + 1), "B": np.arange(2**24 + 1, 0, -1)}
- )
+ df = DataFrame({"A": np.arange(2**24 + 1), "B": np.arange(2**24 + 1, 0, -1)})
result = df.rank(pct=True).max()
assert (result == 1).all()
diff --git a/pandas/tests/frame/methods/test_reset_index.py b/pandas/tests/frame/methods/test_reset_index.py
index fbf36dbc4fb02..9d07b8ab2288f 100644
--- a/pandas/tests/frame/methods/test_reset_index.py
+++ b/pandas/tests/frame/methods/test_reset_index.py
@@ -232,9 +232,7 @@ def test_reset_index_level_missing(self, idx_lev):
def test_reset_index_right_dtype(self):
time = np.arange(0.0, 10, np.sqrt(2) / 2)
- s1 = Series(
- (9.81 * time**2) / 2, index=Index(time, name="time"), name="speed"
- )
+ s1 = Series((9.81 * time**2) / 2, index=Index(time, name="time"), name="speed")
df = DataFrame(s1)
reset = s1.reset_index()
diff --git a/pandas/tests/groupby/aggregate/test_aggregate.py b/pandas/tests/groupby/aggregate/test_aggregate.py
index 6223a153df358..5a69c26f2ab16 100644
--- a/pandas/tests/groupby/aggregate/test_aggregate.py
+++ b/pandas/tests/groupby/aggregate/test_aggregate.py
@@ -1188,9 +1188,7 @@ def test_agg_with_one_lambda(self):
# check pd.NameAgg case
result1 = df.groupby(by="kind").agg(
- height_sqr_min=pd.NamedAgg(
- column="height", aggfunc=lambda x: np.min(x**2)
- ),
+ height_sqr_min=pd.NamedAgg(column="height", aggfunc=lambda x: np.min(x**2)),
height_max=pd.NamedAgg(column="height", aggfunc="max"),
weight_max=pd.NamedAgg(column="weight", aggfunc="max"),
)
@@ -1245,9 +1243,7 @@ def test_agg_multiple_lambda(self):
# check pd.NamedAgg case
result2 = df.groupby(by="kind").agg(
- height_sqr_min=pd.NamedAgg(
- column="height", aggfunc=lambda x: np.min(x**2)
- ),
+ height_sqr_min=pd.NamedAgg(column="height", aggfunc=lambda x: np.min(x**2)),
height_max=pd.NamedAgg(column="height", aggfunc="max"),
weight_max=pd.NamedAgg(column="weight", aggfunc="max"),
height_max_2=pd.NamedAgg(column="height", aggfunc=lambda x: np.max(x)),
diff --git a/pandas/tests/indexes/datetimes/test_scalar_compat.py b/pandas/tests/indexes/datetimes/test_scalar_compat.py
index e93fc0e2a4e2e..f766894a993a0 100644
--- a/pandas/tests/indexes/datetimes/test_scalar_compat.py
+++ b/pandas/tests/indexes/datetimes/test_scalar_compat.py
@@ -135,7 +135,8 @@ def test_dti_hour_tzaware(self, prefix):
# GH#12806
# error: Unsupported operand types for + ("List[None]" and "List[str]")
@pytest.mark.parametrize(
- "time_locale", [None] + tm.get_locales() # type: ignore[operator]
+ "time_locale",
+ [None] + tm.get_locales(), # type: ignore[operator]
)
def test_day_name_month_name(self, time_locale):
# Test Monday -> Sunday and January -> December, in that sequence
diff --git a/pandas/tests/indexes/multi/test_sorting.py b/pandas/tests/indexes/multi/test_sorting.py
index b4dcef71dcf50..4a1a6b9c452d5 100644
--- a/pandas/tests/indexes/multi/test_sorting.py
+++ b/pandas/tests/indexes/multi/test_sorting.py
@@ -151,7 +151,7 @@ def test_unsortedindex_doc_examples():
msg = r"Key length \(2\) was greater than MultiIndex lexsort depth \(1\)"
with pytest.raises(UnsortedIndexError, match=msg):
- dfm.loc[(0, "y"):(1, "z")]
+ dfm.loc[(0, "y") : (1, "z")]
assert not dfm.index._is_lexsorted()
assert dfm.index._lexsort_depth == 1
@@ -159,7 +159,7 @@ def test_unsortedindex_doc_examples():
# sort it
dfm = dfm.sort_index()
dfm.loc[(1, "z")]
- dfm.loc[(0, "y"):(1, "z")]
+ dfm.loc[(0, "y") : (1, "z")]
assert dfm.index._is_lexsorted()
assert dfm.index._lexsort_depth == 2
diff --git a/pandas/tests/indexes/numeric/test_indexing.py b/pandas/tests/indexes/numeric/test_indexing.py
index f2458a6c6114d..29f8a0a5a5932 100644
--- a/pandas/tests/indexes/numeric/test_indexing.py
+++ b/pandas/tests/indexes/numeric/test_indexing.py
@@ -403,8 +403,9 @@ def test_get_indexer_arrow_dictionary_target(self):
tm.assert_numpy_array_equal(result, expected)
result_1, result_2 = idx.get_indexer_non_unique(target)
- expected_1, expected_2 = np.array([0, -1], dtype=np.int64), np.array(
- [1], dtype=np.int64
+ expected_1, expected_2 = (
+ np.array([0, -1], dtype=np.int64),
+ np.array([1], dtype=np.int64),
)
tm.assert_numpy_array_equal(result_1, expected_1)
tm.assert_numpy_array_equal(result_2, expected_2)
diff --git a/pandas/tests/indexes/numeric/test_join.py b/pandas/tests/indexes/numeric/test_join.py
index 918d505216735..9839f40861d55 100644
--- a/pandas/tests/indexes/numeric/test_join.py
+++ b/pandas/tests/indexes/numeric/test_join.py
@@ -313,15 +313,11 @@ def test_join_right(self, index_large):
tm.assert_numpy_array_equal(ridx, eridx)
def test_join_non_int_index(self, index_large):
- other = Index(
- 2**63 + np.array([1, 5, 7, 10, 20], dtype="uint64"), dtype=object
- )
+ other = Index(2**63 + np.array([1, 5, 7, 10, 20], dtype="uint64"), dtype=object)
outer = index_large.join(other, how="outer")
outer2 = other.join(index_large, how="outer")
- expected = Index(
- 2**63 + np.array([0, 1, 5, 7, 10, 15, 20, 25], dtype="uint64")
- )
+ expected = Index(2**63 + np.array([0, 1, 5, 7, 10, 15, 20, 25], dtype="uint64"))
tm.assert_index_equal(outer, outer2)
tm.assert_index_equal(outer, expected)
@@ -353,9 +349,7 @@ def test_join_outer(self, index_large):
noidx_res = index_large.join(other, how="outer")
tm.assert_index_equal(res, noidx_res)
- eres = Index(
- 2**63 + np.array([0, 1, 2, 7, 10, 12, 15, 20, 25], dtype="uint64")
- )
+ eres = Index(2**63 + np.array([0, 1, 2, 7, 10, 12, 15, 20, 25], dtype="uint64"))
elidx = np.array([0, -1, -1, -1, 1, -1, 2, 3, 4], dtype=np.intp)
eridx = np.array([-1, 3, 4, 0, 5, 1, -1, -1, 2], dtype=np.intp)
diff --git a/pandas/tests/indexing/multiindex/test_partial.py b/pandas/tests/indexing/multiindex/test_partial.py
index 5aff1f1309004..830c187a205a8 100644
--- a/pandas/tests/indexing/multiindex/test_partial.py
+++ b/pandas/tests/indexing/multiindex/test_partial.py
@@ -93,7 +93,7 @@ def test_fancy_slice_partial(
tm.assert_frame_equal(result, expected)
ymd = multiindex_year_month_day_dataframe_random_data
- result = ymd.loc[(2000, 2):(2000, 4)]
+ result = ymd.loc[(2000, 2) : (2000, 4)]
lev = ymd.index.codes[1]
expected = ymd[(lev >= 1) & (lev <= 3)]
tm.assert_frame_equal(result, expected)
diff --git a/pandas/tests/indexing/multiindex/test_slice.py b/pandas/tests/indexing/multiindex/test_slice.py
index cef3dca054758..7f298e9bdd375 100644
--- a/pandas/tests/indexing/multiindex/test_slice.py
+++ b/pandas/tests/indexing/multiindex/test_slice.py
@@ -700,21 +700,23 @@ def test_multiindex_label_slicing_with_negative_step(self):
tm.assert_indexing_slices_equivalent(ser, SLC[::-1], SLC[::-1])
tm.assert_indexing_slices_equivalent(ser, SLC["d"::-1], SLC[15::-1])
- tm.assert_indexing_slices_equivalent(ser, SLC[("d",)::-1], SLC[15::-1])
+ tm.assert_indexing_slices_equivalent(ser, SLC[("d",) :: -1], SLC[15::-1])
tm.assert_indexing_slices_equivalent(ser, SLC[:"d":-1], SLC[:11:-1])
- tm.assert_indexing_slices_equivalent(ser, SLC[:("d",):-1], SLC[:11:-1])
+ tm.assert_indexing_slices_equivalent(ser, SLC[: ("d",) : -1], SLC[:11:-1])
tm.assert_indexing_slices_equivalent(ser, SLC["d":"b":-1], SLC[15:3:-1])
- tm.assert_indexing_slices_equivalent(ser, SLC[("d",):"b":-1], SLC[15:3:-1])
- tm.assert_indexing_slices_equivalent(ser, SLC["d":("b",):-1], SLC[15:3:-1])
- tm.assert_indexing_slices_equivalent(ser, SLC[("d",):("b",):-1], SLC[15:3:-1])
+ tm.assert_indexing_slices_equivalent(ser, SLC[("d",) : "b" : -1], SLC[15:3:-1])
+ tm.assert_indexing_slices_equivalent(ser, SLC["d" : ("b",) : -1], SLC[15:3:-1])
+ tm.assert_indexing_slices_equivalent(
+ ser, SLC[("d",) : ("b",) : -1], SLC[15:3:-1]
+ )
tm.assert_indexing_slices_equivalent(ser, SLC["b":"d":-1], SLC[:0])
- tm.assert_indexing_slices_equivalent(ser, SLC[("c", 2)::-1], SLC[10::-1])
- tm.assert_indexing_slices_equivalent(ser, SLC[:("c", 2):-1], SLC[:9:-1])
+ tm.assert_indexing_slices_equivalent(ser, SLC[("c", 2) :: -1], SLC[10::-1])
+ tm.assert_indexing_slices_equivalent(ser, SLC[: ("c", 2) : -1], SLC[:9:-1])
tm.assert_indexing_slices_equivalent(
- ser, SLC[("e", 0):("c", 2):-1], SLC[16:9:-1]
+ ser, SLC[("e", 0) : ("c", 2) : -1], SLC[16:9:-1]
)
def test_multiindex_slice_first_level(self):
diff --git a/pandas/tests/indexing/test_loc.py b/pandas/tests/indexing/test_loc.py
index c455b0bc8599b..c897afaeeee0e 100644
--- a/pandas/tests/indexing/test_loc.py
+++ b/pandas/tests/indexing/test_loc.py
@@ -1818,7 +1818,7 @@ def test_loc_setitem_multiindex_slice(self):
)
result = Series([1, 1, 1, 1, 1, 1, 1, 1], index=index)
- result.loc[("baz", "one"):("foo", "two")] = 100
+ result.loc[("baz", "one") : ("foo", "two")] = 100
expected = Series([1, 1, 100, 100, 100, 100, 1, 1], index=index)
@@ -2842,7 +2842,7 @@ def test_loc_axis_1_slice():
index=tuple("ABCDEFGHIJ"),
columns=MultiIndex.from_tuples(cols),
)
- result = df.loc(axis=1)[(2014, 9):(2015, 8)]
+ result = df.loc(axis=1)[(2014, 9) : (2015, 8)]
expected = DataFrame(
np.ones((10, 4)),
index=tuple("ABCDEFGHIJ"),
diff --git a/pandas/tests/io/formats/style/test_html.py b/pandas/tests/io/formats/style/test_html.py
index 1e345eb82ed3c..8cb06e3b7619d 100644
--- a/pandas/tests/io/formats/style/test_html.py
+++ b/pandas/tests/io/formats/style/test_html.py
@@ -93,11 +93,7 @@ def test_w3_html_format(styler):
lambda x: "att1:v1;"
).set_table_attributes('class="my-cls1" style="attr3:v3;"').set_td_classes(
DataFrame(["my-cls2"], index=["a"], columns=["A"])
- ).format(
- "{:.1f}"
- ).set_caption(
- "A comprehensive test"
- )
+ ).format("{:.1f}").set_caption("A comprehensive test")
expected = dedent(
"""\
<style type="text/css">
diff --git a/pandas/tests/io/formats/test_to_latex.py b/pandas/tests/io/formats/test_to_latex.py
index 1fd96dff27d06..98f1e0245b353 100644
--- a/pandas/tests/io/formats/test_to_latex.py
+++ b/pandas/tests/io/formats/test_to_latex.py
@@ -1338,9 +1338,7 @@ def test_to_latex_multiindex_names(self, name0, name1, axes):
& 4 & -1 & -1 & -1 & -1 \\
\bottomrule
\end{tabular}
-""" % tuple(
- list(col_names) + [idx_names_row]
- )
+""" % tuple(list(col_names) + [idx_names_row])
assert observed == expected
@pytest.mark.parametrize("one_row", [True, False])
diff --git a/pandas/tests/io/json/test_ujson.py b/pandas/tests/io/json/test_ujson.py
index 56ea9ea625dff..c7d2a5845b50e 100644
--- a/pandas/tests/io/json/test_ujson.py
+++ b/pandas/tests/io/json/test_ujson.py
@@ -178,7 +178,8 @@ def test_encode_dict_with_unicode_keys(self, unicode_key):
assert unicode_dict == ujson.ujson_loads(ujson.ujson_dumps(unicode_dict))
@pytest.mark.parametrize(
- "double_input", [math.pi, -math.pi] # Should work with negatives too.
+ "double_input",
+ [math.pi, -math.pi], # Should work with negatives too.
)
def test_encode_double_conversion(self, double_input):
output = ujson.ujson_dumps(double_input)
@@ -520,7 +521,8 @@ def test_decode_invalid_dict(self, invalid_dict):
ujson.ujson_loads(invalid_dict)
@pytest.mark.parametrize(
- "numeric_int_as_str", ["31337", "-31337"] # Should work with negatives.
+ "numeric_int_as_str",
+ ["31337", "-31337"], # Should work with negatives.
)
def test_decode_numeric_int(self, numeric_int_as_str):
assert int(numeric_int_as_str) == ujson.ujson_loads(numeric_int_as_str)
diff --git a/pandas/tests/io/parser/test_converters.py b/pandas/tests/io/parser/test_converters.py
index 7f3e45324dbd2..b6b882b4ec432 100644
--- a/pandas/tests/io/parser/test_converters.py
+++ b/pandas/tests/io/parser/test_converters.py
@@ -32,7 +32,8 @@ def test_converters_type_must_be_dict(all_parsers):
@pytest.mark.parametrize("column", [3, "D"])
@pytest.mark.parametrize(
- "converter", [parse, lambda x: int(x.split("/")[2])] # Produce integer.
+ "converter",
+ [parse, lambda x: int(x.split("/")[2])], # Produce integer.
)
def test_converters(all_parsers, column, converter):
parser = all_parsers
@@ -84,9 +85,9 @@ def test_converters_euro_decimal_format(all_parsers):
1;1521,1541;187101,9543;ABC;poi;4,7387
2;121,12;14897,76;DEF;uyt;0,3773
3;878,158;108013,434;GHI;rez;2,7356"""
- converters["Number1"] = converters["Number2"] = converters[
- "Number3"
- ] = lambda x: float(x.replace(",", "."))
+ converters["Number1"] = converters["Number2"] = converters["Number3"] = (
+ lambda x: float(x.replace(",", "."))
+ )
if parser.engine == "pyarrow":
msg = "The 'converters' option is not supported with the 'pyarrow' engine"
diff --git a/pandas/tests/io/parser/test_encoding.py b/pandas/tests/io/parser/test_encoding.py
index cbd3917ba9c04..155e52d76e895 100644
--- a/pandas/tests/io/parser/test_encoding.py
+++ b/pandas/tests/io/parser/test_encoding.py
@@ -57,9 +57,7 @@ def test_utf16_bom_skiprows(all_parsers, sep, encoding):
skip this too
A,B,C
1,2,3
-4,5,6""".replace(
- ",", sep
- )
+4,5,6""".replace(",", sep)
path = f"__{uuid.uuid4()}__.csv"
kwargs = {"sep": sep, "skiprows": 2}
utf8 = "utf-8"
diff --git a/pandas/tests/io/parser/test_read_fwf.py b/pandas/tests/io/parser/test_read_fwf.py
index bed2b5e10a6f7..b62fcc04c375c 100644
--- a/pandas/tests/io/parser/test_read_fwf.py
+++ b/pandas/tests/io/parser/test_read_fwf.py
@@ -488,9 +488,7 @@ def test_full_file_with_spaces():
868 Jennifer Love Hewitt 0 17000.00 5/25/1985
761 Jada Pinkett-Smith 49654.87 100000.00 12/5/2006
317 Bill Murray 789.65 5000.00 2/5/2007
-""".strip(
- "\r\n"
- )
+""".strip("\r\n")
colspecs = ((0, 7), (8, 28), (30, 38), (42, 53), (56, 70))
expected = read_fwf(StringIO(test), colspecs=colspecs)
@@ -507,9 +505,7 @@ def test_full_file_with_spaces_and_missing():
868 5/25/1985
761 Jada Pinkett-Smith 49654.87 100000.00 12/5/2006
317 Bill Murray 789.65
-""".strip(
- "\r\n"
- )
+""".strip("\r\n")
colspecs = ((0, 7), (8, 28), (30, 38), (42, 53), (56, 70))
expected = read_fwf(StringIO(test), colspecs=colspecs)
@@ -526,9 +522,7 @@ def test_messed_up_data():
761 Jada Pinkett-Smith 49654.87 100000.00 12/5/2006
317 Bill Murray 789.65
-""".strip(
- "\r\n"
- )
+""".strip("\r\n")
colspecs = ((2, 10), (15, 33), (37, 45), (49, 61), (64, 79))
expected = read_fwf(StringIO(test), colspecs=colspecs)
@@ -544,9 +538,7 @@ def test_multiple_delimiters():
++44~~~~12.01 baz~~Jennifer Love Hewitt
~~55 11+++foo++++Jada Pinkett-Smith
..66++++++.03~~~bar Bill Murray
-""".strip(
- "\r\n"
- )
+""".strip("\r\n")
delimiter = " +~.\\"
colspecs = ((0, 4), (7, 13), (15, 19), (21, 41))
expected = read_fwf(StringIO(test), colspecs=colspecs, delimiter=delimiter)
@@ -560,9 +552,7 @@ def test_variable_width_unicode():
שלום שלום
ום שלל
של ום
-""".strip(
- "\r\n"
- )
+""".strip("\r\n")
encoding = "utf8"
kwargs = {"header": None, "encoding": encoding}
diff --git a/pandas/tests/io/parser/test_skiprows.py b/pandas/tests/io/parser/test_skiprows.py
index 2d50916228f14..0ca47ded7ba8a 100644
--- a/pandas/tests/io/parser/test_skiprows.py
+++ b/pandas/tests/io/parser/test_skiprows.py
@@ -189,7 +189,8 @@ def test_skip_row_with_newline_and_quote(all_parsers, data, exp_data):
@xfail_pyarrow # ValueError: The 'delim_whitespace' option is not supported
@pytest.mark.parametrize(
- "lineterminator", ["\n", "\r\n", "\r"] # "LF" # "CRLF" # "CR"
+ "lineterminator",
+ ["\n", "\r\n", "\r"], # "LF" # "CRLF" # "CR"
)
def test_skiprows_lineterminator(all_parsers, lineterminator, request):
# see gh-9079
diff --git a/pandas/tests/io/parser/test_textreader.py b/pandas/tests/io/parser/test_textreader.py
index fef5414e85e52..1b3d1d41bc1c9 100644
--- a/pandas/tests/io/parser/test_textreader.py
+++ b/pandas/tests/io/parser/test_textreader.py
@@ -136,7 +136,10 @@ def test_skip_bad_lines(self):
reader.read()
reader = TextReader(
- StringIO(data), delimiter=":", header=None, on_bad_lines=2 # Skip
+ StringIO(data),
+ delimiter=":",
+ header=None,
+ on_bad_lines=2, # Skip
)
result = reader.read()
expected = {
@@ -148,7 +151,10 @@ def test_skip_bad_lines(self):
with tm.assert_produces_warning(ParserWarning, match="Skipping line"):
reader = TextReader(
- StringIO(data), delimiter=":", header=None, on_bad_lines=1 # Warn
+ StringIO(data),
+ delimiter=":",
+ header=None,
+ on_bad_lines=1, # Warn
)
reader.read()
diff --git a/pandas/tests/io/pytables/test_file_handling.py b/pandas/tests/io/pytables/test_file_handling.py
index d93de16816725..ed5f8cde7db7b 100644
--- a/pandas/tests/io/pytables/test_file_handling.py
+++ b/pandas/tests/io/pytables/test_file_handling.py
@@ -258,7 +258,7 @@ def test_complibs_default_settings_override(tmp_path, setup_path):
@pytest.mark.filterwarnings("ignore:object name is not a valid")
@pytest.mark.skipif(
not PY311 and is_ci_environment() and is_platform_linux(),
- reason="Segfaulting in a CI environment"
+ reason="Segfaulting in a CI environment",
# with xfail, would sometimes raise UnicodeDecodeError
# invalid state byte
)
diff --git a/pandas/tests/io/test_html.py b/pandas/tests/io/test_html.py
index 607357e709b6e..6a2d460232165 100644
--- a/pandas/tests/io/test_html.py
+++ b/pandas/tests/io/test_html.py
@@ -157,7 +157,8 @@ def test_to_html_compat(self, flavor_read_html):
columns=pd.Index(list("abc"), dtype=object),
)
# pylint: disable-next=consider-using-f-string
- .map("{:.3f}".format).astype(float)
+ .map("{:.3f}".format)
+ .astype(float)
)
out = df.to_html()
res = flavor_read_html(
diff --git a/pandas/tests/scalar/timestamp/test_timestamp.py b/pandas/tests/scalar/timestamp/test_timestamp.py
index 05e1c93e1a676..cacbe0a1d6095 100644
--- a/pandas/tests/scalar/timestamp/test_timestamp.py
+++ b/pandas/tests/scalar/timestamp/test_timestamp.py
@@ -124,7 +124,8 @@ def test_is_end(self, end, tz):
)
# error: Unsupported operand types for + ("List[None]" and "List[str]")
@pytest.mark.parametrize(
- "time_locale", [None] + tm.get_locales() # type: ignore[operator]
+ "time_locale",
+ [None] + tm.get_locales(), # type: ignore[operator]
)
def test_names(self, data, time_locale):
# GH 17354
diff --git a/pandas/tests/series/accessors/test_dt_accessor.py b/pandas/tests/series/accessors/test_dt_accessor.py
index 34465a7c12c18..911f5d7b28e3f 100644
--- a/pandas/tests/series/accessors/test_dt_accessor.py
+++ b/pandas/tests/series/accessors/test_dt_accessor.py
@@ -452,7 +452,8 @@ def test_dt_accessor_no_new_attributes(self):
# error: Unsupported operand types for + ("List[None]" and "List[str]")
@pytest.mark.parametrize(
- "time_locale", [None] + tm.get_locales() # type: ignore[operator]
+ "time_locale",
+ [None] + tm.get_locales(), # type: ignore[operator]
)
def test_dt_accessor_datetime_name_accessors(self, time_locale):
# Test Monday -> Sunday and January -> December, in that sequence
diff --git a/pandas/tests/strings/test_strings.py b/pandas/tests/strings/test_strings.py
index f662dfd7e2b14..25e4e1f9ec50c 100644
--- a/pandas/tests/strings/test_strings.py
+++ b/pandas/tests/strings/test_strings.py
@@ -230,7 +230,8 @@ def test_isnumeric_unicode(method, expected, any_string_dtype):
# 0x1378: ፸ ETHIOPIC NUMBER SEVENTY
# 0xFF13: 3 Em 3 # noqa: RUF003
ser = Series(
- ["A", "3", "¼", "★", "፸", "3", "four"], dtype=any_string_dtype # noqa: RUF001
+ ["A", "3", "¼", "★", "፸", "3", "four"], # noqa: RUF001
+ dtype=any_string_dtype,
)
expected_dtype = "bool" if any_string_dtype in object_pyarrow_numpy else "boolean"
expected = Series(expected, dtype=expected_dtype)
diff --git a/pandas/tests/test_algos.py b/pandas/tests/test_algos.py
index 16d684b72e1e3..02cd7b77c9b7d 100644
--- a/pandas/tests/test_algos.py
+++ b/pandas/tests/test_algos.py
@@ -1522,9 +1522,7 @@ def test_duplicated_with_nas(self):
]
),
np.array(["a", "b", "a", "e", "c", "b", "d", "a", "e", "f"], dtype=object),
- np.array(
- [1, 2**63, 1, 3**5, 10, 2**63, 39, 1, 3**5, 7], dtype=np.uint64
- ),
+ np.array([1, 2**63, 1, 3**5, 10, 2**63, 39, 1, 3**5, 7], dtype=np.uint64),
],
)
def test_numeric_object_likes(self, case):
diff --git a/pandas/tests/tseries/offsets/test_business_hour.py b/pandas/tests/tseries/offsets/test_business_hour.py
index e675977c6fab4..f01406fb50d23 100644
--- a/pandas/tests/tseries/offsets/test_business_hour.py
+++ b/pandas/tests/tseries/offsets/test_business_hour.py
@@ -915,28 +915,34 @@ def test_apply_nanoseconds(self):
(
BusinessHour(),
{
- Timestamp("2014-07-04 15:00")
- + Nano(5): Timestamp("2014-07-04 16:00")
+ Timestamp("2014-07-04 15:00") + Nano(5): Timestamp(
+ "2014-07-04 16:00"
+ )
+ Nano(5),
- Timestamp("2014-07-04 16:00")
- + Nano(5): Timestamp("2014-07-07 09:00")
+ Timestamp("2014-07-04 16:00") + Nano(5): Timestamp(
+ "2014-07-07 09:00"
+ )
+ Nano(5),
- Timestamp("2014-07-04 16:00")
- - Nano(5): Timestamp("2014-07-04 17:00")
+ Timestamp("2014-07-04 16:00") - Nano(5): Timestamp(
+ "2014-07-04 17:00"
+ )
- Nano(5),
},
),
(
BusinessHour(-1),
{
- Timestamp("2014-07-04 15:00")
- + Nano(5): Timestamp("2014-07-04 14:00")
+ Timestamp("2014-07-04 15:00") + Nano(5): Timestamp(
+ "2014-07-04 14:00"
+ )
+ Nano(5),
- Timestamp("2014-07-04 10:00")
- + Nano(5): Timestamp("2014-07-04 09:00")
+ Timestamp("2014-07-04 10:00") + Nano(5): Timestamp(
+ "2014-07-04 09:00"
+ )
+ Nano(5),
- Timestamp("2014-07-04 10:00")
- - Nano(5): Timestamp("2014-07-03 17:00")
+ Timestamp("2014-07-04 10:00") - Nano(5): Timestamp(
+ "2014-07-03 17:00"
+ )
- Nano(5),
},
),
diff --git a/pandas/tests/tseries/offsets/test_custom_business_hour.py b/pandas/tests/tseries/offsets/test_custom_business_hour.py
index 55a184f95c2d8..0335f415e2ec2 100644
--- a/pandas/tests/tseries/offsets/test_custom_business_hour.py
+++ b/pandas/tests/tseries/offsets/test_custom_business_hour.py
@@ -269,28 +269,22 @@ def test_apply(self, apply_case):
(
CustomBusinessHour(holidays=holidays),
{
- Timestamp("2014-07-01 15:00")
- + Nano(5): Timestamp("2014-07-01 16:00")
+ Timestamp("2014-07-01 15:00") + Nano(5): Timestamp("2014-07-01 16:00")
+ Nano(5),
- Timestamp("2014-07-01 16:00")
- + Nano(5): Timestamp("2014-07-03 09:00")
+ Timestamp("2014-07-01 16:00") + Nano(5): Timestamp("2014-07-03 09:00")
+ Nano(5),
- Timestamp("2014-07-01 16:00")
- - Nano(5): Timestamp("2014-07-01 17:00")
+ Timestamp("2014-07-01 16:00") - Nano(5): Timestamp("2014-07-01 17:00")
- Nano(5),
},
),
(
CustomBusinessHour(-1, holidays=holidays),
{
- Timestamp("2014-07-01 15:00")
- + Nano(5): Timestamp("2014-07-01 14:00")
+ Timestamp("2014-07-01 15:00") + Nano(5): Timestamp("2014-07-01 14:00")
+ Nano(5),
- Timestamp("2014-07-01 10:00")
- + Nano(5): Timestamp("2014-07-01 09:00")
+ Timestamp("2014-07-01 10:00") + Nano(5): Timestamp("2014-07-01 09:00")
+ Nano(5),
- Timestamp("2014-07-01 10:00")
- - Nano(5): Timestamp("2014-06-26 17:00")
+ Timestamp("2014-07-01 10:00") - Nano(5): Timestamp("2014-06-26 17:00")
- Nano(5),
},
),
diff --git a/pandas/tseries/holiday.py b/pandas/tseries/holiday.py
index 3c429a960b451..646284c79a3ad 100644
--- a/pandas/tseries/holiday.py
+++ b/pandas/tseries/holiday.py
@@ -484,9 +484,7 @@ def holidays(self, start=None, end=None, return_name: bool = False):
else:
# error: Incompatible types in assignment (expression has type
# "Series", variable has type "DataFrame")
- holidays = Series(
- index=DatetimeIndex([]), dtype=object
- ) # type: ignore[assignment]
+ holidays = Series(index=DatetimeIndex([]), dtype=object) # type: ignore[assignment]
self._cache = (start, end, holidays.sort_index())
diff --git a/pyproject.toml b/pyproject.toml
index 5aaa06d9a8da1..f693048adb60c 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -188,27 +188,6 @@ test-command = ""
select = "*-macosx*"
environment = {CFLAGS="-g0"}
-[tool.black]
-target-version = ['py39', 'py310']
-required-version = '23.11.0'
-exclude = '''
-(
- asv_bench/env
- | \.egg
- | \.git
- | \.hg
- | \.mypy_cache
- | \.nox
- | \.tox
- | \.venv
- | _build
- | buck-out
- | build
- | dist
- | setup.py
-)
-'''
-
[tool.ruff]
line-length = 88
target-version = "py310"
@@ -325,6 +304,8 @@ ignore = [
"PERF102",
# try-except-in-loop, becomes useless in Python 3.11
"PERF203",
+ # The following rules may cause conflicts when used with the formatter:
+ "ISC001",
### TODO: Enable gradually
diff --git a/scripts/tests/data/deps_minimum.toml b/scripts/tests/data/deps_minimum.toml
index 3be6be17d1ee2..0424920e5f446 100644
--- a/scripts/tests/data/deps_minimum.toml
+++ b/scripts/tests/data/deps_minimum.toml
@@ -161,26 +161,6 @@ test-command = ""
select = "*-win32"
environment = { IS_32_BIT="true" }
-[tool.black]
-target-version = ['py38', 'py39']
-exclude = '''
-(
- asv_bench/env
- | \.egg
- | \.git
- | \.hg
- | \.mypy_cache
- | \.nox
- | \.tox
- | \.venv
- | _build
- | buck-out
- | build
- | dist
- | setup.py
-)
-'''
-
[tool.ruff]
line-length = 88
update-check = false
| null | https://api.github.com/repos/pandas-dev/pandas/pulls/56704 | 2024-01-02T22:23:12Z | 2024-01-04T00:49:24Z | 2024-01-04T00:49:24Z | 2024-01-04T00:51:12Z |
DOC: Corrected typo in warning on coerce | diff --git a/doc/source/whatsnew/v2.1.0.rst b/doc/source/whatsnew/v2.1.0.rst
index 51b4c4f297b07..d4eb5742ef928 100644
--- a/doc/source/whatsnew/v2.1.0.rst
+++ b/doc/source/whatsnew/v2.1.0.rst
@@ -432,7 +432,7 @@ In a future version, these will raise an error and you should cast to a common d
In [3]: ser[0] = 'not an int64'
FutureWarning:
- Setting an item of incompatible dtype is deprecated and will raise in a future error of pandas.
+ Setting an item of incompatible dtype is deprecated and will raise an error in a future version of pandas.
Value 'not an int64' has dtype incompatible with int64, please explicitly cast to a compatible dtype first.
In [4]: ser
diff --git a/pandas/core/internals/blocks.py b/pandas/core/internals/blocks.py
index 20eff9315bc80..b7af545bd523e 100644
--- a/pandas/core/internals/blocks.py
+++ b/pandas/core/internals/blocks.py
@@ -512,7 +512,7 @@ def coerce_to_target_dtype(self, other, warn_on_upcast: bool = False) -> Block:
if warn_on_upcast:
warnings.warn(
f"Setting an item of incompatible dtype is deprecated "
- "and will raise in a future error of pandas. "
+ "and will raise an error in a future version of pandas. "
f"Value '{other}' has dtype incompatible with {self.values.dtype}, "
"please explicitly cast to a compatible dtype first.",
FutureWarning,
| Low priority.
I encountered this warning message:
> "Setting an item of incompatible dtype is deprecated and will raise in a future **error** of pandas."
Which I believe should read:
> "Setting an item of incompatible dtype is deprecated and will raise **an error** in a future **version** of pandas." | https://api.github.com/repos/pandas-dev/pandas/pulls/56699 | 2024-01-02T20:44:54Z | 2024-01-03T21:37:12Z | 2024-01-03T21:37:12Z | 2024-01-03T21:42:25Z |
Backport PR #56167 on branch 2.2.x ([ENH]: Expand types allowed in Series.struct.field) | diff --git a/doc/source/whatsnew/v2.2.0.rst b/doc/source/whatsnew/v2.2.0.rst
index 649ad37a56b35..15e98cbb2a4d7 100644
--- a/doc/source/whatsnew/v2.2.0.rst
+++ b/doc/source/whatsnew/v2.2.0.rst
@@ -251,6 +251,14 @@ DataFrame. (:issue:`54938`)
)
series.struct.explode()
+Use :meth:`Series.struct.field` to index into a (possible nested)
+struct field.
+
+
+.. ipython:: python
+
+ series.struct.field("project")
+
.. _whatsnew_220.enhancements.list_accessor:
Series.list accessor for PyArrow list data
diff --git a/pandas/core/arrays/arrow/accessors.py b/pandas/core/arrays/arrow/accessors.py
index 7f88267943526..124f8fb6ad8bc 100644
--- a/pandas/core/arrays/arrow/accessors.py
+++ b/pandas/core/arrays/arrow/accessors.py
@@ -6,13 +6,18 @@
ABCMeta,
abstractmethod,
)
-from typing import TYPE_CHECKING
+from typing import (
+ TYPE_CHECKING,
+ cast,
+)
from pandas.compat import (
pa_version_under10p1,
pa_version_under11p0,
)
+from pandas.core.dtypes.common import is_list_like
+
if not pa_version_under10p1:
import pyarrow as pa
import pyarrow.compute as pc
@@ -267,15 +272,27 @@ def dtypes(self) -> Series:
names = [struct.name for struct in pa_type]
return Series(types, index=Index(names))
- def field(self, name_or_index: str | int) -> Series:
+ def field(
+ self,
+ name_or_index: list[str]
+ | list[bytes]
+ | list[int]
+ | pc.Expression
+ | bytes
+ | str
+ | int,
+ ) -> Series:
"""
Extract a child field of a struct as a Series.
Parameters
----------
- name_or_index : str | int
+ name_or_index : str | bytes | int | expression | list
Name or index of the child field to extract.
+ For list-like inputs, this will index into a nested
+ struct.
+
Returns
-------
pandas.Series
@@ -285,6 +302,19 @@ def field(self, name_or_index: str | int) -> Series:
--------
Series.struct.explode : Return all child fields as a DataFrame.
+ Notes
+ -----
+ The name of the resulting Series will be set using the following
+ rules:
+
+ - For string, bytes, or integer `name_or_index` (or a list of these, for
+ a nested selection), the Series name is set to the selected
+ field's name.
+ - For a :class:`pyarrow.compute.Expression`, this is set to
+ the string form of the expression.
+ - For list-like `name_or_index`, the name will be set to the
+ name of the final field selected.
+
Examples
--------
>>> import pyarrow as pa
@@ -314,27 +344,92 @@ def field(self, name_or_index: str | int) -> Series:
1 2
2 1
Name: version, dtype: int64[pyarrow]
+
+ Or an expression
+
+ >>> import pyarrow.compute as pc
+ >>> s.struct.field(pc.field("project"))
+ 0 pandas
+ 1 pandas
+ 2 numpy
+ Name: project, dtype: string[pyarrow]
+
+ For nested struct types, you can pass a list of values to index
+ multiple levels:
+
+ >>> version_type = pa.struct([
+ ... ("major", pa.int64()),
+ ... ("minor", pa.int64()),
+ ... ])
+ >>> s = pd.Series(
+ ... [
+ ... {"version": {"major": 1, "minor": 5}, "project": "pandas"},
+ ... {"version": {"major": 2, "minor": 1}, "project": "pandas"},
+ ... {"version": {"major": 1, "minor": 26}, "project": "numpy"},
+ ... ],
+ ... dtype=pd.ArrowDtype(pa.struct(
+ ... [("version", version_type), ("project", pa.string())]
+ ... ))
+ ... )
+ >>> s.struct.field(["version", "minor"])
+ 0 5
+ 1 1
+ 2 26
+ Name: minor, dtype: int64[pyarrow]
+ >>> s.struct.field([0, 0])
+ 0 1
+ 1 2
+ 2 1
+ Name: major, dtype: int64[pyarrow]
"""
from pandas import Series
+ def get_name(
+ level_name_or_index: list[str]
+ | list[bytes]
+ | list[int]
+ | pc.Expression
+ | bytes
+ | str
+ | int,
+ data: pa.ChunkedArray,
+ ):
+ if isinstance(level_name_or_index, int):
+ name = data.type.field(level_name_or_index).name
+ elif isinstance(level_name_or_index, (str, bytes)):
+ name = level_name_or_index
+ elif isinstance(level_name_or_index, pc.Expression):
+ name = str(level_name_or_index)
+ elif is_list_like(level_name_or_index):
+ # For nested input like [2, 1, 2]
+ # iteratively get the struct and field name. The last
+ # one is used for the name of the index.
+ level_name_or_index = list(reversed(level_name_or_index))
+ selected = data
+ while level_name_or_index:
+ # we need the cast, otherwise mypy complains about
+ # getting ints, bytes, or str here, which isn't possible.
+ level_name_or_index = cast(list, level_name_or_index)
+ name_or_index = level_name_or_index.pop()
+ name = get_name(name_or_index, selected)
+ selected = selected.type.field(selected.type.get_field_index(name))
+ name = selected.name
+ else:
+ raise ValueError(
+ "name_or_index must be an int, str, bytes, "
+ "pyarrow.compute.Expression, or list of those"
+ )
+ return name
+
pa_arr = self._data.array._pa_array
- if isinstance(name_or_index, int):
- index = name_or_index
- elif isinstance(name_or_index, str):
- index = pa_arr.type.get_field_index(name_or_index)
- else:
- raise ValueError(
- "name_or_index must be an int or str, "
- f"got {type(name_or_index).__name__}"
- )
+ name = get_name(name_or_index, pa_arr)
+ field_arr = pc.struct_field(pa_arr, name_or_index)
- pa_field = pa_arr.type[index]
- field_arr = pc.struct_field(pa_arr, [index])
return Series(
field_arr,
dtype=ArrowDtype(field_arr.type),
index=self._data.index,
- name=pa_field.name,
+ name=name,
)
def explode(self) -> DataFrame:
diff --git a/pandas/tests/series/accessors/test_struct_accessor.py b/pandas/tests/series/accessors/test_struct_accessor.py
index 1ec5b3b726d17..80aea75fda406 100644
--- a/pandas/tests/series/accessors/test_struct_accessor.py
+++ b/pandas/tests/series/accessors/test_struct_accessor.py
@@ -2,6 +2,11 @@
import pytest
+from pandas.compat.pyarrow import (
+ pa_version_under11p0,
+ pa_version_under13p0,
+)
+
from pandas import (
ArrowDtype,
DataFrame,
@@ -11,6 +16,7 @@
import pandas._testing as tm
pa = pytest.importorskip("pyarrow")
+pc = pytest.importorskip("pyarrow.compute")
def test_struct_accessor_dtypes():
@@ -53,6 +59,7 @@ def test_struct_accessor_dtypes():
tm.assert_series_equal(actual, expected)
+@pytest.mark.skipif(pa_version_under13p0, reason="pyarrow>=13.0.0 required")
def test_struct_accessor_field():
index = Index([-100, 42, 123])
ser = Series(
@@ -94,10 +101,11 @@ def test_struct_accessor_field():
def test_struct_accessor_field_with_invalid_name_or_index():
ser = Series([], dtype=ArrowDtype(pa.struct([("field", pa.int64())])))
- with pytest.raises(ValueError, match="name_or_index must be an int or str"):
+ with pytest.raises(ValueError, match="name_or_index must be an int, str,"):
ser.struct.field(1.1)
+@pytest.mark.skipif(pa_version_under11p0, reason="pyarrow>=11.0.0 required")
def test_struct_accessor_explode():
index = Index([-100, 42, 123])
ser = Series(
@@ -148,3 +156,41 @@ def test_struct_accessor_api_for_invalid(invalid):
),
):
invalid.struct
+
+
+@pytest.mark.parametrize(
+ ["indices", "name"],
+ [
+ (0, "int_col"),
+ ([1, 2], "str_col"),
+ (pc.field("int_col"), "int_col"),
+ ("int_col", "int_col"),
+ (b"string_col", b"string_col"),
+ ([b"string_col"], "string_col"),
+ ],
+)
+@pytest.mark.skipif(pa_version_under13p0, reason="pyarrow>=13.0.0 required")
+def test_struct_accessor_field_expanded(indices, name):
+ arrow_type = pa.struct(
+ [
+ ("int_col", pa.int64()),
+ (
+ "struct_col",
+ pa.struct(
+ [
+ ("int_col", pa.int64()),
+ ("float_col", pa.float64()),
+ ("str_col", pa.string()),
+ ]
+ ),
+ ),
+ (b"string_col", pa.string()),
+ ]
+ )
+
+ data = pa.array([], type=arrow_type)
+ ser = Series(data, dtype=ArrowDtype(arrow_type))
+ expected = pc.struct_field(data, indices)
+ result = ser.struct.field(indices)
+ tm.assert_equal(result.array._pa_array.combine_chunks(), expected)
+ assert result.name == name
| Backport PR #56167: [ENH]: Expand types allowed in Series.struct.field | https://api.github.com/repos/pandas-dev/pandas/pulls/56698 | 2024-01-02T19:16:37Z | 2024-01-02T20:39:07Z | 2024-01-02T20:39:07Z | 2024-01-02T20:39:07Z |
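The name-resolution rules added to `Series.struct.field` in the diff above (int selectors index positionally, str selectors name a field, and list-like selectors walk nested levels with the final field's name becoming the Series name) can be sketched in plain Python. The `(name, children)` tuple schema below is an illustrative stand-in for pyarrow's struct types, not the real API:

```python
def resolve_field_name(selector, struct_fields):
    # struct_fields: list of (name, children) pairs standing in for a
    # pyarrow struct type; children is a list of the same shape for a
    # nested struct, or None for a leaf field. Mirrors the rule in the
    # diff: an int indexes positionally, a str names a field directly,
    # and a list walks one level per element, returning the name of the
    # final field selected.
    if isinstance(selector, int):
        return struct_fields[selector][0]
    if isinstance(selector, str):
        return selector
    if isinstance(selector, list):
        name = None
        for level in selector:
            name = resolve_field_name(level, struct_fields)
            struct_fields = dict(struct_fields)[name]
        return name
    raise ValueError("selector must be an int, str, or a list of those")


# Schema mirroring the nested example in the docstring above.
schema = [
    ("version", [("major", None), ("minor", None)]),
    ("project", None),
]

resolve_field_name(["version", "minor"], schema)  # -> "minor"
resolve_field_name([0, 0], schema)                # -> "major"
```

This matches the docstring examples in the diff, where `s.struct.field(["version", "minor"])` yields a Series named "minor" and `s.struct.field([0, 0])` one named "major".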
Bug pyarrow implementation of str.fullmatch matches partial string. issue #56652 | diff --git a/doc/source/whatsnew/v2.2.0.rst b/doc/source/whatsnew/v2.2.0.rst
index 15e98cbb2a4d7..043646457f604 100644
--- a/doc/source/whatsnew/v2.2.0.rst
+++ b/doc/source/whatsnew/v2.2.0.rst
@@ -805,6 +805,7 @@ Strings
- Bug in :meth:`Series.str.replace` when ``n < 0`` for :class:`ArrowDtype` with ``pyarrow.string`` (:issue:`56404`)
- Bug in :meth:`Series.str.startswith` and :meth:`Series.str.endswith` with arguments of type ``tuple[str, ...]`` for :class:`ArrowDtype` with ``pyarrow.string`` dtype (:issue:`56579`)
- Bug in :meth:`Series.str.startswith` and :meth:`Series.str.endswith` with arguments of type ``tuple[str, ...]`` for ``string[pyarrow]`` (:issue:`54942`)
+- Bug in :meth:`str.fullmatch` when ``dtype=pandas.ArrowDtype(pyarrow.string()))`` allows partial matches when regex ends in literal //$ (:issue:`56652`)
- Bug in comparison operations for ``dtype="string[pyarrow_numpy]"`` raising if dtypes can't be compared (:issue:`56008`)
Interval
diff --git a/pandas/core/arrays/arrow/array.py b/pandas/core/arrays/arrow/array.py
index d7c4d695e6951..ce496d612ae46 100644
--- a/pandas/core/arrays/arrow/array.py
+++ b/pandas/core/arrays/arrow/array.py
@@ -2296,7 +2296,7 @@ def _str_match(
def _str_fullmatch(
self, pat, case: bool = True, flags: int = 0, na: Scalar | None = None
) -> Self:
- if not pat.endswith("$") or pat.endswith("//$"):
+ if not pat.endswith("$") or pat.endswith("\\$"):
pat = f"{pat}$"
return self._str_match(pat, case, flags, na)
diff --git a/pandas/core/arrays/string_arrow.py b/pandas/core/arrays/string_arrow.py
index cb07fcf1a48fa..8c8787d15c8fe 100644
--- a/pandas/core/arrays/string_arrow.py
+++ b/pandas/core/arrays/string_arrow.py
@@ -436,7 +436,7 @@ def _str_match(
def _str_fullmatch(
self, pat, case: bool = True, flags: int = 0, na: Scalar | None = None
):
- if not pat.endswith("$") or pat.endswith("//$"):
+ if not pat.endswith("$") or pat.endswith("\\$"):
pat = f"{pat}$"
return self._str_match(pat, case, flags, na)
diff --git a/pandas/tests/extension/test_arrow.py b/pandas/tests/extension/test_arrow.py
index 76982ee5c38f8..204084d3a2de0 100644
--- a/pandas/tests/extension/test_arrow.py
+++ b/pandas/tests/extension/test_arrow.py
@@ -1898,16 +1898,21 @@ def test_str_match(pat, case, na, exp):
@pytest.mark.parametrize(
"pat, case, na, exp",
[
- ["abc", False, None, [True, None]],
- ["Abc", True, None, [False, None]],
- ["bc", True, None, [False, None]],
- ["ab", False, True, [True, True]],
- ["a[a-z]{2}", False, None, [True, None]],
- ["A[a-z]{1}", True, None, [False, None]],
+ ["abc", False, None, [True, True, False, None]],
+ ["Abc", True, None, [False, False, False, None]],
+ ["bc", True, None, [False, False, False, None]],
+ ["ab", False, None, [True, True, False, None]],
+ ["a[a-z]{2}", False, None, [True, True, False, None]],
+ ["A[a-z]{1}", True, None, [False, False, False, None]],
+ # GH Issue: #56652
+ ["abc$", False, None, [True, False, False, None]],
+ ["abc\\$", False, None, [False, True, False, None]],
+ ["Abc$", True, None, [False, False, False, None]],
+ ["Abc\\$", True, None, [False, False, False, None]],
],
)
def test_str_fullmatch(pat, case, na, exp):
- ser = pd.Series(["abc", None], dtype=ArrowDtype(pa.string()))
+ ser = pd.Series(["abc", "abc$", "$abc", None], dtype=ArrowDtype(pa.string()))
result = ser.str.match(pat, case=case, na=na)
expected = pd.Series(exp, dtype=ArrowDtype(pa.bool_()))
tm.assert_series_equal(result, expected)
diff --git a/pandas/tests/strings/test_find_replace.py b/pandas/tests/strings/test_find_replace.py
index 3f58c6d703f8f..cd4707ac405de 100644
--- a/pandas/tests/strings/test_find_replace.py
+++ b/pandas/tests/strings/test_find_replace.py
@@ -730,6 +730,15 @@ def test_fullmatch(any_string_dtype):
tm.assert_series_equal(result, expected)
+def test_fullmatch_dollar_literal(any_string_dtype):
+ # GH 56652
+ ser = Series(["foo", "foo$foo", np.nan, "foo$"], dtype=any_string_dtype)
+ result = ser.str.fullmatch("foo\\$")
+ expected_dtype = "object" if any_string_dtype in object_pyarrow_numpy else "boolean"
+ expected = Series([False, False, np.nan, True], dtype=expected_dtype)
+ tm.assert_series_equal(result, expected)
+
+
def test_fullmatch_na_kwarg(any_string_dtype):
ser = Series(
["fooBAD__barBAD", "BAD_BADleroybrown", np.nan, "foo"], dtype=any_string_dtype
| - [x] closes #56652
- [x] Tests updated and passed
- [x] All code checks passed
Changed array.py: makes Series(["abc$abc"]).str.fullmatch("abc\\$") with dtype = pyarrow.string give the same result as with dtype = str.
Issue reporter (Issue-[#56652](https://github.com/pandas-dev/pandas/issues/56652)) requested changing "//$" to "\\$", but this resulted in a DeprecationWarning in pytest, so "\\\\$" was used instead.
Change test_arrow.py: updated test_str_fullmatch to account for edge cases where the string starts with or ends with a literal $, as well as where the pattern ends with the string.
| https://api.github.com/repos/pandas-dev/pandas/pulls/56691 | 2023-12-31T15:18:41Z | 2024-01-03T18:41:23Z | 2024-01-03T18:41:23Z | 2024-01-17T22:28:22Z |
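The one-character condition change in the diff above is subtle: fullmatch is emulated via an anchored match, so a trailing "$" anchor is appended unless the pattern already ends with an unescaped "$". Below is a minimal stdlib sketch of the fixed logic, using `re` in place of pyarrow's regex engine (an assumption for illustration):

```python
import re


def pyarrow_style_fullmatch(pat: str, value: str) -> bool:
    # Fixed condition from the diff: append a trailing "$" anchor unless
    # the pattern already ends with "$" -- but DO append it when that
    # final "$" is escaped ("\$" in the regex), since an escaped dollar
    # is a literal character, not an anchor. The old check, "//$", never
    # matched an escaped dollar, so patterns like "abc\$" were left
    # unanchored and could match only a prefix of the string.
    if not pat.endswith("$") or pat.endswith("\\$"):
        pat = f"{pat}$"
    return re.match(pat, value) is not None


pyarrow_style_fullmatch("abc\\$", "abc$")     # True: full match on literal $
pyarrow_style_fullmatch("abc\\$", "abc$xyz")  # False: partial match rejected
pyarrow_style_fullmatch("abc$", "abc")        # True: existing anchor kept
```

The second call is exactly the partial-match case from the issue: before the fix, "abc\$" got no anchor and matched "abc$xyz".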
TYP: mostly Hashtable and ArrowExtensionArray | diff --git a/pandas/_libs/hashtable.pyi b/pandas/_libs/hashtable.pyi
index 555ec73acd9b2..3bb957812f0ed 100644
--- a/pandas/_libs/hashtable.pyi
+++ b/pandas/_libs/hashtable.pyi
@@ -2,6 +2,7 @@ from typing import (
Any,
Hashable,
Literal,
+ overload,
)
import numpy as np
@@ -180,18 +181,30 @@ class HashTable:
na_value: object = ...,
mask=...,
) -> npt.NDArray[np.intp]: ...
+ @overload
def unique(
self,
values: np.ndarray, # np.ndarray[subclass-specific]
- return_inverse: bool = ...,
- mask=...,
- ) -> (
- tuple[
- np.ndarray, # np.ndarray[subclass-specific]
- npt.NDArray[np.intp],
- ]
- | np.ndarray
- ): ... # np.ndarray[subclass-specific]
+ *,
+ return_inverse: Literal[False] = ...,
+ mask: None = ...,
+ ) -> np.ndarray: ... # np.ndarray[subclass-specific]
+ @overload
+ def unique(
+ self,
+ values: np.ndarray, # np.ndarray[subclass-specific]
+ *,
+ return_inverse: Literal[True],
+ mask: None = ...,
+ ) -> tuple[np.ndarray, npt.NDArray[np.intp],]: ... # np.ndarray[subclass-specific]
+ @overload
+ def unique(
+ self,
+ values: np.ndarray, # np.ndarray[subclass-specific]
+ *,
+ return_inverse: Literal[False] = ...,
+ mask: npt.NDArray[np.bool_],
+ ) -> tuple[np.ndarray, npt.NDArray[np.bool_],]: ... # np.ndarray[subclass-specific]
def factorize(
self,
values: np.ndarray, # np.ndarray[subclass-specific]
diff --git a/pandas/_libs/hashtable_class_helper.pxi.in b/pandas/_libs/hashtable_class_helper.pxi.in
index c0723392496c1..ed1284c34e110 100644
--- a/pandas/_libs/hashtable_class_helper.pxi.in
+++ b/pandas/_libs/hashtable_class_helper.pxi.in
@@ -755,7 +755,7 @@ cdef class {{name}}HashTable(HashTable):
return uniques.to_array(), result_mask.to_array()
return uniques.to_array()
- def unique(self, const {{dtype}}_t[:] values, bint return_inverse=False, object mask=None):
+ def unique(self, const {{dtype}}_t[:] values, *, bint return_inverse=False, object mask=None):
"""
Calculate unique values and labels (no sorting!)
@@ -1180,7 +1180,7 @@ cdef class StringHashTable(HashTable):
return uniques.to_array(), labels.base # .base -> underlying ndarray
return uniques.to_array()
- def unique(self, ndarray[object] values, bint return_inverse=False, object mask=None):
+ def unique(self, ndarray[object] values, *, bint return_inverse=False, object mask=None):
"""
Calculate unique values and labels (no sorting!)
@@ -1438,7 +1438,7 @@ cdef class PyObjectHashTable(HashTable):
return uniques.to_array(), labels.base # .base -> underlying ndarray
return uniques.to_array()
- def unique(self, ndarray[object] values, bint return_inverse=False, object mask=None):
+ def unique(self, ndarray[object] values, *, bint return_inverse=False, object mask=None):
"""
Calculate unique values and labels (no sorting!)
diff --git a/pandas/_typing.py b/pandas/_typing.py
index 3df9a47a35fca..a80f9603493a7 100644
--- a/pandas/_typing.py
+++ b/pandas/_typing.py
@@ -109,6 +109,7 @@
# array-like
ArrayLike = Union["ExtensionArray", np.ndarray]
+ArrayLikeT = TypeVar("ArrayLikeT", "ExtensionArray", np.ndarray)
AnyArrayLike = Union[ArrayLike, "Index", "Series"]
TimeArrayLike = Union["DatetimeArray", "TimedeltaArray"]
@@ -137,7 +138,7 @@ def __len__(self) -> int:
def __iter__(self) -> Iterator[_T_co]:
...
- def index(self, value: Any, /, start: int = 0, stop: int = ...) -> int:
+ def index(self, value: Any, start: int = ..., stop: int = ..., /) -> int:
...
def count(self, value: Any, /) -> int:
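The `_typing.py` hunk above moves `start`/`stop` in the protocol's `index` method to positional-only (the trailing `/`). The reason is structural compatibility: builtin `list.index` only accepts its arguments positionally, so a protocol that declares them as positional-or-keyword would reject `list`. A toy sketch under an assumed protocol name (`SeqLike` is hypothetical, standing in for the pandas protocol):

```python
from typing import Any, Protocol


class SeqLike(Protocol):
    # Positional-only parameters (the "/") let list.index, which takes
    # only positional arguments, structurally satisfy this protocol.
    def index(self, value: Any, start: int = 0, stop: int = 9_999_999, /) -> int:
        ...


def first_position(seq: SeqLike, value: Any) -> int:
    return seq.index(value)


# list satisfies SeqLike both at runtime and for a type checker.
print(first_position([10, 20, 30], 20))  # 1
```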
diff --git a/pandas/compat/pickle_compat.py b/pandas/compat/pickle_compat.py
index cd98087c06c18..ff589ebba4cf6 100644
--- a/pandas/compat/pickle_compat.py
+++ b/pandas/compat/pickle_compat.py
@@ -7,7 +7,10 @@
import copy
import io
import pickle as pkl
-from typing import TYPE_CHECKING
+from typing import (
+ TYPE_CHECKING,
+ Any,
+)
import numpy as np
@@ -209,7 +212,7 @@ def load_newobj_ex(self) -> None:
pass
-def load(fh, encoding: str | None = None, is_verbose: bool = False):
+def load(fh, encoding: str | None = None, is_verbose: bool = False) -> Any:
"""
Load a pickle, with a provided encoding,
@@ -239,7 +242,7 @@ def loads(
fix_imports: bool = True,
encoding: str = "ASCII",
errors: str = "strict",
-):
+) -> Any:
"""
Analogous to pickle._loads.
"""
diff --git a/pandas/core/accessor.py b/pandas/core/accessor.py
index 698abb2ec4989..9098a6f9664a9 100644
--- a/pandas/core/accessor.py
+++ b/pandas/core/accessor.py
@@ -54,7 +54,7 @@ class PandasDelegate:
def _delegate_property_get(self, name: str, *args, **kwargs):
raise TypeError(f"You cannot access the property {name}")
- def _delegate_property_set(self, name: str, value, *args, **kwargs):
+ def _delegate_property_set(self, name: str, value, *args, **kwargs) -> None:
raise TypeError(f"The property {name} cannot be set")
def _delegate_method(self, name: str, *args, **kwargs):
diff --git a/pandas/core/algorithms.py b/pandas/core/algorithms.py
index 15a07da76d2f7..76fdcefd03407 100644
--- a/pandas/core/algorithms.py
+++ b/pandas/core/algorithms.py
@@ -25,6 +25,7 @@
from pandas._typing import (
AnyArrayLike,
ArrayLike,
+ ArrayLikeT,
AxisInt,
DtypeObj,
TakeIndexer,
@@ -182,8 +183,8 @@ def _ensure_data(values: ArrayLike) -> np.ndarray:
def _reconstruct_data(
- values: ArrayLike, dtype: DtypeObj, original: AnyArrayLike
-) -> ArrayLike:
+ values: ArrayLikeT, dtype: DtypeObj, original: AnyArrayLike
+) -> ArrayLikeT:
"""
reverse of _ensure_data
@@ -206,7 +207,9 @@ def _reconstruct_data(
# that values.dtype == dtype
cls = dtype.construct_array_type()
- values = cls._from_sequence(values, dtype=dtype)
+ # error: Incompatible types in assignment (expression has type
+ # "ExtensionArray", variable has type "ndarray[Any, Any]")
+ values = cls._from_sequence(values, dtype=dtype) # type: ignore[assignment]
else:
values = values.astype(dtype, copy=False)
@@ -259,7 +262,9 @@ def _ensure_arraylike(values, func_name: str) -> ArrayLike:
}
-def _get_hashtable_algo(values: np.ndarray):
+def _get_hashtable_algo(
+ values: np.ndarray,
+) -> tuple[type[htable.HashTable], np.ndarray]:
"""
Parameters
----------
@@ -1550,7 +1555,9 @@ def safe_sort(
hash_klass, values = _get_hashtable_algo(values) # type: ignore[arg-type]
t = hash_klass(len(values))
t.map_locations(values)
- sorter = ensure_platform_int(t.lookup(ordered))
+ # error: Argument 1 to "lookup" of "HashTable" has incompatible type
+ # "ExtensionArray | ndarray[Any, Any] | Index | Series"; expected "ndarray"
+ sorter = ensure_platform_int(t.lookup(ordered)) # type: ignore[arg-type]
if use_na_sentinel:
# take_nd is faster, but only works for na_sentinels of -1
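The `ArrayLikeT` TypeVar introduced above is *constrained* rather than bound: `_reconstruct_data` now promises to return the same concrete kind of array it was given (ndarray in, ndarray out; ExtensionArray in, ExtensionArray out), instead of the loose `ArrayLike` union. A self-contained sketch of the mechanism, using `list` in place of `ExtensionArray` since the latter needs pandas:

```python
from typing import TypeVar

import numpy as np

# Constrained TypeVar: a checker resolves ArrT to whichever of the two
# constraints the caller passed, mirroring ArrayLikeT in the diff above.
ArrT = TypeVar("ArrT", np.ndarray, list)


def roundtrip(values: ArrT) -> ArrT:
    # Identity transform: inferred as ndarray -> ndarray or list -> list,
    # never as the union of the two.
    return values


arr = roundtrip(np.array([1, 2]))  # checker sees np.ndarray
lst = roundtrip([1, 2])            # checker sees list
```

With the plain `ArrayLike` union, every caller would have had to narrow the result back down with `cast` or `isinstance`; the constrained TypeVar removes that boilerplate.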
diff --git a/pandas/core/arrays/arrow/array.py b/pandas/core/arrays/arrow/array.py
index b1164301e6d79..d7c4d695e6951 100644
--- a/pandas/core/arrays/arrow/array.py
+++ b/pandas/core/arrays/arrow/array.py
@@ -10,6 +10,7 @@
Callable,
Literal,
cast,
+ overload,
)
import unicodedata
@@ -157,6 +158,7 @@ def floordiv_compat(
if TYPE_CHECKING:
from collections.abc import Sequence
+ from pandas._libs.missing import NAType
from pandas._typing import (
ArrayLike,
AxisInt,
@@ -280,7 +282,9 @@ def __init__(self, values: pa.Array | pa.ChunkedArray) -> None:
self._dtype = ArrowDtype(self._pa_array.type)
@classmethod
- def _from_sequence(cls, scalars, *, dtype: Dtype | None = None, copy: bool = False):
+ def _from_sequence(
+ cls, scalars, *, dtype: Dtype | None = None, copy: bool = False
+ ) -> Self:
"""
Construct a new ExtensionArray from a sequence of scalars.
"""
@@ -292,7 +296,7 @@ def _from_sequence(cls, scalars, *, dtype: Dtype | None = None, copy: bool = Fal
@classmethod
def _from_sequence_of_strings(
cls, strings, *, dtype: Dtype | None = None, copy: bool = False
- ):
+ ) -> Self:
"""
Construct a new ExtensionArray from a sequence of strings.
"""
@@ -675,7 +679,7 @@ def __setstate__(self, state) -> None:
state["_pa_array"] = pa.chunked_array(data)
self.__dict__.update(state)
- def _cmp_method(self, other, op):
+ def _cmp_method(self, other, op) -> ArrowExtensionArray:
pc_func = ARROW_CMP_FUNCS[op.__name__]
if isinstance(
other, (ArrowExtensionArray, np.ndarray, list, BaseMaskedArray)
@@ -701,7 +705,7 @@ def _cmp_method(self, other, op):
)
return ArrowExtensionArray(result)
- def _evaluate_op_method(self, other, op, arrow_funcs):
+ def _evaluate_op_method(self, other, op, arrow_funcs) -> Self:
pa_type = self._pa_array.type
other = self._box_pa(other)
@@ -752,7 +756,7 @@ def _evaluate_op_method(self, other, op, arrow_funcs):
result = pc_func(self._pa_array, other)
return type(self)(result)
- def _logical_method(self, other, op):
+ def _logical_method(self, other, op) -> Self:
# For integer types `^`, `|`, `&` are bitwise operators and return
# integer types. Otherwise these are boolean ops.
if pa.types.is_integer(self._pa_array.type):
@@ -760,7 +764,7 @@ def _logical_method(self, other, op):
else:
return self._evaluate_op_method(other, op, ARROW_LOGICAL_FUNCS)
- def _arith_method(self, other, op):
+ def _arith_method(self, other, op) -> Self:
return self._evaluate_op_method(other, op, ARROW_ARITHMETIC_FUNCS)
def equals(self, other) -> bool:
@@ -825,7 +829,15 @@ def isna(self) -> npt.NDArray[np.bool_]:
return self._pa_array.is_null().to_numpy()
- def any(self, *, skipna: bool = True, **kwargs):
+ @overload
+ def any(self, *, skipna: Literal[True] = ..., **kwargs) -> bool:
+ ...
+
+ @overload
+ def any(self, *, skipna: bool, **kwargs) -> bool | NAType:
+ ...
+
+ def any(self, *, skipna: bool = True, **kwargs) -> bool | NAType:
"""
Return whether any element is truthy.
@@ -883,7 +895,15 @@ def any(self, *, skipna: bool = True, **kwargs):
"""
return self._reduce("any", skipna=skipna, **kwargs)
- def all(self, *, skipna: bool = True, **kwargs):
+ @overload
+ def all(self, *, skipna: Literal[True] = ..., **kwargs) -> bool:
+ ...
+
+ @overload
+ def all(self, *, skipna: bool, **kwargs) -> bool | NAType:
+ ...
+
+ def all(self, *, skipna: bool = True, **kwargs) -> bool | NAType:
"""
Return whether all elements are truthy.
@@ -2027,7 +2047,7 @@ def _if_else(
cond: npt.NDArray[np.bool_] | bool,
left: ArrayLike | Scalar,
right: ArrayLike | Scalar,
- ):
+ ) -> pa.Array:
"""
Choose values based on a condition.
@@ -2071,7 +2091,7 @@ def _replace_with_mask(
values: pa.Array | pa.ChunkedArray,
mask: npt.NDArray[np.bool_] | bool,
replacements: ArrayLike | Scalar,
- ):
+ ) -> pa.Array | pa.ChunkedArray:
"""
Replace items selected with a mask.
@@ -2178,14 +2198,14 @@ def _apply_elementwise(self, func: Callable) -> list[list[Any]]:
for chunk in self._pa_array.iterchunks()
]
- def _str_count(self, pat: str, flags: int = 0):
+ def _str_count(self, pat: str, flags: int = 0) -> Self:
if flags:
raise NotImplementedError(f"count not implemented with {flags=}")
return type(self)(pc.count_substring_regex(self._pa_array, pat))
def _str_contains(
self, pat, case: bool = True, flags: int = 0, na=None, regex: bool = True
- ):
+ ) -> Self:
if flags:
raise NotImplementedError(f"contains not implemented with {flags=}")
@@ -2198,7 +2218,7 @@ def _str_contains(
result = result.fill_null(na)
return type(self)(result)
- def _str_startswith(self, pat: str | tuple[str, ...], na=None):
+ def _str_startswith(self, pat: str | tuple[str, ...], na=None) -> Self:
if isinstance(pat, str):
result = pc.starts_with(self._pa_array, pattern=pat)
else:
@@ -2215,7 +2235,7 @@ def _str_startswith(self, pat: str | tuple[str, ...], na=None):
result = result.fill_null(na)
return type(self)(result)
- def _str_endswith(self, pat: str | tuple[str, ...], na=None):
+ def _str_endswith(self, pat: str | tuple[str, ...], na=None) -> Self:
if isinstance(pat, str):
result = pc.ends_with(self._pa_array, pattern=pat)
else:
@@ -2240,7 +2260,7 @@ def _str_replace(
case: bool = True,
flags: int = 0,
regex: bool = True,
- ):
+ ) -> Self:
if isinstance(pat, re.Pattern) or callable(repl) or not case or flags:
raise NotImplementedError(
"replace is not supported with a re.Pattern, callable repl, "
@@ -2259,29 +2279,28 @@ def _str_replace(
)
return type(self)(result)
- def _str_repeat(self, repeats: int | Sequence[int]):
+ def _str_repeat(self, repeats: int | Sequence[int]) -> Self:
if not isinstance(repeats, int):
raise NotImplementedError(
f"repeat is not implemented when repeats is {type(repeats).__name__}"
)
- else:
- return type(self)(pc.binary_repeat(self._pa_array, repeats))
+ return type(self)(pc.binary_repeat(self._pa_array, repeats))
def _str_match(
self, pat: str, case: bool = True, flags: int = 0, na: Scalar | None = None
- ):
+ ) -> Self:
if not pat.startswith("^"):
pat = f"^{pat}"
return self._str_contains(pat, case, flags, na, regex=True)
def _str_fullmatch(
self, pat, case: bool = True, flags: int = 0, na: Scalar | None = None
- ):
+ ) -> Self:
if not pat.endswith("$") or pat.endswith("//$"):
pat = f"{pat}$"
return self._str_match(pat, case, flags, na)
- def _str_find(self, sub: str, start: int = 0, end: int | None = None):
+ def _str_find(self, sub: str, start: int = 0, end: int | None = None) -> Self:
if start != 0 and end is not None:
slices = pc.utf8_slice_codeunits(self._pa_array, start, stop=end)
result = pc.find_substring(slices, sub)
@@ -2298,7 +2317,7 @@ def _str_find(self, sub: str, start: int = 0, end: int | None = None):
)
return type(self)(result)
- def _str_join(self, sep: str):
+ def _str_join(self, sep: str) -> Self:
if pa.types.is_string(self._pa_array.type) or pa.types.is_large_string(
self._pa_array.type
):
@@ -2308,19 +2327,19 @@ def _str_join(self, sep: str):
result = self._pa_array
return type(self)(pc.binary_join(result, sep))
- def _str_partition(self, sep: str, expand: bool):
+ def _str_partition(self, sep: str, expand: bool) -> Self:
predicate = lambda val: val.partition(sep)
result = self._apply_elementwise(predicate)
return type(self)(pa.chunked_array(result))
- def _str_rpartition(self, sep: str, expand: bool):
+ def _str_rpartition(self, sep: str, expand: bool) -> Self:
predicate = lambda val: val.rpartition(sep)
result = self._apply_elementwise(predicate)
return type(self)(pa.chunked_array(result))
def _str_slice(
self, start: int | None = None, stop: int | None = None, step: int | None = None
- ):
+ ) -> Self:
if start is None:
start = 0
if step is None:
@@ -2329,57 +2348,57 @@ def _str_slice(
pc.utf8_slice_codeunits(self._pa_array, start=start, stop=stop, step=step)
)
- def _str_isalnum(self):
+ def _str_isalnum(self) -> Self:
return type(self)(pc.utf8_is_alnum(self._pa_array))
- def _str_isalpha(self):
+ def _str_isalpha(self) -> Self:
return type(self)(pc.utf8_is_alpha(self._pa_array))
- def _str_isdecimal(self):
+ def _str_isdecimal(self) -> Self:
return type(self)(pc.utf8_is_decimal(self._pa_array))
- def _str_isdigit(self):
+ def _str_isdigit(self) -> Self:
return type(self)(pc.utf8_is_digit(self._pa_array))
- def _str_islower(self):
+ def _str_islower(self) -> Self:
return type(self)(pc.utf8_is_lower(self._pa_array))
- def _str_isnumeric(self):
+ def _str_isnumeric(self) -> Self:
return type(self)(pc.utf8_is_numeric(self._pa_array))
- def _str_isspace(self):
+ def _str_isspace(self) -> Self:
return type(self)(pc.utf8_is_space(self._pa_array))
- def _str_istitle(self):
+ def _str_istitle(self) -> Self:
return type(self)(pc.utf8_is_title(self._pa_array))
- def _str_isupper(self):
+ def _str_isupper(self) -> Self:
return type(self)(pc.utf8_is_upper(self._pa_array))
- def _str_len(self):
+ def _str_len(self) -> Self:
return type(self)(pc.utf8_length(self._pa_array))
- def _str_lower(self):
+ def _str_lower(self) -> Self:
return type(self)(pc.utf8_lower(self._pa_array))
- def _str_upper(self):
+ def _str_upper(self) -> Self:
return type(self)(pc.utf8_upper(self._pa_array))
- def _str_strip(self, to_strip=None):
+ def _str_strip(self, to_strip=None) -> Self:
if to_strip is None:
result = pc.utf8_trim_whitespace(self._pa_array)
else:
result = pc.utf8_trim(self._pa_array, characters=to_strip)
return type(self)(result)
- def _str_lstrip(self, to_strip=None):
+ def _str_lstrip(self, to_strip=None) -> Self:
if to_strip is None:
result = pc.utf8_ltrim_whitespace(self._pa_array)
else:
result = pc.utf8_ltrim(self._pa_array, characters=to_strip)
return type(self)(result)
- def _str_rstrip(self, to_strip=None):
+ def _str_rstrip(self, to_strip=None) -> Self:
if to_strip is None:
result = pc.utf8_rtrim_whitespace(self._pa_array)
else:
@@ -2396,12 +2415,12 @@ def _str_removeprefix(self, prefix: str):
result = self._apply_elementwise(predicate)
return type(self)(pa.chunked_array(result))
- def _str_casefold(self):
+ def _str_casefold(self) -> Self:
predicate = lambda val: val.casefold()
result = self._apply_elementwise(predicate)
return type(self)(pa.chunked_array(result))
- def _str_encode(self, encoding: str, errors: str = "strict"):
+ def _str_encode(self, encoding: str, errors: str = "strict") -> Self:
predicate = lambda val: val.encode(encoding, errors)
result = self._apply_elementwise(predicate)
return type(self)(pa.chunked_array(result))
@@ -2421,7 +2440,7 @@ def _str_extract(self, pat: str, flags: int = 0, expand: bool = True):
else:
return type(self)(pc.struct_field(result, [0]))
- def _str_findall(self, pat: str, flags: int = 0):
+ def _str_findall(self, pat: str, flags: int = 0) -> Self:
regex = re.compile(pat, flags=flags)
predicate = lambda val: regex.findall(val)
result = self._apply_elementwise(predicate)
@@ -2443,22 +2462,22 @@ def _str_get_dummies(self, sep: str = "|"):
result = type(self)(pa.array(list(dummies)))
return result, uniques_sorted.to_pylist()
- def _str_index(self, sub: str, start: int = 0, end: int | None = None):
+ def _str_index(self, sub: str, start: int = 0, end: int | None = None) -> Self:
predicate = lambda val: val.index(sub, start, end)
result = self._apply_elementwise(predicate)
return type(self)(pa.chunked_array(result))
- def _str_rindex(self, sub: str, start: int = 0, end: int | None = None):
+ def _str_rindex(self, sub: str, start: int = 0, end: int | None = None) -> Self:
predicate = lambda val: val.rindex(sub, start, end)
result = self._apply_elementwise(predicate)
return type(self)(pa.chunked_array(result))
- def _str_normalize(self, form: str):
+ def _str_normalize(self, form: str) -> Self:
predicate = lambda val: unicodedata.normalize(form, val)
result = self._apply_elementwise(predicate)
return type(self)(pa.chunked_array(result))
- def _str_rfind(self, sub: str, start: int = 0, end=None):
+ def _str_rfind(self, sub: str, start: int = 0, end=None) -> Self:
predicate = lambda val: val.rfind(sub, start, end)
result = self._apply_elementwise(predicate)
return type(self)(pa.chunked_array(result))
@@ -2469,7 +2488,7 @@ def _str_split(
n: int | None = -1,
expand: bool = False,
regex: bool | None = None,
- ):
+ ) -> Self:
if n in {-1, 0}:
n = None
if pat is None:
@@ -2480,24 +2499,23 @@ def _str_split(
split_func = functools.partial(pc.split_pattern, pattern=pat)
return type(self)(split_func(self._pa_array, max_splits=n))
- def _str_rsplit(self, pat: str | None = None, n: int | None = -1):
+ def _str_rsplit(self, pat: str | None = None, n: int | None = -1) -> Self:
if n in {-1, 0}:
n = None
if pat is None:
return type(self)(
pc.utf8_split_whitespace(self._pa_array, max_splits=n, reverse=True)
)
- else:
- return type(self)(
- pc.split_pattern(self._pa_array, pat, max_splits=n, reverse=True)
- )
+ return type(self)(
+ pc.split_pattern(self._pa_array, pat, max_splits=n, reverse=True)
+ )
- def _str_translate(self, table: dict[int, str]):
+ def _str_translate(self, table: dict[int, str]) -> Self:
predicate = lambda val: val.translate(table)
result = self._apply_elementwise(predicate)
return type(self)(pa.chunked_array(result))
- def _str_wrap(self, width: int, **kwargs):
+ def _str_wrap(self, width: int, **kwargs) -> Self:
kwargs["width"] = width
tw = textwrap.TextWrapper(**kwargs)
predicate = lambda val: "\n".join(tw.wrap(val))
@@ -2505,13 +2523,13 @@ def _str_wrap(self, width: int, **kwargs):
return type(self)(pa.chunked_array(result))
@property
- def _dt_days(self):
+ def _dt_days(self) -> Self:
return type(self)(
pa.array(self._to_timedeltaarray().days, from_pandas=True, type=pa.int32())
)
@property
- def _dt_hours(self):
+ def _dt_hours(self) -> Self:
return type(self)(
pa.array(
[
@@ -2523,7 +2541,7 @@ def _dt_hours(self):
)
@property
- def _dt_minutes(self):
+ def _dt_minutes(self) -> Self:
return type(self)(
pa.array(
[
@@ -2535,7 +2553,7 @@ def _dt_minutes(self):
)
@property
- def _dt_seconds(self):
+ def _dt_seconds(self) -> Self:
return type(self)(
pa.array(
self._to_timedeltaarray().seconds, from_pandas=True, type=pa.int32()
@@ -2543,7 +2561,7 @@ def _dt_seconds(self):
)
@property
- def _dt_milliseconds(self):
+ def _dt_milliseconds(self) -> Self:
return type(self)(
pa.array(
[
@@ -2555,7 +2573,7 @@ def _dt_milliseconds(self):
)
@property
- def _dt_microseconds(self):
+ def _dt_microseconds(self) -> Self:
return type(self)(
pa.array(
self._to_timedeltaarray().microseconds,
@@ -2565,25 +2583,25 @@ def _dt_microseconds(self):
)
@property
- def _dt_nanoseconds(self):
+ def _dt_nanoseconds(self) -> Self:
return type(self)(
pa.array(
self._to_timedeltaarray().nanoseconds, from_pandas=True, type=pa.int32()
)
)
- def _dt_to_pytimedelta(self):
+ def _dt_to_pytimedelta(self) -> np.ndarray:
data = self._pa_array.to_pylist()
if self._dtype.pyarrow_dtype.unit == "ns":
data = [None if ts is None else ts.to_pytimedelta() for ts in data]
return np.array(data, dtype=object)
- def _dt_total_seconds(self):
+ def _dt_total_seconds(self) -> Self:
return type(self)(
pa.array(self._to_timedeltaarray().total_seconds(), from_pandas=True)
)
- def _dt_as_unit(self, unit: str):
+ def _dt_as_unit(self, unit: str) -> Self:
if pa.types.is_date(self.dtype.pyarrow_dtype):
raise NotImplementedError("as_unit not implemented for date types")
pd_array = self._maybe_convert_datelike_array()
@@ -2591,43 +2609,43 @@ def _dt_as_unit(self, unit: str):
return type(self)(pa.array(pd_array.as_unit(unit), from_pandas=True))
@property
- def _dt_year(self):
+ def _dt_year(self) -> Self:
return type(self)(pc.year(self._pa_array))
@property
- def _dt_day(self):
+ def _dt_day(self) -> Self:
return type(self)(pc.day(self._pa_array))
@property
- def _dt_day_of_week(self):
+ def _dt_day_of_week(self) -> Self:
return type(self)(pc.day_of_week(self._pa_array))
_dt_dayofweek = _dt_day_of_week
_dt_weekday = _dt_day_of_week
@property
- def _dt_day_of_year(self):
+ def _dt_day_of_year(self) -> Self:
return type(self)(pc.day_of_year(self._pa_array))
_dt_dayofyear = _dt_day_of_year
@property
- def _dt_hour(self):
+ def _dt_hour(self) -> Self:
return type(self)(pc.hour(self._pa_array))
- def _dt_isocalendar(self):
+ def _dt_isocalendar(self) -> Self:
return type(self)(pc.iso_calendar(self._pa_array))
@property
- def _dt_is_leap_year(self):
+ def _dt_is_leap_year(self) -> Self:
return type(self)(pc.is_leap_year(self._pa_array))
@property
- def _dt_is_month_start(self):
+ def _dt_is_month_start(self) -> Self:
return type(self)(pc.equal(pc.day(self._pa_array), 1))
@property
- def _dt_is_month_end(self):
+ def _dt_is_month_end(self) -> Self:
result = pc.equal(
pc.days_between(
pc.floor_temporal(self._pa_array, unit="day"),
@@ -2638,7 +2656,7 @@ def _dt_is_month_end(self):
return type(self)(result)
@property
- def _dt_is_year_start(self):
+ def _dt_is_year_start(self) -> Self:
return type(self)(
pc.and_(
pc.equal(pc.month(self._pa_array), 1),
@@ -2647,7 +2665,7 @@ def _dt_is_year_start(self):
)
@property
- def _dt_is_year_end(self):
+ def _dt_is_year_end(self) -> Self:
return type(self)(
pc.and_(
pc.equal(pc.month(self._pa_array), 12),
@@ -2656,7 +2674,7 @@ def _dt_is_year_end(self):
)
@property
- def _dt_is_quarter_start(self):
+ def _dt_is_quarter_start(self) -> Self:
result = pc.equal(
pc.floor_temporal(self._pa_array, unit="quarter"),
pc.floor_temporal(self._pa_array, unit="day"),
@@ -2664,7 +2682,7 @@ def _dt_is_quarter_start(self):
return type(self)(result)
@property
- def _dt_is_quarter_end(self):
+ def _dt_is_quarter_end(self) -> Self:
result = pc.equal(
pc.days_between(
pc.floor_temporal(self._pa_array, unit="day"),
@@ -2675,7 +2693,7 @@ def _dt_is_quarter_end(self):
return type(self)(result)
@property
- def _dt_days_in_month(self):
+ def _dt_days_in_month(self) -> Self:
result = pc.days_between(
pc.floor_temporal(self._pa_array, unit="month"),
pc.ceil_temporal(self._pa_array, unit="month"),
@@ -2685,35 +2703,35 @@ def _dt_days_in_month(self):
_dt_daysinmonth = _dt_days_in_month
@property
- def _dt_microsecond(self):
+ def _dt_microsecond(self) -> Self:
return type(self)(pc.microsecond(self._pa_array))
@property
- def _dt_minute(self):
+ def _dt_minute(self) -> Self:
return type(self)(pc.minute(self._pa_array))
@property
- def _dt_month(self):
+ def _dt_month(self) -> Self:
return type(self)(pc.month(self._pa_array))
@property
- def _dt_nanosecond(self):
+ def _dt_nanosecond(self) -> Self:
return type(self)(pc.nanosecond(self._pa_array))
@property
- def _dt_quarter(self):
+ def _dt_quarter(self) -> Self:
return type(self)(pc.quarter(self._pa_array))
@property
- def _dt_second(self):
+ def _dt_second(self) -> Self:
return type(self)(pc.second(self._pa_array))
@property
- def _dt_date(self):
+ def _dt_date(self) -> Self:
return type(self)(self._pa_array.cast(pa.date32()))
@property
- def _dt_time(self):
+ def _dt_time(self) -> Self:
unit = (
self.dtype.pyarrow_dtype.unit
if self.dtype.pyarrow_dtype.unit in {"us", "ns"}
@@ -2729,10 +2747,10 @@ def _dt_tz(self):
def _dt_unit(self):
return self.dtype.pyarrow_dtype.unit
- def _dt_normalize(self):
+ def _dt_normalize(self) -> Self:
return type(self)(pc.floor_temporal(self._pa_array, 1, "day"))
- def _dt_strftime(self, format: str):
+ def _dt_strftime(self, format: str) -> Self:
return type(self)(pc.strftime(self._pa_array, format=format))
def _round_temporally(
@@ -2741,7 +2759,7 @@ def _round_temporally(
freq,
ambiguous: TimeAmbiguous = "raise",
nonexistent: TimeNonexistent = "raise",
- ):
+ ) -> Self:
if ambiguous != "raise":
raise NotImplementedError("ambiguous is not supported.")
if nonexistent != "raise":
@@ -2777,7 +2795,7 @@ def _dt_ceil(
freq,
ambiguous: TimeAmbiguous = "raise",
nonexistent: TimeNonexistent = "raise",
- ):
+ ) -> Self:
return self._round_temporally("ceil", freq, ambiguous, nonexistent)
def _dt_floor(
@@ -2785,7 +2803,7 @@ def _dt_floor(
freq,
ambiguous: TimeAmbiguous = "raise",
nonexistent: TimeNonexistent = "raise",
- ):
+ ) -> Self:
return self._round_temporally("floor", freq, ambiguous, nonexistent)
def _dt_round(
@@ -2793,20 +2811,20 @@ def _dt_round(
freq,
ambiguous: TimeAmbiguous = "raise",
nonexistent: TimeNonexistent = "raise",
- ):
+ ) -> Self:
return self._round_temporally("round", freq, ambiguous, nonexistent)
- def _dt_day_name(self, locale: str | None = None):
+ def _dt_day_name(self, locale: str | None = None) -> Self:
if locale is None:
locale = "C"
return type(self)(pc.strftime(self._pa_array, format="%A", locale=locale))
- def _dt_month_name(self, locale: str | None = None):
+ def _dt_month_name(self, locale: str | None = None) -> Self:
if locale is None:
locale = "C"
return type(self)(pc.strftime(self._pa_array, format="%B", locale=locale))
- def _dt_to_pydatetime(self):
+ def _dt_to_pydatetime(self) -> np.ndarray:
if pa.types.is_date(self.dtype.pyarrow_dtype):
raise ValueError(
f"to_pydatetime cannot be called with {self.dtype.pyarrow_dtype} type. "
@@ -2822,7 +2840,7 @@ def _dt_tz_localize(
tz,
ambiguous: TimeAmbiguous = "raise",
nonexistent: TimeNonexistent = "raise",
- ):
+ ) -> Self:
if ambiguous != "raise":
raise NotImplementedError(f"{ambiguous=} is not supported")
nonexistent_pa = {
@@ -2842,7 +2860,7 @@ def _dt_tz_localize(
)
return type(self)(result)
- def _dt_tz_convert(self, tz):
+ def _dt_tz_convert(self, tz) -> Self:
if self.dtype.pyarrow_dtype.tz is None:
raise TypeError(
"Cannot convert tz-naive timestamps, use tz_localize to localize"
diff --git a/pandas/core/arrays/arrow/extension_types.py b/pandas/core/arrays/arrow/extension_types.py
index d52b60df47adc..2fa5f7a882cc7 100644
--- a/pandas/core/arrays/arrow/extension_types.py
+++ b/pandas/core/arrays/arrow/extension_types.py
@@ -145,7 +145,7 @@ def patch_pyarrow() -> None:
return
class ForbiddenExtensionType(pyarrow.ExtensionType):
- def __arrow_ext_serialize__(self):
+ def __arrow_ext_serialize__(self) -> bytes:
return b""
@classmethod
diff --git a/pandas/core/arrays/base.py b/pandas/core/arrays/base.py
index 59c6d911cfaef..e530b28cba88a 100644
--- a/pandas/core/arrays/base.py
+++ b/pandas/core/arrays/base.py
@@ -81,6 +81,7 @@
Sequence,
)
+ from pandas._libs.missing import NAType
from pandas._typing import (
ArrayLike,
AstypeArg,
@@ -266,7 +267,9 @@ class ExtensionArray:
# ------------------------------------------------------------------------
@classmethod
- def _from_sequence(cls, scalars, *, dtype: Dtype | None = None, copy: bool = False):
+ def _from_sequence(
+ cls, scalars, *, dtype: Dtype | None = None, copy: bool = False
+ ) -> Self:
"""
Construct a new ExtensionArray from a sequence of scalars.
@@ -329,7 +332,7 @@ def _from_scalars(cls, scalars, *, dtype: DtypeObj) -> Self:
@classmethod
def _from_sequence_of_strings(
cls, strings, *, dtype: Dtype | None = None, copy: bool = False
- ):
+ ) -> Self:
"""
Construct a new ExtensionArray from a sequence of strings.
@@ -2385,10 +2388,26 @@ def _groupby_op(
class ExtensionArraySupportsAnyAll(ExtensionArray):
- def any(self, *, skipna: bool = True) -> bool:
+ @overload
+ def any(self, *, skipna: Literal[True] = ...) -> bool:
+ ...
+
+ @overload
+ def any(self, *, skipna: bool) -> bool | NAType:
+ ...
+
+ def any(self, *, skipna: bool = True) -> bool | NAType:
raise AbstractMethodError(self)
- def all(self, *, skipna: bool = True) -> bool:
+ @overload
+ def all(self, *, skipna: Literal[True] = ...) -> bool:
+ ...
+
+ @overload
+ def all(self, *, skipna: bool) -> bool | NAType:
+ ...
+
+ def all(self, *, skipna: bool = True) -> bool | NAType:
raise AbstractMethodError(self)
diff --git a/pandas/core/arrays/categorical.py b/pandas/core/arrays/categorical.py
index 8a88227ad54a3..58809ba54ed56 100644
--- a/pandas/core/arrays/categorical.py
+++ b/pandas/core/arrays/categorical.py
@@ -597,7 +597,7 @@ def astype(self, dtype: AstypeArg, copy: bool = True) -> ArrayLike:
return result
- def to_list(self):
+ def to_list(self) -> list:
"""
Alias for tolist.
"""
@@ -1017,7 +1017,9 @@ def as_unordered(self) -> Self:
"""
return self.set_ordered(False)
- def set_categories(self, new_categories, ordered=None, rename: bool = False):
+ def set_categories(
+ self, new_categories, ordered=None, rename: bool = False
+ ) -> Self:
"""
Set the categories to the specified new categories.
@@ -1870,7 +1872,7 @@ def check_for_ordered(self, op) -> None:
def argsort(
self, *, ascending: bool = True, kind: SortKind = "quicksort", **kwargs
- ):
+ ) -> npt.NDArray[np.intp]:
"""
Return the indices that would sort the Categorical.
@@ -2618,7 +2620,15 @@ def isin(self, values: ArrayLike) -> npt.NDArray[np.bool_]:
code_values = code_values[null_mask | (code_values >= 0)]
return algorithms.isin(self.codes, code_values)
- def _replace(self, *, to_replace, value, inplace: bool = False):
+ @overload
+ def _replace(self, *, to_replace, value, inplace: Literal[False] = ...) -> Self:
+ ...
+
+ @overload
+ def _replace(self, *, to_replace, value, inplace: Literal[True]) -> None:
+ ...
+
+ def _replace(self, *, to_replace, value, inplace: bool = False) -> Self | None:
from pandas import Index
orig_dtype = self.dtype
@@ -2666,6 +2676,7 @@ def _replace(self, *, to_replace, value, inplace: bool = False):
)
if not inplace:
return cat
+ return None
# ------------------------------------------------------------------------
# String methods interface
@@ -2901,8 +2912,8 @@ def _delegate_property_get(self, name: str):
# error: Signature of "_delegate_property_set" incompatible with supertype
# "PandasDelegate"
- def _delegate_property_set(self, name: str, new_values): # type: ignore[override]
- return setattr(self._parent, name, new_values)
+ def _delegate_property_set(self, name: str, new_values) -> None: # type: ignore[override]
+ setattr(self._parent, name, new_values)
@property
def codes(self) -> Series:
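The `Categorical._replace` hunk above uses the standard `inplace` overload pattern: `inplace=False` returns a new object (`Self`), `inplace=True` mutates and returns `None`, and the implementation ends with an explicit `return None` so the `Self | None` annotation is satisfied on every path. A minimal sketch on a toy class (the class and method names here are invented for illustration):

```python
from typing import Literal, overload


class Tags:
    def __init__(self, items: list[str]) -> None:
        self.items = items

    @overload
    def replace(self, old: str, new: str, inplace: Literal[False] = ...) -> "Tags":
        ...

    @overload
    def replace(self, old: str, new: str, inplace: Literal[True]) -> None:
        ...

    def replace(self, old: str, new: str, inplace: bool = False) -> "Tags | None":
        updated = [new if t == old else t for t in self.items]
        if inplace:
            self.items = updated
            return None  # explicit, as in the `return None` added to _replace
        return Tags(updated)
```

The payoff is at call sites: `tags.replace("a", "b").items` type-checks, while the same expression with `inplace=True` is flagged as accessing an attribute on `None`.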
diff --git a/pandas/core/arrays/datetimelike.py b/pandas/core/arrays/datetimelike.py
index 6ca74c4c05bc6..4668db8d75cd7 100644
--- a/pandas/core/arrays/datetimelike.py
+++ b/pandas/core/arrays/datetimelike.py
@@ -344,7 +344,7 @@ def _format_native_types(
"""
raise AbstractMethodError(self)
- def _formatter(self, boxed: bool = False):
+ def _formatter(self, boxed: bool = False) -> Callable[[object], str]:
# TODO: Remove Datetime & DatetimeTZ formatters.
return "'{}'".format
@@ -808,9 +808,8 @@ def isin(self, values: ArrayLike) -> npt.NDArray[np.bool_]:
if self.dtype.kind in "mM":
self = cast("DatetimeArray | TimedeltaArray", self)
- # error: Item "ExtensionArray" of "ExtensionArray | ndarray[Any, Any]"
- # has no attribute "as_unit"
- values = values.as_unit(self.unit) # type: ignore[union-attr]
+ # error: "DatetimeLikeArrayMixin" has no attribute "as_unit"
+ values = values.as_unit(self.unit) # type: ignore[attr-defined]
try:
# error: Argument 1 to "_check_compatible_with" of "DatetimeLikeArrayMixin"
@@ -1209,7 +1208,7 @@ def _add_timedeltalike_scalar(self, other):
self, other = self._ensure_matching_resos(other)
return self._add_timedeltalike(other)
- def _add_timedelta_arraylike(self, other: TimedeltaArray):
+ def _add_timedelta_arraylike(self, other: TimedeltaArray) -> Self:
"""
Add a delta of a TimedeltaIndex
@@ -1222,30 +1221,26 @@ def _add_timedelta_arraylike(self, other: TimedeltaArray):
if len(self) != len(other):
raise ValueError("cannot add indices of unequal length")
- self = cast("DatetimeArray | TimedeltaArray", self)
-
- self, other = self._ensure_matching_resos(other)
+ self, other = cast(
+ "DatetimeArray | TimedeltaArray", self
+ )._ensure_matching_resos(other)
return self._add_timedeltalike(other)
@final
- def _add_timedeltalike(self, other: Timedelta | TimedeltaArray):
- self = cast("DatetimeArray | TimedeltaArray", self)
-
+ def _add_timedeltalike(self, other: Timedelta | TimedeltaArray) -> Self:
other_i8, o_mask = self._get_i8_values_and_mask(other)
new_values = add_overflowsafe(self.asi8, np.asarray(other_i8, dtype="i8"))
res_values = new_values.view(self._ndarray.dtype)
new_freq = self._get_arithmetic_result_freq(other)
- # error: Argument "dtype" to "_simple_new" of "DatetimeArray" has
- # incompatible type "Union[dtype[datetime64], DatetimeTZDtype,
- # dtype[timedelta64]]"; expected "Union[dtype[datetime64], DatetimeTZDtype]"
+ # error: Unexpected keyword argument "freq" for "_simple_new" of "NDArrayBacked"
return type(self)._simple_new(
- res_values, dtype=self.dtype, freq=new_freq # type: ignore[arg-type]
+ res_values, dtype=self.dtype, freq=new_freq # type: ignore[call-arg]
)
@final
- def _add_nat(self):
+ def _add_nat(self) -> Self:
"""
Add pd.NaT to self
"""
@@ -1253,22 +1248,19 @@ def _add_nat(self):
raise TypeError(
f"Cannot add {type(self).__name__} and {type(NaT).__name__}"
)
- self = cast("TimedeltaArray | DatetimeArray", self)
# GH#19124 pd.NaT is treated like a timedelta for both timedelta
# and datetime dtypes
result = np.empty(self.shape, dtype=np.int64)
result.fill(iNaT)
result = result.view(self._ndarray.dtype) # preserve reso
- # error: Argument "dtype" to "_simple_new" of "DatetimeArray" has
- # incompatible type "Union[dtype[timedelta64], dtype[datetime64],
- # DatetimeTZDtype]"; expected "Union[dtype[datetime64], DatetimeTZDtype]"
+ # error: Unexpected keyword argument "freq" for "_simple_new" of "NDArrayBacked"
return type(self)._simple_new(
- result, dtype=self.dtype, freq=None # type: ignore[arg-type]
+ result, dtype=self.dtype, freq=None # type: ignore[call-arg]
)
@final
- def _sub_nat(self):
+ def _sub_nat(self) -> np.ndarray:
"""
Subtract pd.NaT from self
"""
@@ -1313,7 +1305,7 @@ def _sub_periodlike(self, other: Period | PeriodArray) -> npt.NDArray[np.object_
return new_data
@final
- def _addsub_object_array(self, other: npt.NDArray[np.object_], op):
+ def _addsub_object_array(self, other: npt.NDArray[np.object_], op) -> np.ndarray:
"""
Add or subtract array-like of DateOffset objects
@@ -1364,7 +1356,7 @@ def __add__(self, other):
# scalar others
if other is NaT:
- result = self._add_nat()
+ result: np.ndarray | DatetimeLikeArrayMixin = self._add_nat()
elif isinstance(other, (Tick, timedelta, np.timedelta64)):
result = self._add_timedeltalike_scalar(other)
elif isinstance(other, BaseOffset):
@@ -1424,7 +1416,7 @@ def __sub__(self, other):
# scalar others
if other is NaT:
- result = self._sub_nat()
+ result: np.ndarray | DatetimeLikeArrayMixin = self._sub_nat()
elif isinstance(other, (Tick, timedelta, np.timedelta64)):
result = self._add_timedeltalike_scalar(-other)
elif isinstance(other, BaseOffset):
diff --git a/pandas/core/arrays/datetimes.py b/pandas/core/arrays/datetimes.py
index 6b7ddc4a72957..a4d01dd6667f6 100644
--- a/pandas/core/arrays/datetimes.py
+++ b/pandas/core/arrays/datetimes.py
@@ -7,6 +7,7 @@
)
from typing import (
TYPE_CHECKING,
+ TypeVar,
cast,
overload,
)
@@ -73,7 +74,10 @@
)
if TYPE_CHECKING:
- from collections.abc import Iterator
+ from collections.abc import (
+ Generator,
+ Iterator,
+ )
from pandas._typing import (
ArrayLike,
@@ -86,9 +90,15 @@
npt,
)
- from pandas import DataFrame
+ from pandas import (
+ DataFrame,
+ Timedelta,
+ )
from pandas.core.arrays import PeriodArray
+ _TimestampNoneT1 = TypeVar("_TimestampNoneT1", Timestamp, None)
+ _TimestampNoneT2 = TypeVar("_TimestampNoneT2", Timestamp, None)
+
_ITER_CHUNKSIZE = 10_000
@@ -326,7 +336,7 @@ def _simple_new( # type: ignore[override]
return result
@classmethod
- def _from_sequence(cls, scalars, *, dtype=None, copy: bool = False):
+ def _from_sequence(cls, scalars, *, dtype=None, copy: bool = False) -> Self:
return cls._from_sequence_not_strict(scalars, dtype=dtype, copy=copy)
@classmethod
@@ -2125,7 +2135,7 @@ def std(
ddof: int = 1,
keepdims: bool = False,
skipna: bool = True,
- ):
+ ) -> Timedelta:
"""
Return sample standard deviation over requested axis.
@@ -2191,7 +2201,7 @@ def _sequence_to_dt64(
yearfirst: bool = False,
ambiguous: TimeAmbiguous = "raise",
out_unit: str | None = None,
-):
+) -> tuple[np.ndarray, tzinfo | None]:
"""
Parameters
----------
@@ -2360,7 +2370,7 @@ def objects_to_datetime64(
errors: DateTimeErrorChoices = "raise",
allow_object: bool = False,
out_unit: str = "ns",
-):
+) -> tuple[np.ndarray, tzinfo | None]:
"""
Convert data to array of timestamps.
@@ -2665,8 +2675,8 @@ def _infer_tz_from_endpoints(
def _maybe_normalize_endpoints(
- start: Timestamp | None, end: Timestamp | None, normalize: bool
-):
+ start: _TimestampNoneT1, end: _TimestampNoneT2, normalize: bool
+) -> tuple[_TimestampNoneT1, _TimestampNoneT2]:
if normalize:
if start is not None:
start = start.normalize()
@@ -2717,7 +2727,7 @@ def _generate_range(
offset: BaseOffset,
*,
unit: str,
-):
+) -> Generator[Timestamp, None, None]:
"""
Generates a sequence of dates corresponding to the specified time
offset. Similar to dateutil.rrule except uses pandas DateOffset
diff --git a/pandas/core/arrays/interval.py b/pandas/core/arrays/interval.py
index a19b304529383..7d2d98f71b38c 100644
--- a/pandas/core/arrays/interval.py
+++ b/pandas/core/arrays/interval.py
@@ -8,6 +8,7 @@
import textwrap
from typing import (
TYPE_CHECKING,
+ Callable,
Literal,
Union,
overload,
@@ -232,7 +233,7 @@ def __new__(
dtype: Dtype | None = None,
copy: bool = False,
verify_integrity: bool = True,
- ):
+ ) -> Self:
data = extract_array(data, extract_numpy=True)
if isinstance(data, cls):
@@ -1241,7 +1242,7 @@ def value_counts(self, dropna: bool = True) -> Series:
# ---------------------------------------------------------------------
# Rendering Methods
- def _formatter(self, boxed: bool = False):
+ def _formatter(self, boxed: bool = False) -> Callable[[object], str]:
# returning 'str' here causes us to render as e.g. "(0, 1]" instead of
# "Interval(0, 1, closed='right')"
return str
@@ -1842,9 +1843,13 @@ def _from_combined(self, combined: np.ndarray) -> IntervalArray:
dtype = self._left.dtype
if needs_i8_conversion(dtype):
assert isinstance(self._left, (DatetimeArray, TimedeltaArray))
- new_left = type(self._left)._from_sequence(nc[:, 0], dtype=dtype)
+ new_left: DatetimeArray | TimedeltaArray | np.ndarray = type(
+ self._left
+ )._from_sequence(nc[:, 0], dtype=dtype)
assert isinstance(self._right, (DatetimeArray, TimedeltaArray))
- new_right = type(self._right)._from_sequence(nc[:, 1], dtype=dtype)
+ new_right: DatetimeArray | TimedeltaArray | np.ndarray = type(
+ self._right
+ )._from_sequence(nc[:, 1], dtype=dtype)
else:
assert isinstance(dtype, np.dtype)
new_left = nc[:, 0].view(dtype)
diff --git a/pandas/core/arrays/masked.py b/pandas/core/arrays/masked.py
index 03c09c5b2fd18..c1bac9cfcb02f 100644
--- a/pandas/core/arrays/masked.py
+++ b/pandas/core/arrays/masked.py
@@ -98,6 +98,7 @@
NumpySorter,
NumpyValueArrayLike,
)
+ from pandas._libs.missing import NAType
from pandas.compat.numpy import function as nv
@@ -152,7 +153,7 @@ def _from_sequence(cls, scalars, *, dtype=None, copy: bool = False) -> Self:
@classmethod
@doc(ExtensionArray._empty)
- def _empty(cls, shape: Shape, dtype: ExtensionDtype):
+ def _empty(cls, shape: Shape, dtype: ExtensionDtype) -> Self:
values = np.empty(shape, dtype=dtype.type)
values.fill(cls._internal_fill_value)
mask = np.ones(shape, dtype=bool)
@@ -499,7 +500,7 @@ def to_numpy(
return data
@doc(ExtensionArray.tolist)
- def tolist(self):
+ def tolist(self) -> list:
if self.ndim > 1:
return [x.tolist() for x in self]
dtype = None if self._hasna else self._data.dtype
@@ -1307,7 +1308,21 @@ def max(self, *, skipna: bool = True, axis: AxisInt | None = 0, **kwargs):
def map(self, mapper, na_action=None):
return map_array(self.to_numpy(), mapper, na_action=None)
- def any(self, *, skipna: bool = True, axis: AxisInt | None = 0, **kwargs):
+ @overload
+ def any(
+ self, *, skipna: Literal[True] = ..., axis: AxisInt | None = ..., **kwargs
+ ) -> np.bool_:
+ ...
+
+ @overload
+ def any(
+ self, *, skipna: bool, axis: AxisInt | None = ..., **kwargs
+ ) -> np.bool_ | NAType:
+ ...
+
+ def any(
+ self, *, skipna: bool = True, axis: AxisInt | None = 0, **kwargs
+ ) -> np.bool_ | NAType:
"""
Return whether any element is truthy.
@@ -1388,7 +1403,21 @@ def any(self, *, skipna: bool = True, axis: AxisInt | None = 0, **kwargs):
else:
return self.dtype.na_value
- def all(self, *, skipna: bool = True, axis: AxisInt | None = 0, **kwargs):
+ @overload
+ def all(
+ self, *, skipna: Literal[True] = ..., axis: AxisInt | None = ..., **kwargs
+ ) -> np.bool_:
+ ...
+
+ @overload
+ def all(
+ self, *, skipna: bool, axis: AxisInt | None = ..., **kwargs
+ ) -> np.bool_ | NAType:
+ ...
+
+ def all(
+ self, *, skipna: bool = True, axis: AxisInt | None = 0, **kwargs
+ ) -> np.bool_ | NAType:
"""
Return whether all elements are truthy.
diff --git a/pandas/core/arrays/sparse/accessor.py b/pandas/core/arrays/sparse/accessor.py
index 3dd7ebf564ca1..a1d81aeeecb0b 100644
--- a/pandas/core/arrays/sparse/accessor.py
+++ b/pandas/core/arrays/sparse/accessor.py
@@ -17,6 +17,11 @@
from pandas.core.arrays.sparse.array import SparseArray
if TYPE_CHECKING:
+ from scipy.sparse import (
+ coo_matrix,
+ spmatrix,
+ )
+
from pandas import (
DataFrame,
Series,
@@ -115,7 +120,9 @@ def from_coo(cls, A, dense_index: bool = False) -> Series:
return result
- def to_coo(self, row_levels=(0,), column_levels=(1,), sort_labels: bool = False):
+ def to_coo(
+ self, row_levels=(0,), column_levels=(1,), sort_labels: bool = False
+ ) -> tuple[coo_matrix, list, list]:
"""
Create a scipy.sparse.coo_matrix from a Series with MultiIndex.
@@ -326,7 +333,7 @@ def to_dense(self) -> DataFrame:
data = {k: v.array.to_dense() for k, v in self._parent.items()}
return DataFrame(data, index=self._parent.index, columns=self._parent.columns)
- def to_coo(self):
+ def to_coo(self) -> spmatrix:
"""
Return the contents of the frame as a sparse SciPy COO matrix.
diff --git a/pandas/core/arrays/sparse/array.py b/pandas/core/arrays/sparse/array.py
index 7a3ea85dde2b4..db670e1ea4816 100644
--- a/pandas/core/arrays/sparse/array.py
+++ b/pandas/core/arrays/sparse/array.py
@@ -584,11 +584,13 @@ def __setitem__(self, key, value) -> None:
raise TypeError(msg)
@classmethod
- def _from_sequence(cls, scalars, *, dtype: Dtype | None = None, copy: bool = False):
+ def _from_sequence(
+ cls, scalars, *, dtype: Dtype | None = None, copy: bool = False
+ ) -> Self:
return cls(scalars, dtype=dtype)
@classmethod
- def _from_factorized(cls, values, original):
+ def _from_factorized(cls, values, original) -> Self:
return cls(values, dtype=original.dtype)
# ------------------------------------------------------------------------
diff --git a/pandas/core/arrays/string_.py b/pandas/core/arrays/string_.py
index f451ebc352733..d4da5840689de 100644
--- a/pandas/core/arrays/string_.py
+++ b/pandas/core/arrays/string_.py
@@ -257,7 +257,7 @@ class BaseStringArray(ExtensionArray):
"""
@doc(ExtensionArray.tolist)
- def tolist(self):
+ def tolist(self) -> list:
if self.ndim > 1:
return [x.tolist() for x in self]
return list(self.to_numpy())
@@ -381,7 +381,9 @@ def _validate(self) -> None:
lib.convert_nans_to_NA(self._ndarray)
@classmethod
- def _from_sequence(cls, scalars, *, dtype: Dtype | None = None, copy: bool = False):
+ def _from_sequence(
+ cls, scalars, *, dtype: Dtype | None = None, copy: bool = False
+ ) -> Self:
if dtype and not (isinstance(dtype, str) and dtype == "string"):
dtype = pandas_dtype(dtype)
assert isinstance(dtype, StringDtype) and dtype.storage == "python"
@@ -414,7 +416,7 @@ def _from_sequence(cls, scalars, *, dtype: Dtype | None = None, copy: bool = Fal
@classmethod
def _from_sequence_of_strings(
cls, strings, *, dtype: Dtype | None = None, copy: bool = False
- ):
+ ) -> Self:
return cls._from_sequence(strings, dtype=dtype, copy=copy)
@classmethod
@@ -436,7 +438,7 @@ def __arrow_array__(self, type=None):
values[self.isna()] = None
return pa.array(values, type=type, from_pandas=True)
- def _values_for_factorize(self):
+ def _values_for_factorize(self) -> tuple[np.ndarray, None]:
arr = self._ndarray.copy()
mask = self.isna()
arr[mask] = None
diff --git a/pandas/core/arrays/string_arrow.py b/pandas/core/arrays/string_arrow.py
index d5a76811a12e6..cb07fcf1a48fa 100644
--- a/pandas/core/arrays/string_arrow.py
+++ b/pandas/core/arrays/string_arrow.py
@@ -59,6 +59,7 @@
AxisInt,
Dtype,
Scalar,
+ Self,
npt,
)
@@ -172,7 +173,9 @@ def __len__(self) -> int:
return len(self._pa_array)
@classmethod
- def _from_sequence(cls, scalars, *, dtype: Dtype | None = None, copy: bool = False):
+ def _from_sequence(
+ cls, scalars, *, dtype: Dtype | None = None, copy: bool = False
+ ) -> Self:
from pandas.core.arrays.masked import BaseMaskedArray
_chk_pyarrow_available()
@@ -201,7 +204,7 @@ def _from_sequence(cls, scalars, *, dtype: Dtype | None = None, copy: bool = Fal
@classmethod
def _from_sequence_of_strings(
cls, strings, dtype: Dtype | None = None, copy: bool = False
- ):
+ ) -> Self:
return cls._from_sequence(strings, dtype=dtype, copy=copy)
@property
@@ -439,7 +442,7 @@ def _str_fullmatch(
def _str_slice(
self, start: int | None = None, stop: int | None = None, step: int | None = None
- ):
+ ) -> Self:
if stop is None:
return super()._str_slice(start, stop, step)
if start is None:
@@ -490,27 +493,27 @@ def _str_len(self):
result = pc.utf8_length(self._pa_array)
return self._convert_int_dtype(result)
- def _str_lower(self):
+ def _str_lower(self) -> Self:
return type(self)(pc.utf8_lower(self._pa_array))
- def _str_upper(self):
+ def _str_upper(self) -> Self:
return type(self)(pc.utf8_upper(self._pa_array))
- def _str_strip(self, to_strip=None):
+ def _str_strip(self, to_strip=None) -> Self:
if to_strip is None:
result = pc.utf8_trim_whitespace(self._pa_array)
else:
result = pc.utf8_trim(self._pa_array, characters=to_strip)
return type(self)(result)
- def _str_lstrip(self, to_strip=None):
+ def _str_lstrip(self, to_strip=None) -> Self:
if to_strip is None:
result = pc.utf8_ltrim_whitespace(self._pa_array)
else:
result = pc.utf8_ltrim(self._pa_array, characters=to_strip)
return type(self)(result)
- def _str_rstrip(self, to_strip=None):
+ def _str_rstrip(self, to_strip=None) -> Self:
if to_strip is None:
result = pc.utf8_rtrim_whitespace(self._pa_array)
else:
diff --git a/pandas/core/dtypes/cast.py b/pandas/core/dtypes/cast.py
index d7a177c2a19c0..9a1ec2330a326 100644
--- a/pandas/core/dtypes/cast.py
+++ b/pandas/core/dtypes/cast.py
@@ -1541,25 +1541,24 @@ def construct_1d_arraylike_from_scalar(
if isinstance(dtype, ExtensionDtype):
cls = dtype.construct_array_type()
seq = [] if length == 0 else [value]
- subarr = cls._from_sequence(seq, dtype=dtype).repeat(length)
+ return cls._from_sequence(seq, dtype=dtype).repeat(length)
+
+ if length and dtype.kind in "iu" and isna(value):
+ # coerce if we have nan for an integer dtype
+ dtype = np.dtype("float64")
+ elif lib.is_np_dtype(dtype, "US"):
+ # we need to coerce to object dtype to avoid
+ # to allow numpy to take our string as a scalar value
+ dtype = np.dtype("object")
+ if not isna(value):
+ value = ensure_str(value)
+ elif dtype.kind in "mM":
+ value = _maybe_box_and_unbox_datetimelike(value, dtype)
- else:
- if length and dtype.kind in "iu" and isna(value):
- # coerce if we have nan for an integer dtype
- dtype = np.dtype("float64")
- elif lib.is_np_dtype(dtype, "US"):
- # we need to coerce to object dtype to avoid
- # to allow numpy to take our string as a scalar value
- dtype = np.dtype("object")
- if not isna(value):
- value = ensure_str(value)
- elif dtype.kind in "mM":
- value = _maybe_box_and_unbox_datetimelike(value, dtype)
-
- subarr = np.empty(length, dtype=dtype)
- if length:
- # GH 47391: numpy > 1.24 will raise filling np.nan into int dtypes
- subarr.fill(value)
+ subarr = np.empty(length, dtype=dtype)
+ if length:
+ # GH 47391: numpy > 1.24 will raise filling np.nan into int dtypes
+ subarr.fill(value)
return subarr
diff --git a/pandas/core/internals/array_manager.py b/pandas/core/internals/array_manager.py
index e253f82256a5f..ee62441ab8f55 100644
--- a/pandas/core/internals/array_manager.py
+++ b/pandas/core/internals/array_manager.py
@@ -661,7 +661,9 @@ def fast_xs(self, loc: int) -> SingleArrayManager:
values = [arr[loc] for arr in self.arrays]
if isinstance(dtype, ExtensionDtype):
- result = dtype.construct_array_type()._from_sequence(values, dtype=dtype)
+ result: np.ndarray | ExtensionArray = (
+ dtype.construct_array_type()._from_sequence(values, dtype=dtype)
+ )
# for datetime64/timedelta64, the np.ndarray constructor cannot handle pd.NaT
elif is_datetime64_ns_dtype(dtype):
result = DatetimeArray._from_sequence(values, dtype=dtype)._ndarray
diff --git a/pandas/core/internals/managers.py b/pandas/core/internals/managers.py
index 5f38720135efa..d08dee3663395 100644
--- a/pandas/core/internals/managers.py
+++ b/pandas/core/internals/managers.py
@@ -971,7 +971,9 @@ def fast_xs(self, loc: int) -> SingleBlockManager:
if len(self.blocks) == 1:
# TODO: this could be wrong if blk.mgr_locs is not slice(None)-like;
# is this ruled out in the general case?
- result = self.blocks[0].iget((slice(None), loc))
+ result: np.ndarray | ExtensionArray = self.blocks[0].iget(
+ (slice(None), loc)
+ )
# in the case of a single block, the new block is a view
bp = BlockPlacement(slice(0, len(result)))
block = new_block(
@@ -2368,9 +2370,9 @@ def make_na_array(dtype: DtypeObj, shape: Shape, fill_value) -> ArrayLike:
else:
# NB: we should never get here with dtype integer or bool;
# if we did, the missing_arr.fill would cast to gibberish
- missing_arr = np.empty(shape, dtype=dtype)
- missing_arr.fill(fill_value)
+ missing_arr_np = np.empty(shape, dtype=dtype)
+ missing_arr_np.fill(fill_value)
if dtype.kind in "mM":
- missing_arr = ensure_wrapped_if_datetimelike(missing_arr)
- return missing_arr
+ missing_arr_np = ensure_wrapped_if_datetimelike(missing_arr_np)
+ return missing_arr_np
diff --git a/pandas/core/strings/object_array.py b/pandas/core/strings/object_array.py
index 0029beccc40a8..29d17e7174ee9 100644
--- a/pandas/core/strings/object_array.py
+++ b/pandas/core/strings/object_array.py
@@ -205,10 +205,10 @@ def rep(x, r):
np.asarray(repeats, dtype=object),
rep,
)
- if isinstance(self, BaseStringArray):
- # Not going through map, so we have to do this here.
- result = type(self)._from_sequence(result, dtype=self.dtype)
- return result
+ if not isinstance(self, BaseStringArray):
+ return result
+ # Not going through map, so we have to do this here.
+ return type(self)._from_sequence(result, dtype=self.dtype)
def _str_match(
self, pat: str, case: bool = True, flags: int = 0, na: Scalar | None = None
diff --git a/pandas/core/tools/datetimes.py b/pandas/core/tools/datetimes.py
index 05262c235568d..097765f5705af 100644
--- a/pandas/core/tools/datetimes.py
+++ b/pandas/core/tools/datetimes.py
@@ -445,7 +445,7 @@ def _convert_listlike_datetimes(
# We can take a shortcut since the datetime64 numpy array
# is in UTC
out_unit = np.datetime_data(result.dtype)[0]
- dtype = cast(DatetimeTZDtype, tz_to_dtype(tz_parsed, out_unit))
+ dtype = tz_to_dtype(tz_parsed, out_unit)
dt64_values = result.view(f"M8[{dtype.unit}]")
dta = DatetimeArray._simple_new(dt64_values, dtype=dtype)
return DatetimeIndex._simple_new(dta, name=name)
| null | https://api.github.com/repos/pandas-dev/pandas/pulls/56689 | 2023-12-30T23:56:23Z | 2024-01-02T21:23:01Z | 2024-01-02T21:23:00Z | 2024-01-17T02:49:41Z |
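The row above repeatedly replaces untyped classmethod constructors with ones annotated `-> Self`. A minimal standalone sketch of why that annotation matters (the class and method names here are invented for illustration, not pandas internals):

```python
from __future__ import annotations  # defer annotation evaluation (runs on 3.9+)

try:
    from typing import Self  # Python 3.11+
except ImportError:  # older interpreters get Self from typing_extensions;
    Self = "BaseArray"  # runtime stand-in only, annotations stay unevaluated

class BaseArray:
    def __init__(self, values) -> None:
        self.values = list(values)

    @classmethod
    def _from_sequence(cls, values) -> Self:
        # With ``-> Self``, a type checker infers SubArray below,
        # not BaseArray, when the constructor is called on a subclass.
        return cls(values)

class SubArray(BaseArray):
    pass
```

Annotating `-> BaseArray` instead would make every subclass call site lose its concrete type, which is why the diff threads `Self` through `_from_sequence`, `_add_nat`, and friends.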
Added validation check for integer value for Series.diff | diff --git a/doc/source/whatsnew/v2.3.0.rst b/doc/source/whatsnew/v2.3.0.rst
index 1f1b0c7d7195a..c0692dba32a72 100644
--- a/doc/source/whatsnew/v2.3.0.rst
+++ b/doc/source/whatsnew/v2.3.0.rst
@@ -108,6 +108,8 @@ Performance improvements
Bug fixes
~~~~~~~~~
+- Fixed bug in :meth:`Series.diff` allowing non-integer values for the ``periods`` argument. (:issue:`56607`)
+
Categorical
^^^^^^^^^^^
diff --git a/pandas/core/algorithms.py b/pandas/core/algorithms.py
index 76fdcefd03407..128477dac562e 100644
--- a/pandas/core/algorithms.py
+++ b/pandas/core/algorithms.py
@@ -47,6 +47,7 @@
is_complex_dtype,
is_dict_like,
is_extension_array_dtype,
+ is_float,
is_float_dtype,
is_integer,
is_integer_dtype,
@@ -1361,7 +1362,12 @@ def diff(arr, n: int, axis: AxisInt = 0):
shifted
"""
- n = int(n)
+ # added a check on the integer value of period
+ # see https://github.com/pandas-dev/pandas/issues/56607
+ if not lib.is_integer(n):
+ if not (is_float(n) and n.is_integer()):
+ raise ValueError("periods must be an integer")
+ n = int(n)
na = np.nan
dtype = arr.dtype
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 1f9ac8511476e..90073e21cfd66 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -72,6 +72,7 @@
)
from pandas.core.dtypes.common import (
is_dict_like,
+ is_float,
is_integer,
is_iterator,
is_list_like,
@@ -3102,6 +3103,9 @@ def diff(self, periods: int = 1) -> Series:
--------
{examples}
"""
+ if not lib.is_integer(periods):
+ if not (is_float(periods) and periods.is_integer()):
+ raise ValueError("periods must be an integer")
result = algorithms.diff(self._values, periods)
return self._constructor(result, index=self.index, copy=False).__finalize__(
self, method="diff"
diff --git a/pandas/tests/series/methods/test_diff.py b/pandas/tests/series/methods/test_diff.py
index 18de81a927c3a..a46389087f87b 100644
--- a/pandas/tests/series/methods/test_diff.py
+++ b/pandas/tests/series/methods/test_diff.py
@@ -10,6 +10,11 @@
class TestSeriesDiff:
+ def test_diff_series_requires_integer(self):
+ series = Series(np.random.default_rng(2).standard_normal(2))
+ with pytest.raises(ValueError, match="periods must be an integer"):
+ series.diff(1.5)
+
def test_diff_np(self):
# TODO(__array_function__): could make np.diff return a Series
# matching ser.diff()
| - [1] closes #56607
- [2] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [3] Imported is_float from pandas.core.dtypes.common for the check and added an if statement to raise the ValueError. | https://api.github.com/repos/pandas-dev/pandas/pulls/56688 | 2023-12-30T20:16:41Z | 2024-01-07T16:18:47Z | 2024-01-07T16:18:47Z | 2024-01-07T16:18:47Z |
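The check added in that diff accepts integer-valued floats (e.g. ``2.0``) while rejecting fractional periods. A standalone sketch of the same logic, using plain ``isinstance`` in place of pandas' ``lib.is_integer``/``is_float`` helpers (the function name is illustrative, not a pandas API):

```python
import numpy as np

def validate_periods(n):
    # Accept ints (including NumPy integer scalars) and integer-valued
    # floats such as 2.0; reject anything fractional or non-numeric.
    if not isinstance(n, (int, np.integer)):
        if not (isinstance(n, float) and n.is_integer()):
            raise ValueError("periods must be an integer")
    return int(n)
```

Calling ``validate_periods(1.5)`` raises, mirroring the new ``test_diff_series_requires_integer`` test above.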
Backport PR #56312 on branch 2.2.x (DOC: Add whatsnew for concat regression) | diff --git a/doc/source/whatsnew/v2.2.0.rst b/doc/source/whatsnew/v2.2.0.rst
index 129f5cedb86c2..649ad37a56b35 100644
--- a/doc/source/whatsnew/v2.2.0.rst
+++ b/doc/source/whatsnew/v2.2.0.rst
@@ -761,6 +761,7 @@ Datetimelike
- Bug in parsing datetime strings with nanosecond resolution with non-ISO8601 formats incorrectly truncating sub-microsecond components (:issue:`56051`)
- Bug in parsing datetime strings with sub-second resolution and trailing zeros incorrectly inferring second or millisecond resolution (:issue:`55737`)
- Bug in the results of :func:`to_datetime` with an floating-dtype argument with ``unit`` not matching the pointwise results of :class:`Timestamp` (:issue:`56037`)
+- Fixed regression where :func:`concat` would raise an error when concatenating ``datetime64`` columns with differing resolutions (:issue:`53641`)
Timedelta
^^^^^^^^^
| Backport PR #56312: DOC: Add whatsnew for concat regression | https://api.github.com/repos/pandas-dev/pandas/pulls/56686 | 2023-12-30T15:33:57Z | 2023-12-30T19:05:18Z | 2023-12-30T19:05:18Z | 2023-12-30T19:05:18Z |
add test for concating tzaware series with empty series. Issue: #34174 | diff --git a/pandas/tests/dtypes/test_concat.py b/pandas/tests/dtypes/test_concat.py
index 97718386dabb7..4f7ae6fa2a0a0 100644
--- a/pandas/tests/dtypes/test_concat.py
+++ b/pandas/tests/dtypes/test_concat.py
@@ -49,3 +49,20 @@ def test_concat_periodarray_2d():
with pytest.raises(ValueError, match=msg):
_concat.concat_compat([arr[:2], arr[2:]], axis=1)
+
+
+def test_concat_series_between_empty_and_tzaware_series():
+ tzaware_time = pd.Timestamp("2020-01-01T00:00:00+00:00")
+ ser1 = Series(index=[tzaware_time], data=0, dtype=float)
+ ser2 = Series(dtype=float)
+
+ result = pd.concat([ser1, ser2], axis=1)
+ expected = pd.DataFrame(
+ data=[
+ (0.0, None),
+ ],
+ index=pd.Index([tzaware_time], dtype=object),
+ columns=[0, 1],
+ dtype=float,
+ )
+ tm.assert_frame_equal(result, expected)
| - [x] closes #34174
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/56685 | 2023-12-30T15:04:26Z | 2024-01-03T18:37:37Z | 2024-01-03T18:37:37Z | 2024-01-03T18:37:43Z |
Backport PR #56682 on branch 2.2.x (CLN: NEP 50 followups) | diff --git a/.github/workflows/unit-tests.yml b/.github/workflows/unit-tests.yml
index 12e645dc9da81..dd5d090e098b0 100644
--- a/.github/workflows/unit-tests.yml
+++ b/.github/workflows/unit-tests.yml
@@ -92,7 +92,7 @@ jobs:
- name: "Numpy Dev"
env_file: actions-311-numpydev.yaml
pattern: "not slow and not network and not single_cpu"
- test_args: "-W error::FutureWarning"
+ test_args: "-W error::DeprecationWarning -W error::FutureWarning"
- name: "Pyarrow Nightly"
env_file: actions-311-pyarrownightly.yaml
pattern: "not slow and not network and not single_cpu"
diff --git a/ci/deps/actions-310.yaml b/ci/deps/actions-310.yaml
index 4b62ecc79e4ef..45f114322015b 100644
--- a/ci/deps/actions-310.yaml
+++ b/ci/deps/actions-310.yaml
@@ -20,7 +20,7 @@ dependencies:
# required dependencies
- python-dateutil
- - numpy<2
+ - numpy
- pytz
# optional dependencies
diff --git a/ci/deps/actions-311-downstream_compat.yaml b/ci/deps/actions-311-downstream_compat.yaml
index 95c0319d6f5b8..d6bf9ec7843de 100644
--- a/ci/deps/actions-311-downstream_compat.yaml
+++ b/ci/deps/actions-311-downstream_compat.yaml
@@ -21,7 +21,7 @@ dependencies:
# required dependencies
- python-dateutil
- - numpy<2
+ - numpy
- pytz
# optional dependencies
diff --git a/ci/deps/actions-311-pyarrownightly.yaml b/ci/deps/actions-311-pyarrownightly.yaml
index 5455b9b84b034..d84063ac2a9ba 100644
--- a/ci/deps/actions-311-pyarrownightly.yaml
+++ b/ci/deps/actions-311-pyarrownightly.yaml
@@ -18,7 +18,7 @@ dependencies:
# required dependencies
- python-dateutil
- - numpy<2
+ - numpy
- pytz
- pip
diff --git a/ci/deps/actions-311-sanitizers.yaml b/ci/deps/actions-311-sanitizers.yaml
index dcd381066b0ea..f5f04c90bffad 100644
--- a/ci/deps/actions-311-sanitizers.yaml
+++ b/ci/deps/actions-311-sanitizers.yaml
@@ -22,7 +22,7 @@ dependencies:
# required dependencies
- python-dateutil
- - numpy<2
+ - numpy
- pytz
# pandas dependencies
diff --git a/ci/deps/actions-311.yaml b/ci/deps/actions-311.yaml
index 52074ae00ea18..d14686696e669 100644
--- a/ci/deps/actions-311.yaml
+++ b/ci/deps/actions-311.yaml
@@ -20,7 +20,7 @@ dependencies:
# required dependencies
- python-dateutil
- - numpy<2
+ - numpy
- pytz
# optional dependencies
diff --git a/ci/deps/actions-312.yaml b/ci/deps/actions-312.yaml
index 4c51e9e6029e3..86aaf24b4e15c 100644
--- a/ci/deps/actions-312.yaml
+++ b/ci/deps/actions-312.yaml
@@ -20,7 +20,7 @@ dependencies:
# required dependencies
- python-dateutil
- - numpy<2
+ - numpy
- pytz
# optional dependencies
diff --git a/ci/deps/actions-39-minimum_versions.yaml b/ci/deps/actions-39-minimum_versions.yaml
index fd71315d2e7ac..7067048c4434d 100644
--- a/ci/deps/actions-39-minimum_versions.yaml
+++ b/ci/deps/actions-39-minimum_versions.yaml
@@ -22,7 +22,7 @@ dependencies:
# required dependencies
- python-dateutil=2.8.2
- - numpy=1.22.4, <2
+ - numpy=1.22.4
- pytz=2020.1
# optional dependencies
diff --git a/ci/deps/actions-39.yaml b/ci/deps/actions-39.yaml
index cbe8f77c15730..31ee74174cd46 100644
--- a/ci/deps/actions-39.yaml
+++ b/ci/deps/actions-39.yaml
@@ -20,7 +20,7 @@ dependencies:
# required dependencies
- python-dateutil
- - numpy<2
+ - numpy
- pytz
# optional dependencies
diff --git a/ci/deps/actions-pypy-39.yaml b/ci/deps/actions-pypy-39.yaml
index 5a5a01f7aec72..d9c8dd81b7c33 100644
--- a/ci/deps/actions-pypy-39.yaml
+++ b/ci/deps/actions-pypy-39.yaml
@@ -20,7 +20,7 @@ dependencies:
- hypothesis>=6.46.1
# required
- - numpy<2
+ - numpy
- python-dateutil
- pytz
- pip:
diff --git a/ci/deps/circle-310-arm64.yaml b/ci/deps/circle-310-arm64.yaml
index 8e106445cd4e0..a19ffd485262d 100644
--- a/ci/deps/circle-310-arm64.yaml
+++ b/ci/deps/circle-310-arm64.yaml
@@ -20,7 +20,7 @@ dependencies:
# required dependencies
- python-dateutil
- - numpy<2
+ - numpy
- pytz
# optional dependencies
diff --git a/pandas/core/dtypes/cast.py b/pandas/core/dtypes/cast.py
index 7a088bf84c48e..259e83a5936d7 100644
--- a/pandas/core/dtypes/cast.py
+++ b/pandas/core/dtypes/cast.py
@@ -1332,7 +1332,7 @@ def find_result_type(left_dtype: DtypeObj, right: Any) -> DtypeObj:
right = left_dtype
elif (
not np.issubdtype(left_dtype, np.unsignedinteger)
- and 0 < right <= 2 ** (8 * right_dtype.itemsize - 1) - 1
+ and 0 < right <= np.iinfo(right_dtype).max
):
# If left dtype isn't unsigned, check if it fits in the signed dtype
right = np.dtype(f"i{right_dtype.itemsize}")
| Backport PR #56682: CLN: NEP 50 followups | https://api.github.com/repos/pandas-dev/pandas/pulls/56684 | 2023-12-29T22:59:27Z | 2023-12-29T23:47:36Z | 2023-12-29T23:47:36Z | 2023-12-29T23:47:36Z |
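Among the changes in that backport, ``find_result_type`` swaps a hand-rolled signed-integer bound for ``np.iinfo(right_dtype).max``. The two expressions agree for signed dtypes but differ for unsigned ones, which presumably widens the set of values treated as fitting (a quick check; the surrounding pandas logic is not reproduced here):

```python
import numpy as np

# Signed dtypes: the old expression 2**(bits-1) - 1 equals iinfo(...).max.
for dtype in map(np.dtype, ["int8", "int16", "int32", "int64"]):
    assert 2 ** (8 * dtype.itemsize - 1) - 1 == np.iinfo(dtype).max

# Unsigned dtypes: the old bound only covered half the range;
# np.iinfo reports the full unsigned maximum.
u1 = np.dtype("uint8")
old_bound = 2 ** (8 * u1.itemsize - 1) - 1  # hand-rolled signed-style bound
new_bound = np.iinfo(u1).max                # full unsigned maximum
```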
Backport PR #56666 on branch 2.2.x (STY: Use ruff instead of pygrep check for future annotation import) | diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml
index 7f3fc95ce00cc..4b02ad7cf886f 100644
--- a/.pre-commit-config.yaml
+++ b/.pre-commit-config.yaml
@@ -358,18 +358,6 @@ repos:
files: ^pandas/
exclude: ^(pandas/_libs/|pandas/tests/|pandas/errors/__init__.py$|pandas/_version.py)
types: [python]
- - id: future-annotations
- name: import annotations from __future__
- entry: 'from __future__ import annotations'
- language: pygrep
- args: [--negate]
- files: ^pandas/
- types: [python]
- exclude: |
- (?x)
- /(__init__\.py)|(api\.py)|(_version\.py)|(testing\.py)|(conftest\.py)$
- |/tests/
- |/_testing/
- id: check-test-naming
name: check that test names start with 'test'
entry: python -m scripts.check_test_naming
diff --git a/pyproject.toml b/pyproject.toml
index 5e65edf81f9c7..8724a25909543 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -259,6 +259,8 @@ select = [
"FLY",
# flake8-logging-format
"G",
+ # flake8-future-annotations
+ "FA",
]
ignore = [
| Backport PR #56666: STY: Use ruff instead of pygrep check for future annotation import | https://api.github.com/repos/pandas-dev/pandas/pulls/56683 | 2023-12-29T21:54:07Z | 2023-12-29T22:58:59Z | 2023-12-29T22:58:59Z | 2023-12-29T22:58:59Z |
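The removed pygrep hook enforced a literal ``from __future__ import annotations`` line; ruff's flake8-future-annotations rules (``FA100``/``FA102``) check roughly the same thing semantically. A small module of the kind those rules care about (the function body is illustrative):

```python
# Without the next line, the PEP 604 unions below raise TypeError at
# import time on Python 3.9 -- roughly what the FA rules guard against.
from __future__ import annotations

def first_even(items: list[int] | None) -> int | None:
    # Return the first even element, or None if there is none.
    for item in items or []:
        if item % 2 == 0:
            return item
    return None
```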
CLN: NEP 50 followups | diff --git a/.github/workflows/unit-tests.yml b/.github/workflows/unit-tests.yml
index 6ca4d19196874..293cf3a6a9bac 100644
--- a/.github/workflows/unit-tests.yml
+++ b/.github/workflows/unit-tests.yml
@@ -92,7 +92,7 @@ jobs:
- name: "Numpy Dev"
env_file: actions-311-numpydev.yaml
pattern: "not slow and not network and not single_cpu"
- test_args: "-W error::FutureWarning"
+ test_args: "-W error::DeprecationWarning -W error::FutureWarning"
- name: "Pyarrow Nightly"
env_file: actions-311-pyarrownightly.yaml
pattern: "not slow and not network and not single_cpu"
diff --git a/ci/deps/actions-310.yaml b/ci/deps/actions-310.yaml
index 4b62ecc79e4ef..45f114322015b 100644
--- a/ci/deps/actions-310.yaml
+++ b/ci/deps/actions-310.yaml
@@ -20,7 +20,7 @@ dependencies:
# required dependencies
- python-dateutil
- - numpy<2
+ - numpy
- pytz
# optional dependencies
diff --git a/ci/deps/actions-311-downstream_compat.yaml b/ci/deps/actions-311-downstream_compat.yaml
index 95c0319d6f5b8..d6bf9ec7843de 100644
--- a/ci/deps/actions-311-downstream_compat.yaml
+++ b/ci/deps/actions-311-downstream_compat.yaml
@@ -21,7 +21,7 @@ dependencies:
# required dependencies
- python-dateutil
- - numpy<2
+ - numpy
- pytz
# optional dependencies
diff --git a/ci/deps/actions-311-pyarrownightly.yaml b/ci/deps/actions-311-pyarrownightly.yaml
index 5455b9b84b034..d84063ac2a9ba 100644
--- a/ci/deps/actions-311-pyarrownightly.yaml
+++ b/ci/deps/actions-311-pyarrownightly.yaml
@@ -18,7 +18,7 @@ dependencies:
# required dependencies
- python-dateutil
- - numpy<2
+ - numpy
- pytz
- pip
diff --git a/ci/deps/actions-311-sanitizers.yaml b/ci/deps/actions-311-sanitizers.yaml
index dcd381066b0ea..f5f04c90bffad 100644
--- a/ci/deps/actions-311-sanitizers.yaml
+++ b/ci/deps/actions-311-sanitizers.yaml
@@ -22,7 +22,7 @@ dependencies:
# required dependencies
- python-dateutil
- - numpy<2
+ - numpy
- pytz
# pandas dependencies
diff --git a/ci/deps/actions-311.yaml b/ci/deps/actions-311.yaml
index 52074ae00ea18..d14686696e669 100644
--- a/ci/deps/actions-311.yaml
+++ b/ci/deps/actions-311.yaml
@@ -20,7 +20,7 @@ dependencies:
# required dependencies
- python-dateutil
- - numpy<2
+ - numpy
- pytz
# optional dependencies
diff --git a/ci/deps/actions-312.yaml b/ci/deps/actions-312.yaml
index 4c51e9e6029e3..86aaf24b4e15c 100644
--- a/ci/deps/actions-312.yaml
+++ b/ci/deps/actions-312.yaml
@@ -20,7 +20,7 @@ dependencies:
# required dependencies
- python-dateutil
- - numpy<2
+ - numpy
- pytz
# optional dependencies
diff --git a/ci/deps/actions-39-minimum_versions.yaml b/ci/deps/actions-39-minimum_versions.yaml
index fd71315d2e7ac..7067048c4434d 100644
--- a/ci/deps/actions-39-minimum_versions.yaml
+++ b/ci/deps/actions-39-minimum_versions.yaml
@@ -22,7 +22,7 @@ dependencies:
# required dependencies
- python-dateutil=2.8.2
- - numpy=1.22.4, <2
+ - numpy=1.22.4
- pytz=2020.1
# optional dependencies
diff --git a/ci/deps/actions-39.yaml b/ci/deps/actions-39.yaml
index cbe8f77c15730..31ee74174cd46 100644
--- a/ci/deps/actions-39.yaml
+++ b/ci/deps/actions-39.yaml
@@ -20,7 +20,7 @@ dependencies:
# required dependencies
- python-dateutil
- - numpy<2
+ - numpy
- pytz
# optional dependencies
diff --git a/ci/deps/actions-pypy-39.yaml b/ci/deps/actions-pypy-39.yaml
index 5a5a01f7aec72..d9c8dd81b7c33 100644
--- a/ci/deps/actions-pypy-39.yaml
+++ b/ci/deps/actions-pypy-39.yaml
@@ -20,7 +20,7 @@ dependencies:
- hypothesis>=6.46.1
# required
- - numpy<2
+ - numpy
- python-dateutil
- pytz
- pip:
diff --git a/ci/deps/circle-310-arm64.yaml b/ci/deps/circle-310-arm64.yaml
index 8e106445cd4e0..a19ffd485262d 100644
--- a/ci/deps/circle-310-arm64.yaml
+++ b/ci/deps/circle-310-arm64.yaml
@@ -20,7 +20,7 @@ dependencies:
# required dependencies
- python-dateutil
- - numpy<2
+ - numpy
- pytz
# optional dependencies
diff --git a/pandas/core/dtypes/cast.py b/pandas/core/dtypes/cast.py
index 72c33e95f68a0..d7a177c2a19c0 100644
--- a/pandas/core/dtypes/cast.py
+++ b/pandas/core/dtypes/cast.py
@@ -1332,7 +1332,7 @@ def find_result_type(left_dtype: DtypeObj, right: Any) -> DtypeObj:
right = left_dtype
elif (
not np.issubdtype(left_dtype, np.unsignedinteger)
- and 0 < right <= 2 ** (8 * right_dtype.itemsize - 1) - 1
+ and 0 < right <= np.iinfo(right_dtype).max
):
# If left dtype isn't unsigned, check if it fits in the signed dtype
right = np.dtype(f"i{right_dtype.itemsize}")
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/56682 | 2023-12-29T18:52:52Z | 2023-12-29T22:59:21Z | 2023-12-29T22:59:20Z | 2024-03-16T17:12:32Z |
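The cast.py hunk in the record above replaces a hand-rolled signed-integer maximum, `2 ** (8 * right_dtype.itemsize - 1) - 1`, with `np.iinfo(right_dtype).max`. A small sketch (not from the PR itself) checking that the two expressions agree for the signed NumPy integer dtypes:

```python
import numpy as np

# For each signed integer dtype, the manual formula from the old code
# and np.iinfo(...).max compute the same upper bound.
for dt in (np.int8, np.int16, np.int32, np.int64):
    itemsize = np.dtype(dt).itemsize
    manual = 2 ** (8 * itemsize - 1) - 1   # old hand-rolled expression
    assert manual == np.iinfo(dt).max      # new, clearer equivalent
```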
DEPR: utcnow, utcfromtimestamp | diff --git a/doc/source/whatsnew/v2.3.0.rst b/doc/source/whatsnew/v2.3.0.rst
index 1f1b0c7d7195a..bba1ab68d9d05 100644
--- a/doc/source/whatsnew/v2.3.0.rst
+++ b/doc/source/whatsnew/v2.3.0.rst
@@ -92,7 +92,8 @@ Other API changes
Deprecations
~~~~~~~~~~~~
--
+- Deprecated :meth:`Timestamp.utcfromtimestamp`, use ``Timestamp.fromtimestamp(ts, "UTC")`` instead (:issue:`56680`)
+- Deprecated :meth:`Timestamp.utcnow`, use ``Timestamp.now("UTC")`` instead (:issue:`56680`)
-
.. ---------------------------------------------------------------------------
diff --git a/pandas/_libs/tslibs/strptime.pyx b/pandas/_libs/tslibs/strptime.pyx
index ee72b1311051e..c09835c9661f3 100644
--- a/pandas/_libs/tslibs/strptime.pyx
+++ b/pandas/_libs/tslibs/strptime.pyx
@@ -132,7 +132,7 @@ cdef bint parse_today_now(
if infer_reso:
creso = NPY_DATETIMEUNIT.NPY_FR_us
if utc:
- ts = <_Timestamp>Timestamp.utcnow()
+ ts = <_Timestamp>Timestamp.now(timezone.utc)
iresult[0] = ts._as_creso(creso)._value
else:
# GH#18705 make sure to_datetime("now") matches Timestamp("now")
diff --git a/pandas/_libs/tslibs/timestamps.pyx b/pandas/_libs/tslibs/timestamps.pyx
index 568539b53aee0..1dae2403706e8 100644
--- a/pandas/_libs/tslibs/timestamps.pyx
+++ b/pandas/_libs/tslibs/timestamps.pyx
@@ -1418,6 +1418,14 @@ class Timestamp(_Timestamp):
>>> pd.Timestamp.utcnow() # doctest: +SKIP
Timestamp('2020-11-16 22:50:18.092888+0000', tz='UTC')
"""
+ warnings.warn(
+ # The stdlib datetime.utcnow is deprecated, so we deprecate to match.
+ # GH#56680
+ "Timestamp.utcnow is deprecated and will be removed in a future "
+ "version. Use Timestamp.now('UTC') instead.",
+ FutureWarning,
+ stacklevel=find_stack_level(),
+ )
return cls.now(UTC)
@classmethod
@@ -1438,6 +1446,14 @@ class Timestamp(_Timestamp):
Timestamp('2020-03-14 15:32:52+0000', tz='UTC')
"""
# GH#22451
+ warnings.warn(
+ # The stdlib datetime.utcfromtimestamp is deprecated, so we deprecate
+ # to match. GH#56680
+ "Timestamp.utcfromtimestamp is deprecated and will be removed in a "
+ "future version. Use Timestamp.fromtimestamp(ts, 'UTC') instead.",
+ FutureWarning,
+ stacklevel=find_stack_level(),
+ )
return cls.fromtimestamp(ts, tz="UTC")
@classmethod
diff --git a/pandas/tests/groupby/methods/test_groupby_shift_diff.py b/pandas/tests/groupby/methods/test_groupby_shift_diff.py
index 94e672d4892fe..41e0ee93a5941 100644
--- a/pandas/tests/groupby/methods/test_groupby_shift_diff.py
+++ b/pandas/tests/groupby/methods/test_groupby_shift_diff.py
@@ -63,7 +63,7 @@ def test_group_shift_with_fill_value():
def test_group_shift_lose_timezone():
# GH 30134
- now_dt = Timestamp.utcnow().as_unit("ns")
+ now_dt = Timestamp.now("UTC").as_unit("ns")
df = DataFrame({"a": [1, 1], "date": now_dt})
result = df.groupby("a").shift(0).iloc[0]
expected = Series({"date": now_dt}, name=result.name)
diff --git a/pandas/tests/scalar/timestamp/test_constructors.py b/pandas/tests/scalar/timestamp/test_constructors.py
index 3975f3c46aaa1..f92e9145a2205 100644
--- a/pandas/tests/scalar/timestamp/test_constructors.py
+++ b/pandas/tests/scalar/timestamp/test_constructors.py
@@ -28,6 +28,7 @@
Timedelta,
Timestamp,
)
+import pandas._testing as tm
class TestTimestampConstructorUnitKeyword:
@@ -329,6 +330,18 @@ def test_constructor_positional_keyword_mixed_with_tzinfo(self, kwd, request):
class TestTimestampClassMethodConstructors:
# Timestamp constructors other than __new__
+ def test_utcnow_deprecated(self):
+ # GH#56680
+ msg = "Timestamp.utcnow is deprecated"
+ with tm.assert_produces_warning(FutureWarning, match=msg):
+ Timestamp.utcnow()
+
+ def test_utcfromtimestamp_deprecated(self):
+ # GH#56680
+ msg = "Timestamp.utcfromtimestamp is deprecated"
+ with tm.assert_produces_warning(FutureWarning, match=msg):
+ Timestamp.utcfromtimestamp(43)
+
def test_constructor_strptime(self):
# GH#25016
# Test support for Timestamp.strptime
diff --git a/pandas/tests/scalar/timestamp/test_timestamp.py b/pandas/tests/scalar/timestamp/test_timestamp.py
index 05e1c93e1a676..e0734b314a0bd 100644
--- a/pandas/tests/scalar/timestamp/test_timestamp.py
+++ b/pandas/tests/scalar/timestamp/test_timestamp.py
@@ -269,7 +269,9 @@ def test_disallow_setting_tz(self, tz):
ts.tz = tz
def test_default_to_stdlib_utc(self):
- assert Timestamp.utcnow().tz is timezone.utc
+ msg = "Timestamp.utcnow is deprecated"
+ with tm.assert_produces_warning(FutureWarning, match=msg):
+ assert Timestamp.utcnow().tz is timezone.utc
assert Timestamp.now("UTC").tz is timezone.utc
assert Timestamp("2016-01-01", tz="UTC").tz is timezone.utc
@@ -312,11 +314,15 @@ def compare(x, y):
compare(Timestamp.now(), datetime.now())
compare(Timestamp.now("UTC"), datetime.now(pytz.timezone("UTC")))
compare(Timestamp.now("UTC"), datetime.now(tzutc()))
- compare(Timestamp.utcnow(), datetime.now(timezone.utc))
+ msg = "Timestamp.utcnow is deprecated"
+ with tm.assert_produces_warning(FutureWarning, match=msg):
+ compare(Timestamp.utcnow(), datetime.now(timezone.utc))
compare(Timestamp.today(), datetime.today())
current_time = calendar.timegm(datetime.now().utctimetuple())
- ts_utc = Timestamp.utcfromtimestamp(current_time)
+ msg = "Timestamp.utcfromtimestamp is deprecated"
+ with tm.assert_produces_warning(FutureWarning, match=msg):
+ ts_utc = Timestamp.utcfromtimestamp(current_time)
assert ts_utc.timestamp() == current_time
compare(
Timestamp.fromtimestamp(current_time), datetime.fromtimestamp(current_time)
diff --git a/pandas/tests/tools/test_to_datetime.py b/pandas/tests/tools/test_to_datetime.py
index cb94427ae8961..e4e87a4b9c95e 100644
--- a/pandas/tests/tools/test_to_datetime.py
+++ b/pandas/tests/tools/test_to_datetime.py
@@ -1073,6 +1073,7 @@ def test_to_datetime_today(self, tz):
def test_to_datetime_today_now_unicode_bytes(self, arg):
to_datetime([arg])
+ @pytest.mark.filterwarnings("ignore:Timestamp.utcnow is deprecated:FutureWarning")
@pytest.mark.parametrize(
"format, expected_ds",
[
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
The stdlib has deprecated these, let's do it too! | https://api.github.com/repos/pandas-dev/pandas/pulls/56680 | 2023-12-29T17:47:56Z | 2024-01-08T21:23:54Z | 2024-01-08T21:23:54Z | 2024-01-11T00:05:19Z |
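As the PR body notes, this deprecation mirrors the standard library, where `datetime.utcnow()` and `datetime.utcfromtimestamp()` are deprecated in favor of timezone-aware calls. A minimal stdlib-only sketch of the recommended replacements (the pandas equivalents in the diff are `Timestamp.now("UTC")` and `Timestamp.fromtimestamp(ts, "UTC")`):

```python
from datetime import datetime, timezone

# Replacement for the deprecated datetime.utcnow():
now = datetime.now(timezone.utc)
assert now.tzinfo is timezone.utc

# Replacement for the deprecated datetime.utcfromtimestamp(ts):
ts = datetime.fromtimestamp(43, tz=timezone.utc)
assert ts.tzinfo is timezone.utc
```

Unlike the deprecated forms, both replacements return aware datetimes, which avoids the naive-UTC pitfalls that motivated the stdlib change.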
Fix integral truediv and floordiv for pyarrow types with large divisor and avoid floating points for floordiv | diff --git a/doc/source/whatsnew/v2.2.0.rst b/doc/source/whatsnew/v2.2.0.rst
index 129f5cedb86c2..75971f4ca109e 100644
--- a/doc/source/whatsnew/v2.2.0.rst
+++ b/doc/source/whatsnew/v2.2.0.rst
@@ -776,6 +776,7 @@ Timezones
Numeric
^^^^^^^
- Bug in :func:`read_csv` with ``engine="pyarrow"`` causing rounding errors for large integers (:issue:`52505`)
+- Bug in :meth:`Series.__floordiv__` and :meth:`Series.__truediv__` for :class:`ArrowDtype` with integral dtypes raising for large divisors (:issue:`56706`)
- Bug in :meth:`Series.__floordiv__` for :class:`ArrowDtype` with integral dtypes raising for large values (:issue:`56645`)
- Bug in :meth:`Series.pow` not filling missing values correctly (:issue:`55512`)
diff --git a/pandas/core/arrays/arrow/array.py b/pandas/core/arrays/arrow/array.py
index b1164301e6d79..3633e3f5d75d8 100644
--- a/pandas/core/arrays/arrow/array.py
+++ b/pandas/core/arrays/arrow/array.py
@@ -109,30 +109,50 @@
def cast_for_truediv(
arrow_array: pa.ChunkedArray, pa_object: pa.Array | pa.Scalar
- ) -> pa.ChunkedArray:
+ ) -> tuple[pa.ChunkedArray, pa.Array | pa.Scalar]:
# Ensure int / int -> float mirroring Python/Numpy behavior
# as pc.divide_checked(int, int) -> int
if pa.types.is_integer(arrow_array.type) and pa.types.is_integer(
pa_object.type
):
+ # GH: 56645.
# https://github.com/apache/arrow/issues/35563
- # Arrow does not allow safe casting large integral values to float64.
- # Intentionally not using arrow_array.cast because it could be a scalar
- # value in reflected case, and safe=False only added to
- # scalar cast in pyarrow 13.
- return pc.cast(arrow_array, pa.float64(), safe=False)
- return arrow_array
+ return pc.cast(arrow_array, pa.float64(), safe=False), pc.cast(
+ pa_object, pa.float64(), safe=False
+ )
+
+ return arrow_array, pa_object
def floordiv_compat(
left: pa.ChunkedArray | pa.Array | pa.Scalar,
right: pa.ChunkedArray | pa.Array | pa.Scalar,
) -> pa.ChunkedArray:
- # Ensure int // int -> int mirroring Python/Numpy behavior
- # as pc.floor(pc.divide_checked(int, int)) -> float
- converted_left = cast_for_truediv(left, right)
- result = pc.floor(pc.divide(converted_left, right))
+ # TODO: Replace with pyarrow floordiv kernel.
+ # https://github.com/apache/arrow/issues/39386
if pa.types.is_integer(left.type) and pa.types.is_integer(right.type):
+ divided = pc.divide_checked(left, right)
+ if pa.types.is_signed_integer(divided.type):
+ # GH 56676
+ has_remainder = pc.not_equal(pc.multiply(divided, right), left)
+ has_one_negative_operand = pc.less(
+ pc.bit_wise_xor(left, right),
+ pa.scalar(0, type=divided.type),
+ )
+ result = pc.if_else(
+ pc.and_(
+ has_remainder,
+ has_one_negative_operand,
+ ),
+ # GH: 55561
+ pc.subtract(divided, pa.scalar(1, type=divided.type)),
+ divided,
+ )
+ else:
+ result = divided
result = result.cast(left.type)
+ else:
+ divided = pc.divide(left, right)
+ result = pc.floor(divided)
return result
ARROW_ARITHMETIC_FUNCS = {
@@ -142,8 +162,8 @@ def floordiv_compat(
"rsub": lambda x, y: pc.subtract_checked(y, x),
"mul": pc.multiply_checked,
"rmul": lambda x, y: pc.multiply_checked(y, x),
- "truediv": lambda x, y: pc.divide(cast_for_truediv(x, y), y),
- "rtruediv": lambda x, y: pc.divide(y, cast_for_truediv(x, y)),
+ "truediv": lambda x, y: pc.divide(*cast_for_truediv(x, y)),
+ "rtruediv": lambda x, y: pc.divide(*cast_for_truediv(y, x)),
"floordiv": lambda x, y: floordiv_compat(x, y),
"rfloordiv": lambda x, y: floordiv_compat(y, x),
"mod": NotImplemented,
diff --git a/pandas/tests/extension/test_arrow.py b/pandas/tests/extension/test_arrow.py
index ed1b7b199a16f..7ce2e841a76f8 100644
--- a/pandas/tests/extension/test_arrow.py
+++ b/pandas/tests/extension/test_arrow.py
@@ -3239,13 +3239,82 @@ def test_arrow_floordiv():
def test_arrow_floordiv_large_values():
- # GH 55561
+ # GH 56645
a = pd.Series([1425801600000000000], dtype="int64[pyarrow]")
expected = pd.Series([1425801600000], dtype="int64[pyarrow]")
result = a // 1_000_000
tm.assert_series_equal(result, expected)
+@pytest.mark.parametrize("dtype", ["int64[pyarrow]", "uint64[pyarrow]"])
+def test_arrow_floordiv_large_integral_result(dtype):
+ # GH 56676
+ a = pd.Series([18014398509481983], dtype=dtype)
+ result = a // 1
+ tm.assert_series_equal(result, a)
+
+
+@pytest.mark.parametrize("pa_type", tm.SIGNED_INT_PYARROW_DTYPES)
+def test_arrow_floordiv_larger_divisor(pa_type):
+ # GH 56676
+ dtype = ArrowDtype(pa_type)
+ a = pd.Series([-23], dtype=dtype)
+ result = a // 24
+ expected = pd.Series([-1], dtype=dtype)
+ tm.assert_series_equal(result, expected)
+
+
+@pytest.mark.parametrize("pa_type", tm.SIGNED_INT_PYARROW_DTYPES)
+def test_arrow_floordiv_integral_invalid(pa_type):
+ # GH 56676
+ min_value = np.iinfo(pa_type.to_pandas_dtype()).min
+ a = pd.Series([min_value], dtype=ArrowDtype(pa_type))
+ with pytest.raises(pa.lib.ArrowInvalid, match="overflow|not in range"):
+ a // -1
+ with pytest.raises(pa.lib.ArrowInvalid, match="divide by zero"):
+ a // 0
+
+
+@pytest.mark.parametrize("dtype", tm.FLOAT_PYARROW_DTYPES_STR_REPR)
+def test_arrow_floordiv_floating_0_divisor(dtype):
+ # GH 56676
+ a = pd.Series([2], dtype=dtype)
+ result = a // 0
+ expected = pd.Series([float("inf")], dtype=dtype)
+ tm.assert_series_equal(result, expected)
+
+
+@pytest.mark.parametrize("pa_type", tm.ALL_INT_PYARROW_DTYPES)
+def test_arrow_integral_floordiv_large_values(pa_type):
+ # GH 56676
+ max_value = np.iinfo(pa_type.to_pandas_dtype()).max
+ dtype = ArrowDtype(pa_type)
+ a = pd.Series([max_value], dtype=dtype)
+ b = pd.Series([1], dtype=dtype)
+ result = a // b
+ tm.assert_series_equal(result, a)
+
+
+@pytest.mark.parametrize("dtype", ["int64[pyarrow]", "uint64[pyarrow]"])
+def test_arrow_true_division_large_divisor(dtype):
+ # GH 56706
+ a = pd.Series([0], dtype=dtype)
+ b = pd.Series([18014398509481983], dtype=dtype)
+ expected = pd.Series([0], dtype="float64[pyarrow]")
+ result = a / b
+ tm.assert_series_equal(result, expected)
+
+
+@pytest.mark.parametrize("dtype", ["int64[pyarrow]", "uint64[pyarrow]"])
+def test_arrow_floor_division_large_divisor(dtype):
+ # GH 56706
+ a = pd.Series([0], dtype=dtype)
+ b = pd.Series([18014398509481983], dtype=dtype)
+ expected = pd.Series([0], dtype=dtype)
+ result = a // b
+ tm.assert_series_equal(result, expected)
+
+
def test_string_to_datetime_parsing_cast():
# GH 56266
string_dates = ["2020-01-01 04:30:00", "2020-01-02 00:00:00", "2020-01-03 00:00:00"]
 | - [x] closes #56676
- [x] closes #56706
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/56677 | 2023-12-29T04:29:31Z | 2024-01-05T18:08:22Z | 2024-01-05T18:08:22Z | 2024-01-05T22:44:52Z |
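The `floordiv_compat` change above works around pyarrow's lack of a floor-division kernel: `pc.divide_checked` truncates toward zero, while Python's `//` floors toward negative infinity, so the patch subtracts 1 whenever there is a remainder and exactly one negative operand. A pure-Python sketch of that adjustment (hypothetical helper names, not the PR's code):

```python
def trunc_div(a: int, b: int) -> int:
    # Truncating division toward zero, mimicking pc.divide_checked.
    q = abs(a) // abs(b)
    return q if (a >= 0) == (b >= 0) else -q

def floordiv(a: int, b: int) -> int:
    q = trunc_div(a, b)
    # Mirror the pc.if_else branch: adjust when there is a remainder
    # and exactly one operand is negative.
    if q * b != a and (a < 0) != (b < 0):
        q -= 1
    return q

# The GH 56676 test case from the diff: -23 // 24 should be -1, not 0.
assert floordiv(-23, 24) == -1 == -23 // 24
```

Working in integers throughout (rather than `pc.floor` over a float division, as the old code did) is what keeps large values such as `18014398509481983` exact.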
TYP: more misc annotations | diff --git a/pandas/_config/config.py b/pandas/_config/config.py
index c391939d22491..73d69105541d8 100644
--- a/pandas/_config/config.py
+++ b/pandas/_config/config.py
@@ -75,6 +75,7 @@
from collections.abc import (
Generator,
Iterable,
+ Sequence,
)
@@ -853,7 +854,7 @@ def inner(x) -> None:
return inner
-def is_instance_factory(_type) -> Callable[[Any], None]:
+def is_instance_factory(_type: type | tuple[type, ...]) -> Callable[[Any], None]:
"""
Parameters
@@ -866,8 +867,7 @@ def is_instance_factory(_type) -> Callable[[Any], None]:
ValueError if x is not an instance of `_type`
"""
- if isinstance(_type, (tuple, list)):
- _type = tuple(_type)
+ if isinstance(_type, tuple):
type_repr = "|".join(map(str, _type))
else:
type_repr = f"'{_type}'"
@@ -879,7 +879,7 @@ def inner(x) -> None:
return inner
-def is_one_of_factory(legal_values) -> Callable[[Any], None]:
+def is_one_of_factory(legal_values: Sequence) -> Callable[[Any], None]:
callables = [c for c in legal_values if callable(c)]
legal_values = [c for c in legal_values if not callable(c)]
@@ -930,7 +930,7 @@ def is_nonnegative_int(value: object) -> None:
is_text = is_instance_factory((str, bytes))
-def is_callable(obj) -> bool:
+def is_callable(obj: object) -> bool:
"""
Parameters
diff --git a/pandas/core/arrays/period.py b/pandas/core/arrays/period.py
index 2930b979bfe78..df5ec356175bf 100644
--- a/pandas/core/arrays/period.py
+++ b/pandas/core/arrays/period.py
@@ -759,7 +759,7 @@ def asfreq(self, freq=None, how: str = "E") -> Self:
# ------------------------------------------------------------------
# Rendering Methods
- def _formatter(self, boxed: bool = False):
+ def _formatter(self, boxed: bool = False) -> Callable[[object], str]:
if boxed:
return str
return "'{}'".format
diff --git a/pandas/core/arrays/timedeltas.py b/pandas/core/arrays/timedeltas.py
index 1b885a2bdcd47..58455f8cb8398 100644
--- a/pandas/core/arrays/timedeltas.py
+++ b/pandas/core/arrays/timedeltas.py
@@ -1080,7 +1080,7 @@ def sequence_to_td64ns(
return data, inferred_freq
-def _ints_to_td64ns(data, unit: str = "ns"):
+def _ints_to_td64ns(data, unit: str = "ns") -> tuple[np.ndarray, bool]:
"""
Convert an ndarray with integer-dtype to timedelta64[ns] dtype, treating
the integers as multiples of the given timedelta unit.
@@ -1120,7 +1120,9 @@ def _ints_to_td64ns(data, unit: str = "ns"):
return data, copy_made
-def _objects_to_td64ns(data, unit=None, errors: DateTimeErrorChoices = "raise"):
+def _objects_to_td64ns(
+ data, unit=None, errors: DateTimeErrorChoices = "raise"
+) -> np.ndarray:
"""
Convert a object-dtyped or string-dtyped array into an
timedelta64[ns]-dtyped array.
diff --git a/pandas/core/config_init.py b/pandas/core/config_init.py
index a8b63f97141c2..0a9d5af7cbd42 100644
--- a/pandas/core/config_init.py
+++ b/pandas/core/config_init.py
@@ -329,7 +329,7 @@ def is_terminal() -> bool:
"min_rows",
10,
pc_min_rows_doc,
- validator=is_instance_factory([type(None), int]),
+ validator=is_instance_factory((type(None), int)),
)
cf.register_option("max_categories", 8, pc_max_categories_doc, validator=is_int)
@@ -369,7 +369,7 @@ def is_terminal() -> bool:
cf.register_option("chop_threshold", None, pc_chop_threshold_doc)
cf.register_option("max_seq_items", 100, pc_max_seq_items)
cf.register_option(
- "width", 80, pc_width_doc, validator=is_instance_factory([type(None), int])
+ "width", 80, pc_width_doc, validator=is_instance_factory((type(None), int))
)
cf.register_option(
"memory_usage",
@@ -850,14 +850,14 @@ def register_converter_cb(key) -> None:
"format.thousands",
None,
styler_thousands,
- validator=is_instance_factory([type(None), str]),
+ validator=is_instance_factory((type(None), str)),
)
cf.register_option(
"format.na_rep",
None,
styler_na_rep,
- validator=is_instance_factory([type(None), str]),
+ validator=is_instance_factory((type(None), str)),
)
cf.register_option(
@@ -867,11 +867,15 @@ def register_converter_cb(key) -> None:
validator=is_one_of_factory([None, "html", "latex", "latex-math"]),
)
+ # error: Argument 1 to "is_instance_factory" has incompatible type "tuple[
+ # ..., <typing special form>, ...]"; expected "type | tuple[type, ...]"
cf.register_option(
"format.formatter",
None,
styler_formatter,
- validator=is_instance_factory([type(None), dict, Callable, str]),
+ validator=is_instance_factory(
+ (type(None), dict, Callable, str) # type: ignore[arg-type]
+ ),
)
cf.register_option("html.mathjax", True, styler_mathjax, validator=is_bool)
@@ -898,7 +902,7 @@ def register_converter_cb(key) -> None:
"latex.environment",
None,
styler_environment,
- validator=is_instance_factory([type(None), str]),
+ validator=is_instance_factory((type(None), str)),
)
diff --git a/pandas/core/dtypes/missing.py b/pandas/core/dtypes/missing.py
index 4dc0d477f89e8..3e4227a8a2598 100644
--- a/pandas/core/dtypes/missing.py
+++ b/pandas/core/dtypes/missing.py
@@ -557,11 +557,13 @@ def _array_equivalent_float(left: np.ndarray, right: np.ndarray) -> bool:
return bool(((left == right) | (np.isnan(left) & np.isnan(right))).all())
-def _array_equivalent_datetimelike(left: np.ndarray, right: np.ndarray):
+def _array_equivalent_datetimelike(left: np.ndarray, right: np.ndarray) -> bool:
return np.array_equal(left.view("i8"), right.view("i8"))
-def _array_equivalent_object(left: np.ndarray, right: np.ndarray, strict_nan: bool):
+def _array_equivalent_object(
+ left: np.ndarray, right: np.ndarray, strict_nan: bool
+) -> bool:
left = ensure_object(left)
right = ensure_object(right)
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index c24ef4d6d6d42..73b5804d8c168 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -233,6 +233,7 @@
IndexLabel,
JoinValidate,
Level,
+ ListLike,
MergeHow,
MergeValidate,
MutableMappingT,
@@ -5349,11 +5350,11 @@ def reindex(
@overload
def drop(
self,
- labels: IndexLabel = ...,
+ labels: IndexLabel | ListLike = ...,
*,
axis: Axis = ...,
- index: IndexLabel = ...,
- columns: IndexLabel = ...,
+ index: IndexLabel | ListLike = ...,
+ columns: IndexLabel | ListLike = ...,
level: Level = ...,
inplace: Literal[True],
errors: IgnoreRaise = ...,
@@ -5363,11 +5364,11 @@ def drop(
@overload
def drop(
self,
- labels: IndexLabel = ...,
+ labels: IndexLabel | ListLike = ...,
*,
axis: Axis = ...,
- index: IndexLabel = ...,
- columns: IndexLabel = ...,
+ index: IndexLabel | ListLike = ...,
+ columns: IndexLabel | ListLike = ...,
level: Level = ...,
inplace: Literal[False] = ...,
errors: IgnoreRaise = ...,
@@ -5377,11 +5378,11 @@ def drop(
@overload
def drop(
self,
- labels: IndexLabel = ...,
+ labels: IndexLabel | ListLike = ...,
*,
axis: Axis = ...,
- index: IndexLabel = ...,
- columns: IndexLabel = ...,
+ index: IndexLabel | ListLike = ...,
+ columns: IndexLabel | ListLike = ...,
level: Level = ...,
inplace: bool = ...,
errors: IgnoreRaise = ...,
@@ -5390,11 +5391,11 @@ def drop(
def drop(
self,
- labels: IndexLabel | None = None,
+ labels: IndexLabel | ListLike = None,
*,
axis: Axis = 0,
- index: IndexLabel | None = None,
- columns: IndexLabel | None = None,
+ index: IndexLabel | ListLike = None,
+ columns: IndexLabel | ListLike = None,
level: Level | None = None,
inplace: bool = False,
errors: IgnoreRaise = "raise",
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 91a150c63c5b6..9b70c24aa67c6 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -65,6 +65,7 @@
IntervalClosedType,
JSONSerializable,
Level,
+ ListLike,
Manager,
NaPosition,
NDFrameT,
@@ -4709,11 +4710,11 @@ def reindex_like(
@overload
def drop(
self,
- labels: IndexLabel = ...,
+ labels: IndexLabel | ListLike = ...,
*,
axis: Axis = ...,
- index: IndexLabel = ...,
- columns: IndexLabel = ...,
+ index: IndexLabel | ListLike = ...,
+ columns: IndexLabel | ListLike = ...,
level: Level | None = ...,
inplace: Literal[True],
errors: IgnoreRaise = ...,
@@ -4723,11 +4724,11 @@ def drop(
@overload
def drop(
self,
- labels: IndexLabel = ...,
+ labels: IndexLabel | ListLike = ...,
*,
axis: Axis = ...,
- index: IndexLabel = ...,
- columns: IndexLabel = ...,
+ index: IndexLabel | ListLike = ...,
+ columns: IndexLabel | ListLike = ...,
level: Level | None = ...,
inplace: Literal[False] = ...,
errors: IgnoreRaise = ...,
@@ -4737,11 +4738,11 @@ def drop(
@overload
def drop(
self,
- labels: IndexLabel = ...,
+ labels: IndexLabel | ListLike = ...,
*,
axis: Axis = ...,
- index: IndexLabel = ...,
- columns: IndexLabel = ...,
+ index: IndexLabel | ListLike = ...,
+ columns: IndexLabel | ListLike = ...,
level: Level | None = ...,
inplace: bool_t = ...,
errors: IgnoreRaise = ...,
@@ -4750,11 +4751,11 @@ def drop(
def drop(
self,
- labels: IndexLabel | None = None,
+ labels: IndexLabel | ListLike = None,
*,
axis: Axis = 0,
- index: IndexLabel | None = None,
- columns: IndexLabel | None = None,
+ index: IndexLabel | ListLike = None,
+ columns: IndexLabel | ListLike = None,
level: Level | None = None,
inplace: bool_t = False,
errors: IgnoreRaise = "raise",
diff --git a/pandas/core/methods/selectn.py b/pandas/core/methods/selectn.py
index a2f8ca94134b8..5256c0a1c73a4 100644
--- a/pandas/core/methods/selectn.py
+++ b/pandas/core/methods/selectn.py
@@ -10,6 +10,7 @@
)
from typing import (
TYPE_CHECKING,
+ Generic,
cast,
final,
)
@@ -32,16 +33,25 @@
from pandas._typing import (
DtypeObj,
IndexLabel,
+ NDFrameT,
)
from pandas import (
DataFrame,
Series,
)
+else:
+ # Generic[...] requires a non-str, provide it with a plain TypeVar at
+ # runtime to avoid circular imports
+ from pandas._typing import T
+ NDFrameT = T
+ DataFrame = T
+ Series = T
-class SelectN:
- def __init__(self, obj, n: int, keep: str) -> None:
+
+class SelectN(Generic[NDFrameT]):
+ def __init__(self, obj: NDFrameT, n: int, keep: str) -> None:
self.obj = obj
self.n = n
self.keep = keep
@@ -49,15 +59,15 @@ def __init__(self, obj, n: int, keep: str) -> None:
if self.keep not in ("first", "last", "all"):
raise ValueError('keep must be either "first", "last" or "all"')
- def compute(self, method: str) -> DataFrame | Series:
+ def compute(self, method: str) -> NDFrameT:
raise NotImplementedError
@final
- def nlargest(self):
+ def nlargest(self) -> NDFrameT:
return self.compute("nlargest")
@final
- def nsmallest(self):
+ def nsmallest(self) -> NDFrameT:
return self.compute("nsmallest")
@final
@@ -72,7 +82,7 @@ def is_valid_dtype_n_method(dtype: DtypeObj) -> bool:
return needs_i8_conversion(dtype)
-class SelectNSeries(SelectN):
+class SelectNSeries(SelectN[Series]):
"""
Implement n largest/smallest for Series
@@ -163,7 +173,7 @@ def compute(self, method: str) -> Series:
return concat([dropped.iloc[inds], nan_index]).iloc[:findex]
-class SelectNFrame(SelectN):
+class SelectNFrame(SelectN[DataFrame]):
"""
Implement n largest/smallest for DataFrame
diff --git a/pandas/core/missing.py b/pandas/core/missing.py
index d275445983b6f..0d857f6b21517 100644
--- a/pandas/core/missing.py
+++ b/pandas/core/missing.py
@@ -960,14 +960,11 @@ def _pad_2d(
values: np.ndarray,
limit: int | None = None,
mask: npt.NDArray[np.bool_] | None = None,
-):
+) -> tuple[np.ndarray, npt.NDArray[np.bool_]]:
mask = _fillna_prep(values, mask)
if values.size:
algos.pad_2d_inplace(values, mask, limit=limit)
- else:
- # for test coverage
- pass
return values, mask
diff --git a/pandas/core/series.py b/pandas/core/series.py
index e3b401cd3c88b..487f57b7390a8 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -179,6 +179,7 @@
IndexKeyFunc,
IndexLabel,
Level,
+ ListLike,
MutableMappingT,
NaPosition,
NumpySorter,
@@ -5192,11 +5193,11 @@ def rename_axis(
@overload
def drop(
self,
- labels: IndexLabel = ...,
+ labels: IndexLabel | ListLike = ...,
*,
axis: Axis = ...,
- index: IndexLabel = ...,
- columns: IndexLabel = ...,
+ index: IndexLabel | ListLike = ...,
+ columns: IndexLabel | ListLike = ...,
level: Level | None = ...,
inplace: Literal[True],
errors: IgnoreRaise = ...,
@@ -5206,11 +5207,11 @@ def drop(
@overload
def drop(
self,
- labels: IndexLabel = ...,
+ labels: IndexLabel | ListLike = ...,
*,
axis: Axis = ...,
- index: IndexLabel = ...,
- columns: IndexLabel = ...,
+ index: IndexLabel | ListLike = ...,
+ columns: IndexLabel | ListLike = ...,
level: Level | None = ...,
inplace: Literal[False] = ...,
errors: IgnoreRaise = ...,
@@ -5220,11 +5221,11 @@ def drop(
@overload
def drop(
self,
- labels: IndexLabel = ...,
+ labels: IndexLabel | ListLike = ...,
*,
axis: Axis = ...,
- index: IndexLabel = ...,
- columns: IndexLabel = ...,
+ index: IndexLabel | ListLike = ...,
+ columns: IndexLabel | ListLike = ...,
level: Level | None = ...,
inplace: bool = ...,
errors: IgnoreRaise = ...,
@@ -5233,11 +5234,11 @@ def drop(
def drop(
self,
- labels: IndexLabel | None = None,
+ labels: IndexLabel | ListLike = None,
*,
axis: Axis = 0,
- index: IndexLabel | None = None,
- columns: IndexLabel | None = None,
+ index: IndexLabel | ListLike = None,
+ columns: IndexLabel | ListLike = None,
level: Level | None = None,
inplace: bool = False,
errors: IgnoreRaise = "raise",
diff --git a/pandas/core/tools/timedeltas.py b/pandas/core/tools/timedeltas.py
index d772c908c4731..b80ed9ac50dce 100644
--- a/pandas/core/tools/timedeltas.py
+++ b/pandas/core/tools/timedeltas.py
@@ -89,7 +89,7 @@ def to_timedelta(
| Series,
unit: UnitChoices | None = None,
errors: DateTimeErrorChoices = "raise",
-) -> Timedelta | TimedeltaIndex | Series:
+) -> Timedelta | TimedeltaIndex | Series | NaTType:
"""
Convert argument to timedelta.
@@ -225,7 +225,7 @@ def to_timedelta(
def _coerce_scalar_to_timedelta_type(
r, unit: UnitChoices | None = "ns", errors: DateTimeErrorChoices = "raise"
-):
+) -> Timedelta | NaTType:
"""Convert string 'r' to a timedelta object."""
result: Timedelta | NaTType
diff --git a/pandas/errors/__init__.py b/pandas/errors/__init__.py
index 01094ba36b9dd..d3ca9c8521203 100644
--- a/pandas/errors/__init__.py
+++ b/pandas/errors/__init__.py
@@ -532,7 +532,7 @@ class ChainedAssignmentError(Warning):
)
-def _check_cacher(obj):
+def _check_cacher(obj) -> bool:
# This is a mess, selection paths that return a view set the _cacher attribute
# on the Series; most of them also set _item_cache which adds 1 to our relevant
# reference count, but iloc does not, so we have to check if we are actually
diff --git a/pandas/io/common.py b/pandas/io/common.py
index 72c9deeb54fc7..57bc6c1379d77 100644
--- a/pandas/io/common.py
+++ b/pandas/io/common.py
@@ -1066,7 +1066,7 @@ class _IOWrapper:
def __init__(self, buffer: BaseBuffer) -> None:
self.buffer = buffer
- def __getattr__(self, name: str):
+ def __getattr__(self, name: str) -> Any:
return getattr(self.buffer, name)
def readable(self) -> bool:
@@ -1097,7 +1097,7 @@ def __init__(self, buffer: StringIO | TextIOBase, encoding: str = "utf-8") -> No
# overflow to the front of the bytestring the next time reading is performed
self.overflow = b""
- def __getattr__(self, attr: str):
+ def __getattr__(self, attr: str) -> Any:
return getattr(self.buffer, attr)
def read(self, n: int | None = -1) -> bytes:
diff --git a/pandas/io/formats/css.py b/pandas/io/formats/css.py
index ccce60c00a9e0..cddff9a97056a 100644
--- a/pandas/io/formats/css.py
+++ b/pandas/io/formats/css.py
@@ -340,7 +340,7 @@ def _update_other_units(self, props: dict[str, str]) -> dict[str, str]:
return props
def size_to_pt(self, in_val, em_pt=None, conversions=UNIT_RATIOS) -> str:
- def _error():
+ def _error() -> str:
warnings.warn(
f"Unhandled size: {repr(in_val)}",
CSSWarning,
diff --git a/pandas/io/gbq.py b/pandas/io/gbq.py
index 350002bf461ff..fe8702c2e16ae 100644
--- a/pandas/io/gbq.py
+++ b/pandas/io/gbq.py
@@ -11,12 +11,14 @@
from pandas.util._exceptions import find_stack_level
if TYPE_CHECKING:
+ from types import ModuleType
+
import google.auth
from pandas import DataFrame
-def _try_import():
+def _try_import() -> ModuleType:
# since pandas is a dependency of pandas-gbq
# we need to import on first use
msg = (
diff --git a/pyproject.toml b/pyproject.toml
index 430bb8e505df0..51d7603489cb8 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -671,9 +671,7 @@ module = [
"pandas.io.sas.sas7bdat", # TODO
"pandas.io.clipboards", # TODO
"pandas.io.common", # TODO
- "pandas.io.gbq", # TODO
"pandas.io.html", # TODO
- "pandas.io.gbq", # TODO
"pandas.io.parquet", # TODO
"pandas.io.pytables", # TODO
"pandas.io.sql", # TODO
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/56675 | 2023-12-29T03:56:55Z | 2023-12-29T19:51:08Z | 2023-12-29T19:51:08Z | 2023-12-29T19:51:15Z |
BUG: dictionary type astype categorical using dictionary as categories | diff --git a/doc/source/whatsnew/v2.2.0.rst b/doc/source/whatsnew/v2.2.0.rst
index 129f5cedb86c2..df899469d1b2d 100644
--- a/doc/source/whatsnew/v2.2.0.rst
+++ b/doc/source/whatsnew/v2.2.0.rst
@@ -732,6 +732,7 @@ Categorical
^^^^^^^^^^^
- :meth:`Categorical.isin` raising ``InvalidIndexError`` for categorical containing overlapping :class:`Interval` values (:issue:`34974`)
- Bug in :meth:`CategoricalDtype.__eq__` returning ``False`` for unordered categorical data with mixed types (:issue:`55468`)
+- Bug when casting ``pa.dictionary`` to :class:`CategoricalDtype` using a ``pa.DictionaryArray`` as categories (:issue:`56672`)
Datetimelike
^^^^^^^^^^^^
diff --git a/pandas/core/arrays/categorical.py b/pandas/core/arrays/categorical.py
index 8a88227ad54a3..606f0a366c7d5 100644
--- a/pandas/core/arrays/categorical.py
+++ b/pandas/core/arrays/categorical.py
@@ -44,7 +44,9 @@
pandas_dtype,
)
from pandas.core.dtypes.dtypes import (
+ ArrowDtype,
CategoricalDtype,
+ CategoricalDtypeType,
ExtensionDtype,
)
from pandas.core.dtypes.generic import (
@@ -443,24 +445,32 @@ def __init__(
values = arr
if dtype.categories is None:
- if not isinstance(values, ABCIndex):
- # in particular RangeIndex xref test_index_equal_range_categories
- values = sanitize_array(values, None)
- try:
- codes, categories = factorize(values, sort=True)
- except TypeError as err:
- codes, categories = factorize(values, sort=False)
- if dtype.ordered:
- # raise, as we don't have a sortable data structure and so
- # the user should give us one by specifying categories
- raise TypeError(
- "'values' is not ordered, please "
- "explicitly specify the categories order "
- "by passing in a categories argument."
- ) from err
-
- # we're inferring from values
- dtype = CategoricalDtype(categories, dtype.ordered)
+ if isinstance(values.dtype, ArrowDtype) and issubclass(
+ values.dtype.type, CategoricalDtypeType
+ ):
+ arr = values._pa_array.combine_chunks()
+ categories = arr.dictionary.to_pandas(types_mapper=ArrowDtype)
+ codes = arr.indices.to_numpy()
+ dtype = CategoricalDtype(categories, values.dtype.pyarrow_dtype.ordered)
+ else:
+ if not isinstance(values, ABCIndex):
+ # in particular RangeIndex xref test_index_equal_range_categories
+ values = sanitize_array(values, None)
+ try:
+ codes, categories = factorize(values, sort=True)
+ except TypeError as err:
+ codes, categories = factorize(values, sort=False)
+ if dtype.ordered:
+ # raise, as we don't have a sortable data structure and so
+ # the user should give us one by specifying categories
+ raise TypeError(
+ "'values' is not ordered, please "
+ "explicitly specify the categories order "
+ "by passing in a categories argument."
+ ) from err
+
+ # we're inferring from values
+ dtype = CategoricalDtype(categories, dtype.ordered)
elif isinstance(values.dtype, CategoricalDtype):
old_codes = extract_array(values)._codes
diff --git a/pandas/tests/extension/test_arrow.py b/pandas/tests/extension/test_arrow.py
index ed1b7b199a16f..b2330f8b39642 100644
--- a/pandas/tests/extension/test_arrow.py
+++ b/pandas/tests/extension/test_arrow.py
@@ -3229,6 +3229,22 @@ def test_factorize_chunked_dictionary():
tm.assert_index_equal(res_uniques, exp_uniques)
+def test_dictionary_astype_categorical():
+ # GH#56672
+ arrs = [
+ pa.array(np.array(["a", "x", "c", "a"])).dictionary_encode(),
+ pa.array(np.array(["a", "d", "c"])).dictionary_encode(),
+ ]
+ ser = pd.Series(ArrowExtensionArray(pa.chunked_array(arrs)))
+ result = ser.astype("category")
+ categories = pd.Index(["a", "x", "c", "d"], dtype=ArrowDtype(pa.string()))
+ expected = pd.Series(
+ ["a", "x", "c", "a", "a", "d", "c"],
+ dtype=pd.CategoricalDtype(categories=categories),
+ )
+ tm.assert_series_equal(result, expected)
+
+
def test_arrow_floordiv():
# GH 55561
a = pd.Series([-7], dtype="int64[pyarrow]")
| Currently we use the dictionary array itself as the categories, which is odd. This change instead uses the underlying dictionary values and extracts the codes directly, which should be fine for us. Keeping a dictionary array as the categories causes all kinds of trouble when sorting and in similar operations.
| https://api.github.com/repos/pandas-dev/pandas/pulls/56672 | 2023-12-28T22:39:39Z | 2024-01-03T22:49:01Z | 2024-01-03T22:49:00Z | 2024-01-03T22:49:04Z |
STY: Enable ruff pytest checks | diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml
index 6033bda99e8c8..73ac14f1ed5ce 100644
--- a/.pre-commit-config.yaml
+++ b/.pre-commit-config.yaml
@@ -272,13 +272,6 @@ repos:
language: python
types: [rst]
files: ^doc/source/(development|reference)/
- - id: unwanted-patterns-bare-pytest-raises
- name: Check for use of bare pytest raises
- language: python
- entry: python scripts/validate_unwanted_patterns.py --validation-type="bare_pytest_raises"
- types: [python]
- files: ^pandas/tests/
- exclude: ^pandas/tests/extension/
- id: unwanted-patterns-private-function-across-module
name: Check for use of private functions across modules
language: python
diff --git a/pandas/conftest.py b/pandas/conftest.py
index 4a3fb5c2916c6..16b437b9a4723 100644
--- a/pandas/conftest.py
+++ b/pandas/conftest.py
@@ -1971,6 +1971,6 @@ def warsaw(request) -> str:
return request.param
-@pytest.fixture()
+@pytest.fixture
def arrow_string_storage():
return ("pyarrow", "pyarrow_numpy")
diff --git a/pandas/tests/arithmetic/test_datetime64.py b/pandas/tests/arithmetic/test_datetime64.py
index b4e8d09c18163..75259cb7e2f05 100644
--- a/pandas/tests/arithmetic/test_datetime64.py
+++ b/pandas/tests/arithmetic/test_datetime64.py
@@ -1385,7 +1385,6 @@ def test_dt64arr_add_sub_relativedelta_offsets(self, box_with_array, unit):
"SemiMonthBegin",
"Week",
("Week", {"weekday": 3}),
- "Week",
("Week", {"weekday": 6}),
"BusinessDay",
"BDay",
diff --git a/pandas/tests/arrays/categorical/test_api.py b/pandas/tests/arrays/categorical/test_api.py
index 41d9db7335957..cff8afaa17516 100644
--- a/pandas/tests/arrays/categorical/test_api.py
+++ b/pandas/tests/arrays/categorical/test_api.py
@@ -291,12 +291,12 @@ def test_set_categories(self):
(["a", "b", "c"], ["a", "b"], ["a", "b"]),
(["a", "b", "c"], ["a", "b"], ["b", "a"]),
(["b", "a", "c"], ["a", "b"], ["a", "b"]),
- (["b", "a", "c"], ["a", "b"], ["a", "b"]),
+ (["b", "a", "c"], ["a", "b"], ["b", "a"]),
# Introduce NaNs
(["a", "b", "c"], ["a", "b"], ["a"]),
(["a", "b", "c"], ["a", "b"], ["b"]),
(["b", "a", "c"], ["a", "b"], ["a"]),
- (["b", "a", "c"], ["a", "b"], ["a"]),
+ (["b", "a", "c"], ["a", "b"], ["b"]),
# No overlap
(["a", "b", "c"], ["a", "b"], ["d", "e"]),
],
diff --git a/pandas/tests/arrays/categorical/test_dtypes.py b/pandas/tests/arrays/categorical/test_dtypes.py
index f2f2851c22794..ec1d501ddba16 100644
--- a/pandas/tests/arrays/categorical/test_dtypes.py
+++ b/pandas/tests/arrays/categorical/test_dtypes.py
@@ -73,12 +73,12 @@ def test_set_dtype_new_categories(self):
(["a", "b", "c"], ["a", "b"], ["a", "b"]),
(["a", "b", "c"], ["a", "b"], ["b", "a"]),
(["b", "a", "c"], ["a", "b"], ["a", "b"]),
- (["b", "a", "c"], ["a", "b"], ["a", "b"]),
+ (["b", "a", "c"], ["a", "b"], ["b", "a"]),
# Introduce NaNs
(["a", "b", "c"], ["a", "b"], ["a"]),
(["a", "b", "c"], ["a", "b"], ["b"]),
(["b", "a", "c"], ["a", "b"], ["a"]),
- (["b", "a", "c"], ["a", "b"], ["a"]),
+ (["b", "a", "c"], ["a", "b"], ["b"]),
# No overlap
(["a", "b", "c"], ["a", "b"], ["d", "e"]),
],
diff --git a/pandas/tests/arrays/masked/test_function.py b/pandas/tests/arrays/masked/test_function.py
index 4c7bd6e293ef4..d5ea60ecb754d 100644
--- a/pandas/tests/arrays/masked/test_function.py
+++ b/pandas/tests/arrays/masked/test_function.py
@@ -21,7 +21,7 @@ def data(request):
return request.param
-@pytest.fixture()
+@pytest.fixture
def numpy_dtype(data):
"""
Fixture returning numpy dtype from 'data' input array.
diff --git a/pandas/tests/arrays/sparse/test_arithmetics.py b/pandas/tests/arrays/sparse/test_arithmetics.py
index ffc93b4e4f176..f84d03e851621 100644
--- a/pandas/tests/arrays/sparse/test_arithmetics.py
+++ b/pandas/tests/arrays/sparse/test_arithmetics.py
@@ -433,9 +433,6 @@ def test_ufuncs(ufunc, arr):
[
(SparseArray([0, 0, 0]), np.array([0, 1, 2])),
(SparseArray([0, 0, 0], fill_value=1), np.array([0, 1, 2])),
- (SparseArray([0, 0, 0], fill_value=1), np.array([0, 1, 2])),
- (SparseArray([0, 0, 0], fill_value=1), np.array([0, 1, 2])),
- (SparseArray([0, 0, 0], fill_value=1), np.array([0, 1, 2])),
],
)
@pytest.mark.parametrize("ufunc", [np.add, np.greater])
diff --git a/pandas/tests/arrays/string_/test_string_arrow.py b/pandas/tests/arrays/string_/test_string_arrow.py
index d7811b6fed883..405c1c217b04d 100644
--- a/pandas/tests/arrays/string_/test_string_arrow.py
+++ b/pandas/tests/arrays/string_/test_string_arrow.py
@@ -220,22 +220,22 @@ def test_setitem_invalid_indexer_raises():
arr = ArrowStringArray(pa.array(list("abcde")))
- with pytest.raises(IndexError, match=None):
+ with tm.external_error_raised(IndexError):
arr[5] = "foo"
- with pytest.raises(IndexError, match=None):
+ with tm.external_error_raised(IndexError):
arr[-6] = "foo"
- with pytest.raises(IndexError, match=None):
+ with tm.external_error_raised(IndexError):
arr[[0, 5]] = "foo"
- with pytest.raises(IndexError, match=None):
+ with tm.external_error_raised(IndexError):
arr[[0, -6]] = "foo"
- with pytest.raises(IndexError, match=None):
+ with tm.external_error_raised(IndexError):
arr[[True, True, False]] = "foo"
- with pytest.raises(ValueError, match=None):
+ with tm.external_error_raised(ValueError):
arr[[0, 1]] = ["foo", "bar", "baz"]
diff --git a/pandas/tests/arrays/test_array.py b/pandas/tests/arrays/test_array.py
index 96263f498935b..a84fefebf044c 100644
--- a/pandas/tests/arrays/test_array.py
+++ b/pandas/tests/arrays/test_array.py
@@ -29,7 +29,7 @@
)
-@pytest.mark.parametrize("dtype_unit", ["M8[h]", "M8[m]", "m8[h]", "M8[m]"])
+@pytest.mark.parametrize("dtype_unit", ["M8[h]", "M8[m]", "m8[h]"])
def test_dt64_array(dtype_unit):
# PR 53817
dtype_var = np.dtype(dtype_unit)
diff --git a/pandas/tests/copy_view/test_internals.py b/pandas/tests/copy_view/test_internals.py
index 615b024bd06bf..400fb8e03c18c 100644
--- a/pandas/tests/copy_view/test_internals.py
+++ b/pandas/tests/copy_view/test_internals.py
@@ -83,7 +83,6 @@ def test_switch_options():
([0, 1, 2], np.array([[-1, -2, -3], [-4, -5, -6], [-4, -5, -6]]).T),
([1, 2], np.array([[-1, -2, -3], [-4, -5, -6]]).T),
([1, 3], np.array([[-1, -2, -3], [-4, -5, -6]]).T),
- ([1, 3], np.array([[-1, -2, -3], [-4, -5, -6]]).T),
],
)
def test_iset_splits_blocks_inplace(using_copy_on_write, locs, arr, dtype):
diff --git a/pandas/tests/dtypes/test_missing.py b/pandas/tests/dtypes/test_missing.py
index 7105755df6f88..c205f35b0ced8 100644
--- a/pandas/tests/dtypes/test_missing.py
+++ b/pandas/tests/dtypes/test_missing.py
@@ -531,7 +531,6 @@ def test_array_equivalent_different_dtype_but_equal():
(fix_now, fix_utcnow),
(fix_now.to_datetime64(), fix_utcnow),
(fix_now.to_pydatetime(), fix_utcnow),
- (fix_now, fix_utcnow),
(fix_now.to_datetime64(), fix_utcnow.to_pydatetime()),
(fix_now.to_pydatetime(), fix_utcnow.to_pydatetime()),
],
diff --git a/pandas/tests/extension/base/dim2.py b/pandas/tests/extension/base/dim2.py
index 132cda5a94ed0..4da9fe8917d55 100644
--- a/pandas/tests/extension/base/dim2.py
+++ b/pandas/tests/extension/base/dim2.py
@@ -90,9 +90,9 @@ def test_reshape(self, data):
assert arr2d.shape == (data.size, 1)
assert len(arr2d) == len(data)
- with pytest.raises(ValueError):
+ with tm.external_error_raised(ValueError):
data.reshape((data.size, 2))
- with pytest.raises(ValueError):
+ with tm.external_error_raised(ValueError):
data.reshape(data.size, 2)
def test_getitem_2d(self, data):
diff --git a/pandas/tests/extension/base/getitem.py b/pandas/tests/extension/base/getitem.py
index 5f0c1b960a475..1f89c7ad9d4e4 100644
--- a/pandas/tests/extension/base/getitem.py
+++ b/pandas/tests/extension/base/getitem.py
@@ -394,7 +394,7 @@ def test_take_non_na_fill_value(self, data_missing):
tm.assert_extension_array_equal(result, expected)
def test_take_pandas_style_negative_raises(self, data, na_value):
- with pytest.raises(ValueError, match=""):
+ with tm.external_error_raised(ValueError):
data.take([0, -2], fill_value=na_value, allow_fill=True)
@pytest.mark.parametrize("allow_fill", [True, False])
diff --git a/pandas/tests/extension/base/setitem.py b/pandas/tests/extension/base/setitem.py
index 4a2942776b25e..ba756b471eb8b 100644
--- a/pandas/tests/extension/base/setitem.py
+++ b/pandas/tests/extension/base/setitem.py
@@ -209,7 +209,8 @@ def test_setitem_integer_array(self, data, idx, box_in_series):
[0, 1, 2, pd.NA], True, marks=pytest.mark.xfail(reason="GH-31948")
),
(pd.array([0, 1, 2, pd.NA], dtype="Int64"), False),
- (pd.array([0, 1, 2, pd.NA], dtype="Int64"), False),
+ # TODO: change False to True?
+ (pd.array([0, 1, 2, pd.NA], dtype="Int64"), False), # noqa: PT014
],
ids=["list-False", "list-True", "integer-array-False", "integer-array-True"],
)
@@ -332,7 +333,7 @@ def test_setitem_loc_iloc_slice(self, data):
def test_setitem_slice_mismatch_length_raises(self, data):
arr = data[:5]
- with pytest.raises(ValueError):
+ with tm.external_error_raised(ValueError):
arr[:1] = arr[:2]
def test_setitem_slice_array(self, data):
@@ -342,7 +343,7 @@ def test_setitem_slice_array(self, data):
def test_setitem_scalar_key_sequence_raise(self, data):
arr = data[:5].copy()
- with pytest.raises(ValueError):
+ with tm.external_error_raised(ValueError):
arr[0] = arr[[0, 1]]
def test_setitem_preserves_views(self, data):
diff --git a/pandas/tests/extension/json/test_json.py b/pandas/tests/extension/json/test_json.py
index 9de4f17a27333..50f2fe03cfa13 100644
--- a/pandas/tests/extension/json/test_json.py
+++ b/pandas/tests/extension/json/test_json.py
@@ -355,7 +355,7 @@ def test_setitem_integer_array(self, data, idx, box_in_series, request):
[0, 1, 2, pd.NA], True, marks=pytest.mark.xfail(reason="GH-31948")
),
(pd.array([0, 1, 2, pd.NA], dtype="Int64"), False),
- (pd.array([0, 1, 2, pd.NA], dtype="Int64"), False),
+ (pd.array([0, 1, 2, pd.NA], dtype="Int64"), True),
],
ids=["list-False", "list-True", "integer-array-False", "integer-array-True"],
)
diff --git a/pandas/tests/extension/test_numpy.py b/pandas/tests/extension/test_numpy.py
index 9a0cb67a87f30..0893c6231197e 100644
--- a/pandas/tests/extension/test_numpy.py
+++ b/pandas/tests/extension/test_numpy.py
@@ -373,19 +373,6 @@ def test_setitem_mask(self, data, mask, box_in_series):
def test_setitem_integer_array(self, data, idx, box_in_series):
super().test_setitem_integer_array(data, idx, box_in_series)
- @pytest.mark.parametrize(
- "idx, box_in_series",
- [
- ([0, 1, 2, pd.NA], False),
- pytest.param([0, 1, 2, pd.NA], True, marks=pytest.mark.xfail),
- (pd.array([0, 1, 2, pd.NA], dtype="Int64"), False),
- (pd.array([0, 1, 2, pd.NA], dtype="Int64"), False),
- ],
- ids=["list-False", "list-True", "integer-array-False", "integer-array-True"],
- )
- def test_setitem_integer_with_missing_raises(self, data, idx, box_in_series):
- super().test_setitem_integer_with_missing_raises(data, idx, box_in_series)
-
@skip_nested
def test_setitem_slice(self, data, box_in_series):
super().test_setitem_slice(data, box_in_series)
diff --git a/pandas/tests/extension/test_sparse.py b/pandas/tests/extension/test_sparse.py
index 03f179fdd261e..2efcc192aa15b 100644
--- a/pandas/tests/extension/test_sparse.py
+++ b/pandas/tests/extension/test_sparse.py
@@ -70,7 +70,7 @@ def gen(count):
for _ in range(count):
yield SparseArray(make_data(request.param), fill_value=request.param)
- yield gen
+ return gen
@pytest.fixture(params=[0, np.nan])
diff --git a/pandas/tests/frame/methods/test_astype.py b/pandas/tests/frame/methods/test_astype.py
index eab8dbd2787f7..e7f2c410bf4ac 100644
--- a/pandas/tests/frame/methods/test_astype.py
+++ b/pandas/tests/frame/methods/test_astype.py
@@ -867,7 +867,7 @@ class IntegerArrayNoCopy(pd.core.arrays.IntegerArray):
# GH 42501
def copy(self):
- assert False
+ raise NotImplementedError
class Int16DtypeNoCopy(pd.Int16Dtype):
diff --git a/pandas/tests/frame/methods/test_explode.py b/pandas/tests/frame/methods/test_explode.py
index 5cd54db62d783..ca9764c023244 100644
--- a/pandas/tests/frame/methods/test_explode.py
+++ b/pandas/tests/frame/methods/test_explode.py
@@ -38,10 +38,6 @@ def test_error():
[],
"column must be nonempty",
),
- (
- list("AC"),
- "columns must have matching element counts",
- ),
],
)
def test_error_multi_columns(input_subset, error_message):
diff --git a/pandas/tests/frame/methods/test_filter.py b/pandas/tests/frame/methods/test_filter.py
index 9d5e6876bb08c..382615aaef627 100644
--- a/pandas/tests/frame/methods/test_filter.py
+++ b/pandas/tests/frame/methods/test_filter.py
@@ -100,7 +100,6 @@ def test_filter_regex_search(self, float_frame):
@pytest.mark.parametrize(
"name,expected",
[
- ("a", DataFrame({"a": [1, 2]})),
("a", DataFrame({"a": [1, 2]})),
("あ", DataFrame({"あ": [3, 4]})),
],
@@ -112,9 +111,9 @@ def test_filter_unicode(self, name, expected):
tm.assert_frame_equal(df.filter(like=name), expected)
tm.assert_frame_equal(df.filter(regex=name), expected)
- @pytest.mark.parametrize("name", ["a", "a"])
- def test_filter_bytestring(self, name):
+ def test_filter_bytestring(self):
# GH13101
+ name = "a"
df = DataFrame({b"a": [1, 2], b"b": [3, 4]})
expected = DataFrame({b"a": [1, 2]})
diff --git a/pandas/tests/frame/methods/test_reindex.py b/pandas/tests/frame/methods/test_reindex.py
index d2ec84bc9371f..2a889efe79064 100644
--- a/pandas/tests/frame/methods/test_reindex.py
+++ b/pandas/tests/frame/methods/test_reindex.py
@@ -452,19 +452,16 @@ def f(val):
("mid",),
("mid", "btm"),
("mid", "btm", "top"),
- ("mid",),
("mid", "top"),
("mid", "top", "btm"),
("btm",),
("btm", "mid"),
("btm", "mid", "top"),
- ("btm",),
("btm", "top"),
("btm", "top", "mid"),
("top",),
("top", "mid"),
("top", "mid", "btm"),
- ("top",),
("top", "btm"),
("top", "btm", "mid"),
],
diff --git a/pandas/tests/frame/methods/test_reset_index.py b/pandas/tests/frame/methods/test_reset_index.py
index 9d07b8ab2288f..a77e063b5f353 100644
--- a/pandas/tests/frame/methods/test_reset_index.py
+++ b/pandas/tests/frame/methods/test_reset_index.py
@@ -27,7 +27,7 @@
import pandas._testing as tm
-@pytest.fixture()
+@pytest.fixture
def multiindex_df():
levels = [["A", ""], ["B", "b"]]
return DataFrame([[0, 2], [1, 3]], columns=MultiIndex.from_tuples(levels))
diff --git a/pandas/tests/frame/test_constructors.py b/pandas/tests/frame/test_constructors.py
index aefb0377d1bf4..d44de380d243a 100644
--- a/pandas/tests/frame/test_constructors.py
+++ b/pandas/tests/frame/test_constructors.py
@@ -1555,7 +1555,7 @@ def test_constructor_mixed_type_rows(self):
"tuples,lists",
[
((), []),
- ((()), []),
+ (((),), [[]]),
(((), ()), [(), ()]),
(((), ()), [[], []]),
(([], []), [[], []]),
diff --git a/pandas/tests/frame/test_query_eval.py b/pandas/tests/frame/test_query_eval.py
index a498296e09c52..0a869d8f94f47 100644
--- a/pandas/tests/frame/test_query_eval.py
+++ b/pandas/tests/frame/test_query_eval.py
@@ -1187,7 +1187,7 @@ def df(self):
by backticks. The last two columns cannot be escaped by backticks
and should raise a ValueError.
"""
- yield DataFrame(
+ return DataFrame(
{
"A": [1, 2, 3],
"B B": [3, 2, 1],
diff --git a/pandas/tests/frame/test_reductions.py b/pandas/tests/frame/test_reductions.py
index 5aaa11b848be4..fa233619ad3a3 100644
--- a/pandas/tests/frame/test_reductions.py
+++ b/pandas/tests/frame/test_reductions.py
@@ -1866,7 +1866,6 @@ def test_df_empty_min_count_1(self, opname, dtype, exp_dtype):
[
("sum", "Int8", 0, ("Int32" if is_windows_np2_or_is32 else "Int64")),
("prod", "Int8", 1, ("Int32" if is_windows_np2_or_is32 else "Int64")),
- ("prod", "Int8", 1, ("Int32" if is_windows_np2_or_is32 else "Int64")),
("sum", "Int64", 0, "Int64"),
("prod", "Int64", 1, "Int64"),
("sum", "UInt8", 0, ("UInt32" if is_windows_np2_or_is32 else "UInt64")),
diff --git a/pandas/tests/groupby/conftest.py b/pandas/tests/groupby/conftest.py
index 331dbd85e0f36..2745f7c2b8d0f 100644
--- a/pandas/tests/groupby/conftest.py
+++ b/pandas/tests/groupby/conftest.py
@@ -92,7 +92,7 @@ def three_group():
)
-@pytest.fixture()
+@pytest.fixture
def slice_test_df():
data = [
[0, "a", "a0_at_0"],
@@ -108,7 +108,7 @@ def slice_test_df():
return df.set_index("Index")
-@pytest.fixture()
+@pytest.fixture
def slice_test_grouped(slice_test_df):
return slice_test_df.groupby("Group", as_index=False)
diff --git a/pandas/tests/groupby/test_categorical.py b/pandas/tests/groupby/test_categorical.py
index a2cc7fd782396..3e1a244a8a72e 100644
--- a/pandas/tests/groupby/test_categorical.py
+++ b/pandas/tests/groupby/test_categorical.py
@@ -647,7 +647,7 @@ def test_dataframe_categorical_ordered_observed_sort(ordered, observed, sort):
f"for (ordered={ordered}, observed={observed}, sort={sort})\n"
f"Result:\n{result}"
)
- assert False, msg
+ pytest.fail(msg)
def test_datetime():
diff --git a/pandas/tests/groupby/test_groupby_dropna.py b/pandas/tests/groupby/test_groupby_dropna.py
index ca097bc2be8bb..768ab2db5cea5 100644
--- a/pandas/tests/groupby/test_groupby_dropna.py
+++ b/pandas/tests/groupby/test_groupby_dropna.py
@@ -410,7 +410,6 @@ def test_groupby_drop_nan_with_multi_index():
"UInt64",
"Int64",
"Float32",
- "Int64",
"Float64",
"category",
"string",
diff --git a/pandas/tests/indexes/datetimelike_/test_sort_values.py b/pandas/tests/indexes/datetimelike_/test_sort_values.py
index a2c349c8b0ef6..bfef0faebeebf 100644
--- a/pandas/tests/indexes/datetimelike_/test_sort_values.py
+++ b/pandas/tests/indexes/datetimelike_/test_sort_values.py
@@ -185,10 +185,6 @@ def test_sort_values_without_freq_timedeltaindex(self):
@pytest.mark.parametrize(
"index_dates,expected_dates",
[
- (
- ["2011-01-01", "2011-01-03", "2011-01-05", "2011-01-02", "2011-01-01"],
- ["2011-01-01", "2011-01-01", "2011-01-02", "2011-01-03", "2011-01-05"],
- ),
(
["2011-01-01", "2011-01-03", "2011-01-05", "2011-01-02", "2011-01-01"],
["2011-01-01", "2011-01-01", "2011-01-02", "2011-01-03", "2011-01-05"],
diff --git a/pandas/tests/indexes/multi/test_constructors.py b/pandas/tests/indexes/multi/test_constructors.py
index 8456e6a7acba5..38e0920b7004e 100644
--- a/pandas/tests/indexes/multi/test_constructors.py
+++ b/pandas/tests/indexes/multi/test_constructors.py
@@ -304,7 +304,6 @@ def test_from_arrays_empty():
(1, 2),
([1], 2),
(1, [2]),
- "a",
("a",),
("a", "b"),
(["a"], "b"),
diff --git a/pandas/tests/internals/test_internals.py b/pandas/tests/internals/test_internals.py
index 66dd893df51de..1806242b83126 100644
--- a/pandas/tests/internals/test_internals.py
+++ b/pandas/tests/internals/test_internals.py
@@ -1158,7 +1158,6 @@ def test_array_to_slice_conversion(self, arr, slc):
[-1],
[-1, -2, -3],
[-10],
- [-1],
[-1, 0, 1, 2],
[-2, 0, 2, 4],
[1, 0, -1],
diff --git a/pandas/tests/io/formats/test_css.py b/pandas/tests/io/formats/test_css.py
index db436d8283b99..8bf9aa4ac04d3 100644
--- a/pandas/tests/io/formats/test_css.py
+++ b/pandas/tests/io/formats/test_css.py
@@ -243,7 +243,6 @@ def test_css_none_absent(style, equiv):
("02.54cm", "72pt"),
("25.4mm", "72pt"),
("101.6q", "72pt"),
- ("101.6q", "72pt"),
],
)
@pytest.mark.parametrize("relative_to", [None, "16pt"]) # invariant to inherited size
diff --git a/pandas/tests/io/formats/test_to_latex.py b/pandas/tests/io/formats/test_to_latex.py
index 98f1e0245b353..4c8cd4b6a2b8e 100644
--- a/pandas/tests/io/formats/test_to_latex.py
+++ b/pandas/tests/io/formats/test_to_latex.py
@@ -771,7 +771,7 @@ def df_with_symbols(self):
"""Dataframe with special characters for testing chars escaping."""
a = "a"
b = "b"
- yield DataFrame({"co$e^x$": {a: "a", b: "b"}, "co^l1": {a: "a", b: "b"}})
+ return DataFrame({"co$e^x$": {a: "a", b: "b"}, "co^l1": {a: "a", b: "b"}})
def test_to_latex_escape_false(self, df_with_symbols):
result = df_with_symbols.to_latex(escape=False)
@@ -1010,7 +1010,7 @@ class TestToLatexMultiindex:
@pytest.fixture
def multiindex_frame(self):
"""Multiindex dataframe for testing multirow LaTeX macros."""
- yield DataFrame.from_dict(
+ return DataFrame.from_dict(
{
("c1", 0): Series({x: x for x in range(4)}),
("c1", 1): Series({x: x + 4 for x in range(4)}),
@@ -1023,7 +1023,7 @@ def multiindex_frame(self):
@pytest.fixture
def multicolumn_frame(self):
"""Multicolumn dataframe for testing multicolumn LaTeX macros."""
- yield DataFrame(
+ return DataFrame(
{
("c1", 0): {x: x for x in range(5)},
("c1", 1): {x: x + 5 for x in range(5)},
diff --git a/pandas/tests/io/json/test_json_table_schema.py b/pandas/tests/io/json/test_json_table_schema.py
index cc101bb9c8b6d..d5ea470af79d6 100644
--- a/pandas/tests/io/json/test_json_table_schema.py
+++ b/pandas/tests/io/json/test_json_table_schema.py
@@ -179,7 +179,6 @@ def test_as_json_table_type_string_data(self, str_data):
pd.Categorical([1]),
pd.Series(pd.Categorical([1])),
pd.CategoricalIndex([1]),
- pd.Categorical([1]),
],
)
def test_as_json_table_type_categorical_data(self, cat_data):
diff --git a/pandas/tests/io/parser/common/test_file_buffer_url.py b/pandas/tests/io/parser/common/test_file_buffer_url.py
index a7a8d031da215..5e31b4c6b644d 100644
--- a/pandas/tests/io/parser/common/test_file_buffer_url.py
+++ b/pandas/tests/io/parser/common/test_file_buffer_url.py
@@ -425,7 +425,7 @@ def test_context_manager(all_parsers, datapath):
try:
with reader:
next(reader)
- assert False
+ raise AssertionError
except AssertionError:
assert reader.handles.handle.closed
@@ -446,7 +446,7 @@ def test_context_manageri_user_provided(all_parsers, datapath):
try:
with reader:
next(reader)
- assert False
+ raise AssertionError
except AssertionError:
assert not reader.handles.handle.closed
diff --git a/pandas/tests/io/parser/usecols/test_strings.py b/pandas/tests/io/parser/usecols/test_strings.py
index d4ade41d38465..0d51c2cb3cdb4 100644
--- a/pandas/tests/io/parser/usecols/test_strings.py
+++ b/pandas/tests/io/parser/usecols/test_strings.py
@@ -74,8 +74,7 @@ def test_usecols_with_mixed_encoding_strings(all_parsers, usecols):
parser.read_csv(StringIO(data), usecols=usecols)
-@pytest.mark.parametrize("usecols", [["あああ", "いい"], ["あああ", "いい"]])
-def test_usecols_with_multi_byte_characters(all_parsers, usecols):
+def test_usecols_with_multi_byte_characters(all_parsers):
data = """あああ,いい,ううう,ええええ
0.056674973,8,True,a
2.613230982,2,False,b
@@ -92,5 +91,5 @@ def test_usecols_with_multi_byte_characters(all_parsers, usecols):
}
expected = DataFrame(exp_data)
- result = parser.read_csv(StringIO(data), usecols=usecols)
+ result = parser.read_csv(StringIO(data), usecols=["あああ", "いい"])
tm.assert_frame_equal(result, expected)
diff --git a/pandas/tests/io/pytables/test_compat.py b/pandas/tests/io/pytables/test_compat.py
index b07fb3ddd3ac8..b78a503a2b8a3 100644
--- a/pandas/tests/io/pytables/test_compat.py
+++ b/pandas/tests/io/pytables/test_compat.py
@@ -36,7 +36,7 @@ def pytables_hdf5_file(tmp_path):
t.row[key] = value
t.row.append()
- yield path, objname, pd.DataFrame(testsamples)
+ return path, objname, pd.DataFrame(testsamples)
class TestReadPyTablesHDF5:
diff --git a/pandas/tests/io/test_parquet.py b/pandas/tests/io/test_parquet.py
index 8fc02cc7799ed..a6967732cf702 100644
--- a/pandas/tests/io/test_parquet.py
+++ b/pandas/tests/io/test_parquet.py
@@ -1327,7 +1327,7 @@ def test_use_nullable_dtypes_not_supported(self, fp):
def test_close_file_handle_on_read_error(self):
with tm.ensure_clean("test.parquet") as path:
pathlib.Path(path).write_bytes(b"breakit")
- with pytest.raises(Exception, match=""): # Not important which exception
+ with tm.external_error_raised(Exception): # Not important which exception
read_parquet(path, engine="fastparquet")
# The next line raises an error on Windows if the file is still open
pathlib.Path(path).unlink(missing_ok=False)
diff --git a/pandas/tests/io/test_sql.py b/pandas/tests/io/test_sql.py
index 6645aefd4f0a7..2ddbbaa1bf17c 100644
--- a/pandas/tests/io/test_sql.py
+++ b/pandas/tests/io/test_sql.py
@@ -620,13 +620,13 @@ def mysql_pymysql_engine():
def mysql_pymysql_engine_iris(mysql_pymysql_engine, iris_path):
create_and_load_iris(mysql_pymysql_engine, iris_path)
create_and_load_iris_view(mysql_pymysql_engine)
- yield mysql_pymysql_engine
+ return mysql_pymysql_engine
@pytest.fixture
def mysql_pymysql_engine_types(mysql_pymysql_engine, types_data):
create_and_load_types(mysql_pymysql_engine, types_data, "mysql")
- yield mysql_pymysql_engine
+ return mysql_pymysql_engine
@pytest.fixture
@@ -667,13 +667,13 @@ def postgresql_psycopg2_engine():
def postgresql_psycopg2_engine_iris(postgresql_psycopg2_engine, iris_path):
create_and_load_iris(postgresql_psycopg2_engine, iris_path)
create_and_load_iris_view(postgresql_psycopg2_engine)
- yield postgresql_psycopg2_engine
+ return postgresql_psycopg2_engine
@pytest.fixture
def postgresql_psycopg2_engine_types(postgresql_psycopg2_engine, types_data):
create_and_load_types(postgresql_psycopg2_engine, types_data, "postgres")
- yield postgresql_psycopg2_engine
+ return postgresql_psycopg2_engine
@pytest.fixture
@@ -713,7 +713,7 @@ def postgresql_adbc_iris(postgresql_adbc_conn, iris_path):
except mgr.ProgrammingError: # note arrow-adbc issue 1022
conn.rollback()
create_and_load_iris_view(conn)
- yield conn
+ return conn
@pytest.fixture
@@ -730,7 +730,7 @@ def postgresql_adbc_types(postgresql_adbc_conn, types_data):
create_and_load_types_postgresql(conn, new_data)
- yield conn
+ return conn
@pytest.fixture
@@ -784,7 +784,7 @@ def sqlite_str_iris(sqlite_str, iris_path):
def sqlite_engine_iris(sqlite_engine, iris_path):
create_and_load_iris(sqlite_engine, iris_path)
create_and_load_iris_view(sqlite_engine)
- yield sqlite_engine
+ return sqlite_engine
@pytest.fixture
@@ -805,7 +805,7 @@ def sqlite_str_types(sqlite_str, types_data):
@pytest.fixture
def sqlite_engine_types(sqlite_engine, types_data):
create_and_load_types(sqlite_engine, types_data, "sqlite")
- yield sqlite_engine
+ return sqlite_engine
@pytest.fixture
@@ -845,7 +845,7 @@ def sqlite_adbc_iris(sqlite_adbc_conn, iris_path):
except mgr.ProgrammingError:
conn.rollback()
create_and_load_iris_view(conn)
- yield conn
+ return conn
@pytest.fixture
@@ -867,7 +867,7 @@ def sqlite_adbc_types(sqlite_adbc_conn, types_data):
create_and_load_types_sqlite3(conn, new_data)
conn.commit()
- yield conn
+ return conn
@pytest.fixture
@@ -881,14 +881,14 @@ def sqlite_buildin():
def sqlite_buildin_iris(sqlite_buildin, iris_path):
create_and_load_iris_sqlite3(sqlite_buildin, iris_path)
create_and_load_iris_view(sqlite_buildin)
- yield sqlite_buildin
+ return sqlite_buildin
@pytest.fixture
def sqlite_buildin_types(sqlite_buildin, types_data):
types_data = [tuple(entry.values()) for entry in types_data]
create_and_load_types_sqlite3(sqlite_buildin, types_data)
- yield sqlite_buildin
+ return sqlite_buildin
mysql_connectable = [
diff --git a/pandas/tests/reductions/test_reductions.py b/pandas/tests/reductions/test_reductions.py
index fcb5b65e59402..ec9ad8af1239e 100644
--- a/pandas/tests/reductions/test_reductions.py
+++ b/pandas/tests/reductions/test_reductions.py
@@ -801,7 +801,7 @@ def test_var_masked_array(self, ddof, exp):
assert result == result_numpy_dtype
assert result == exp
- @pytest.mark.parametrize("dtype", ("m8[ns]", "m8[ns]", "M8[ns]", "M8[ns, UTC]"))
+ @pytest.mark.parametrize("dtype", ("m8[ns]", "M8[ns]", "M8[ns, UTC]"))
def test_empty_timeseries_reductions_return_nat(self, dtype, skipna):
# covers GH#11245
assert Series([], dtype=dtype).min(skipna=skipna) is NaT
diff --git a/pandas/tests/reshape/merge/test_merge.py b/pandas/tests/reshape/merge/test_merge.py
index 21a38c43f4294..00f151dfc3e67 100644
--- a/pandas/tests/reshape/merge/test_merge.py
+++ b/pandas/tests/reshape/merge/test_merge.py
@@ -1635,7 +1635,6 @@ def test_merge_incompat_dtypes_are_ok(self, df1_vals, df2_vals):
(Series([1, 2], dtype="int32"), ["a", "b", "c"]),
([0, 1, 2], ["0", "1", "2"]),
([0.0, 1.0, 2.0], ["0", "1", "2"]),
- ([0, 1, 2], ["0", "1", "2"]),
(
pd.date_range("1/1/2011", periods=2, freq="D"),
["2011-01-01", "2011-01-02"],
diff --git a/pandas/tests/scalar/test_nat.py b/pandas/tests/scalar/test_nat.py
index cb046e0133245..e352e2601cef3 100644
--- a/pandas/tests/scalar/test_nat.py
+++ b/pandas/tests/scalar/test_nat.py
@@ -146,7 +146,6 @@ def test_round_nat(klass, method, freq):
"utcnow",
"utcoffset",
"utctimetuple",
- "timestamp",
],
)
def test_nat_methods_raise(method):
diff --git a/pandas/tests/series/methods/test_size.py b/pandas/tests/series/methods/test_size.py
index 20a454996fa44..043e4b66dbf16 100644
--- a/pandas/tests/series/methods/test_size.py
+++ b/pandas/tests/series/methods/test_size.py
@@ -10,8 +10,6 @@
({"a": 1, "b": 2, "c": 3}, None, 3),
([1, 2, 3], ["x", "y", "z"], 3),
([1, 2, 3, 4, 5], ["x", "y", "z", "w", "n"], 5),
- ([1, 2, 3], None, 3),
- ([1, 2, 3], ["x", "y", "z"], 3),
([1, 2, 3, 4], ["x", "y", "z", "w"], 4),
],
)
diff --git a/pandas/tests/strings/test_extract.py b/pandas/tests/strings/test_extract.py
index 77d008c650264..7ebcbdc7a8533 100644
--- a/pandas/tests/strings/test_extract.py
+++ b/pandas/tests/strings/test_extract.py
@@ -548,7 +548,6 @@ def test_extractall_single_group_with_quantifier(any_string_dtype):
(["a3", "b3", "d4c2"], (None,)),
(["a3", "b3", "d4c2"], ("i1", "i2")),
(["a3", "b3", "d4c2"], (None, "i2")),
- (["a3", "b3", "d4c2"], ("i1", "i2")),
],
)
def test_extractall_no_matches(data, names, any_string_dtype):
diff --git a/pandas/tests/test_nanops.py b/pandas/tests/test_nanops.py
index 95d0953301a42..ed125ece349a9 100644
--- a/pandas/tests/test_nanops.py
+++ b/pandas/tests/test_nanops.py
@@ -748,7 +748,6 @@ def test_nancov(self):
("arr_bool", False),
("arr_str", False),
("arr_utf", False),
- ("arr_complex", False),
("arr_complex_nan", False),
("arr_nan_nanj", False),
("arr_nan_infj", True),
diff --git a/pandas/tests/tools/test_to_datetime.py b/pandas/tests/tools/test_to_datetime.py
index 4a012f34ddc3b..46b4b97c437b6 100644
--- a/pandas/tests/tools/test_to_datetime.py
+++ b/pandas/tests/tools/test_to_datetime.py
@@ -3043,7 +3043,6 @@ def test_day_not_in_month_raise_value(self, cache, arg, format, msg):
[
["2015-02-29", None],
["2015-02-29", "%Y-%m-%d"],
- ["2015-02-29", "%Y-%m-%d"],
["2015-04-31", "%Y-%m-%d"],
],
)
diff --git a/pandas/tests/tseries/frequencies/test_inference.py b/pandas/tests/tseries/frequencies/test_inference.py
index 0d37273e89092..edfc1973a2bd9 100644
--- a/pandas/tests/tseries/frequencies/test_inference.py
+++ b/pandas/tests/tseries/frequencies/test_inference.py
@@ -499,7 +499,6 @@ def test_series_datetime_index(freq):
"YE@OCT",
"YE@NOV",
"YE@DEC",
- "YE@JAN",
"WOM@1MON",
"WOM@2MON",
"WOM@3MON",
diff --git a/pandas/tests/tslibs/test_to_offset.py b/pandas/tests/tslibs/test_to_offset.py
index ef68408305232..204775347e47a 100644
--- a/pandas/tests/tslibs/test_to_offset.py
+++ b/pandas/tests/tslibs/test_to_offset.py
@@ -24,7 +24,6 @@
("15ms500us", offsets.Micro(15500)),
("10s75ms", offsets.Milli(10075)),
("1s0.25ms", offsets.Micro(1000250)),
- ("1s0.25ms", offsets.Micro(1000250)),
("2800ns", offsets.Nano(2800)),
("2SME", offsets.SemiMonthEnd(2)),
("2SME-16", offsets.SemiMonthEnd(2, day_of_month=16)),
diff --git a/pyproject.toml b/pyproject.toml
index f693048adb60c..ebdf9deb034b5 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -212,6 +212,8 @@ select = [
"INT",
# pylint
"PL",
+ # flake8-pytest-style
+ "PT",
# misc lints
"PIE",
# flake8-pyi
@@ -304,6 +306,24 @@ ignore = [
"PERF102",
# try-except-in-loop, becomes useless in Python 3.11
"PERF203",
+ # pytest-missing-fixture-name-underscore
+ "PT004",
+ # pytest-incorrect-fixture-name-underscore
+ "PT005",
+ # pytest-parametrize-names-wrong-type
+ "PT006",
+ # pytest-parametrize-values-wrong-type
+ "PT007",
+ # pytest-patch-with-lambda
+ "PT008",
+ # pytest-raises-with-multiple-statements
+ "PT012",
+ # pytest-assert-in-except
+ "PT017",
+ # pytest-composite-assertion
+ "PT018",
+ # pytest-fixture-param-without-value
+ "PT019",
# The following rules may cause conflicts when used with the formatter:
"ISC001",
@@ -351,6 +371,10 @@ exclude = [
# Keep this one enabled
"pandas/_typing.py" = ["TCH"]
+[tool.ruff.lint.flake8-pytest-style]
+fixture-parentheses = false
+mark-parentheses = false
+
[tool.pylint.messages_control]
max-line-length = 88
disable = [
diff --git a/scripts/tests/test_validate_unwanted_patterns.py b/scripts/tests/test_validate_unwanted_patterns.py
index bef9d369a0a3c..4c433d03aff4d 100644
--- a/scripts/tests/test_validate_unwanted_patterns.py
+++ b/scripts/tests/test_validate_unwanted_patterns.py
@@ -5,154 +5,6 @@
from scripts import validate_unwanted_patterns
-class TestBarePytestRaises:
- @pytest.mark.parametrize(
- "data",
- [
- (
- """
- with pytest.raises(ValueError, match="foo"):
- pass
- """
- ),
- (
- """
- # with pytest.raises(ValueError, match="foo"):
- # pass
- """
- ),
- (
- """
- # with pytest.raises(ValueError):
- # pass
- """
- ),
- (
- """
- with pytest.raises(
- ValueError,
- match="foo"
- ):
- pass
- """
- ),
- ],
- )
- def test_pytest_raises(self, data) -> None:
- fd = io.StringIO(data.strip())
- result = list(validate_unwanted_patterns.bare_pytest_raises(fd))
- assert result == []
-
- @pytest.mark.parametrize(
- "data, expected",
- [
- (
- (
- """
- with pytest.raises(ValueError):
- pass
- """
- ),
- [
- (
- 1,
- (
- "Bare pytests raise have been found. "
- "Please pass in the argument 'match' "
- "as well the exception."
- ),
- ),
- ],
- ),
- (
- (
- """
- with pytest.raises(ValueError, match="foo"):
- with pytest.raises(ValueError):
- pass
- pass
- """
- ),
- [
- (
- 2,
- (
- "Bare pytests raise have been found. "
- "Please pass in the argument 'match' "
- "as well the exception."
- ),
- ),
- ],
- ),
- (
- (
- """
- with pytest.raises(ValueError):
- with pytest.raises(ValueError, match="foo"):
- pass
- pass
- """
- ),
- [
- (
- 1,
- (
- "Bare pytests raise have been found. "
- "Please pass in the argument 'match' "
- "as well the exception."
- ),
- ),
- ],
- ),
- (
- (
- """
- with pytest.raises(
- ValueError
- ):
- pass
- """
- ),
- [
- (
- 1,
- (
- "Bare pytests raise have been found. "
- "Please pass in the argument 'match' "
- "as well the exception."
- ),
- ),
- ],
- ),
- (
- (
- """
- with pytest.raises(
- ValueError,
- # match = "foo"
- ):
- pass
- """
- ),
- [
- (
- 1,
- (
- "Bare pytests raise have been found. "
- "Please pass in the argument 'match' "
- "as well the exception."
- ),
- ),
- ],
- ),
- ],
- )
- def test_pytest_raises_raises(self, data, expected) -> None:
- fd = io.StringIO(data.strip())
- result = list(validate_unwanted_patterns.bare_pytest_raises(fd))
- assert result == expected
-
-
class TestStringsWithWrongPlacedWhitespace:
@pytest.mark.parametrize(
"data",
diff --git a/scripts/validate_unwanted_patterns.py b/scripts/validate_unwanted_patterns.py
index 0d724779abfda..ee7f9226a7090 100755
--- a/scripts/validate_unwanted_patterns.py
+++ b/scripts/validate_unwanted_patterns.py
@@ -95,65 +95,6 @@ def _get_literal_string_prefix_len(token_string: str) -> int:
return 0
-def bare_pytest_raises(file_obj: IO[str]) -> Iterable[tuple[int, str]]:
- """
- Test Case for bare pytest raises.
-
- For example, this is wrong:
-
- >>> with pytest.raise(ValueError):
- ... # Some code that raises ValueError
-
- And this is what we want instead:
-
- >>> with pytest.raise(ValueError, match="foo"):
- ... # Some code that raises ValueError
-
- Parameters
- ----------
- file_obj : IO
- File-like object containing the Python code to validate.
-
- Yields
- ------
- line_number : int
- Line number of unconcatenated string.
- msg : str
- Explanation of the error.
-
- Notes
- -----
- GH #23922
- """
- contents = file_obj.read()
- tree = ast.parse(contents)
-
- for node in ast.walk(tree):
- if not isinstance(node, ast.Call):
- continue
-
- try:
- if not (node.func.value.id == "pytest" and node.func.attr == "raises"):
- continue
- except AttributeError:
- continue
-
- if not node.keywords:
- yield (
- node.lineno,
- "Bare pytests raise have been found. "
- "Please pass in the argument 'match' as well the exception.",
- )
- # Means that there are arguments that are being passed in,
- # now we validate that `match` is one of the passed in arguments
- elif not any(keyword.arg == "match" for keyword in node.keywords):
- yield (
- node.lineno,
- "Bare pytests raise have been found. "
- "Please pass in the argument 'match' as well the exception.",
- )
-
-
PRIVATE_FUNCTIONS_ALLOWED = {"sys._getframe"} # no known alternative
@@ -457,7 +398,6 @@ def main(
if __name__ == "__main__":
available_validation_types: list[str] = [
- "bare_pytest_raises",
"private_function_across_module",
"private_import_across_module",
"strings_with_wrong_placed_whitespace",
diff --git a/web/tests/test_pandas_web.py b/web/tests/test_pandas_web.py
index a5f76875dfe23..aacdfbcd6d26e 100644
--- a/web/tests/test_pandas_web.py
+++ b/web/tests/test_pandas_web.py
@@ -30,7 +30,7 @@ def context() -> dict:
}
-@pytest.fixture(scope="function")
+@pytest.fixture
def mock_response(monkeypatch, request) -> None:
def mocked_resp(*args, **kwargs):
status_code, response = request.param
| null | https://api.github.com/repos/pandas-dev/pandas/pulls/56671 | 2023-12-28T22:26:00Z | 2024-01-05T20:23:52Z | 2024-01-05T20:23:52Z | 2024-01-05T20:23:56Z |
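The record above removes the hand-rolled `bare_pytest_raises` validator from `scripts/validate_unwanted_patterns.py` in favor of ruff's flake8-pytest-style (`PT`) rules. The deleted checker walked the AST looking for `pytest.raises(...)` calls that lack a `match=` keyword; a minimal stdlib sketch of that same walk (simplified, not pandas' exact implementation — e.g. it ignores aliased imports) looks like:

```python
import ast


def bare_pytest_raises(source: str):
    """Yield (lineno, message) for pytest.raises(...) calls lacking match=."""
    for node in ast.walk(ast.parse(source)):
        if not isinstance(node, ast.Call):
            continue
        func = node.func
        # Only match attribute calls of the exact form pytest.raises(...)
        if not (
            isinstance(func, ast.Attribute)
            and func.attr == "raises"
            and isinstance(func.value, ast.Name)
            and func.value.id == "pytest"
        ):
            continue
        # Flag the call if no keyword argument is named "match"
        if not any(kw.arg == "match" for kw in node.keywords):
            yield node.lineno, "bare pytest.raises: pass the 'match' argument"


good = 'with pytest.raises(ValueError, match="foo"):\n    pass'
bad = "with pytest.raises(ValueError):\n    pass"
assert list(bare_pytest_raises(good)) == []
assert [line for line, _ in bare_pytest_raises(bad)] == [1]
```

Because `ast.parse` only parses (never executes) the source, `pytest` does not need to be importable for the check to run — one reason this style of lint is cheap to apply repo-wide.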
Backport PR #56664 on branch 2.2.x (CI: Run jobs on 2.2.x branch) | diff --git a/.github/workflows/code-checks.yml b/.github/workflows/code-checks.yml
index b49b9a67c4743..8e29d56f47dcf 100644
--- a/.github/workflows/code-checks.yml
+++ b/.github/workflows/code-checks.yml
@@ -4,11 +4,11 @@ on:
push:
branches:
- main
- - 2.1.x
+ - 2.2.x
pull_request:
branches:
- main
- - 2.1.x
+ - 2.2.x
env:
ENV_FILE: environment.yml
diff --git a/.github/workflows/docbuild-and-upload.yml b/.github/workflows/docbuild-and-upload.yml
index da232404e6ff5..73acd9acc129a 100644
--- a/.github/workflows/docbuild-and-upload.yml
+++ b/.github/workflows/docbuild-and-upload.yml
@@ -4,13 +4,13 @@ on:
push:
branches:
- main
- - 2.1.x
+ - 2.2.x
tags:
- '*'
pull_request:
branches:
- main
- - 2.1.x
+ - 2.2.x
env:
ENV_FILE: environment.yml
diff --git a/.github/workflows/package-checks.yml b/.github/workflows/package-checks.yml
index 04d8b8e006985..d59ddf272f705 100644
--- a/.github/workflows/package-checks.yml
+++ b/.github/workflows/package-checks.yml
@@ -4,11 +4,11 @@ on:
push:
branches:
- main
- - 2.1.x
+ - 2.2.x
pull_request:
branches:
- main
- - 2.1.x
+ - 2.2.x
types: [ labeled, opened, synchronize, reopened ]
permissions:
diff --git a/.github/workflows/unit-tests.yml b/.github/workflows/unit-tests.yml
index 6ca4d19196874..12e645dc9da81 100644
--- a/.github/workflows/unit-tests.yml
+++ b/.github/workflows/unit-tests.yml
@@ -4,11 +4,11 @@ on:
push:
branches:
- main
- - 2.1.x
+ - 2.2.x
pull_request:
branches:
- main
- - 2.1.x
+ - 2.2.x
paths-ignore:
- "doc/**"
- "web/**"
| Backport PR #56664: CI: Run jobs on 2.2.x branch | https://api.github.com/repos/pandas-dev/pandas/pulls/56669 | 2023-12-28T21:45:24Z | 2023-12-28T22:43:47Z | 2023-12-28T22:43:47Z | 2023-12-28T22:43:47Z |
Backport PR #56654 on branch 2.2.x (BUG: assert_series_equal not properly respecting check-dtype) | diff --git a/pandas/_testing/asserters.py b/pandas/_testing/asserters.py
index 800b03707540f..d0f38c85868d4 100644
--- a/pandas/_testing/asserters.py
+++ b/pandas/_testing/asserters.py
@@ -949,9 +949,15 @@ def assert_series_equal(
obj=str(obj),
)
else:
+ # convert both to NumPy if not, check_dtype would raise earlier
+ lv, rv = left_values, right_values
+ if isinstance(left_values, ExtensionArray):
+ lv = left_values.to_numpy()
+ if isinstance(right_values, ExtensionArray):
+ rv = right_values.to_numpy()
assert_numpy_array_equal(
- left_values,
- right_values,
+ lv,
+ rv,
check_dtype=check_dtype,
obj=str(obj),
index_values=left.index,
diff --git a/pandas/tests/extension/test_numpy.py b/pandas/tests/extension/test_numpy.py
index aaf49f53ba02b..e38144f4c615b 100644
--- a/pandas/tests/extension/test_numpy.py
+++ b/pandas/tests/extension/test_numpy.py
@@ -421,16 +421,6 @@ def test_index_from_listlike_with_dtype(self, data):
def test_EA_types(self, engine, data, request):
super().test_EA_types(engine, data, request)
- @pytest.mark.xfail(reason="Expect NumpyEA, get np.ndarray")
- def test_compare_array(self, data, comparison_op):
- super().test_compare_array(data, comparison_op)
-
- def test_compare_scalar(self, data, comparison_op, request):
- if data.dtype.kind == "f" or comparison_op.__name__ in ["eq", "ne"]:
- mark = pytest.mark.xfail(reason="Expect NumpyEA, get np.ndarray")
- request.applymarker(mark)
- super().test_compare_scalar(data, comparison_op)
-
class Test2DCompat(base.NDArrayBacked2DTests):
pass
diff --git a/pandas/tests/util/test_assert_frame_equal.py b/pandas/tests/util/test_assert_frame_equal.py
index a074898f6046d..79132591b15b3 100644
--- a/pandas/tests/util/test_assert_frame_equal.py
+++ b/pandas/tests/util/test_assert_frame_equal.py
@@ -211,10 +211,7 @@ def test_assert_frame_equal_extension_dtype_mismatch():
"\\[right\\]: int[32|64]"
)
- # TODO: this shouldn't raise (or should raise a better error message)
- # https://github.com/pandas-dev/pandas/issues/56131
- with pytest.raises(AssertionError, match="classes are different"):
- tm.assert_frame_equal(left, right, check_dtype=False)
+ tm.assert_frame_equal(left, right, check_dtype=False)
with pytest.raises(AssertionError, match=msg):
tm.assert_frame_equal(left, right, check_dtype=True)
@@ -246,7 +243,6 @@ def test_assert_frame_equal_ignore_extension_dtype_mismatch():
tm.assert_frame_equal(left, right, check_dtype=False)
-@pytest.mark.xfail(reason="https://github.com/pandas-dev/pandas/issues/56131")
def test_assert_frame_equal_ignore_extension_dtype_mismatch_cross_class():
# https://github.com/pandas-dev/pandas/issues/35715
left = DataFrame({"a": [1, 2, 3]}, dtype="Int64")
@@ -300,9 +296,7 @@ def test_frame_equal_mixed_dtypes(frame_or_series, any_numeric_ea_dtype, indexer
dtypes = (any_numeric_ea_dtype, "int64")
obj1 = frame_or_series([1, 2], dtype=dtypes[indexer[0]])
obj2 = frame_or_series([1, 2], dtype=dtypes[indexer[1]])
- msg = r'(Series|DataFrame.iloc\[:, 0\] \(column name="0"\) classes) are different'
- with pytest.raises(AssertionError, match=msg):
- tm.assert_equal(obj1, obj2, check_exact=True, check_dtype=False)
+ tm.assert_equal(obj1, obj2, check_exact=True, check_dtype=False)
def test_assert_frame_equal_check_like_different_indexes():
diff --git a/pandas/tests/util/test_assert_series_equal.py b/pandas/tests/util/test_assert_series_equal.py
index f722f619bc456..c4ffc197298f0 100644
--- a/pandas/tests/util/test_assert_series_equal.py
+++ b/pandas/tests/util/test_assert_series_equal.py
@@ -290,10 +290,7 @@ def test_assert_series_equal_extension_dtype_mismatch():
\\[left\\]: Int64
\\[right\\]: int[32|64]"""
- # TODO: this shouldn't raise (or should raise a better error message)
- # https://github.com/pandas-dev/pandas/issues/56131
- with pytest.raises(AssertionError, match="Series classes are different"):
- tm.assert_series_equal(left, right, check_dtype=False)
+ tm.assert_series_equal(left, right, check_dtype=False)
with pytest.raises(AssertionError, match=msg):
tm.assert_series_equal(left, right, check_dtype=True)
@@ -372,7 +369,6 @@ def test_assert_series_equal_ignore_extension_dtype_mismatch():
tm.assert_series_equal(left, right, check_dtype=False)
-@pytest.mark.xfail(reason="https://github.com/pandas-dev/pandas/issues/56131")
def test_assert_series_equal_ignore_extension_dtype_mismatch_cross_class():
# https://github.com/pandas-dev/pandas/issues/35715
left = Series([1, 2, 3], dtype="Int64")
@@ -456,3 +452,13 @@ def test_large_unequal_ints(dtype):
right = Series([1577840521123543], dtype=dtype)
with pytest.raises(AssertionError, match="Series are different"):
tm.assert_series_equal(left, right)
+
+
+@pytest.mark.parametrize("dtype", [None, object])
+@pytest.mark.parametrize("check_exact", [True, False])
+@pytest.mark.parametrize("val", [3, 3.5])
+def test_ea_and_numpy_no_dtype_check(val, check_exact, dtype):
+ # GH#56651
+ left = Series([1, 2, val], dtype=dtype)
+ right = Series(pd.array([1, 2, val]))
+ tm.assert_series_equal(left, right, check_dtype=False, check_exact=check_exact)
| Backport PR #56654: BUG: assert_series_equal not properly respecting check-dtype | https://api.github.com/repos/pandas-dev/pandas/pulls/56668 | 2023-12-28T21:44:11Z | 2023-12-28T22:17:15Z | 2023-12-28T22:17:15Z | 2023-12-28T22:17:15Z |
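The patch in the record above makes `assert_series_equal(..., check_dtype=False)` convert an `ExtensionArray` operand to NumPy before comparing, so that differing container classes alone can no longer fail the comparison. A toy, container-agnostic sketch of that idea (illustrative only — the real pandas code compares with `assert_numpy_array_equal`, not `list`):

```python
def values_equal(left, right, check_type: bool = True) -> bool:
    """Compare element values; optionally ignore the container class.

    When check_type is False, both sides are first normalized to a plain
    common representation (a list here), mirroring the to_numpy()
    conversion in the patch above.
    """
    if check_type and type(left) is not type(right):
        raise AssertionError("classes are different")
    return list(left) == list(right)


# Same values in different containers: passes once type checking is off
assert values_equal([1, 2, 3], (1, 2, 3), check_type=False)
assert values_equal([1, 2], [1, 2], check_type=True)
```

With the strict flag on, a list/tuple pair still raises `AssertionError("classes are different")` — the behavior the pandas tests previously (and incorrectly) expected even for `check_dtype=False`.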
TYP: misc annotations | diff --git a/pandas/_testing/_warnings.py b/pandas/_testing/_warnings.py
index f11dc11f6ac0d..f9cf390ba59de 100644
--- a/pandas/_testing/_warnings.py
+++ b/pandas/_testing/_warnings.py
@@ -1,6 +1,7 @@
from __future__ import annotations
from contextlib import (
+ AbstractContextManager,
contextmanager,
nullcontext,
)
@@ -112,7 +113,9 @@ class for all warnings. To raise multiple types of exceptions,
)
-def maybe_produces_warning(warning: type[Warning], condition: bool, **kwargs):
+def maybe_produces_warning(
+ warning: type[Warning], condition: bool, **kwargs
+) -> AbstractContextManager:
"""
Return a context manager that possibly checks a warning based on the condition
"""
diff --git a/pandas/compat/_optional.py b/pandas/compat/_optional.py
index 9d04d7c0a1216..3cdeae52a25ba 100644
--- a/pandas/compat/_optional.py
+++ b/pandas/compat/_optional.py
@@ -2,7 +2,11 @@
import importlib
import sys
-from typing import TYPE_CHECKING
+from typing import (
+ TYPE_CHECKING,
+ Literal,
+ overload,
+)
import warnings
from pandas.util._exceptions import find_stack_level
@@ -82,12 +86,35 @@ def get_version(module: types.ModuleType) -> str:
return version
+@overload
+def import_optional_dependency(
+ name: str,
+ extra: str = ...,
+ min_version: str | None = ...,
+ *,
+ errors: Literal["raise"] = ...,
+) -> types.ModuleType:
+ ...
+
+
+@overload
+def import_optional_dependency(
+ name: str,
+ extra: str = ...,
+ min_version: str | None = ...,
+ *,
+ errors: Literal["warn", "ignore"],
+) -> types.ModuleType | None:
+ ...
+
+
def import_optional_dependency(
name: str,
extra: str = "",
- errors: str = "raise",
min_version: str | None = None,
-):
+ *,
+ errors: Literal["raise", "warn", "ignore"] = "raise",
+) -> types.ModuleType | None:
"""
Import an optional dependency.
diff --git a/pandas/core/arrays/_arrow_string_mixins.py b/pandas/core/arrays/_arrow_string_mixins.py
index cc41985843574..bfff19a123a08 100644
--- a/pandas/core/arrays/_arrow_string_mixins.py
+++ b/pandas/core/arrays/_arrow_string_mixins.py
@@ -1,6 +1,9 @@
from __future__ import annotations
-from typing import Literal
+from typing import (
+ TYPE_CHECKING,
+ Literal,
+)
import numpy as np
@@ -10,6 +13,9 @@
import pyarrow as pa
import pyarrow.compute as pc
+if TYPE_CHECKING:
+ from pandas._typing import Self
+
class ArrowStringArrayMixin:
_pa_array = None
@@ -22,7 +28,7 @@ def _str_pad(
width: int,
side: Literal["left", "right", "both"] = "left",
fillchar: str = " ",
- ):
+ ) -> Self:
if side == "left":
pa_pad = pc.utf8_lpad
elif side == "right":
@@ -35,7 +41,7 @@ def _str_pad(
)
return type(self)(pa_pad(self._pa_array, width=width, padding=fillchar))
- def _str_get(self, i: int):
+ def _str_get(self, i: int) -> Self:
lengths = pc.utf8_length(self._pa_array)
if i >= 0:
out_of_bounds = pc.greater_equal(i, lengths)
@@ -59,7 +65,7 @@ def _str_get(self, i: int):
def _str_slice_replace(
self, start: int | None = None, stop: int | None = None, repl: str | None = None
- ):
+ ) -> Self:
if repl is None:
repl = ""
if start is None:
@@ -68,13 +74,13 @@ def _str_slice_replace(
stop = np.iinfo(np.int64).max
return type(self)(pc.utf8_replace_slice(self._pa_array, start, stop, repl))
- def _str_capitalize(self):
+ def _str_capitalize(self) -> Self:
return type(self)(pc.utf8_capitalize(self._pa_array))
- def _str_title(self):
+ def _str_title(self) -> Self:
return type(self)(pc.utf8_title(self._pa_array))
- def _str_swapcase(self):
+ def _str_swapcase(self) -> Self:
return type(self)(pc.utf8_swapcase(self._pa_array))
def _str_removesuffix(self, suffix: str):
diff --git a/pandas/core/dtypes/inference.py b/pandas/core/dtypes/inference.py
index f551716772f61..e87b7f02b9b05 100644
--- a/pandas/core/dtypes/inference.py
+++ b/pandas/core/dtypes/inference.py
@@ -36,7 +36,7 @@
is_iterator = lib.is_iterator
-def is_number(obj) -> TypeGuard[Number | np.number]:
+def is_number(obj: object) -> TypeGuard[Number | np.number]:
"""
Check if the object is a number.
@@ -77,7 +77,7 @@ def is_number(obj) -> TypeGuard[Number | np.number]:
return isinstance(obj, (Number, np.number))
-def iterable_not_string(obj) -> bool:
+def iterable_not_string(obj: object) -> bool:
"""
Check if the object is an iterable but not a string.
@@ -102,7 +102,7 @@ def iterable_not_string(obj) -> bool:
return isinstance(obj, abc.Iterable) and not isinstance(obj, str)
-def is_file_like(obj) -> bool:
+def is_file_like(obj: object) -> bool:
"""
Check if the object is a file-like object.
@@ -138,7 +138,7 @@ def is_file_like(obj) -> bool:
return bool(hasattr(obj, "__iter__"))
-def is_re(obj) -> TypeGuard[Pattern]:
+def is_re(obj: object) -> TypeGuard[Pattern]:
"""
Check if the object is a regex pattern instance.
@@ -163,7 +163,7 @@ def is_re(obj) -> TypeGuard[Pattern]:
return isinstance(obj, Pattern)
-def is_re_compilable(obj) -> bool:
+def is_re_compilable(obj: object) -> bool:
"""
Check if the object can be compiled into a regex pattern instance.
@@ -185,14 +185,14 @@ def is_re_compilable(obj) -> bool:
False
"""
try:
- re.compile(obj)
+ re.compile(obj) # type: ignore[call-overload]
except TypeError:
return False
else:
return True
-def is_array_like(obj) -> bool:
+def is_array_like(obj: object) -> bool:
"""
Check if the object is array-like.
@@ -224,7 +224,7 @@ def is_array_like(obj) -> bool:
return is_list_like(obj) and hasattr(obj, "dtype")
-def is_nested_list_like(obj) -> bool:
+def is_nested_list_like(obj: object) -> bool:
"""
Check if the object is list-like, and that all of its elements
are also list-like.
@@ -265,12 +265,13 @@ def is_nested_list_like(obj) -> bool:
return (
is_list_like(obj)
and hasattr(obj, "__len__")
- and len(obj) > 0
- and all(is_list_like(item) for item in obj)
+ # need PEP 724 to handle these typing errors
+ and len(obj) > 0 # pyright: ignore[reportGeneralTypeIssues]
+ and all(is_list_like(item) for item in obj) # type: ignore[attr-defined]
)
-def is_dict_like(obj) -> bool:
+def is_dict_like(obj: object) -> bool:
"""
Check if the object is dict-like.
@@ -303,7 +304,7 @@ def is_dict_like(obj) -> bool:
)
-def is_named_tuple(obj) -> bool:
+def is_named_tuple(obj: object) -> bool:
"""
Check if the object is a named tuple.
@@ -331,7 +332,7 @@ def is_named_tuple(obj) -> bool:
return isinstance(obj, abc.Sequence) and hasattr(obj, "_fields")
-def is_hashable(obj) -> TypeGuard[Hashable]:
+def is_hashable(obj: object) -> TypeGuard[Hashable]:
"""
Return True if hash(obj) will succeed, False otherwise.
@@ -370,7 +371,7 @@ def is_hashable(obj) -> TypeGuard[Hashable]:
return True
-def is_sequence(obj) -> bool:
+def is_sequence(obj: object) -> bool:
"""
Check if the object is a sequence of objects.
String types are not included as sequences here.
@@ -394,14 +395,16 @@ def is_sequence(obj) -> bool:
False
"""
try:
- iter(obj) # Can iterate over it.
- len(obj) # Has a length associated with it.
+ # Can iterate over it.
+ iter(obj) # type: ignore[call-overload]
+ # Has a length associated with it.
+ len(obj) # type: ignore[arg-type]
return not isinstance(obj, (str, bytes))
except (TypeError, AttributeError):
return False
-def is_dataclass(item) -> bool:
+def is_dataclass(item: object) -> bool:
"""
Checks if the object is a data-class instance
diff --git a/pandas/core/flags.py b/pandas/core/flags.py
index aff7a15f283ba..394695e69a3d3 100644
--- a/pandas/core/flags.py
+++ b/pandas/core/flags.py
@@ -111,7 +111,7 @@ def __setitem__(self, key: str, value) -> None:
def __repr__(self) -> str:
return f"<Flags(allows_duplicate_labels={self.allows_duplicate_labels})>"
- def __eq__(self, other) -> bool:
+ def __eq__(self, other: object) -> bool:
if isinstance(other, type(self)):
return self.allows_duplicate_labels == other.allows_duplicate_labels
return False
diff --git a/pandas/core/ops/invalid.py b/pandas/core/ops/invalid.py
index e5ae6d359ac22..8af95de285938 100644
--- a/pandas/core/ops/invalid.py
+++ b/pandas/core/ops/invalid.py
@@ -4,7 +4,11 @@
from __future__ import annotations
import operator
-from typing import TYPE_CHECKING
+from typing import (
+ TYPE_CHECKING,
+ Callable,
+ NoReturn,
+)
import numpy as np
@@ -41,7 +45,7 @@ def invalid_comparison(left, right, op) -> npt.NDArray[np.bool_]:
return res_values
-def make_invalid_op(name: str):
+def make_invalid_op(name: str) -> Callable[..., NoReturn]:
"""
Return a binary method that always raises a TypeError.
@@ -54,7 +58,7 @@ def make_invalid_op(name: str):
invalid_op : function
"""
- def invalid_op(self, other=None):
+ def invalid_op(self, other=None) -> NoReturn:
typ = type(self).__name__
raise TypeError(f"cannot perform {name} with this index type: {typ}")
diff --git a/pandas/core/ops/missing.py b/pandas/core/ops/missing.py
index fc685935a35fc..fb5980184355c 100644
--- a/pandas/core/ops/missing.py
+++ b/pandas/core/ops/missing.py
@@ -30,7 +30,7 @@
from pandas.core import roperator
-def _fill_zeros(result: np.ndarray, x, y):
+def _fill_zeros(result: np.ndarray, x, y) -> np.ndarray:
"""
If this is a reversed op, then flip x,y
diff --git a/pandas/io/orc.py b/pandas/io/orc.py
index fed9463c38d5d..ed9bc21075e73 100644
--- a/pandas/io/orc.py
+++ b/pandas/io/orc.py
@@ -2,7 +2,6 @@
from __future__ import annotations
import io
-from types import ModuleType
from typing import (
TYPE_CHECKING,
Any,
@@ -218,7 +217,7 @@ def to_orc(
if engine != "pyarrow":
raise ValueError("engine must be 'pyarrow'")
- engine = import_optional_dependency(engine, min_version="10.0.1")
+ pyarrow = import_optional_dependency(engine, min_version="10.0.1")
pa = import_optional_dependency("pyarrow")
orc = import_optional_dependency("pyarrow.orc")
@@ -227,10 +226,9 @@ def to_orc(
path = io.BytesIO()
assert path is not None # For mypy
with get_handle(path, "wb", is_text=False) as handles:
- assert isinstance(engine, ModuleType) # For mypy
try:
orc.write_table(
- engine.Table.from_pandas(df, preserve_index=index),
+ pyarrow.Table.from_pandas(df, preserve_index=index),
handles.handle,
**engine_kwargs,
)
diff --git a/pandas/util/__init__.py b/pandas/util/__init__.py
index 82b3aa56c653c..91282fde8b11d 100644
--- a/pandas/util/__init__.py
+++ b/pandas/util/__init__.py
@@ -25,5 +25,5 @@ def __getattr__(key: str):
raise AttributeError(f"module 'pandas.util' has no attribute '{key}'")
-def capitalize_first_letter(s):
+def capitalize_first_letter(s: str) -> str:
return s[:1].upper() + s[1:]
diff --git a/pandas/util/_decorators.py b/pandas/util/_decorators.py
index 4e8189e72c427..aef91064d12fb 100644
--- a/pandas/util/_decorators.py
+++ b/pandas/util/_decorators.py
@@ -340,7 +340,7 @@ def wrapper(*args, **kwargs):
return decorate
-def doc(*docstrings: None | str | Callable, **params) -> Callable[[F], F]:
+def doc(*docstrings: None | str | Callable, **params: object) -> Callable[[F], F]:
"""
A decorator to take docstring templates, concatenate them and perform string
substitution on them.
diff --git a/pyproject.toml b/pyproject.toml
index 5e65edf81f9c7..430bb8e505df0 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -583,7 +583,6 @@ module = [
"pandas._testing.*", # TODO
"pandas.arrays", # TODO
"pandas.compat.numpy.function", # TODO
- "pandas.compat._optional", # TODO
"pandas.compat.compressors", # TODO
"pandas.compat.pickle_compat", # TODO
"pandas.core._numba.executor", # TODO
@@ -602,7 +601,6 @@ module = [
"pandas.core.dtypes.concat", # TODO
"pandas.core.dtypes.dtypes", # TODO
"pandas.core.dtypes.generic", # TODO
- "pandas.core.dtypes.inference", # TODO
"pandas.core.dtypes.missing", # TODO
"pandas.core.groupby.categorical", # TODO
"pandas.core.groupby.generic", # TODO
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/56667 | 2023-12-28T21:08:14Z | 2023-12-29T00:41:42Z | 2023-12-29T00:41:42Z | 2024-01-17T02:49:48Z |
STY: Use ruff instead of pygrep check for future annotation import | diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml
index 7f3fc95ce00cc..4b02ad7cf886f 100644
--- a/.pre-commit-config.yaml
+++ b/.pre-commit-config.yaml
@@ -358,18 +358,6 @@ repos:
files: ^pandas/
exclude: ^(pandas/_libs/|pandas/tests/|pandas/errors/__init__.py$|pandas/_version.py)
types: [python]
- - id: future-annotations
- name: import annotations from __future__
- entry: 'from __future__ import annotations'
- language: pygrep
- args: [--negate]
- files: ^pandas/
- types: [python]
- exclude: |
- (?x)
- /(__init__\.py)|(api\.py)|(_version\.py)|(testing\.py)|(conftest\.py)$
- |/tests/
- |/_testing/
- id: check-test-naming
name: check that test names start with 'test'
entry: python -m scripts.check_test_naming
diff --git a/pyproject.toml b/pyproject.toml
index 5e65edf81f9c7..8724a25909543 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -259,6 +259,8 @@ select = [
"FLY",
# flake8-logging-format
"G",
+ # flake8-future-annotations
+ "FA",
]
ignore = [
| null | https://api.github.com/repos/pandas-dev/pandas/pulls/56666 | 2023-12-28T20:05:38Z | 2023-12-29T21:53:09Z | 2023-12-29T21:53:09Z | 2023-12-29T22:58:39Z |
Backport PR #56370 on branch 2.2.x (BUG: rolling with datetime ArrowDtype) | diff --git a/doc/source/whatsnew/v2.2.0.rst b/doc/source/whatsnew/v2.2.0.rst
index 1da18cd9be8f9..129f5cedb86c2 100644
--- a/doc/source/whatsnew/v2.2.0.rst
+++ b/doc/source/whatsnew/v2.2.0.rst
@@ -865,6 +865,7 @@ Groupby/resample/rolling
- Bug in :meth:`DataFrame.resample` when resampling on a :class:`ArrowDtype` of ``pyarrow.timestamp`` or ``pyarrow.duration`` type (:issue:`55989`)
- Bug in :meth:`DataFrame.resample` where bin edges were not correct for :class:`~pandas.tseries.offsets.BusinessDay` (:issue:`55281`)
- Bug in :meth:`DataFrame.resample` where bin edges were not correct for :class:`~pandas.tseries.offsets.MonthBegin` (:issue:`55271`)
+- Bug in :meth:`DataFrame.rolling` and :meth:`Series.rolling` where either the ``index`` or ``on`` column was :class:`ArrowDtype` with ``pyarrow.timestamp`` type (:issue:`55849`)
Reshaping
^^^^^^^^^
diff --git a/pandas/core/arrays/datetimelike.py b/pandas/core/arrays/datetimelike.py
index 11a0c7bf18fcb..a0e0a1434e871 100644
--- a/pandas/core/arrays/datetimelike.py
+++ b/pandas/core/arrays/datetimelike.py
@@ -92,6 +92,7 @@
pandas_dtype,
)
from pandas.core.dtypes.dtypes import (
+ ArrowDtype,
CategoricalDtype,
DatetimeTZDtype,
ExtensionDtype,
@@ -2531,7 +2532,7 @@ def _validate_inferred_freq(
return freq
-def dtype_to_unit(dtype: DatetimeTZDtype | np.dtype) -> str:
+def dtype_to_unit(dtype: DatetimeTZDtype | np.dtype | ArrowDtype) -> str:
"""
Return the unit str corresponding to the dtype's resolution.
@@ -2546,4 +2547,8 @@ def dtype_to_unit(dtype: DatetimeTZDtype | np.dtype) -> str:
"""
if isinstance(dtype, DatetimeTZDtype):
return dtype.unit
+ elif isinstance(dtype, ArrowDtype):
+ if dtype.kind not in "mM":
+ raise ValueError(f"{dtype=} does not have a resolution.")
+ return dtype.pyarrow_dtype.unit
return np.datetime_data(dtype)[0]
diff --git a/pandas/core/window/rolling.py b/pandas/core/window/rolling.py
index e78bd258c11ff..68cec16ec9eca 100644
--- a/pandas/core/window/rolling.py
+++ b/pandas/core/window/rolling.py
@@ -14,7 +14,6 @@
Any,
Callable,
Literal,
- cast,
)
import numpy as np
@@ -39,6 +38,7 @@
is_numeric_dtype,
needs_i8_conversion,
)
+from pandas.core.dtypes.dtypes import ArrowDtype
from pandas.core.dtypes.generic import (
ABCDataFrame,
ABCSeries,
@@ -104,6 +104,7 @@
NDFrameT,
QuantileInterpolation,
WindowingRankType,
+ npt,
)
from pandas import (
@@ -404,11 +405,12 @@ def _insert_on_column(self, result: DataFrame, obj: DataFrame) -> None:
result[name] = extra_col
@property
- def _index_array(self):
+ def _index_array(self) -> npt.NDArray[np.int64] | None:
# TODO: why do we get here with e.g. MultiIndex?
- if needs_i8_conversion(self._on.dtype):
- idx = cast("PeriodIndex | DatetimeIndex | TimedeltaIndex", self._on)
- return idx.asi8
+ if isinstance(self._on, (PeriodIndex, DatetimeIndex, TimedeltaIndex)):
+ return self._on.asi8
+ elif isinstance(self._on.dtype, ArrowDtype) and self._on.dtype.kind in "mM":
+ return self._on.to_numpy(dtype=np.int64)
return None
def _resolve_output(self, out: DataFrame, obj: DataFrame) -> DataFrame:
@@ -439,7 +441,7 @@ def _apply_series(
self, homogeneous_func: Callable[..., ArrayLike], name: str | None = None
) -> Series:
"""
- Series version of _apply_blockwise
+ Series version of _apply_columnwise
"""
obj = self._create_data(self._selected_obj)
@@ -455,7 +457,7 @@ def _apply_series(
index = self._slice_axis_for_step(obj.index, result)
return obj._constructor(result, index=index, name=obj.name)
- def _apply_blockwise(
+ def _apply_columnwise(
self,
homogeneous_func: Callable[..., ArrayLike],
name: str,
@@ -614,7 +616,7 @@ def calc(x):
return result
if self.method == "single":
- return self._apply_blockwise(homogeneous_func, name, numeric_only)
+ return self._apply_columnwise(homogeneous_func, name, numeric_only)
else:
return self._apply_tablewise(homogeneous_func, name, numeric_only)
@@ -1232,7 +1234,9 @@ def calc(x):
return result
- return self._apply_blockwise(homogeneous_func, name, numeric_only)[:: self.step]
+ return self._apply_columnwise(homogeneous_func, name, numeric_only)[
+ :: self.step
+ ]
@doc(
_shared_docs["aggregate"],
@@ -1868,6 +1872,7 @@ def _validate(self):
if (
self.obj.empty
or isinstance(self._on, (DatetimeIndex, TimedeltaIndex, PeriodIndex))
+ or (isinstance(self._on.dtype, ArrowDtype) and self._on.dtype.kind in "mM")
) and isinstance(self.window, (str, BaseOffset, timedelta)):
self._validate_datetimelike_monotonic()
diff --git a/pandas/tests/window/test_timeseries_window.py b/pandas/tests/window/test_timeseries_window.py
index c99fc8a8eb60f..bd0fadeb3e475 100644
--- a/pandas/tests/window/test_timeseries_window.py
+++ b/pandas/tests/window/test_timeseries_window.py
@@ -1,9 +1,12 @@
import numpy as np
import pytest
+import pandas.util._test_decorators as td
+
from pandas import (
DataFrame,
DatetimeIndex,
+ Index,
MultiIndex,
NaT,
Series,
@@ -697,3 +700,16 @@ def test_nat_axis_error(msg, axis):
with pytest.raises(ValueError, match=f"{msg} values must not have NaT"):
with tm.assert_produces_warning(FutureWarning, match=warn_msg):
df.rolling("D", axis=axis).mean()
+
+
+@td.skip_if_no("pyarrow")
+def test_arrow_datetime_axis():
+ # GH 55849
+ expected = Series(
+ np.arange(5, dtype=np.float64),
+ index=Index(
+ date_range("2020-01-01", periods=5), dtype="timestamp[ns][pyarrow]"
+ ),
+ )
+ result = expected.rolling("1D").sum()
+ tm.assert_series_equal(result, expected)
| Backport PR #56370: BUG: rolling with datetime ArrowDtype | https://api.github.com/repos/pandas-dev/pandas/pulls/56665 | 2023-12-28T19:32:08Z | 2023-12-28T21:44:56Z | 2023-12-28T21:44:56Z | 2023-12-28T21:44:56Z |
CI: Run jobs on 2.2.x branch | diff --git a/.github/workflows/code-checks.yml b/.github/workflows/code-checks.yml
index b49b9a67c4743..8e29d56f47dcf 100644
--- a/.github/workflows/code-checks.yml
+++ b/.github/workflows/code-checks.yml
@@ -4,11 +4,11 @@ on:
push:
branches:
- main
- - 2.1.x
+ - 2.2.x
pull_request:
branches:
- main
- - 2.1.x
+ - 2.2.x
env:
ENV_FILE: environment.yml
diff --git a/.github/workflows/docbuild-and-upload.yml b/.github/workflows/docbuild-and-upload.yml
index da232404e6ff5..73acd9acc129a 100644
--- a/.github/workflows/docbuild-and-upload.yml
+++ b/.github/workflows/docbuild-and-upload.yml
@@ -4,13 +4,13 @@ on:
push:
branches:
- main
- - 2.1.x
+ - 2.2.x
tags:
- '*'
pull_request:
branches:
- main
- - 2.1.x
+ - 2.2.x
env:
ENV_FILE: environment.yml
diff --git a/.github/workflows/package-checks.yml b/.github/workflows/package-checks.yml
index 04d8b8e006985..d59ddf272f705 100644
--- a/.github/workflows/package-checks.yml
+++ b/.github/workflows/package-checks.yml
@@ -4,11 +4,11 @@ on:
push:
branches:
- main
- - 2.1.x
+ - 2.2.x
pull_request:
branches:
- main
- - 2.1.x
+ - 2.2.x
types: [ labeled, opened, synchronize, reopened ]
permissions:
diff --git a/.github/workflows/unit-tests.yml b/.github/workflows/unit-tests.yml
index 6ca4d19196874..12e645dc9da81 100644
--- a/.github/workflows/unit-tests.yml
+++ b/.github/workflows/unit-tests.yml
@@ -4,11 +4,11 @@ on:
push:
branches:
- main
- - 2.1.x
+ - 2.2.x
pull_request:
branches:
- main
- - 2.1.x
+ - 2.2.x
paths-ignore:
- "doc/**"
- "web/**"
| null | https://api.github.com/repos/pandas-dev/pandas/pulls/56664 | 2023-12-28T19:27:52Z | 2023-12-28T21:45:17Z | 2023-12-28T21:45:17Z | 2023-12-28T22:28:08Z |
Backport PR #56641 on branch 2.2.x (DOC: Add optional dependencies table in 2.2 whatsnew) | diff --git a/doc/source/whatsnew/v2.2.0.rst b/doc/source/whatsnew/v2.2.0.rst
index d60dbefd83195..2ad5717bc0320 100644
--- a/doc/source/whatsnew/v2.2.0.rst
+++ b/doc/source/whatsnew/v2.2.0.rst
@@ -417,15 +417,63 @@ Backwards incompatible API changes
Increased minimum versions for dependencies
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-For `optional libraries <https://pandas.pydata.org/docs/getting_started/install.html>`_ the general recommendation is to use the latest version.
-The following table lists the lowest version per library that is currently being tested throughout the development of pandas.
-Optional libraries below the lowest tested version may still work, but are not considered supported.
-
-+-----------------+-----------------+---------+
-| Package | Minimum Version | Changed |
-+=================+=================+=========+
-| mypy (dev) | 1.8.0 | X |
-+-----------------+-----------------+---------+
+For `optional dependencies <https://pandas.pydata.org/docs/getting_started/install.html>`_ the general recommendation is to use the latest version.
+Optional dependencies below the lowest tested version may still work but are not considered supported.
+The following table lists the optional dependencies that have had their minimum tested version increased.
+
++-----------------+---------------------+
+| Package | New Minimum Version |
++=================+=====================+
+| beautifulsoup4 | 4.11.2 |
++-----------------+---------------------+
+| blosc | 1.21.3 |
++-----------------+---------------------+
+| bottleneck | 1.3.6 |
++-----------------+---------------------+
+| fastparquet | 2022.12.0 |
++-----------------+---------------------+
+| fsspec | 2022.11.0 |
++-----------------+---------------------+
+| gcsfs | 2022.11.0 |
++-----------------+---------------------+
+| lxml | 4.9.2 |
++-----------------+---------------------+
+| matplotlib | 3.6.3 |
++-----------------+---------------------+
+| numba | 0.56.4 |
++-----------------+---------------------+
+| numexpr | 2.8.4 |
++-----------------+---------------------+
+| qtpy | 2.3.0 |
++-----------------+---------------------+
+| openpyxl | 3.1.0 |
++-----------------+---------------------+
+| psycopg2 | 2.9.6 |
++-----------------+---------------------+
+| pyreadstat | 1.2.0 |
++-----------------+---------------------+
+| pytables | 3.8.0 |
++-----------------+---------------------+
+| pyxlsb | 1.0.10 |
++-----------------+---------------------+
+| s3fs | 2022.11.0 |
++-----------------+---------------------+
+| scipy | 1.10.0 |
++-----------------+---------------------+
+| sqlalchemy | 2.0.0 |
++-----------------+---------------------+
+| tabulate | 0.9.0 |
++-----------------+---------------------+
+| xarray | 2022.12.0 |
++-----------------+---------------------+
+| xlsxwriter | 3.0.5 |
++-----------------+---------------------+
+| zstandard | 0.19.0 |
++-----------------+---------------------+
+| pyqt5 | 5.15.8 |
++-----------------+---------------------+
+| tzdata | 2022.7 |
++-----------------+---------------------+
See :ref:`install.dependencies` and :ref:`install.optional_dependencies` for more.
| Backport PR #56641: DOC: Add optional dependencies table in 2.2 whatsnew | https://api.github.com/repos/pandas-dev/pandas/pulls/56662 | 2023-12-28T19:04:00Z | 2023-12-28T19:23:31Z | 2023-12-28T19:23:31Z | 2023-12-28T19:23:31Z |
Backport PR #56635 on branch 2.2.x (CoW: Boolean indexer in MultiIndex raising read-only error) | diff --git a/doc/source/whatsnew/v2.2.0.rst b/doc/source/whatsnew/v2.2.0.rst
index 34c9c142d3870..cbce6717fef51 100644
--- a/doc/source/whatsnew/v2.2.0.rst
+++ b/doc/source/whatsnew/v2.2.0.rst
@@ -761,6 +761,7 @@ Interval
Indexing
^^^^^^^^
+- Bug in :meth:`DataFrame.loc` mutating a boolean indexer when :class:`DataFrame` has a :class:`MultiIndex` (:issue:`56635`)
- Bug in :meth:`DataFrame.loc` when setting :class:`Series` with extension dtype into NumPy dtype (:issue:`55604`)
- Bug in :meth:`Index.difference` not returning a unique set of values when ``other`` is empty or ``other`` is considered non-comparable (:issue:`55113`)
- Bug in setting :class:`Categorical` values into a :class:`DataFrame` with numpy dtypes raising ``RecursionError`` (:issue:`52927`)
diff --git a/pandas/core/indexes/multi.py b/pandas/core/indexes/multi.py
index 2a4e027e2b806..02a841a2075fd 100644
--- a/pandas/core/indexes/multi.py
+++ b/pandas/core/indexes/multi.py
@@ -3488,6 +3488,8 @@ def _to_bool_indexer(indexer) -> npt.NDArray[np.bool_]:
"is not the same length as the index"
)
lvl_indexer = np.asarray(k)
+ if indexer is None:
+ lvl_indexer = lvl_indexer.copy()
elif is_list_like(k):
# a collection of labels to include from this level (these are or'd)
diff --git a/pandas/tests/copy_view/test_indexing.py b/pandas/tests/copy_view/test_indexing.py
index 6f3850ab64daa..2681c07f01990 100644
--- a/pandas/tests/copy_view/test_indexing.py
+++ b/pandas/tests/copy_view/test_indexing.py
@@ -1224,6 +1224,27 @@ def test_series_midx_tuples_slice(using_copy_on_write, warn_copy_on_write):
tm.assert_series_equal(ser, expected)
+def test_midx_read_only_bool_indexer():
+ # GH#56635
+ def mklbl(prefix, n):
+ return [f"{prefix}{i}" for i in range(n)]
+
+ idx = pd.MultiIndex.from_product(
+ [mklbl("A", 4), mklbl("B", 2), mklbl("C", 4), mklbl("D", 2)]
+ )
+ cols = pd.MultiIndex.from_tuples(
+ [("a", "foo"), ("a", "bar"), ("b", "foo"), ("b", "bah")], names=["lvl0", "lvl1"]
+ )
+ df = DataFrame(1, index=idx, columns=cols).sort_index().sort_index(axis=1)
+
+ mask = df[("a", "foo")] == 1
+ expected_mask = mask.copy()
+ result = df.loc[pd.IndexSlice[mask, :, ["C1", "C3"]], :]
+ expected = df.loc[pd.IndexSlice[:, :, ["C1", "C3"]], :]
+ tm.assert_frame_equal(result, expected)
+ tm.assert_series_equal(mask, expected_mask)
+
+
def test_loc_enlarging_with_dataframe(using_copy_on_write):
df = DataFrame({"a": [1, 2, 3]})
rhs = DataFrame({"b": [1, 2, 3], "c": [4, 5, 6]})
| Backport PR #56635: CoW: Boolean indexer in MultiIndex raising read-only error | https://api.github.com/repos/pandas-dev/pandas/pulls/56660 | 2023-12-28T18:48:32Z | 2023-12-28T19:22:59Z | 2023-12-28T19:22:59Z | 2023-12-28T19:22:59Z |
Backport PR #56613 on branch 2.2.x (BUG: Added raising when merging datetime columns with timedelta columns) | diff --git a/doc/source/whatsnew/v2.2.0.rst b/doc/source/whatsnew/v2.2.0.rst
index 34c9c142d3870..d60dbefd83195 100644
--- a/doc/source/whatsnew/v2.2.0.rst
+++ b/doc/source/whatsnew/v2.2.0.rst
@@ -824,6 +824,7 @@ Reshaping
- Bug in :func:`merge_asof` raising ``TypeError`` when ``by`` dtype is not ``object``, ``int64``, or ``uint64`` (:issue:`22794`)
- Bug in :func:`merge_asof` raising incorrect error for string dtype (:issue:`56444`)
- Bug in :func:`merge_asof` when using a :class:`Timedelta` tolerance on a :class:`ArrowDtype` column (:issue:`56486`)
+- Bug in :func:`merge` not raising when merging datetime columns with timedelta columns (:issue:`56455`)
- Bug in :func:`merge` not raising when merging string columns with numeric columns (:issue:`56441`)
- Bug in :func:`merge` returning columns in incorrect order when left and/or right is empty (:issue:`51929`)
- Bug in :meth:`DataFrame.melt` where an exception was raised if ``var_name`` was not a string (:issue:`55948`)
diff --git a/pandas/core/reshape/merge.py b/pandas/core/reshape/merge.py
index 690e3c2700c6c..320e4e33a29fb 100644
--- a/pandas/core/reshape/merge.py
+++ b/pandas/core/reshape/merge.py
@@ -1526,6 +1526,11 @@ def _maybe_coerce_merge_keys(self) -> None:
) or (lk.dtype.kind == "M" and rk.dtype.kind == "M"):
# allows datetime with different resolutions
continue
+ # datetime and timedelta not allowed
+ elif lk.dtype.kind == "M" and rk.dtype.kind == "m":
+ raise ValueError(msg)
+ elif lk.dtype.kind == "m" and rk.dtype.kind == "M":
+ raise ValueError(msg)
elif is_object_dtype(lk.dtype) and is_object_dtype(rk.dtype):
continue
diff --git a/pandas/tests/reshape/merge/test_merge.py b/pandas/tests/reshape/merge/test_merge.py
index d7a343ae9f152..ab8d22e567d27 100644
--- a/pandas/tests/reshape/merge/test_merge.py
+++ b/pandas/tests/reshape/merge/test_merge.py
@@ -2988,3 +2988,23 @@ def test_merge_empty_frames_column_order(left_empty, right_empty):
elif right_empty:
expected.loc[:, ["C", "D"]] = np.nan
tm.assert_frame_equal(result, expected)
+
+
+@pytest.mark.parametrize("how", ["left", "right", "inner", "outer"])
+def test_merge_datetime_and_timedelta(how):
+ left = DataFrame({"key": Series([1, None], dtype="datetime64[ns]")})
+ right = DataFrame({"key": Series([1], dtype="timedelta64[ns]")})
+
+ msg = (
+ f"You are trying to merge on {left['key'].dtype} and {right['key'].dtype} "
+ "columns for key 'key'. If you wish to proceed you should use pd.concat"
+ )
+ with pytest.raises(ValueError, match=re.escape(msg)):
+ left.merge(right, on="key", how=how)
+
+ msg = (
+ f"You are trying to merge on {right['key'].dtype} and {left['key'].dtype} "
+ "columns for key 'key'. If you wish to proceed you should use pd.concat"
+ )
+ with pytest.raises(ValueError, match=re.escape(msg)):
+ right.merge(left, on="key", how=how)
| Backport PR #56613: BUG: Added raising when merging datetime columns with timedelta columns | https://api.github.com/repos/pandas-dev/pandas/pulls/56658 | 2023-12-28T17:33:33Z | 2023-12-28T18:52:44Z | 2023-12-28T18:52:44Z | 2023-12-28T18:52:44Z |
Backport PR #56650 on branch 2.2.x (ENH: Implement dt methods for pyarrow duration types) | diff --git a/doc/source/whatsnew/v2.2.0.rst b/doc/source/whatsnew/v2.2.0.rst
index 5b955aa45219a..f7e1cc9cbe36d 100644
--- a/doc/source/whatsnew/v2.2.0.rst
+++ b/doc/source/whatsnew/v2.2.0.rst
@@ -316,6 +316,7 @@ Other enhancements
- :meth:`Series.ffill`, :meth:`Series.bfill`, :meth:`DataFrame.ffill`, and :meth:`DataFrame.bfill` have gained the argument ``limit_area`` (:issue:`56492`)
- Allow passing ``read_only``, ``data_only`` and ``keep_links`` arguments to openpyxl using ``engine_kwargs`` of :func:`read_excel` (:issue:`55027`)
- Implement masked algorithms for :meth:`Series.value_counts` (:issue:`54984`)
+- Implemented :meth:`Series.dt` methods and attributes for :class:`ArrowDtype` with ``pyarrow.duration`` type (:issue:`52284`)
- Implemented :meth:`Series.str.extract` for :class:`ArrowDtype` (:issue:`56268`)
- Improved error message that appears in :meth:`DatetimeIndex.to_period` with frequencies which are not supported as period frequencies, such as ``"BMS"`` (:issue:`56243`)
- Improved error message when constructing :class:`Period` with invalid offsets such as ``"QS"`` (:issue:`55785`)
diff --git a/pandas/core/arrays/arrow/array.py b/pandas/core/arrays/arrow/array.py
index de1ed9ecfdaf1..32a4cadff8270 100644
--- a/pandas/core/arrays/arrow/array.py
+++ b/pandas/core/arrays/arrow/array.py
@@ -17,6 +17,7 @@
from pandas._libs import lib
from pandas._libs.tslibs import (
+ NaT,
Timedelta,
Timestamp,
timezones,
@@ -2498,6 +2499,92 @@ def _str_wrap(self, width: int, **kwargs):
result = self._apply_elementwise(predicate)
return type(self)(pa.chunked_array(result))
+ @property
+ def _dt_days(self):
+ return type(self)(
+ pa.array(self._to_timedeltaarray().days, from_pandas=True, type=pa.int32())
+ )
+
+ @property
+ def _dt_hours(self):
+ return type(self)(
+ pa.array(
+ [
+ td.components.hours if td is not NaT else None
+ for td in self._to_timedeltaarray()
+ ],
+ type=pa.int32(),
+ )
+ )
+
+ @property
+ def _dt_minutes(self):
+ return type(self)(
+ pa.array(
+ [
+ td.components.minutes if td is not NaT else None
+ for td in self._to_timedeltaarray()
+ ],
+ type=pa.int32(),
+ )
+ )
+
+ @property
+ def _dt_seconds(self):
+ return type(self)(
+ pa.array(
+ self._to_timedeltaarray().seconds, from_pandas=True, type=pa.int32()
+ )
+ )
+
+ @property
+ def _dt_milliseconds(self):
+ return type(self)(
+ pa.array(
+ [
+ td.components.milliseconds if td is not NaT else None
+ for td in self._to_timedeltaarray()
+ ],
+ type=pa.int32(),
+ )
+ )
+
+ @property
+ def _dt_microseconds(self):
+ return type(self)(
+ pa.array(
+ self._to_timedeltaarray().microseconds,
+ from_pandas=True,
+ type=pa.int32(),
+ )
+ )
+
+ @property
+ def _dt_nanoseconds(self):
+ return type(self)(
+ pa.array(
+ self._to_timedeltaarray().nanoseconds, from_pandas=True, type=pa.int32()
+ )
+ )
+
+ def _dt_to_pytimedelta(self):
+ data = self._pa_array.to_pylist()
+ if self._dtype.pyarrow_dtype.unit == "ns":
+ data = [None if ts is None else ts.to_pytimedelta() for ts in data]
+ return np.array(data, dtype=object)
+
+ def _dt_total_seconds(self):
+ return type(self)(
+ pa.array(self._to_timedeltaarray().total_seconds(), from_pandas=True)
+ )
+
+ def _dt_as_unit(self, unit: str):
+ if pa.types.is_date(self.dtype.pyarrow_dtype):
+ raise NotImplementedError("as_unit not implemented for date types")
+ pd_array = self._maybe_convert_datelike_array()
+ # Don't just cast _pa_array in order to follow pandas unit conversion rules
+ return type(self)(pa.array(pd_array.as_unit(unit), from_pandas=True))
+
@property
def _dt_year(self):
return type(self)(pc.year(self._pa_array))
diff --git a/pandas/core/indexes/accessors.py b/pandas/core/indexes/accessors.py
index 929c7f4a63f8f..7e3ba4089ff60 100644
--- a/pandas/core/indexes/accessors.py
+++ b/pandas/core/indexes/accessors.py
@@ -148,6 +148,20 @@ def _delegate_method(self, name: str, *args, **kwargs):
return result
+@delegate_names(
+ delegate=ArrowExtensionArray,
+ accessors=TimedeltaArray._datetimelike_ops,
+ typ="property",
+ accessor_mapping=lambda x: f"_dt_{x}",
+ raise_on_missing=False,
+)
+@delegate_names(
+ delegate=ArrowExtensionArray,
+ accessors=TimedeltaArray._datetimelike_methods,
+ typ="method",
+ accessor_mapping=lambda x: f"_dt_{x}",
+ raise_on_missing=False,
+)
@delegate_names(
delegate=ArrowExtensionArray,
accessors=DatetimeArray._datetimelike_ops,
@@ -213,6 +227,9 @@ def _delegate_method(self, name: str, *args, **kwargs):
return result
+ def to_pytimedelta(self):
+ return cast(ArrowExtensionArray, self._parent.array)._dt_to_pytimedelta()
+
def to_pydatetime(self):
# GH#20306
warnings.warn(
@@ -241,6 +258,26 @@ def isocalendar(self) -> DataFrame:
)
return iso_calendar_df
+ @property
+ def components(self) -> DataFrame:
+ from pandas import DataFrame
+
+ components_df = DataFrame(
+ {
+ col: getattr(self._parent.array, f"_dt_{col}")
+ for col in [
+ "days",
+ "hours",
+ "minutes",
+ "seconds",
+ "milliseconds",
+ "microseconds",
+ "nanoseconds",
+ ]
+ }
+ )
+ return components_df
+
@delegate_names(
delegate=DatetimeArray,
@@ -592,7 +629,7 @@ def __new__(cls, data: Series): # pyright: ignore[reportInconsistentConstructor
index=orig.index,
)
- if isinstance(data.dtype, ArrowDtype) and data.dtype.kind == "M":
+ if isinstance(data.dtype, ArrowDtype) and data.dtype.kind in "Mm":
return ArrowTemporalProperties(data, orig)
if lib.is_np_dtype(data.dtype, "M"):
return DatetimeProperties(data, orig)
diff --git a/pandas/tests/extension/test_arrow.py b/pandas/tests/extension/test_arrow.py
index 5624acfb64764..20cdcb9ce9ab8 100644
--- a/pandas/tests/extension/test_arrow.py
+++ b/pandas/tests/extension/test_arrow.py
@@ -2723,6 +2723,111 @@ def test_dt_tz_convert(unit):
tm.assert_series_equal(result, expected)
+@pytest.mark.parametrize("dtype", ["timestamp[ms][pyarrow]", "duration[ms][pyarrow]"])
+def test_as_unit(dtype):
+ # GH 52284
+ ser = pd.Series([1000, None], dtype=dtype)
+ result = ser.dt.as_unit("ns")
+ expected = ser.astype(dtype.replace("ms", "ns"))
+ tm.assert_series_equal(result, expected)
+
+
+@pytest.mark.parametrize(
+ "prop, expected",
+ [
+ ["days", 1],
+ ["seconds", 2],
+ ["microseconds", 3],
+ ["nanoseconds", 4],
+ ],
+)
+def test_dt_timedelta_properties(prop, expected):
+ # GH 52284
+ ser = pd.Series(
+ [
+ pd.Timedelta(
+ days=1,
+ seconds=2,
+ microseconds=3,
+ nanoseconds=4,
+ ),
+ None,
+ ],
+ dtype=ArrowDtype(pa.duration("ns")),
+ )
+ result = getattr(ser.dt, prop)
+ expected = pd.Series(
+ ArrowExtensionArray(pa.array([expected, None], type=pa.int32()))
+ )
+ tm.assert_series_equal(result, expected)
+
+
+def test_dt_timedelta_total_seconds():
+ # GH 52284
+ ser = pd.Series(
+ [
+ pd.Timedelta(
+ days=1,
+ seconds=2,
+ microseconds=3,
+ nanoseconds=4,
+ ),
+ None,
+ ],
+ dtype=ArrowDtype(pa.duration("ns")),
+ )
+ result = ser.dt.total_seconds()
+ expected = pd.Series(
+ ArrowExtensionArray(pa.array([86402.000003, None], type=pa.float64()))
+ )
+ tm.assert_series_equal(result, expected)
+
+
+def test_dt_to_pytimedelta():
+ # GH 52284
+ data = [timedelta(1, 2, 3), timedelta(1, 2, 4)]
+ ser = pd.Series(data, dtype=ArrowDtype(pa.duration("ns")))
+
+ result = ser.dt.to_pytimedelta()
+ expected = np.array(data, dtype=object)
+ tm.assert_numpy_array_equal(result, expected)
+ assert all(type(res) is timedelta for res in result)
+
+ expected = ser.astype("timedelta64[ns]").dt.to_pytimedelta()
+ tm.assert_numpy_array_equal(result, expected)
+
+
+def test_dt_components():
+ # GH 52284
+ ser = pd.Series(
+ [
+ pd.Timedelta(
+ days=1,
+ seconds=2,
+ microseconds=3,
+ nanoseconds=4,
+ ),
+ None,
+ ],
+ dtype=ArrowDtype(pa.duration("ns")),
+ )
+ result = ser.dt.components
+ expected = pd.DataFrame(
+ [[1, 0, 0, 2, 0, 3, 4], [None, None, None, None, None, None, None]],
+ columns=[
+ "days",
+ "hours",
+ "minutes",
+ "seconds",
+ "milliseconds",
+ "microseconds",
+ "nanoseconds",
+ ],
+ dtype="int32[pyarrow]",
+ )
+ tm.assert_frame_equal(result, expected)
+
+
@pytest.mark.parametrize("skipna", [True, False])
def test_boolean_reduce_series_all_null(all_boolean_reductions, skipna):
# GH51624
| Backport PR #56650: ENH: Implement dt methods for pyarrow duration types | https://api.github.com/repos/pandas-dev/pandas/pulls/56656 | 2023-12-28T15:46:58Z | 2023-12-28T16:16:27Z | 2023-12-28T16:16:27Z | 2023-12-28T16:16:27Z |
Backport PR #56647 on branch 2.2.x (floordiv fix for large values) | diff --git a/doc/source/whatsnew/v2.2.0.rst b/doc/source/whatsnew/v2.2.0.rst
index 5b955aa45219a..2fcab46c9e229 100644
--- a/doc/source/whatsnew/v2.2.0.rst
+++ b/doc/source/whatsnew/v2.2.0.rst
@@ -727,6 +727,7 @@ Timezones
Numeric
^^^^^^^
- Bug in :func:`read_csv` with ``engine="pyarrow"`` causing rounding errors for large integers (:issue:`52505`)
+- Bug in :meth:`Series.__floordiv__` for :class:`ArrowDtype` with integral dtypes raising for large values (:issue:`56645`)
- Bug in :meth:`Series.pow` not filling missing values correctly (:issue:`55512`)
Conversion
diff --git a/pandas/core/arrays/arrow/array.py b/pandas/core/arrays/arrow/array.py
index de1ed9ecfdaf1..59f0a3af2b1ab 100644
--- a/pandas/core/arrays/arrow/array.py
+++ b/pandas/core/arrays/arrow/array.py
@@ -114,7 +114,12 @@ def cast_for_truediv(
if pa.types.is_integer(arrow_array.type) and pa.types.is_integer(
pa_object.type
):
- return arrow_array.cast(pa.float64())
+ # https://github.com/apache/arrow/issues/35563
+ # Arrow does not allow safe casting large integral values to float64.
+ # Intentionally not using arrow_array.cast because it could be a scalar
+ # value in reflected case, and safe=False only added to
+ # scalar cast in pyarrow 13.
+ return pc.cast(arrow_array, pa.float64(), safe=False)
return arrow_array
def floordiv_compat(
diff --git a/pandas/tests/extension/test_arrow.py b/pandas/tests/extension/test_arrow.py
index 5624acfb64764..643a42d32ebe2 100644
--- a/pandas/tests/extension/test_arrow.py
+++ b/pandas/tests/extension/test_arrow.py
@@ -3133,6 +3133,14 @@ def test_arrow_floordiv():
tm.assert_series_equal(result, expected)
+def test_arrow_floordiv_large_values():
+ # GH 55561
+ a = pd.Series([1425801600000000000], dtype="int64[pyarrow]")
+ expected = pd.Series([1425801600000], dtype="int64[pyarrow]")
+ result = a // 1_000_000
+ tm.assert_series_equal(result, expected)
+
+
def test_string_to_datetime_parsing_cast():
# GH 56266
string_dates = ["2020-01-01 04:30:00", "2020-01-02 00:00:00", "2020-01-03 00:00:00"]
| Backport PR #56647: floordiv fix for large values | https://api.github.com/repos/pandas-dev/pandas/pulls/56655 | 2023-12-28T15:38:51Z | 2023-12-28T16:16:37Z | 2023-12-28T16:16:37Z | 2023-12-28T16:16:37Z |
BUG: assert_series_equal not properly respecting check-dtype | diff --git a/pandas/_testing/asserters.py b/pandas/_testing/asserters.py
index 800b03707540f..d0f38c85868d4 100644
--- a/pandas/_testing/asserters.py
+++ b/pandas/_testing/asserters.py
@@ -949,9 +949,15 @@ def assert_series_equal(
obj=str(obj),
)
else:
+ # convert both to NumPy if not, check_dtype would raise earlier
+ lv, rv = left_values, right_values
+ if isinstance(left_values, ExtensionArray):
+ lv = left_values.to_numpy()
+ if isinstance(right_values, ExtensionArray):
+ rv = right_values.to_numpy()
assert_numpy_array_equal(
- left_values,
- right_values,
+ lv,
+ rv,
check_dtype=check_dtype,
obj=str(obj),
index_values=left.index,
diff --git a/pandas/tests/extension/test_numpy.py b/pandas/tests/extension/test_numpy.py
index aaf49f53ba02b..e38144f4c615b 100644
--- a/pandas/tests/extension/test_numpy.py
+++ b/pandas/tests/extension/test_numpy.py
@@ -421,16 +421,6 @@ def test_index_from_listlike_with_dtype(self, data):
def test_EA_types(self, engine, data, request):
super().test_EA_types(engine, data, request)
- @pytest.mark.xfail(reason="Expect NumpyEA, get np.ndarray")
- def test_compare_array(self, data, comparison_op):
- super().test_compare_array(data, comparison_op)
-
- def test_compare_scalar(self, data, comparison_op, request):
- if data.dtype.kind == "f" or comparison_op.__name__ in ["eq", "ne"]:
- mark = pytest.mark.xfail(reason="Expect NumpyEA, get np.ndarray")
- request.applymarker(mark)
- super().test_compare_scalar(data, comparison_op)
-
class Test2DCompat(base.NDArrayBacked2DTests):
pass
diff --git a/pandas/tests/util/test_assert_frame_equal.py b/pandas/tests/util/test_assert_frame_equal.py
index a074898f6046d..79132591b15b3 100644
--- a/pandas/tests/util/test_assert_frame_equal.py
+++ b/pandas/tests/util/test_assert_frame_equal.py
@@ -211,10 +211,7 @@ def test_assert_frame_equal_extension_dtype_mismatch():
"\\[right\\]: int[32|64]"
)
- # TODO: this shouldn't raise (or should raise a better error message)
- # https://github.com/pandas-dev/pandas/issues/56131
- with pytest.raises(AssertionError, match="classes are different"):
- tm.assert_frame_equal(left, right, check_dtype=False)
+ tm.assert_frame_equal(left, right, check_dtype=False)
with pytest.raises(AssertionError, match=msg):
tm.assert_frame_equal(left, right, check_dtype=True)
@@ -246,7 +243,6 @@ def test_assert_frame_equal_ignore_extension_dtype_mismatch():
tm.assert_frame_equal(left, right, check_dtype=False)
-@pytest.mark.xfail(reason="https://github.com/pandas-dev/pandas/issues/56131")
def test_assert_frame_equal_ignore_extension_dtype_mismatch_cross_class():
# https://github.com/pandas-dev/pandas/issues/35715
left = DataFrame({"a": [1, 2, 3]}, dtype="Int64")
@@ -300,9 +296,7 @@ def test_frame_equal_mixed_dtypes(frame_or_series, any_numeric_ea_dtype, indexer
dtypes = (any_numeric_ea_dtype, "int64")
obj1 = frame_or_series([1, 2], dtype=dtypes[indexer[0]])
obj2 = frame_or_series([1, 2], dtype=dtypes[indexer[1]])
- msg = r'(Series|DataFrame.iloc\[:, 0\] \(column name="0"\) classes) are different'
- with pytest.raises(AssertionError, match=msg):
- tm.assert_equal(obj1, obj2, check_exact=True, check_dtype=False)
+ tm.assert_equal(obj1, obj2, check_exact=True, check_dtype=False)
def test_assert_frame_equal_check_like_different_indexes():
diff --git a/pandas/tests/util/test_assert_series_equal.py b/pandas/tests/util/test_assert_series_equal.py
index f722f619bc456..c4ffc197298f0 100644
--- a/pandas/tests/util/test_assert_series_equal.py
+++ b/pandas/tests/util/test_assert_series_equal.py
@@ -290,10 +290,7 @@ def test_assert_series_equal_extension_dtype_mismatch():
\\[left\\]: Int64
\\[right\\]: int[32|64]"""
- # TODO: this shouldn't raise (or should raise a better error message)
- # https://github.com/pandas-dev/pandas/issues/56131
- with pytest.raises(AssertionError, match="Series classes are different"):
- tm.assert_series_equal(left, right, check_dtype=False)
+ tm.assert_series_equal(left, right, check_dtype=False)
with pytest.raises(AssertionError, match=msg):
tm.assert_series_equal(left, right, check_dtype=True)
@@ -372,7 +369,6 @@ def test_assert_series_equal_ignore_extension_dtype_mismatch():
tm.assert_series_equal(left, right, check_dtype=False)
-@pytest.mark.xfail(reason="https://github.com/pandas-dev/pandas/issues/56131")
def test_assert_series_equal_ignore_extension_dtype_mismatch_cross_class():
# https://github.com/pandas-dev/pandas/issues/35715
left = Series([1, 2, 3], dtype="Int64")
@@ -456,3 +452,13 @@ def test_large_unequal_ints(dtype):
right = Series([1577840521123543], dtype=dtype)
with pytest.raises(AssertionError, match="Series are different"):
tm.assert_series_equal(left, right)
+
+
+@pytest.mark.parametrize("dtype", [None, object])
+@pytest.mark.parametrize("check_exact", [True, False])
+@pytest.mark.parametrize("val", [3, 3.5])
+def test_ea_and_numpy_no_dtype_check(val, check_exact, dtype):
+ # GH#56651
+ left = Series([1, 2, val], dtype=dtype)
+ right = Series(pd.array([1, 2, val]))
+ tm.assert_series_equal(left, right, check_dtype=False, check_exact=check_exact)
| - [ ] closes #56651
- [ ] closes #56340
- [ ] closes #56131
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/56654 | 2023-12-28T15:34:16Z | 2023-12-28T21:44:04Z | 2023-12-28T21:44:04Z | 2023-12-28T21:44:07Z |
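The comparison this PR legalizes — an extension-backed Series compared against a numpy-backed one with `check_dtype=False` — can be sketched as follows. This is an illustrative sketch assuming a pandas version that includes this fix; with strict dtype checking the same comparison still raises.

```python
import pandas as pd
import pandas.testing as tm

left = pd.Series([1, 2, 3], dtype="Int64")   # masked extension dtype
right = pd.Series([1, 2, 3], dtype="int64")  # plain numpy dtype

# With check_dtype=False the backing-class difference is ignored;
# only the values and index must match, so this passes.
tm.assert_series_equal(left, right, check_dtype=False)

# With check_dtype=True the dtype mismatch is still an error.
try:
    tm.assert_series_equal(left, right, check_dtype=True)
except AssertionError:
    print("dtypes differ under strict checking")
```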
ENH: Implement dt methods for pyarrow duration types | diff --git a/doc/source/whatsnew/v2.2.0.rst b/doc/source/whatsnew/v2.2.0.rst
index 5b955aa45219a..f7e1cc9cbe36d 100644
--- a/doc/source/whatsnew/v2.2.0.rst
+++ b/doc/source/whatsnew/v2.2.0.rst
@@ -316,6 +316,7 @@ Other enhancements
- :meth:`Series.ffill`, :meth:`Series.bfill`, :meth:`DataFrame.ffill`, and :meth:`DataFrame.bfill` have gained the argument ``limit_area`` (:issue:`56492`)
- Allow passing ``read_only``, ``data_only`` and ``keep_links`` arguments to openpyxl using ``engine_kwargs`` of :func:`read_excel` (:issue:`55027`)
- Implement masked algorithms for :meth:`Series.value_counts` (:issue:`54984`)
+- Implemented :meth:`Series.dt` methods and attributes for :class:`ArrowDtype` with ``pyarrow.duration`` type (:issue:`52284`)
- Implemented :meth:`Series.str.extract` for :class:`ArrowDtype` (:issue:`56268`)
- Improved error message that appears in :meth:`DatetimeIndex.to_period` with frequencies which are not supported as period frequencies, such as ``"BMS"`` (:issue:`56243`)
- Improved error message when constructing :class:`Period` with invalid offsets such as ``"QS"`` (:issue:`55785`)
diff --git a/pandas/core/arrays/arrow/array.py b/pandas/core/arrays/arrow/array.py
index 23b5448029dd9..5d0be2aac47c4 100644
--- a/pandas/core/arrays/arrow/array.py
+++ b/pandas/core/arrays/arrow/array.py
@@ -17,6 +17,7 @@
from pandas._libs import lib
from pandas._libs.tslibs import (
+ NaT,
Timedelta,
Timestamp,
timezones,
@@ -2489,6 +2490,92 @@ def _str_wrap(self, width: int, **kwargs):
result = self._apply_elementwise(predicate)
return type(self)(pa.chunked_array(result))
+ @property
+ def _dt_days(self):
+ return type(self)(
+ pa.array(self._to_timedeltaarray().days, from_pandas=True, type=pa.int32())
+ )
+
+ @property
+ def _dt_hours(self):
+ return type(self)(
+ pa.array(
+ [
+ td.components.hours if td is not NaT else None
+ for td in self._to_timedeltaarray()
+ ],
+ type=pa.int32(),
+ )
+ )
+
+ @property
+ def _dt_minutes(self):
+ return type(self)(
+ pa.array(
+ [
+ td.components.minutes if td is not NaT else None
+ for td in self._to_timedeltaarray()
+ ],
+ type=pa.int32(),
+ )
+ )
+
+ @property
+ def _dt_seconds(self):
+ return type(self)(
+ pa.array(
+ self._to_timedeltaarray().seconds, from_pandas=True, type=pa.int32()
+ )
+ )
+
+ @property
+ def _dt_milliseconds(self):
+ return type(self)(
+ pa.array(
+ [
+ td.components.milliseconds if td is not NaT else None
+ for td in self._to_timedeltaarray()
+ ],
+ type=pa.int32(),
+ )
+ )
+
+ @property
+ def _dt_microseconds(self):
+ return type(self)(
+ pa.array(
+ self._to_timedeltaarray().microseconds,
+ from_pandas=True,
+ type=pa.int32(),
+ )
+ )
+
+ @property
+ def _dt_nanoseconds(self):
+ return type(self)(
+ pa.array(
+ self._to_timedeltaarray().nanoseconds, from_pandas=True, type=pa.int32()
+ )
+ )
+
+ def _dt_to_pytimedelta(self):
+ data = self._pa_array.to_pylist()
+ if self._dtype.pyarrow_dtype.unit == "ns":
+ data = [None if ts is None else ts.to_pytimedelta() for ts in data]
+ return np.array(data, dtype=object)
+
+ def _dt_total_seconds(self):
+ return type(self)(
+ pa.array(self._to_timedeltaarray().total_seconds(), from_pandas=True)
+ )
+
+ def _dt_as_unit(self, unit: str):
+ if pa.types.is_date(self.dtype.pyarrow_dtype):
+ raise NotImplementedError("as_unit not implemented for date types")
+ pd_array = self._maybe_convert_datelike_array()
+ # Don't just cast _pa_array in order to follow pandas unit conversion rules
+ return type(self)(pa.array(pd_array.as_unit(unit), from_pandas=True))
+
@property
def _dt_year(self):
return type(self)(pc.year(self._pa_array))
diff --git a/pandas/core/indexes/accessors.py b/pandas/core/indexes/accessors.py
index 929c7f4a63f8f..7e3ba4089ff60 100644
--- a/pandas/core/indexes/accessors.py
+++ b/pandas/core/indexes/accessors.py
@@ -148,6 +148,20 @@ def _delegate_method(self, name: str, *args, **kwargs):
return result
+@delegate_names(
+ delegate=ArrowExtensionArray,
+ accessors=TimedeltaArray._datetimelike_ops,
+ typ="property",
+ accessor_mapping=lambda x: f"_dt_{x}",
+ raise_on_missing=False,
+)
+@delegate_names(
+ delegate=ArrowExtensionArray,
+ accessors=TimedeltaArray._datetimelike_methods,
+ typ="method",
+ accessor_mapping=lambda x: f"_dt_{x}",
+ raise_on_missing=False,
+)
@delegate_names(
delegate=ArrowExtensionArray,
accessors=DatetimeArray._datetimelike_ops,
@@ -213,6 +227,9 @@ def _delegate_method(self, name: str, *args, **kwargs):
return result
+ def to_pytimedelta(self):
+ return cast(ArrowExtensionArray, self._parent.array)._dt_to_pytimedelta()
+
def to_pydatetime(self):
# GH#20306
warnings.warn(
@@ -241,6 +258,26 @@ def isocalendar(self) -> DataFrame:
)
return iso_calendar_df
+ @property
+ def components(self) -> DataFrame:
+ from pandas import DataFrame
+
+ components_df = DataFrame(
+ {
+ col: getattr(self._parent.array, f"_dt_{col}")
+ for col in [
+ "days",
+ "hours",
+ "minutes",
+ "seconds",
+ "milliseconds",
+ "microseconds",
+ "nanoseconds",
+ ]
+ }
+ )
+ return components_df
+
@delegate_names(
delegate=DatetimeArray,
@@ -592,7 +629,7 @@ def __new__(cls, data: Series): # pyright: ignore[reportInconsistentConstructor
index=orig.index,
)
- if isinstance(data.dtype, ArrowDtype) and data.dtype.kind == "M":
+ if isinstance(data.dtype, ArrowDtype) and data.dtype.kind in "Mm":
return ArrowTemporalProperties(data, orig)
if lib.is_np_dtype(data.dtype, "M"):
return DatetimeProperties(data, orig)
diff --git a/pandas/tests/extension/test_arrow.py b/pandas/tests/extension/test_arrow.py
index 3b03272f18203..dad2c0ce5995a 100644
--- a/pandas/tests/extension/test_arrow.py
+++ b/pandas/tests/extension/test_arrow.py
@@ -2723,6 +2723,111 @@ def test_dt_tz_convert(unit):
tm.assert_series_equal(result, expected)
+@pytest.mark.parametrize("dtype", ["timestamp[ms][pyarrow]", "duration[ms][pyarrow]"])
+def test_as_unit(dtype):
+ # GH 52284
+ ser = pd.Series([1000, None], dtype=dtype)
+ result = ser.dt.as_unit("ns")
+ expected = ser.astype(dtype.replace("ms", "ns"))
+ tm.assert_series_equal(result, expected)
+
+
+@pytest.mark.parametrize(
+ "prop, expected",
+ [
+ ["days", 1],
+ ["seconds", 2],
+ ["microseconds", 3],
+ ["nanoseconds", 4],
+ ],
+)
+def test_dt_timedelta_properties(prop, expected):
+ # GH 52284
+ ser = pd.Series(
+ [
+ pd.Timedelta(
+ days=1,
+ seconds=2,
+ microseconds=3,
+ nanoseconds=4,
+ ),
+ None,
+ ],
+ dtype=ArrowDtype(pa.duration("ns")),
+ )
+ result = getattr(ser.dt, prop)
+ expected = pd.Series(
+ ArrowExtensionArray(pa.array([expected, None], type=pa.int32()))
+ )
+ tm.assert_series_equal(result, expected)
+
+
+def test_dt_timedelta_total_seconds():
+ # GH 52284
+ ser = pd.Series(
+ [
+ pd.Timedelta(
+ days=1,
+ seconds=2,
+ microseconds=3,
+ nanoseconds=4,
+ ),
+ None,
+ ],
+ dtype=ArrowDtype(pa.duration("ns")),
+ )
+ result = ser.dt.total_seconds()
+ expected = pd.Series(
+ ArrowExtensionArray(pa.array([86402.000003, None], type=pa.float64()))
+ )
+ tm.assert_series_equal(result, expected)
+
+
+def test_dt_to_pytimedelta():
+ # GH 52284
+ data = [timedelta(1, 2, 3), timedelta(1, 2, 4)]
+ ser = pd.Series(data, dtype=ArrowDtype(pa.duration("ns")))
+
+ result = ser.dt.to_pytimedelta()
+ expected = np.array(data, dtype=object)
+ tm.assert_numpy_array_equal(result, expected)
+ assert all(type(res) is timedelta for res in result)
+
+ expected = ser.astype("timedelta64[ns]").dt.to_pytimedelta()
+ tm.assert_numpy_array_equal(result, expected)
+
+
+def test_dt_components():
+ # GH 52284
+ ser = pd.Series(
+ [
+ pd.Timedelta(
+ days=1,
+ seconds=2,
+ microseconds=3,
+ nanoseconds=4,
+ ),
+ None,
+ ],
+ dtype=ArrowDtype(pa.duration("ns")),
+ )
+ result = ser.dt.components
+ expected = pd.DataFrame(
+ [[1, 0, 0, 2, 0, 3, 4], [None, None, None, None, None, None, None]],
+ columns=[
+ "days",
+ "hours",
+ "minutes",
+ "seconds",
+ "milliseconds",
+ "microseconds",
+ "nanoseconds",
+ ],
+ dtype="int32[pyarrow]",
+ )
+ tm.assert_frame_equal(result, expected)
+
+
@pytest.mark.parametrize("skipna", [True, False])
def test_boolean_reduce_series_all_null(all_boolean_reductions, skipna):
# GH51624
| - [x] closes #52284
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/56650 | 2023-12-28T01:23:14Z | 2023-12-28T15:46:51Z | 2023-12-28T15:46:51Z | 2023-12-28T18:46:16Z |
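For reference, this is the numpy-backed `.dt` accessor behavior the new pyarrow-duration methods are meant to mirror — a sketch using `timedelta64[ns]`, which works without pyarrow installed:

```python
import pandas as pd

ser = pd.Series(
    [pd.Timedelta(days=1, seconds=2, microseconds=3), None],
    dtype="timedelta64[ns]",
)

# Component attributes; the NaT entry propagates as NaN.
print(ser.dt.days.iloc[0])             # 1.0
print(ser.dt.total_seconds().iloc[0])  # 86402.000003

# .dt.components returns one column per timedelta component.
print(ser.dt.components.columns.tolist())
```

The duration[pyarrow] implementation in this PR exposes the same names (`days`, `seconds`, `total_seconds()`, `components`, ...) so both backends can be used interchangeably through `.dt`.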
Backport PR #56644 on branch 2.2.x (BUG: Series.to_numpy raising for arrow floats to numpy floats) | diff --git a/pandas/core/arrays/arrow/array.py b/pandas/core/arrays/arrow/array.py
index 23b5448029dd9..de1ed9ecfdaf1 100644
--- a/pandas/core/arrays/arrow/array.py
+++ b/pandas/core/arrays/arrow/array.py
@@ -37,6 +37,7 @@
CategoricalDtype,
is_array_like,
is_bool_dtype,
+ is_float_dtype,
is_integer,
is_list_like,
is_numeric_dtype,
@@ -1320,6 +1321,7 @@ def to_numpy(
copy: bool = False,
na_value: object = lib.no_default,
) -> np.ndarray:
+ original_na_value = na_value
dtype, na_value = to_numpy_dtype_inference(self, dtype, na_value, self._hasna)
pa_type = self._pa_array.type
if not self._hasna or isna(na_value) or pa.types.is_null(pa_type):
@@ -1345,7 +1347,14 @@ def to_numpy(
if dtype is not None and isna(na_value):
na_value = None
result = np.full(len(data), fill_value=na_value, dtype=dtype)
- elif not data._hasna or (pa.types.is_floating(pa_type) and na_value is np.nan):
+ elif not data._hasna or (
+ pa.types.is_floating(pa_type)
+ and (
+ na_value is np.nan
+ or original_na_value is lib.no_default
+ and is_float_dtype(dtype)
+ )
+ ):
result = data._pa_array.to_numpy()
if dtype is not None:
result = result.astype(dtype, copy=False)
diff --git a/pandas/tests/extension/test_arrow.py b/pandas/tests/extension/test_arrow.py
index 3b03272f18203..5624acfb64764 100644
--- a/pandas/tests/extension/test_arrow.py
+++ b/pandas/tests/extension/test_arrow.py
@@ -3153,6 +3153,14 @@ def test_string_to_time_parsing_cast():
tm.assert_series_equal(result, expected)
+def test_to_numpy_float():
+ # GH#56267
+ ser = pd.Series([32, 40, None], dtype="float[pyarrow]")
+ result = ser.astype("float64")
+ expected = pd.Series([32, 40, np.nan], dtype="float64")
+ tm.assert_series_equal(result, expected)
+
+
def test_to_numpy_timestamp_to_int():
# GH 55997
ser = pd.Series(["2020-01-01 04:30:00"], dtype="timestamp[ns][pyarrow]")
| Backport PR #56644: BUG: Series.to_numpy raising for arrow floats to numpy floats | https://api.github.com/repos/pandas-dev/pandas/pulls/56648 | 2023-12-28T00:02:47Z | 2023-12-28T01:25:53Z | 2023-12-28T01:25:53Z | 2023-12-28T01:25:53Z |
floordiv fix for large values | diff --git a/doc/source/whatsnew/v2.2.0.rst b/doc/source/whatsnew/v2.2.0.rst
index 5b955aa45219a..2fcab46c9e229 100644
--- a/doc/source/whatsnew/v2.2.0.rst
+++ b/doc/source/whatsnew/v2.2.0.rst
@@ -727,6 +727,7 @@ Timezones
Numeric
^^^^^^^
- Bug in :func:`read_csv` with ``engine="pyarrow"`` causing rounding errors for large integers (:issue:`52505`)
+- Bug in :meth:`Series.__floordiv__` for :class:`ArrowDtype` with integral dtypes raising for large values (:issue:`56645`)
- Bug in :meth:`Series.pow` not filling missing values correctly (:issue:`55512`)
Conversion
diff --git a/pandas/core/arrays/arrow/array.py b/pandas/core/arrays/arrow/array.py
index 23b5448029dd9..5d4af24221086 100644
--- a/pandas/core/arrays/arrow/array.py
+++ b/pandas/core/arrays/arrow/array.py
@@ -113,7 +113,12 @@ def cast_for_truediv(
if pa.types.is_integer(arrow_array.type) and pa.types.is_integer(
pa_object.type
):
- return arrow_array.cast(pa.float64())
+ # https://github.com/apache/arrow/issues/35563
+ # Arrow does not allow safe casting large integral values to float64.
+ # Intentionally not using arrow_array.cast because it could be a scalar
+ # value in reflected case, and safe=False only added to
+ # scalar cast in pyarrow 13.
+ return pc.cast(arrow_array, pa.float64(), safe=False)
return arrow_array
def floordiv_compat(
diff --git a/pandas/tests/extension/test_arrow.py b/pandas/tests/extension/test_arrow.py
index 3b03272f18203..1ade1d398a4dd 100644
--- a/pandas/tests/extension/test_arrow.py
+++ b/pandas/tests/extension/test_arrow.py
@@ -3133,6 +3133,14 @@ def test_arrow_floordiv():
tm.assert_series_equal(result, expected)
+def test_arrow_floordiv_large_values():
+ # GH 55561
+ a = pd.Series([1425801600000000000], dtype="int64[pyarrow]")
+ expected = pd.Series([1425801600000], dtype="int64[pyarrow]")
+ result = a // 1_000_000
+ tm.assert_series_equal(result, expected)
+
+
def test_string_to_datetime_parsing_cast():
# GH 56266
string_dates = ["2020-01-01 04:30:00", "2020-01-02 00:00:00", "2020-01-03 00:00:00"]
| - [ ] closes #56645
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/56647 | 2023-12-27T23:47:02Z | 2023-12-28T15:37:53Z | 2023-12-28T15:37:53Z | 2023-12-28T16:41:33Z |
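The value in this test is the expected behavior for any integral backend: integer floor division of a nanosecond-scale value stays exact, which is what the pyarrow path lost when its safe cast to float64 rejected large int64 values. A sketch with plain numpy-backed int64 (no pyarrow required):

```python
import pandas as pd

# A nanosecond timestamp-sized integer, well beyond float64's
# 53-bit exact-integer range.
ser = pd.Series([1_425_801_600_000_000_000], dtype="int64")

result = ser // 1_000_000
print(result.iloc[0])  # 1425801600000
print(result.dtype)    # int64
```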
BUG: Series.to_numpy raising for arrow floats to numpy floats | diff --git a/pandas/core/arrays/arrow/array.py b/pandas/core/arrays/arrow/array.py
index 23b5448029dd9..de1ed9ecfdaf1 100644
--- a/pandas/core/arrays/arrow/array.py
+++ b/pandas/core/arrays/arrow/array.py
@@ -37,6 +37,7 @@
CategoricalDtype,
is_array_like,
is_bool_dtype,
+ is_float_dtype,
is_integer,
is_list_like,
is_numeric_dtype,
@@ -1320,6 +1321,7 @@ def to_numpy(
copy: bool = False,
na_value: object = lib.no_default,
) -> np.ndarray:
+ original_na_value = na_value
dtype, na_value = to_numpy_dtype_inference(self, dtype, na_value, self._hasna)
pa_type = self._pa_array.type
if not self._hasna or isna(na_value) or pa.types.is_null(pa_type):
@@ -1345,7 +1347,14 @@ def to_numpy(
if dtype is not None and isna(na_value):
na_value = None
result = np.full(len(data), fill_value=na_value, dtype=dtype)
- elif not data._hasna or (pa.types.is_floating(pa_type) and na_value is np.nan):
+ elif not data._hasna or (
+ pa.types.is_floating(pa_type)
+ and (
+ na_value is np.nan
+ or original_na_value is lib.no_default
+ and is_float_dtype(dtype)
+ )
+ ):
result = data._pa_array.to_numpy()
if dtype is not None:
result = result.astype(dtype, copy=False)
diff --git a/pandas/tests/extension/test_arrow.py b/pandas/tests/extension/test_arrow.py
index 3b03272f18203..5624acfb64764 100644
--- a/pandas/tests/extension/test_arrow.py
+++ b/pandas/tests/extension/test_arrow.py
@@ -3153,6 +3153,14 @@ def test_string_to_time_parsing_cast():
tm.assert_series_equal(result, expected)
+def test_to_numpy_float():
+ # GH#56267
+ ser = pd.Series([32, 40, None], dtype="float[pyarrow]")
+ result = ser.astype("float64")
+ expected = pd.Series([32, 40, np.nan], dtype="float64")
+ tm.assert_series_equal(result, expected)
+
+
def test_to_numpy_timestamp_to_int():
# GH 55997
ser = pd.Series(["2020-01-01 04:30:00"], dtype="timestamp[ns][pyarrow]")
| - [ ] xref #56267
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/56644 | 2023-12-27T21:36:29Z | 2023-12-28T00:02:39Z | 2023-12-28T00:02:39Z | 2023-12-28T15:22:37Z |
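The target behavior — a nullable float column casting to a plain numpy float dtype, with missing values becoming NaN instead of raising — can be sketched with the masked `Float64` dtype, which works without pyarrow and matches what the fixed `float[pyarrow]` path now produces:

```python
import numpy as np
import pandas as pd

# Nullable float with a missing value
ser = pd.Series([32, 40, None], dtype="Float64")

# Casting to a plain numpy float dtype fills the missing value with NaN
result = ser.astype("float64")
print(result.tolist())  # [32.0, 40.0, nan]
print(result.dtype)     # float64
```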
TYP: Fix some PythonParser and Plotting types | diff --git a/pandas/core/interchange/from_dataframe.py b/pandas/core/interchange/from_dataframe.py
index d45ae37890ba7..73f492c83c2ff 100644
--- a/pandas/core/interchange/from_dataframe.py
+++ b/pandas/core/interchange/from_dataframe.py
@@ -2,7 +2,10 @@
import ctypes
import re
-from typing import Any
+from typing import (
+ Any,
+ overload,
+)
import numpy as np
@@ -459,12 +462,42 @@ def buffer_to_ndarray(
return np.array([], dtype=ctypes_type)
+@overload
+def set_nulls(
+ data: np.ndarray,
+ col: Column,
+ validity: tuple[Buffer, tuple[DtypeKind, int, str, str]] | None,
+ allow_modify_inplace: bool = ...,
+) -> np.ndarray:
+ ...
+
+
+@overload
+def set_nulls(
+ data: pd.Series,
+ col: Column,
+ validity: tuple[Buffer, tuple[DtypeKind, int, str, str]] | None,
+ allow_modify_inplace: bool = ...,
+) -> pd.Series:
+ ...
+
+
+@overload
+def set_nulls(
+ data: np.ndarray | pd.Series,
+ col: Column,
+ validity: tuple[Buffer, tuple[DtypeKind, int, str, str]] | None,
+ allow_modify_inplace: bool = ...,
+) -> np.ndarray | pd.Series:
+ ...
+
+
def set_nulls(
data: np.ndarray | pd.Series,
col: Column,
validity: tuple[Buffer, tuple[DtypeKind, int, str, str]] | None,
allow_modify_inplace: bool = True,
-):
+) -> np.ndarray | pd.Series:
"""
Set null values for the data according to the column null kind.
diff --git a/pandas/io/parsers/python_parser.py b/pandas/io/parsers/python_parser.py
index 79e7554a5744c..c1880eb815032 100644
--- a/pandas/io/parsers/python_parser.py
+++ b/pandas/io/parsers/python_parser.py
@@ -4,12 +4,6 @@
abc,
defaultdict,
)
-from collections.abc import (
- Hashable,
- Iterator,
- Mapping,
- Sequence,
-)
import csv
from io import StringIO
import re
@@ -50,15 +44,24 @@
)
if TYPE_CHECKING:
+ from collections.abc import (
+ Hashable,
+ Iterator,
+ Mapping,
+ Sequence,
+ )
+
from pandas._typing import (
ArrayLike,
ReadCsvBuffer,
Scalar,
+ T,
)
from pandas import (
Index,
MultiIndex,
+ Series,
)
# BOM character (byte order mark)
@@ -77,7 +80,7 @@ def __init__(self, f: ReadCsvBuffer[str] | list, **kwds) -> None:
"""
super().__init__(kwds)
- self.data: Iterator[str] | None = None
+ self.data: Iterator[list[str]] | list[list[Scalar]] = []
self.buf: list = []
self.pos = 0
self.line_pos = 0
@@ -116,10 +119,11 @@ def __init__(self, f: ReadCsvBuffer[str] | list, **kwds) -> None:
# Set self.data to something that can read lines.
if isinstance(f, list):
- # read_excel: f is a list
- self.data = cast(Iterator[str], f)
+ # read_excel: f is a nested list, can contain non-str
+ self.data = f
else:
assert hasattr(f, "readline")
+ # yields list of str
self.data = self._make_reader(f)
# Get columns in two steps: infer from data, then
@@ -179,7 +183,7 @@ def num(self) -> re.Pattern:
)
return re.compile(regex)
- def _make_reader(self, f: IO[str] | ReadCsvBuffer[str]):
+ def _make_reader(self, f: IO[str] | ReadCsvBuffer[str]) -> Iterator[list[str]]:
sep = self.delimiter
if sep is None or len(sep) == 1:
@@ -246,7 +250,9 @@ def _read():
def read(
self, rows: int | None = None
) -> tuple[
- Index | None, Sequence[Hashable] | MultiIndex, Mapping[Hashable, ArrayLike]
+ Index | None,
+ Sequence[Hashable] | MultiIndex,
+ Mapping[Hashable, ArrayLike | Series],
]:
try:
content = self._get_lines(rows)
@@ -326,7 +332,9 @@ def _exclude_implicit_index(
def get_chunk(
self, size: int | None = None
) -> tuple[
- Index | None, Sequence[Hashable] | MultiIndex, Mapping[Hashable, ArrayLike]
+ Index | None,
+ Sequence[Hashable] | MultiIndex,
+ Mapping[Hashable, ArrayLike | Series],
]:
if size is None:
# error: "PythonParser" has no attribute "chunksize"
@@ -689,7 +697,7 @@ def _check_for_bom(self, first_row: list[Scalar]) -> list[Scalar]:
new_row_list: list[Scalar] = [new_row]
return new_row_list + first_row[1:]
- def _is_line_empty(self, line: list[Scalar]) -> bool:
+ def _is_line_empty(self, line: Sequence[Scalar]) -> bool:
"""
Check if a line is empty or not.
@@ -730,8 +738,6 @@ def _next_line(self) -> list[Scalar]:
else:
while self.skipfunc(self.pos):
self.pos += 1
- # assert for mypy, data is Iterator[str] or None, would error in next
- assert self.data is not None
next(self.data)
while True:
@@ -800,12 +806,10 @@ def _next_iter_line(self, row_num: int) -> list[Scalar] | None:
The row number of the line being parsed.
"""
try:
- # assert for mypy, data is Iterator[str] or None, would error in next
- assert self.data is not None
+ assert not isinstance(self.data, list)
line = next(self.data)
- # for mypy
- assert isinstance(line, list)
- return line
+ # lie about list[str] vs list[Scalar] to minimize ignores
+ return line # type: ignore[return-value]
except csv.Error as e:
if self.on_bad_lines in (
self.BadLineHandleMethod.ERROR,
@@ -855,7 +859,7 @@ def _check_comments(self, lines: list[list[Scalar]]) -> list[list[Scalar]]:
ret.append(rl)
return ret
- def _remove_empty_lines(self, lines: list[list[Scalar]]) -> list[list[Scalar]]:
+ def _remove_empty_lines(self, lines: list[list[T]]) -> list[list[T]]:
"""
Iterate through the lines and remove any that are
either empty or contain only one whitespace value
@@ -1121,9 +1125,6 @@ def _get_lines(self, rows: int | None = None) -> list[list[Scalar]]:
row_ct = 0
offset = self.pos if self.pos is not None else 0
while row_ct < rows:
- # assert for mypy, data is Iterator[str] or None, would
- # error in next
- assert self.data is not None
new_row = next(self.data)
if not self.skipfunc(offset + row_index):
row_ct += 1
@@ -1338,7 +1339,7 @@ def _make_reader(self, f: IO[str] | ReadCsvBuffer[str]) -> FixedWidthReader:
self.infer_nrows,
)
- def _remove_empty_lines(self, lines: list[list[Scalar]]) -> list[list[Scalar]]:
+ def _remove_empty_lines(self, lines: list[list[T]]) -> list[list[T]]:
"""
Returns the list of lines without the empty ones. With fixed-width
fields, empty lines become arrays of empty strings.
diff --git a/pandas/plotting/_matplotlib/boxplot.py b/pandas/plotting/_matplotlib/boxplot.py
index d2b76decaa75d..084452ec23719 100644
--- a/pandas/plotting/_matplotlib/boxplot.py
+++ b/pandas/plotting/_matplotlib/boxplot.py
@@ -371,8 +371,8 @@ def _get_colors():
# num_colors=3 is required as method maybe_color_bp takes the colors
# in positions 0 and 2.
# if colors not provided, use same defaults as DataFrame.plot.box
- result = get_standard_colors(num_colors=3)
- result = np.take(result, [0, 0, 2])
+ result_list = get_standard_colors(num_colors=3)
+ result = np.take(result_list, [0, 0, 2])
result = np.append(result, "k")
colors = kwds.pop("color", None)
diff --git a/pandas/plotting/_matplotlib/hist.py b/pandas/plotting/_matplotlib/hist.py
index e610f1adb602c..898abc9b78e3f 100644
--- a/pandas/plotting/_matplotlib/hist.py
+++ b/pandas/plotting/_matplotlib/hist.py
@@ -457,10 +457,8 @@ def hist_series(
ax.grid(grid)
axes = np.array([ax])
- # error: Argument 1 to "set_ticks_props" has incompatible type "ndarray[Any,
- # dtype[Any]]"; expected "Axes | Sequence[Axes]"
set_ticks_props(
- axes, # type: ignore[arg-type]
+ axes,
xlabelsize=xlabelsize,
xrot=xrot,
ylabelsize=ylabelsize,
diff --git a/pandas/plotting/_matplotlib/style.py b/pandas/plotting/_matplotlib/style.py
index bf4e4be3bfd82..45a077a6151cf 100644
--- a/pandas/plotting/_matplotlib/style.py
+++ b/pandas/plotting/_matplotlib/style.py
@@ -3,11 +3,13 @@
from collections.abc import (
Collection,
Iterator,
+ Sequence,
)
import itertools
from typing import (
TYPE_CHECKING,
cast,
+ overload,
)
import warnings
@@ -26,12 +28,46 @@
from matplotlib.colors import Colormap
+@overload
+def get_standard_colors(
+ num_colors: int,
+ colormap: Colormap | None = ...,
+ color_type: str = ...,
+ *,
+ color: dict[str, Color],
+) -> dict[str, Color]:
+ ...
+
+
+@overload
+def get_standard_colors(
+ num_colors: int,
+ colormap: Colormap | None = ...,
+ color_type: str = ...,
+ *,
+ color: Color | Sequence[Color] | None = ...,
+) -> list[Color]:
+ ...
+
+
+@overload
+def get_standard_colors(
+ num_colors: int,
+ colormap: Colormap | None = ...,
+ color_type: str = ...,
+ *,
+ color: dict[str, Color] | Color | Sequence[Color] | None = ...,
+) -> dict[str, Color] | list[Color]:
+ ...
+
+
def get_standard_colors(
num_colors: int,
colormap: Colormap | None = None,
color_type: str = "default",
- color: dict[str, Color] | Color | Collection[Color] | None = None,
-):
+ *,
+ color: dict[str, Color] | Color | Sequence[Color] | None = None,
+) -> dict[str, Color] | list[Color]:
"""
Get standard colors based on `colormap`, `color_type` or `color` inputs.
diff --git a/pandas/plotting/_matplotlib/tools.py b/pandas/plotting/_matplotlib/tools.py
index 898b5b25e7b01..89a8a7cf79719 100644
--- a/pandas/plotting/_matplotlib/tools.py
+++ b/pandas/plotting/_matplotlib/tools.py
@@ -19,10 +19,7 @@
)
if TYPE_CHECKING:
- from collections.abc import (
- Iterable,
- Sequence,
- )
+ from collections.abc import Iterable
from matplotlib.axes import Axes
from matplotlib.axis import Axis
@@ -442,7 +439,7 @@ def handle_shared_axes(
_remove_labels_from_axis(ax.yaxis)
-def flatten_axes(axes: Axes | Sequence[Axes]) -> np.ndarray:
+def flatten_axes(axes: Axes | Iterable[Axes]) -> np.ndarray:
if not is_list_like(axes):
return np.array([axes])
elif isinstance(axes, (np.ndarray, ABCIndex)):
@@ -451,7 +448,7 @@ def flatten_axes(axes: Axes | Sequence[Axes]) -> np.ndarray:
def set_ticks_props(
- axes: Axes | Sequence[Axes],
+ axes: Axes | Iterable[Axes],
xlabelsize: int | None = None,
xrot=None,
ylabelsize: int | None = None,
diff --git a/pyright_reportGeneralTypeIssues.json b/pyright_reportGeneralTypeIssues.json
index a38343d6198ae..da27906e041cf 100644
--- a/pyright_reportGeneralTypeIssues.json
+++ b/pyright_reportGeneralTypeIssues.json
@@ -99,11 +99,11 @@
"pandas/io/parsers/base_parser.py",
"pandas/io/parsers/c_parser_wrapper.py",
"pandas/io/pytables.py",
- "pandas/io/sas/sas_xport.py",
"pandas/io/sql.py",
"pandas/io/stata.py",
"pandas/plotting/_matplotlib/boxplot.py",
"pandas/plotting/_matplotlib/core.py",
+ "pandas/plotting/_matplotlib/misc.py",
"pandas/plotting/_matplotlib/timeseries.py",
"pandas/plotting/_matplotlib/tools.py",
"pandas/tseries/frequencies.py",
| and a few misc types. | https://api.github.com/repos/pandas-dev/pandas/pulls/56643 | 2023-12-27T21:32:44Z | 2023-12-27T23:38:23Z | 2023-12-27T23:38:23Z | 2024-01-17T02:49:50Z |
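The `typing.overload` pattern this PR applies to `set_nulls` and `get_standard_colors` — one runtime implementation, with per-signature return types for the type checker — can be shown in isolation with a hypothetical stdlib-only function:

```python
from typing import Union, overload


@overload
def double(value: int) -> int: ...
@overload
def double(value: str) -> str: ...


def double(value: Union[int, str]) -> Union[int, str]:
    # Single runtime implementation; the @overload stubs above are
    # never executed and only refine what type checkers infer at
    # each call site (int in -> int out, str in -> str out).
    return value * 2


print(double(21))    # 42
print(double("ab"))  # abab
```

This is why the diff adds three overloads for `set_nulls`: callers passing an `np.ndarray` get `np.ndarray` back, callers passing a `pd.Series` get `pd.Series`, and the union signature covers callers that only know the union type.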
DOC: Add optional dependencies table in 2.2 whatsnew | diff --git a/doc/source/whatsnew/v2.2.0.rst b/doc/source/whatsnew/v2.2.0.rst
index 5b955aa45219a..5a3a7c8a30e9f 100644
--- a/doc/source/whatsnew/v2.2.0.rst
+++ b/doc/source/whatsnew/v2.2.0.rst
@@ -416,15 +416,63 @@ Backwards incompatible API changes
Increased minimum versions for dependencies
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-For `optional libraries <https://pandas.pydata.org/docs/getting_started/install.html>`_ the general recommendation is to use the latest version.
-The following table lists the lowest version per library that is currently being tested throughout the development of pandas.
-Optional libraries below the lowest tested version may still work, but are not considered supported.
-
-+-----------------+-----------------+---------+
-| Package | Minimum Version | Changed |
-+=================+=================+=========+
-| mypy (dev) | 1.8.0 | X |
-+-----------------+-----------------+---------+
+For `optional dependencies <https://pandas.pydata.org/docs/getting_started/install.html>`_ the general recommendation is to use the latest version.
+Optional dependencies below the lowest tested version may still work but are not considered supported.
+The following table lists the optional dependencies that have had their minimum tested version increased.
+
++-----------------+---------------------+
+| Package | New Minimum Version |
++=================+=====================+
+| beautifulsoup4 | 4.11.2 |
++-----------------+---------------------+
+| blosc | 1.21.3 |
++-----------------+---------------------+
+| bottleneck | 1.3.6 |
++-----------------+---------------------+
+| fastparquet | 2022.12.0 |
++-----------------+---------------------+
+| fsspec | 2022.11.0 |
++-----------------+---------------------+
+| gcsfs | 2022.11.0 |
++-----------------+---------------------+
+| lxml | 4.9.2 |
++-----------------+---------------------+
+| matplotlib | 3.6.3 |
++-----------------+---------------------+
+| numba | 0.56.4 |
++-----------------+---------------------+
+| numexpr | 2.8.4 |
++-----------------+---------------------+
+| qtpy | 2.3.0 |
++-----------------+---------------------+
+| openpyxl | 3.1.0 |
++-----------------+---------------------+
+| psycopg2 | 2.9.6 |
++-----------------+---------------------+
+| pyreadstat | 1.2.0 |
++-----------------+---------------------+
+| pytables | 3.8.0 |
++-----------------+---------------------+
+| pyxlsb | 1.0.10 |
++-----------------+---------------------+
+| s3fs | 2022.11.0 |
++-----------------+---------------------+
+| scipy | 1.10.0 |
++-----------------+---------------------+
+| sqlalchemy | 2.0.0 |
++-----------------+---------------------+
+| tabulate | 0.9.0 |
++-----------------+---------------------+
+| xarray | 2022.12.0 |
++-----------------+---------------------+
+| xlsxwriter | 3.0.5 |
++-----------------+---------------------+
+| zstandard | 0.19.0 |
++-----------------+---------------------+
+| pyqt5 | 5.15.8 |
++-----------------+---------------------+
+| tzdata | 2022.7 |
++-----------------+---------------------+
See :ref:`install.dependencies` and :ref:`install.optional_dependencies` for more.
| null | https://api.github.com/repos/pandas-dev/pandas/pulls/56641 | 2023-12-27T19:54:04Z | 2023-12-28T19:03:52Z | 2023-12-28T19:03:52Z | 2023-12-28T19:03:55Z |
Backport PR #56632 on branch 2.2.x (DOC: Minor fixups for 2.2.0 whatsnew) | diff --git a/doc/source/whatsnew/v2.2.0.rst b/doc/source/whatsnew/v2.2.0.rst
index 5ee94b74c527e..5b955aa45219a 100644
--- a/doc/source/whatsnew/v2.2.0.rst
+++ b/doc/source/whatsnew/v2.2.0.rst
@@ -123,7 +123,7 @@ nullability handling.
with pg_dbapi.connect(uri) as conn:
df.to_sql("pandas_table", conn, index=False)
- # for roundtripping
+ # for round-tripping
with pg_dbapi.connect(uri) as conn:
df2 = pd.read_sql("pandas_table", conn)
@@ -176,7 +176,7 @@ leverage the ``dtype_backend="pyarrow"`` argument of :func:`~pandas.read_sql`
.. code-block:: ipython
- # for roundtripping
+ # for round-tripping
with pg_dbapi.connect(uri) as conn:
df2 = pd.read_sql("pandas_table", conn, dtype_backend="pyarrow")
@@ -306,22 +306,21 @@ Other enhancements
- :meth:`~DataFrame.to_sql` with method parameter set to ``multi`` works with Oracle on the backend
- :attr:`Series.attrs` / :attr:`DataFrame.attrs` now uses a deepcopy for propagating ``attrs`` (:issue:`54134`).
- :func:`get_dummies` now returning extension dtypes ``boolean`` or ``bool[pyarrow]`` that are compatible with the input dtype (:issue:`56273`)
-- :func:`read_csv` now supports ``on_bad_lines`` parameter with ``engine="pyarrow"``. (:issue:`54480`)
+- :func:`read_csv` now supports ``on_bad_lines`` parameter with ``engine="pyarrow"`` (:issue:`54480`)
- :func:`read_sas` returns ``datetime64`` dtypes with resolutions better matching those stored natively in SAS, and avoids returning object-dtype in cases that cannot be stored with ``datetime64[ns]`` dtype (:issue:`56127`)
-- :func:`read_spss` now returns a :class:`DataFrame` that stores the metadata in :attr:`DataFrame.attrs`. (:issue:`54264`)
+- :func:`read_spss` now returns a :class:`DataFrame` that stores the metadata in :attr:`DataFrame.attrs` (:issue:`54264`)
- :func:`tseries.api.guess_datetime_format` is now part of the public API (:issue:`54727`)
+- :meth:`DataFrame.apply` now allows the usage of numba (via ``engine="numba"``) to JIT compile the passed function, allowing for potential speedups (:issue:`54666`)
- :meth:`ExtensionArray._explode` interface method added to allow extension type implementations of the ``explode`` method (:issue:`54833`)
- :meth:`ExtensionArray.duplicated` added to allow extension type implementations of the ``duplicated`` method (:issue:`55255`)
- :meth:`Series.ffill`, :meth:`Series.bfill`, :meth:`DataFrame.ffill`, and :meth:`DataFrame.bfill` have gained the argument ``limit_area`` (:issue:`56492`)
- Allow passing ``read_only``, ``data_only`` and ``keep_links`` arguments to openpyxl using ``engine_kwargs`` of :func:`read_excel` (:issue:`55027`)
-- DataFrame.apply now allows the usage of numba (via ``engine="numba"``) to JIT compile the passed function, allowing for potential speedups (:issue:`54666`)
- Implement masked algorithms for :meth:`Series.value_counts` (:issue:`54984`)
- Implemented :meth:`Series.str.extract` for :class:`ArrowDtype` (:issue:`56268`)
-- Improved error message that appears in :meth:`DatetimeIndex.to_period` with frequencies which are not supported as period frequencies, such as "BMS" (:issue:`56243`)
-- Improved error message when constructing :class:`Period` with invalid offsets such as "QS" (:issue:`55785`)
+- Improved error message that appears in :meth:`DatetimeIndex.to_period` with frequencies which are not supported as period frequencies, such as ``"BMS"`` (:issue:`56243`)
+- Improved error message when constructing :class:`Period` with invalid offsets such as ``"QS"`` (:issue:`55785`)
- The dtypes ``string[pyarrow]`` and ``string[pyarrow_numpy]`` now both utilize the ``large_string`` type from PyArrow to avoid overflow for long columns (:issue:`56259`)
-
.. ---------------------------------------------------------------------------
.. _whatsnew_220.notable_bug_fixes:
@@ -386,6 +385,8 @@ index levels when joining on two indexes with different levels (:issue:`34133`).
left = pd.DataFrame({"left": 1}, index=pd.MultiIndex.from_tuples([("x", 1), ("x", 2)], names=["A", "B"]))
right = pd.DataFrame({"right": 2}, index=pd.MultiIndex.from_tuples([(1, 1), (2, 2)], names=["B", "C"]))
+ left
+ right
result = left.join(right)
*Old Behavior*
@@ -415,15 +416,6 @@ Backwards incompatible API changes
Increased minimum versions for dependencies
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-Some minimum supported versions of dependencies were updated.
-If installed, we now require:
-
-+-----------------+-----------------+----------+---------+
-| Package | Minimum Version | Required | Changed |
-+=================+=================+==========+=========+
-| | | X | X |
-+-----------------+-----------------+----------+---------+
-
For `optional libraries <https://pandas.pydata.org/docs/getting_started/install.html>`_ the general recommendation is to use the latest version.
The following table lists the lowest version per library that is currently being tested throughout the development of pandas.
Optional libraries below the lowest tested version may still work, but are not considered supported.
@@ -433,8 +425,6 @@ Optional libraries below the lowest tested version may still work, but are not c
+=================+=================+=========+
| mypy (dev) | 1.8.0 | X |
+-----------------+-----------------+---------+
-| | | X |
-+-----------------+-----------------+---------+
See :ref:`install.dependencies` and :ref:`install.optional_dependencies` for more.
@@ -606,20 +596,20 @@ Other Deprecations
- Deprecated ``year``, ``month``, ``quarter``, ``day``, ``hour``, ``minute``, and ``second`` keywords in the :class:`PeriodIndex` constructor, use :meth:`PeriodIndex.from_fields` instead (:issue:`55960`)
- Deprecated accepting a type as an argument in :meth:`Index.view`, call without any arguments instead (:issue:`55709`)
- Deprecated allowing non-integer ``periods`` argument in :func:`date_range`, :func:`timedelta_range`, :func:`period_range`, and :func:`interval_range` (:issue:`56036`)
-- Deprecated allowing non-keyword arguments in :meth:`DataFrame.to_clipboard`. (:issue:`54229`)
-- Deprecated allowing non-keyword arguments in :meth:`DataFrame.to_csv` except ``path_or_buf``. (:issue:`54229`)
-- Deprecated allowing non-keyword arguments in :meth:`DataFrame.to_dict`. (:issue:`54229`)
-- Deprecated allowing non-keyword arguments in :meth:`DataFrame.to_excel` except ``excel_writer``. (:issue:`54229`)
-- Deprecated allowing non-keyword arguments in :meth:`DataFrame.to_gbq` except ``destination_table``. (:issue:`54229`)
-- Deprecated allowing non-keyword arguments in :meth:`DataFrame.to_hdf` except ``path_or_buf``. (:issue:`54229`)
-- Deprecated allowing non-keyword arguments in :meth:`DataFrame.to_html` except ``buf``. (:issue:`54229`)
-- Deprecated allowing non-keyword arguments in :meth:`DataFrame.to_json` except ``path_or_buf``. (:issue:`54229`)
-- Deprecated allowing non-keyword arguments in :meth:`DataFrame.to_latex` except ``buf``. (:issue:`54229`)
-- Deprecated allowing non-keyword arguments in :meth:`DataFrame.to_markdown` except ``buf``. (:issue:`54229`)
-- Deprecated allowing non-keyword arguments in :meth:`DataFrame.to_parquet` except ``path``. (:issue:`54229`)
-- Deprecated allowing non-keyword arguments in :meth:`DataFrame.to_pickle` except ``path``. (:issue:`54229`)
-- Deprecated allowing non-keyword arguments in :meth:`DataFrame.to_string` except ``buf``. (:issue:`54229`)
-- Deprecated allowing non-keyword arguments in :meth:`DataFrame.to_xml` except ``path_or_buffer``. (:issue:`54229`)
+- Deprecated allowing non-keyword arguments in :meth:`DataFrame.to_clipboard` (:issue:`54229`)
+- Deprecated allowing non-keyword arguments in :meth:`DataFrame.to_csv` except ``path_or_buf`` (:issue:`54229`)
+- Deprecated allowing non-keyword arguments in :meth:`DataFrame.to_dict` (:issue:`54229`)
+- Deprecated allowing non-keyword arguments in :meth:`DataFrame.to_excel` except ``excel_writer`` (:issue:`54229`)
+- Deprecated allowing non-keyword arguments in :meth:`DataFrame.to_gbq` except ``destination_table`` (:issue:`54229`)
+- Deprecated allowing non-keyword arguments in :meth:`DataFrame.to_hdf` except ``path_or_buf`` (:issue:`54229`)
+- Deprecated allowing non-keyword arguments in :meth:`DataFrame.to_html` except ``buf`` (:issue:`54229`)
+- Deprecated allowing non-keyword arguments in :meth:`DataFrame.to_json` except ``path_or_buf`` (:issue:`54229`)
+- Deprecated allowing non-keyword arguments in :meth:`DataFrame.to_latex` except ``buf`` (:issue:`54229`)
+- Deprecated allowing non-keyword arguments in :meth:`DataFrame.to_markdown` except ``buf`` (:issue:`54229`)
+- Deprecated allowing non-keyword arguments in :meth:`DataFrame.to_parquet` except ``path`` (:issue:`54229`)
+- Deprecated allowing non-keyword arguments in :meth:`DataFrame.to_pickle` except ``path`` (:issue:`54229`)
+- Deprecated allowing non-keyword arguments in :meth:`DataFrame.to_string` except ``buf`` (:issue:`54229`)
+- Deprecated allowing non-keyword arguments in :meth:`DataFrame.to_xml` except ``path_or_buffer`` (:issue:`54229`)
- Deprecated allowing passing :class:`BlockManager` objects to :class:`DataFrame` or :class:`SingleBlockManager` objects to :class:`Series` (:issue:`52419`)
- Deprecated behavior of :meth:`Index.insert` with an object-dtype index silently performing type inference on the result, explicitly call ``result.infer_objects(copy=False)`` for the old behavior instead (:issue:`51363`)
- Deprecated casting non-datetimelike values (mainly strings) in :meth:`Series.isin` and :meth:`Index.isin` with ``datetime64``, ``timedelta64``, and :class:`PeriodDtype` dtypes (:issue:`53111`)
@@ -692,31 +682,30 @@ Bug fixes
Categorical
^^^^^^^^^^^
- :meth:`Categorical.isin` raising ``InvalidIndexError`` for categorical containing overlapping :class:`Interval` values (:issue:`34974`)
-- Bug in :meth:`CategoricalDtype.__eq__` returning false for unordered categorical data with mixed types (:issue:`55468`)
--
+- Bug in :meth:`CategoricalDtype.__eq__` returning ``False`` for unordered categorical data with mixed types (:issue:`55468`)
Datetimelike
^^^^^^^^^^^^
- Bug in :class:`DatetimeIndex` construction when passing both a ``tz`` and either ``dayfirst`` or ``yearfirst`` ignoring dayfirst/yearfirst (:issue:`55813`)
- Bug in :class:`DatetimeIndex` when passing an object-dtype ndarray of float objects and a ``tz`` incorrectly localizing the result (:issue:`55780`)
- Bug in :func:`Series.isin` with :class:`DatetimeTZDtype` dtype and comparison values that are all ``NaT`` incorrectly returning all-``False`` even if the series contains ``NaT`` entries (:issue:`56427`)
-- Bug in :func:`concat` raising ``AttributeError`` when concatenating all-NA DataFrame with :class:`DatetimeTZDtype` dtype DataFrame. (:issue:`52093`)
+- Bug in :func:`concat` raising ``AttributeError`` when concatenating all-NA DataFrame with :class:`DatetimeTZDtype` dtype DataFrame (:issue:`52093`)
- Bug in :func:`testing.assert_extension_array_equal` that could use the wrong unit when comparing resolutions (:issue:`55730`)
- Bug in :func:`to_datetime` and :class:`DatetimeIndex` when passing a list of mixed-string-and-numeric types incorrectly raising (:issue:`55780`)
- Bug in :func:`to_datetime` and :class:`DatetimeIndex` when passing mixed-type objects with a mix of timezones or mix of timezone-awareness failing to raise ``ValueError`` (:issue:`55693`)
+- Bug in :meth:`.Tick.delta` with very large ticks raising ``OverflowError`` instead of ``OutOfBoundsTimedelta`` (:issue:`55503`)
- Bug in :meth:`DatetimeIndex.shift` with non-nanosecond resolution incorrectly returning with nanosecond resolution (:issue:`56117`)
- Bug in :meth:`DatetimeIndex.union` returning object dtype for tz-aware indexes with the same timezone but different units (:issue:`55238`)
- Bug in :meth:`Index.is_monotonic_increasing` and :meth:`Index.is_monotonic_decreasing` always caching :meth:`Index.is_unique` as ``True`` when first value in index is ``NaT`` (:issue:`55755`)
- Bug in :meth:`Index.view` to a datetime64 dtype with non-supported resolution incorrectly raising (:issue:`55710`)
- Bug in :meth:`Series.dt.round` with non-nanosecond resolution and ``NaT`` entries incorrectly raising ``OverflowError`` (:issue:`56158`)
- Bug in :meth:`Series.fillna` with non-nanosecond resolution dtypes and higher-resolution vector values returning incorrect (internally-corrupted) results (:issue:`56410`)
-- Bug in :meth:`Tick.delta` with very large ticks raising ``OverflowError`` instead of ``OutOfBoundsTimedelta`` (:issue:`55503`)
- Bug in :meth:`Timestamp.unit` being inferred incorrectly from an ISO8601 format string with minute or hour resolution and a timezone offset (:issue:`56208`)
-- Bug in ``.astype`` converting from a higher-resolution ``datetime64`` dtype to a lower-resolution ``datetime64`` dtype (e.g. ``datetime64[us]->datetim64[ms]``) silently overflowing with values near the lower implementation bound (:issue:`55979`)
+- Bug in ``.astype`` converting from a higher-resolution ``datetime64`` dtype to a lower-resolution ``datetime64`` dtype (e.g. ``datetime64[us]->datetime64[ms]``) silently overflowing with values near the lower implementation bound (:issue:`55979`)
- Bug in adding or subtracting a :class:`Week` offset to a ``datetime64`` :class:`Series`, :class:`Index`, or :class:`DataFrame` column with non-nanosecond resolution returning incorrect results (:issue:`55583`)
- Bug in addition or subtraction of :class:`BusinessDay` offset with ``offset`` attribute to non-nanosecond :class:`Index`, :class:`Series`, or :class:`DataFrame` column giving incorrect results (:issue:`55608`)
- Bug in addition or subtraction of :class:`DateOffset` objects with microsecond components to ``datetime64`` :class:`Index`, :class:`Series`, or :class:`DataFrame` columns with non-nanosecond resolution (:issue:`55595`)
-- Bug in addition or subtraction of very large :class:`Tick` objects with :class:`Timestamp` or :class:`Timedelta` objects raising ``OverflowError`` instead of ``OutOfBoundsTimedelta`` (:issue:`55503`)
+- Bug in addition or subtraction of very large :class:`.Tick` objects with :class:`Timestamp` or :class:`Timedelta` objects raising ``OverflowError`` instead of ``OutOfBoundsTimedelta`` (:issue:`55503`)
- Bug in creating a :class:`Index`, :class:`Series`, or :class:`DataFrame` with a non-nanosecond :class:`DatetimeTZDtype` and inputs that would be out of bounds with nanosecond resolution incorrectly raising ``OutOfBoundsDatetime`` (:issue:`54620`)
- Bug in creating a :class:`Index`, :class:`Series`, or :class:`DataFrame` with a non-nanosecond ``datetime64`` (or :class:`DatetimeTZDtype`) from mixed-numeric inputs treating those as nanoseconds instead of as multiples of the dtype's unit (which would happen with non-mixed numeric inputs) (:issue:`56004`)
- Bug in creating a :class:`Index`, :class:`Series`, or :class:`DataFrame` with a non-nanosecond ``datetime64`` dtype and inputs that would be out of bounds for a ``datetime64[ns]`` incorrectly raising ``OutOfBoundsDatetime`` (:issue:`55756`)
@@ -739,14 +728,12 @@ Numeric
^^^^^^^
- Bug in :func:`read_csv` with ``engine="pyarrow"`` causing rounding errors for large integers (:issue:`52505`)
- Bug in :meth:`Series.pow` not filling missing values correctly (:issue:`55512`)
--
Conversion
^^^^^^^^^^
- Bug in :meth:`DataFrame.astype` when called with ``str`` on unpickled array - the array might change in-place (:issue:`54654`)
- Bug in :meth:`DataFrame.astype` where ``errors="ignore"`` had no effect for extension types (:issue:`54654`)
- Bug in :meth:`Series.convert_dtypes` not converting all NA column to ``null[pyarrow]`` (:issue:`55346`)
--
Strings
^^^^^^^
@@ -763,13 +750,12 @@ Strings
Interval
^^^^^^^^
-- Bug in :class:`Interval` ``__repr__`` not displaying UTC offsets for :class:`Timestamp` bounds. Additionally the hour, minute and second components will now be shown. (:issue:`55015`)
+- Bug in :class:`Interval` ``__repr__`` not displaying UTC offsets for :class:`Timestamp` bounds. Additionally the hour, minute and second components will now be shown (:issue:`55015`)
- Bug in :meth:`IntervalIndex.factorize` and :meth:`Series.factorize` with :class:`IntervalDtype` with datetime64 or timedelta64 intervals not preserving non-nanosecond units (:issue:`56099`)
- Bug in :meth:`IntervalIndex.from_arrays` when passed ``datetime64`` or ``timedelta64`` arrays with mismatched resolutions constructing an invalid ``IntervalArray`` object (:issue:`55714`)
- Bug in :meth:`IntervalIndex.get_indexer` with datetime or timedelta intervals incorrectly matching on integer targets (:issue:`47772`)
- Bug in :meth:`IntervalIndex.get_indexer` with timezone-aware datetime intervals incorrectly matching on a sequence of timezone-naive targets (:issue:`47772`)
- Bug in setting values on a :class:`Series` with an :class:`IntervalIndex` using a slice incorrectly raising (:issue:`54722`)
--
Indexing
^^^^^^^^
@@ -781,25 +767,23 @@ Indexing
Missing
^^^^^^^
- Bug in :meth:`DataFrame.update` wasn't updating in-place for tz-aware datetime64 dtypes (:issue:`56227`)
--
MultiIndex
^^^^^^^^^^
- Bug in :meth:`MultiIndex.get_indexer` not raising ``ValueError`` when ``method`` provided and index is non-monotonic (:issue:`53452`)
--
I/O
^^^
-- Bug in :func:`read_csv` where ``engine="python"`` did not respect ``chunksize`` arg when ``skiprows`` was specified. (:issue:`56323`)
-- Bug in :func:`read_csv` where ``engine="python"`` was causing a ``TypeError`` when a callable ``skiprows`` and a chunk size was specified. (:issue:`55677`)
-- Bug in :func:`read_csv` where ``on_bad_lines="warn"`` would write to ``stderr`` instead of raise a Python warning. This now yields a :class:`.errors.ParserWarning` (:issue:`54296`)
+- Bug in :func:`read_csv` where ``engine="python"`` did not respect ``chunksize`` arg when ``skiprows`` was specified (:issue:`56323`)
+- Bug in :func:`read_csv` where ``engine="python"`` was causing a ``TypeError`` when a callable ``skiprows`` and a chunk size was specified (:issue:`55677`)
+- Bug in :func:`read_csv` where ``on_bad_lines="warn"`` would write to ``stderr`` instead of raising a Python warning; this now yields a :class:`.errors.ParserWarning` (:issue:`54296`)
- Bug in :func:`read_csv` with ``engine="pyarrow"`` where ``quotechar`` was ignored (:issue:`52266`)
-- Bug in :func:`read_csv` with ``engine="pyarrow"`` where ``usecols`` wasn't working with a csv with no headers (:issue:`54459`)
-- Bug in :func:`read_excel`, with ``engine="xlrd"`` (``xls`` files) erroring when file contains NaNs/Infs (:issue:`54564`)
+- Bug in :func:`read_csv` with ``engine="pyarrow"`` where ``usecols`` wasn't working with a CSV with no headers (:issue:`54459`)
+- Bug in :func:`read_excel`, with ``engine="xlrd"`` (``xls`` files) erroring when the file contains ``NaN`` or ``Inf`` (:issue:`54564`)
- Bug in :func:`read_json` not handling dtype conversion properly if ``infer_string`` is set (:issue:`56195`)
-- Bug in :meth:`DataFrame.to_excel`, with ``OdsWriter`` (``ods`` files) writing boolean/string value (:issue:`54994`)
+- Bug in :meth:`DataFrame.to_excel`, with ``OdsWriter`` (``ods`` files) writing Boolean/string value (:issue:`54994`)
- Bug in :meth:`DataFrame.to_hdf` and :func:`read_hdf` with ``datetime64`` dtypes with non-nanosecond resolution failing to round-trip correctly (:issue:`55622`)
-- Bug in :meth:`~pandas.read_excel` with ``engine="odf"`` (``ods`` files) when string contains annotation (:issue:`55200`)
+- Bug in :meth:`~pandas.read_excel` with ``engine="odf"`` (``ods`` files) when a string cell contains an annotation (:issue:`55200`)
- Bug in :meth:`~pandas.read_excel` with an ODS file without cached formatted cell for float values (:issue:`55219`)
- Bug where :meth:`DataFrame.to_json` would raise an ``OverflowError`` instead of a ``TypeError`` with unsupported NumPy types (:issue:`55403`)
@@ -808,12 +792,11 @@ Period
- Bug in :class:`PeriodIndex` construction when more than one of ``data``, ``ordinal`` and ``**fields`` are passed failing to raise ``ValueError`` (:issue:`55961`)
- Bug in :class:`Period` addition silently wrapping around instead of raising ``OverflowError`` (:issue:`55503`)
- Bug in casting from :class:`PeriodDtype` with ``astype`` to ``datetime64`` or :class:`DatetimeTZDtype` with non-nanosecond unit incorrectly returning with nanosecond unit (:issue:`55958`)
--
Plotting
^^^^^^^^
-- Bug in :meth:`DataFrame.plot.box` with ``vert=False`` and a matplotlib ``Axes`` created with ``sharey=True`` (:issue:`54941`)
-- Bug in :meth:`DataFrame.plot.scatter` discaring string columns (:issue:`56142`)
+- Bug in :meth:`DataFrame.plot.box` with ``vert=False`` and a Matplotlib ``Axes`` created with ``sharey=True`` (:issue:`54941`)
+- Bug in :meth:`DataFrame.plot.scatter` discarding string columns (:issue:`56142`)
- Bug in :meth:`Series.plot` when reusing an ``ax`` object failing to raise when a ``how`` keyword is passed (:issue:`55953`)
Groupby/resample/rolling
@@ -821,9 +804,9 @@ Groupby/resample/rolling
- Bug in :class:`.Rolling` where duplicate datetimelike indexes are treated as consecutive rather than equal with ``closed='left'`` and ``closed='neither'`` (:issue:`20712`)
- Bug in :meth:`.DataFrameGroupBy.idxmin`, :meth:`.DataFrameGroupBy.idxmax`, :meth:`.SeriesGroupBy.idxmin`, and :meth:`.SeriesGroupBy.idxmax` would not retain :class:`.Categorical` dtype when the index was a :class:`.CategoricalIndex` that contained NA values (:issue:`54234`)
- Bug in :meth:`.DataFrameGroupBy.transform` and :meth:`.SeriesGroupBy.transform` when ``observed=False`` and ``f="idxmin"`` or ``f="idxmax"`` would incorrectly raise on unobserved categories (:issue:`54234`)
-- Bug in :meth:`.DataFrameGroupBy.value_counts` and :meth:`.SeriesGroupBy.value_count` could result in incorrect sorting if the columns of the DataFrame or name of the Series are integers (:issue:`55951`)
-- Bug in :meth:`.DataFrameGroupBy.value_counts` and :meth:`.SeriesGroupBy.value_count` would not respect ``sort=False`` in :meth:`DataFrame.groupby` and :meth:`Series.groupby` (:issue:`55951`)
-- Bug in :meth:`.DataFrameGroupBy.value_counts` and :meth:`.SeriesGroupBy.value_count` would sort by proportions rather than frequencies when ``sort=True`` and ``normalize=True`` (:issue:`55951`)
+- Bug in :meth:`.DataFrameGroupBy.value_counts` and :meth:`.SeriesGroupBy.value_counts` could result in incorrect sorting if the columns of the DataFrame or name of the Series are integers (:issue:`55951`)
+- Bug in :meth:`.DataFrameGroupBy.value_counts` and :meth:`.SeriesGroupBy.value_counts` would not respect ``sort=False`` in :meth:`DataFrame.groupby` and :meth:`Series.groupby` (:issue:`55951`)
+- Bug in :meth:`.DataFrameGroupBy.value_counts` and :meth:`.SeriesGroupBy.value_counts` would sort by proportions rather than frequencies when ``sort=True`` and ``normalize=True`` (:issue:`55951`)
- Bug in :meth:`DataFrame.asfreq` and :meth:`Series.asfreq` with a :class:`DatetimeIndex` with non-nanosecond resolution incorrectly converting to nanosecond resolution (:issue:`55958`)
- Bug in :meth:`DataFrame.ewm` when passed ``times`` with non-nanosecond ``datetime64`` or :class:`DatetimeTZDtype` dtype (:issue:`56262`)
- Bug in :meth:`DataFrame.groupby` and :meth:`Series.groupby` where grouping by a combination of ``Decimal`` and NA values would fail when ``sort=True`` (:issue:`54847`)
@@ -845,22 +828,11 @@ Reshaping
- Bug in :meth:`DataFrame.melt` where it would not preserve the datetime (:issue:`55254`)
- Bug in :meth:`DataFrame.pivot_table` where the row margin is incorrect when the columns have numeric names (:issue:`26568`)
- Bug in :meth:`DataFrame.pivot` with numeric columns and extension dtype for data (:issue:`56528`)
-- Bug in :meth:`DataFrame.stack` and :meth:`Series.stack` with ``future_stack=True`` would not preserve NA values in the index (:issue:`56573`)
+- Bug in :meth:`DataFrame.stack` with ``future_stack=True`` would not preserve NA values in the index (:issue:`56573`)
Sparse
^^^^^^
- Bug in :meth:`SparseArray.take` when using a different fill value than the array's fill value (:issue:`55181`)
--
-
-ExtensionArray
-^^^^^^^^^^^^^^
--
--
-
-Styler
-^^^^^^
--
--
Other
^^^^^
@@ -871,15 +843,11 @@ Other
- Bug in :meth:`DataFrame.apply` where passing ``raw=True`` ignored ``args`` passed to the applied function (:issue:`55009`)
- Bug in :meth:`DataFrame.from_dict` which would always sort the rows of the created :class:`DataFrame`. (:issue:`55683`)
- Bug in :meth:`DataFrame.sort_index` when passing ``axis="columns"`` and ``ignore_index=True`` raising a ``ValueError`` (:issue:`56478`)
-- Bug in rendering ``inf`` values inside a a :class:`DataFrame` with the ``use_inf_as_na`` option enabled (:issue:`55483`)
+- Bug in rendering ``inf`` values inside a :class:`DataFrame` with the ``use_inf_as_na`` option enabled (:issue:`55483`)
- Bug in rendering a :class:`Series` with a :class:`MultiIndex` when one of the index level's names is 0 not having that name displayed (:issue:`55415`)
- Bug in the error message when assigning an empty :class:`DataFrame` to a column (:issue:`55956`)
- Bug when time-like strings were being cast to :class:`ArrowDtype` with ``pyarrow.time64`` type (:issue:`56463`)
-.. ***DO NOT USE THIS SECTION***
-
--
--
.. ---------------------------------------------------------------------------
.. _whatsnew_220.contributors:
| Backport PR #56632: DOC: Minor fixups for 2.2.0 whatsnew | https://api.github.com/repos/pandas-dev/pandas/pulls/56640 | 2023-12-27T19:19:32Z | 2023-12-27T19:55:20Z | 2023-12-27T19:55:20Z | 2023-12-27T19:55:20Z |